The Instructor's Resource folder on the course website has a number of Baldrige video clips
that give an inside view of organizations that have received the Baldrige Award. A couple of
those that are especially appropriate for this chapter have scenes showing how statistical
thinking and concepts can enhance an organization's quest for world-class quality.
ANSWERS TO QUALITY IN PRACTICE QUESTIONS
Improving Quality of a Wave Soldering Process Through the Design of
Experiments
1.
The first experimental design at the HP plant did not achieve the true
optimum combination of factors, because not all combinations were
tested. It is theoretically possible that a better combination of factors
exists among those that were not tested. Thus, the ones that were
tested could be considered a random sampling of all of the
possibilities. It is also likely that some interaction effects were at
work, so some of the combinations that produced a higher number of
defects had to be eliminated.
2.
Statistical thinking is a philosophy of learning and taking action based on the principles
of: a) all work occurs in a system of interconnected processes; b) variation exists in
these processes; and c) understanding and reducing variation are the keys to success in
improving processes. It is important to both managers and workers because without it,
consistent, predictable processes cannot be established or improved upon.
2.
Common causes of variation occur as a natural part of the process and are difficult to
change without making a major change in the system of which they are a part. Special
causes of variation arise from sources outside the system and can generally be traced
back to a specific change that has occurred and needs correction. For example, a process
may be stable and running well until the supplier of a critical material is changed. The
new vendor's material causes the process to go out of control (becomes unstable), so the
"solution" to the special cause is to have the vendor correct the deficiency, or return to
the previous supplier for materials.
3.
The two fundamental mistakes that managers can make in attempting to improve a
process are: (1) to treat as a special (or outside) cause any fault, complaint, mistake,
breakdown, accident, or shortage which is actually due to common causes, and (2) to
attribute to common causes any fault, complaint, mistake, breakdown, accident, or
shortage which is actually due to a special cause. In the first case, tampering with a
stable system can increase the variation in the system. In the second case, the
opportunity to reduce variation is missed because the amount of variation is mistakenly
assumed to be uncontrollable.
4.
The Red Bead experiment emphasizes that little, if anything, can improve quality in a
poorly-managed production system. In the experiment, managers control incoming
material (white and red beads) and work procedures so rigidly that there is little room
for change. It is management's mistake that red beads are part of the input material; the
workers cannot stop the red beads from coming. Management inspects the beads only
after they (and the mistakes involved) have been made. No amount of encouragement,
threats, or promises of rewards will improve quality production when it is inevitable, by
the nature of the process, that red beads will be produced. Furthermore, the managers
have mistakenly believed that the variables in the process are controllable, and therefore
that the workers are simply not trying hard enough in their labors. The final point of the
Red Bead experiment is that all factors of a process must be examined to locate and
correct negative variations.
The Funnel experiment is designed to show how people can and do affect the outcome
of a process and create unwanted variation by "tampering" with the process, or
indiscriminately trying to remove common causes of variation. The system of dropping
the ball through the funnel towards the target is damaged by the variation of each
participant moving the funnel around to "get a better aim" at the target. The lesson is
that once a plan or process is determined to be correct and is set in motion, no
components of the process should be tampered with. The process should be adjusted
only if the entire process has been thoroughly examined and found to be in need of
change in some way.
5.
The methods for the efficient collection, organization, and description of data are called
descriptive statistics. Statistical inference is the process of drawing conclusions about
unknown characteristics of a population from which the data were taken. Predictive
statistics is used to develop predictions of future values based on historical data. The
three differ in approach, purpose, and outcomes. Descriptive statistics simply summarize
and report on existing conditions, inference helps to make decisions about population
characteristics based on sample data. Predictive statistics attempt to look into the future
and state what will be the results, if certain assumptions hold. All three of these can be
important to a manager who is trying to describe the current characteristics of a process,
or make inferences about whether a process is in control, or predict future values of
instrument readings in order to determine whether it is properly calibrated.
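The distinction between descriptive statistics and statistical inference can be illustrated with a short Python sketch; the data and numbers here are purely hypothetical, not taken from the chapter.

```python
# Sketch: descriptive statistics vs. a simple inference, using only
# Python's standard library. The sample is simulated (hypothetical).
import random
import statistics

random.seed(1)
# Hypothetical sample of 30 process measurements
sample = [random.gauss(mu=50.0, sigma=4.0) for _ in range(30)]

# Descriptive statistics: summarize the data at hand
mean = statistics.mean(sample)
stdev = statistics.stdev(sample)    # sample standard deviation (n - 1 divisor)

# Statistical inference: a rough 95% confidence interval for the
# population mean, using the normal approximation
half_width = 1.96 * stdev / (len(sample) ** 0.5)
ci = (mean - half_width, mean + half_width)
```

The descriptive step only reports on the sample; the confidence interval is the inferential step, a statement about the unknown population mean.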
6.
Discrete variables are used to measure whether tangible or intangible output from a
process is acceptable or not acceptable (good or bad; defective or not defective).
Discrete variables are often used to classify the quality level of customer service. Was
the patient in the hospital satisfied or dissatisfied; customer compliments versus
complaints for a tour firm; did the marketing research firm accurately or inaccurately
8.
The standard error of the mean is the (estimated) standard deviation of the population
divided by the square root of the sample size (σ/√n). The standard deviation is, of course,
a measure of variability within a population, whereas the standard error of the mean is
the standard deviation of the sampling distribution of the mean.
9.
The central limit theorem is extremely useful in that it states (approximately) that if a
large number of samples of size n is taken from any population with mean μ and
standard deviation σ, the distribution of the sample means approaches a normal
distribution. The mean of the sample means for this probability distribution will
approach μ, and the standard deviation of the distribution will approach σ/√n, as larger
and larger sample sizes are taken. The CLT is extremely important in any SQC
techniques that require sampling.
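The theorem can be checked empirically with a short simulation; the population, sample size, and number of samples below are illustrative choices, not values from the text.

```python
# Sketch: empirical check of the central limit theorem using a decidedly
# non-normal (uniform) population.
import random
import statistics

random.seed(42)
POP_MEAN = 0.5                 # mean of Uniform(0, 1)
POP_SD = (1 / 12) ** 0.5       # standard deviation of Uniform(0, 1)
n = 36                         # sample size

# Draw many samples of size n and record each sample mean
sample_means = [
    statistics.mean(random.random() for _ in range(n))
    for _ in range(2000)
]

# The sample means center on the population mean, with a spread
# near sigma / sqrt(n), even though the population is not normal
observed_center = statistics.mean(sample_means)
observed_spread = statistics.stdev(sample_means)
expected_spread = POP_SD / n ** 0.5
```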
10.
The two factors that influence sampling procedure are the method of selecting the
sample and the sample size. Methods include simple random sampling, stratified
sampling, systematic sampling, and cluster sampling. Sample size depends on the
amount of variation in the population (measured by σ) and the amount of error that the
decision maker can tolerate at a specified confidence level.
11.
would sue the company for injury suffered in taking the drug, or for the lack of any
substantial benefit coming from having taken an ineffective drug. In addition, the firm's
reputation could be permanently damaged.
12.
13.
Systematic errors in sampling can come from bias, non-comparable data, uncritical
projection of trends, causation, and improper sampling. They may be avoided by
approaches discussed in the chapter. Basically, careful planning of the sampling study,
awareness of possible systematic error causes, and careful execution of the study can
help to avoid most of the common errors listed above.
14.
The purpose of design of experiments is to set up a test or series of tests to enable the
analyst to compare two or more methods to determine which is better, or to determine
levels of controllable factors to optimize the yield of a process, or minimize variability of
a response variable.
15.
16.
Simple factorial experiments often require many iterations of the experiment to be run
before reliable results are obtained. ANOVA permits testing multiple combinations of
factors simultaneously. Therefore, it is often faster and cheaper to perform ANOVA than
to use simple factorial experiments. Conclusions regarding interaction effects can also be
tested more rigorously using analysis of variance.
1.
Use the data for Twenty First Century Laundry for the weights of loads of clothes
processed through their washing department in a week. (See Prob. 10-1 in C10Data.xls
on the Premier website). Apply the Descriptive Statistics and Histogram tools in Excel
to compute the mean, standard deviation, and other relevant statistics, as well as a
frequency distribution and histogram for the following data. From what type of
distribution might you suspect the data are drawn?
Answer
1.
The following results were obtained from the Twenty First Century Laundry data
Column1
Mean                       32.920
Standard Error              2.590
Median                     25.500
Mode                       14.000
Standard Deviation         25.899
Sample Variance           670.741
Kurtosis                    0.233
Skewness                    0.994
Range                     106.000
Minimum                     1.000
Maximum                   107.000
Sum                      3292.000
Count                     100.000
Largest(1)                107.000
Smallest(1)                 1.000
Confidence Level(95.0%)     5.139
The conclusion that can be reached from looking at the summary statistics and the
histogram is that these data are exponentially distributed, with descending frequencies.
These data may show that small, low-weight batches are most frequently processed, as
represented by the histogram. This has implications for the number and size of washers
and dryers that should be installed in the laundry. More detailed data can be found in the
Excel solution spreadsheet coded as P10-1&2.xls.
2.
The times for carrying out a blood test at Rivervalley Labs were studied in order to
learn about process characteristics. Apply the Descriptive Statistics and Histogram
analysis tools in Excel to compute the mean, standard deviation, and other relevant
statistics, as well as a frequency distribution and histogram, for the data taken from 100
tests and found in the Prob. 10-2, Excel data set. From what type of distribution might
you suspect the data are drawn?
Answer
2.
One of the advantages of using Excel spreadsheets is that a great deal of analysis can
be done easily. The summary statistics follow. Also shown is the histogram constructed
by using Excel's Data Analysis tools (found under the Tools heading on the
spreadsheet). For best results in constructing the histogram, it is suggested that students
set up their own bins so as to provide 7 to 10 approximately equal-sized class intervals
for the data. Note that if the program finds that the classes shown in the bins do not
extend over the upper or lower range of the data, it will automatically compensate by
adding a "Less" or "More" category for the outliers.
Descriptive Statistics
Column1
Mean                        3.578
Standard Error              0.081
Median                      3.600
Mode                        3.600
Standard Deviation          0.812
Sample Variance             0.660
Kurtosis                   -0.267
Skewness                   -0.223
Range                       3.600
Minimum                     1.700
Maximum                     5.300
Sum                       357.800
Count                     100.000
Largest(1)                  5.300
Smallest(1)                 1.700
Confidence Level(95.0%)     0.161
3.
The data (Prob. 10-3 in C10Data.xls found on the Premier website for this textbook)
represent the weight of castings (in kilograms) from a production line in the Fillmore
Metalwork foundry. Based on this sample of 100 castings, compute the mean, standard
deviation, and other relevant statistics, as well as a frequency distribution and histogram.
What do you conclude from your analysis?
Answer
3.
Descriptive statistics for the Fillmore Metalwork foundry are shown below.
Descriptive Statistics
Column1
Mean
Standard Error
Median
Mode
Standard Deviation
Sample Variance
Kurtosis
Skewness
Range
Minimum
Maximum
Sum
Count
Largest(1)
Smallest(1)
Confidence Level(95.0%)
38.654
0.031
38.600
38.600
0.306
0.094
4.112
-0.324
2.300
37.300
39.600
3865.400
100.000
39.600
37.300
0.061
The conclusion that can be reached from looking at the summary statistics and the
histogram is that these data are not strongly normally distributed; the kurtosis is high
(4.112) and the skewness is slightly negative (-0.324). More detailed data can be found
in the Excel solution spreadsheet P10-3&4.xls.
4.
The data (Prob. 10-4 in C10Data.xls found on the Premier website for this textbook)
show the weight of castings (in kilograms) being made in the Fillmore Metalwork
foundry and were taken from another production line. Compute the mean, standard
deviation, and other relevant statistics, as well as a frequency distribution and histogram.
Based on this sample of 100 castings, what do you conclude from your analysis?
Answer
4.
Mean                       38.6270
Standard Error              0.0474
Median                     38.6000
Mode                       38.7000
Standard Deviation          0.4737
Sample Variance             0.2244
Range                       2.6000
Minimum                    37.3000
Maximum                    39.9000
Sum                      3862.7000
Count                     100.0000
The conclusion that can be reached from looking at the summary statistics and the
histogram is that these data are not normally distributed, due to pronounced skewing to
the right. More detailed data can be found in the Excel solution spreadsheet P10-3&4.xls.
5.
San Juan Green Tea is sold in 1/2 liter (500 milliliter) bottles. The standard deviation for
the filling process is 10 milliliters. If the process requires a 1 percent, or smaller,
probability of over-filling, defined as over 495 milliliters, what must the target mean for
the process be?
Answer
5.
For San Juan Green Tea's bottling process, the values for the 1% cutoff and the standard
deviation are:

x = 495 ml; σ = 10 ml

For a total probability of 1% of overfilling, the upper-tail area is 0.5000 - 0.4900 = 0.01,
which corresponds to z = 2.33:

z = (x - μ)/σ

2.33 = (495 - μ)/10

μ = 495 - 23.3 = 471.7 ml
The process mean should be 471.7 ml., so that there is only a 1% probability of
overfilling.
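As a cross-check, the same target mean can be computed with Python's standard-library NormalDist; using the exact z for a 1% upper tail (about 2.326) rather than the rounded table value 2.33 still gives 471.7 ml.

```python
# Sketch: solving for the target fill mean given an allowed overfill
# probability, using the standard library's NormalDist.
from statistics import NormalDist

sigma = 10.0         # filling standard deviation, ml
limit = 495.0        # overfill threshold, ml
p_overfill = 0.01    # allowed probability of exceeding the limit

z = NormalDist().inv_cdf(1 - p_overfill)   # upper-tail z, about 2.326
target_mean = limit - z * sigma            # about 471.7 ml
```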
6.
Texas Punch was made by Frutayuda, Inc. and sold in 12-ounce cans to benefit victims
of Hurricane Ike. The mean number of ounces placed in a can by an automatic fill pump
is 11.8 with a standard deviation of 0.12 ounce. Assuming a normal distribution, what is
the probability that the filling pump will cause an overflow in a can, that is, the
probability that more than 12 ounces will be released by the pump and overflow the can?
Answer
6.
For cans of Texas Punch the mean, = 11.8; the standard deviation, = 0.12
z = (x - μ)/σ = (12 - 11.8)/0.12 = 1.667 ≈ 1.67

P(x > 12) = 0.5000 - P(0 < z < 1.67) = 0.5000 - 0.4525 = 0.0475
Thus, there is a 4.75% probability of an overflow.
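The probability can be checked directly with Python's standard-library NormalDist; the small difference from 0.0475 comes from rounding z to 1.67 for the table lookup.

```python
# Sketch: overflow probability computed directly, rather than read from
# the printed normal table.
from statistics import NormalDist

fill = NormalDist(mu=11.8, sigma=0.12)   # fill volume distribution, ounces
p_overflow = 1 - fill.cdf(12.0)          # P(x > 12), about 0.0478
```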
14
Kiwi Blend is sold in 900 milliliter (ml) cans. The mean volume of juice placed in a can is
876 ml with a standard deviation of 12 ml. Assuming a normal distribution, what is the
probability that the filling machine will cause an overflow in a can, that is, the probability
that more than 900 ml will be placed in the can?
Answer
7.
The mean for the Kiwi Blend product is μ = 876 ml; the standard deviation is σ = 12 ml;
x = 900 ml.

z = (x - μ)/σ = (900 - 876)/12 = 2.0

P(x > 900) = 0.5000 - P(0 < z < 2.0) = 0.5000 - 0.4772 = 0.0228

Thus, there is a 2.28% probability of an overflow.
8.
Wayback Beer bottles have been found to have a standard deviation of 5 ml. If 95
percent of the bottles contain more than 250 ml, what is the average filling volume of the
bottles?
Answer
8.
Given that the standard deviation for Wayback Beer is σ = 5 ml, x = 250 ml, and
P(x > 250) = 0.95.

Since z = -1.645 for a lower-tail area of 0.05:

-1.645 = (250 - μ)/5

μ = 250 + 1.645(5) = 258.225 ml

(Note that the mean must lie above 250 ml if 95 percent of the bottles contain more
than 250 ml.)
9.
The mean filling weight of salt containers processed by the Piedra Salt Co. is 15.5
ounces. If 2 percent of the containers contain more than 16 ounces, what is the standard
deviation of the filling weight of the containers?
Answer
9.
Given that the process mean filling weight is = 15.5 oz. for the
Piedra salt containers,
By looking up the area 0.5000 - 0.0200 = 0.4800 in the normal table, we find z = 2.05.

z = 2.05 = (16 - 15.5)/σ

σ = 0.5/2.05 = 0.2439 oz.
(Results are based on the Standard Normal Distribution Table,
Appendix A)
10.
In filling bottles of E&L Cola, the average amount of over- and under-filling has not yet
been precisely defined, but should be kept as low as possible. If the mean fill volume is
11.9 ounces and the standard deviation is 0.05 ounce:
a) What percentage of bottles will have more than 12 ounces (overflow)?
b) Between 11.9 and 11.95?
c) Less than 11.83 ounces?
Answer
10.
The mean value for E&L Cola, in ounces is: = 11.9; the standard deviation, = 0.05
a) z = (12.0 - 11.9)/0.05 = 2.0

P(x > 12.0) = 0.5000 - P(0 < z < 2.0) = 0.5000 - 0.4772 = 0.0228

Thus, 2.28% of the bottles will overflow.

b) z = (11.95 - 11.9)/0.05 = 1.0

P(11.9 < x < 11.95) = P(0 < z < 1.0) = 0.3413

Thus, 34.13% will be filled between 11.9 and 11.95 ounces.

c) z = (11.83 - 11.9)/0.05 = -1.4

P(x < 11.83) = 0.5000 - P(-1.4 < z < 0) = 0.5000 - 0.4192 = 0.0808

Thus, 8.08% will be filled to less than 11.83 ounces.
(Results are based on the Standard Normal Table, Appendix A)
11.
The frequency table for Prob. 10-11 found in C10Data.xls on the Premier website shows
the weight of castings (in kilograms) being made in the Fillmore Metalwork foundry
(also see Prob 10-3, in C10Data.xls, for raw data).
a. Based on this sample of 100 castings, find the mean and standard deviation of
the sample. (Note: If only the given data are used, it will be necessary to
research formulae for calculating the mean and standard deviations using
grouped data from a statistics text.)
b. Prepare and use an Excel spreadsheet, if not already done for Problem 10-3,
to plot the histogram for the data.
c. Plot the data on normal probability paper to determine whether the
distribution of the data is approximately normal. (Note: The Regression tool in
Excel has a normal probability plot that may be used here.)
Answer
11.
From the Excel statistical package, we get:

Cell   Upper Cell Boundary   Frequency   Cumulative %
 1           37.5                6            6.0%
 2           37.8               10           16.0%
 3           38.1               38           54.0%
 4           38.4               27           81.0%
 5           38.7               10           91.0%
 6           39.0                6           97.0%
 7           39.3                1           98.0%
 8           39.6                2          100.0%
a) Using the cell midpoints x and frequencies f:

Cell   Midpoint x   Frequency f       fx        fx^2
 1        38.1            6         228.60     8709.66
 2        38.3            4         153.20     5867.56
 3        38.5           20         770.00    29645.00
 4        38.7           41        1586.70    61405.29
 5        38.9           16         622.40    24211.36
 6        39.1            7         273.70    10701.67
 7        39.3            4         157.20     6177.96
 8        39.5            2          79.00     3120.50
Total                 n = 100      3870.80   149839.00

x-bar = Σfx / n = 3870.80 / 100 = 38.708 (vs. 38.654 from the data in spreadsheet 10-3)

s = sqrt[(Σfx^2 - (Σfx)^2/n) / (n - 1)] = sqrt[(149839.00 - (3870.80)^2/100) / 99]
  = 0.2856 (versus 0.306 from the actual data in spreadsheet 10-3)
The formula for grouped data gives a close approximation of the statistics from
Excel, and from the actual data.
The normal probability plot and the histogram show that these data are
approximately normally distributed, with an R-squared value of 0.825.
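The grouped-data formulas used in part (a) can be reproduced with a short Python sketch, using the midpoints and frequencies from the table above.

```python
# Sketch: grouped-data mean and standard deviation for the casting data.
midpoints = [38.1, 38.3, 38.5, 38.7, 38.9, 39.1, 39.3, 39.5]
freqs = [6, 4, 20, 41, 16, 7, 4, 2]

n = sum(freqs)                                           # 100
sum_fx = sum(f * x for f, x in zip(freqs, midpoints))    # 3870.80
sum_fx2 = sum(f * x * x for f, x in zip(freqs, midpoints))

mean = sum_fx / n                                        # 38.708
s = ((sum_fx2 - sum_fx ** 2 / n) / (n - 1)) ** 0.5       # about 0.2856
```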
12.
The frequency table for Prob. 10-12 found in C10Data.xls on the Premier website shows
the weight of another set of castings (in kilograms) being made in the Fillmore
Metalwork foundry (also see Prob. 10-4, in C10Data.xls, for raw data).
a. Based on this sample of 100 castings, find the mean and standard deviation of the
sample. (Note: If only the given data are used, it will be necessary to research formulae
for calculating the mean and standard deviations using grouped data from a statistics
text.)
b. Prepare and use an Excel spreadsheet, if not already done for Problem 10-4, to plot
the histogram for the data.
c. Plot the data on normal probability paper to determine whether the distribution of the
data is approximately normal. (Note: The Regression tool in Excel has a normal
probability plot that may be used here.)
Answer
12.
Cell   Upper Cell Boundary   Frequency   Cumulative %
 1           37.5                1            1.00%
 2           37.8                3            4.00%
 3           38.1               10           14.00%
 4           38.4               24           38.00%
 5           38.7               27           65.00%
 6           39.0               16           81.00%
 7           39.3               13           94.00%
 8           39.6                4           98.00%
 9           39.9                2          100.00%
When calculated using a hand calculator, you may obtain the following, using the
grouped data formula:

Cell   Adj. Cell Midpoint   Frequency       fx        fx^2
 1          37.35                1         37.35     1395.02
 2          37.65                1         37.65     1417.52
 3          37.95                6        227.70     8641.22
 4          38.25               11        420.75    16093.69
 5          38.55               26       1002.30    38638.67
 6          38.85               24        932.40    36223.74
 7          39.15               18        704.70    27589.01
 8          39.45                9        355.05    14006.72
 9          39.75                2         79.50     3160.13
10          40.05                2         80.10     3208.01
Total                       n = 100      3877.50   150373.71
a) x-bar = Σfx / n = 3877.50 / 100 = 38.775 (vs. 38.627 from the actual data in
spreadsheet 10-4)

s = sqrt[(Σfx^2 - (Σfx)^2/n) / (n - 1)] = sqrt[(150373.71 - (3877.50)^2/100) / 99]
  = 0.4887 (versus 0.4737 from the actual data in spreadsheet 10-4)
The formula for grouped data gives a close approximation of the statistics from
Excel, and from the actual data.
The normal probability plot and the histogram show that these data are
approximately normally distributed, with an R-squared value of 0.947.
13.
In a filling line at A & C Foods, Ltd., the mean fill volume for rice bubbles is 325 grams
and the standard deviation is 20 grams. What percentage of containers will have less
than 295 grams? More than 345 grams (assuming no overflow)?
Answer
13.
For the A & C Foods, Ltd. rice bubble filling process, the mean is μ = 325 grams and the
standard deviation is σ = 20 grams.

z = (x - μ)/σ = (295 - 325)/20 = -1.5

P(x < 295) = 0.5000 - P(-1.5 < z < 0) = 0.5000 - 0.4332 = 0.0668

Thus, 6.68% of containers will have less than 295 grams.

z = (345 - 325)/20 = 1.0

P(x > 345) = 0.5000 - P(0 < z < 1.0) = 0.5000 - 0.3413 = 0.1587

Thus, 15.87% of containers will have more than 345 grams.
14.
Tessler Electric utility requires service operators to answer telephone calls from
customers in an average time of 0.1 minute or less. A sample of 30 actual operator times
was drawn, and the results are given in the following table. In addition, operators are
expected to determine customer needs and either respond to them or refer the customer
to the proper department within 0.5 minute. Another sample of 30 times was taken for
this job component and is also given in the table. If these variables can be considered to
                Mean Time   Standard Deviation
Answer time       0.1023        0.0183
Service time      0.5290        0.0902
Answer
14.
0.1023, s1 = 0.0183
t1
s/ n
0.1023 0.10
0.0183 / 30
0.0023
0.697 , t29, .05 = 1.699
0.0033
= 0.5290, s2 = 0.0902
Because t29, .05 = 1.699, we cannot reject the null hypothesis for t1, but we can reject the
hypothesis for t2 . Therefore, there is no statistical evidence that the mean response time
exceeds 0.10 for the answer component, but the statistical evidence does support the
service component.
Note: Problems 1518 address sample size determination and refer to theory covered in
the Bonus Material folder for this chapter as contained on the Premier website.
15.
You are asked by the owner of the Moonbow Motel to develop a customer satisfaction
survey to determine the percentage of customers who are dissatisfied with service. In the
past year, 10,000 customers were serviced. She desires a 95 percent level of confidence
with an allowable statistical error of 0.01. From past estimates, the manager believes
that about 3.5 percent of customers have expressed dissatisfaction. What sample size
should you use for this survey?
Answer
15.
The size of the population is irrelevant to this customer satisfaction survey, although it is
good to know that it is sizable. Therefore, make the following calculations:
n = (z_α/2)^2 p(1 - p) / E^2 = (1.96)^2 (0.035)(0.965) / (0.01)^2 = 1297.5, use 1298
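The sample-size formula can be packaged as a small helper; this is a sketch, with z = 1.96 corresponding to the 95 percent confidence level.

```python
# Sketch: sample size for estimating a proportion,
# n = z^2 * p * (1 - p) / E^2, rounded up to the next whole unit.
import math

def sample_size(z, p, error):
    return math.ceil(z ** 2 * p * (1 - p) / error ** 2)

n = sample_size(z=1.96, p=0.035, error=0.01)   # 1298
```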
16.
Determine the appropriate sample size needed to estimate the proportion of sorting
errors at the Puxatawney post office at a 99 percent confidence level. Historically, the
sorting error rate is 0.015, and you wish to have an allowable statistical error of 0.02.
Answer
16.
The sample size for the proportion of sorting errors at a post office, using a 99%
confidence level is:
n = (z_α/2)^2 p(1 - p) / E^2 = (2.33)^2 (0.015)(0.985) / (0.02)^2 = 200.53, use 201
17.
A management engineer at Country Squire Hospital determined that she needs to make a
work sampling study to see whether the proportion of idle time in the diagnostic imaging
department had changed since being measured in a previous study several years ago. At
that time, the percentage of idle time was 8 percent. If the engineer can only take a
sample of 850 observations due to cost factors, and can tolerate an allowable error of
0.02, what percent confidence level can be obtained from the study?
Answer
17.
Using the formula n = (z_α/2)^2 p(1 - p) / E^2, the engineer at the Country Squire
Hospital can solve for z_α/2 as follows:

850 = (z_α/2)^2 (0.08)(0.92) / (0.02)^2

850 = (z_α/2)^2 (184)

(z_α/2)^2 = 850 / 184 = 4.620

z_α/2 = 2.149

A z value of about 2.15 corresponds to roughly a 96.8 percent confidence level.
Answer
18.
19/200 = 0.095
Thus, Localtel can be more than 90% confident of their results by using a sample size of
200, so there is no need to take more samples in order to meet the required sample
size.
19.
Using the Discovery Sampling table found in the Bonus materials on the Premier
website, suppose that a population consists of 2,000 units. The critical rate of
occurrence is 1 percent, and you wish to be 99 percent confident of finding at least one
nonconformity. What sample size should you select?
Answer
19.
Using the Discovery Sampling table (in the Sample Size Determination file) in the Bonus
materials on the Premier website, for a population of 2,000, reading off the sample size
required for a critical occurrence rate of 1% at a 99% confidence level yields a sample
size of approximately 400 (use 98.9% confidence, with a critical rate of 1%).
20.
The process engineer at Sival Electronics was trying to determine whether three
suppliers would be equally capable of supplying the mounting boards for the new gold
plated components that she was testing. The table found in Prob. 10-20 on C10Data.xls
on the Premier website shows the coded defect levels for the suppliers, according to the
finishes that were tested. Lower defect levels are preferable to higher levels. Using
one-way ANOVA, analyze these results. What conclusion can be reached, based on these
data?
Answer
20.
The process engineer at Sival Electronics can develop a one-way ANOVA spreadsheet
(see spreadsheet 10-20 for details) that shows:
            Supplier 1   Supplier 2   Supplier 3
Finish 1       11.9          6.8         13.5
Finish 2       10.3          5.9         10.9
Finish 3        9.5          8.1         12.3
Finish 4        8.7          7.2         14.5
Finish 5       14.2          7.6         12.9
SUMMARY

Groups        Count    Sum     Average   Variance
Supplier 1      5      54.6     10.92      4.762
Supplier 2      5      35.6      7.12      0.697
Supplier 3      5      64.1     12.82      1.812

ANOVA

Source of Variation      SS        df      MS         F       P-value   F crit
Between Groups          84.233      2    42.117    17.377    0.00029    3.885
Within Groups           29.084     12     2.424
Total                  113.317     14
According to the F-test (P = 0.00029 < 0.05), there is a significant difference among the
suppliers. Supplier 2, with the lowest average defect level (7.12), appears to be the best
choice.
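The ANOVA table can be verified directly from the data with a pure-Python sketch, without Excel.

```python
# Sketch: one-way ANOVA F statistic for the three suppliers, computed
# from first principles to verify the spreadsheet output.
data = {
    "Supplier 1": [11.9, 10.3, 9.5, 8.7, 14.2],
    "Supplier 2": [6.8, 5.9, 8.1, 7.2, 7.6],
    "Supplier 3": [13.5, 10.9, 12.3, 14.5, 12.9],
}

groups = list(data.values())
n_total = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / n_total

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1         # 2
df_within = n_total - len(groups)    # 12
f_stat = (ss_between / df_between) / (ss_within / df_within)   # about 17.38
```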
21.
The process engineer at Sival Electronics is also trying to determine whether a newer,
more costly design involving a gold alloy in a computer chip is more effective than the
present, less expensive silicon design. She wants to obtain an effective output voltage at
both high and low temperatures, when tested with high and low signal strength. She
hypothesizes that high signal strength will result in higher voltage output, low
temperature will result in higher output, and the gold alloy will result in higher output
than the silicon material. She hopes that the main and interaction effects with the
expensive gold will be minimal. The data found in Prob. 10-21 on C10Data.xls on the
Premier website were gathered in testing all 2^3 = 8 factor combinations. What
recommendation would you make, based on these data?
Answer
21.
Signal   Material   Temperature   Output Voltage
High     Gold       Low                 18
High     Gold       High                12
High     Silicon    Low                 16
High     Silicon    High                10
Low      Gold       Low                  8
Low      Gold       High                11
Low      Silicon    Low                  7
Low      Silicon    High                14
The process engineer at Sival Electronics can calculate the main effects as follows:
Signal
High: (18 + 12 + 16 + 10)/4 = 14
Low: (8 + 11 + 7 + 14)/4 = 10
High - Low = 14 - 10 = 4

Material
Gold: (18 + 12 + 8 + 11)/4 = 12.25
Silicon: (16 + 10 + 7 + 14)/4 = 11.75
Gold - Silicon = 12.25 - 11.75 = 0.5

Temperature
Low: (18 + 16 + 8 + 7)/4 = 12.25
High: (12 + 10 + 11 + 14)/4 = 11.75
Low - High = 12.25 - 11.75 = 0.5
The main effect of signal strength (high vs. low) far outweighs the effects of material
and temperature, indicating that those factors are relatively insignificant. Because the
costly gold alloy shows only a 0.5 advantage over silicon, the less expensive silicon
design appears adequate.
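The main-effect arithmetic above can be expressed compactly in code; each run is a (signal, material, temperature, output voltage) tuple from the table.

```python
# Sketch: main effects for the 2^3 design, computed from the run data.
runs = [
    ("High", "Gold", "Low", 18), ("High", "Gold", "High", 12),
    ("High", "Silicon", "Low", 16), ("High", "Silicon", "High", 10),
    ("Low", "Gold", "Low", 8), ("Low", "Gold", "High", 11),
    ("Low", "Silicon", "Low", 7), ("Low", "Silicon", "High", 14),
]

def level_mean(factor_index, level):
    """Average output voltage over the runs at one factor level."""
    vals = [r[3] for r in runs if r[factor_index] == level]
    return sum(vals) / len(vals)

signal_effect = level_mean(0, "High") - level_mean(0, "Low")        # 4.0
material_effect = level_mean(1, "Gold") - level_mean(1, "Silicon")  # 0.5
temp_effect = level_mean(2, "Low") - level_mean(2, "High")          # 0.5
```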
SUGGESTIONS FOR PROJECTS, ETC.
1.
This project will show that there are different ways to interpret
Deming's Red Bead experiment, statistically.
2.
3.
This case study shows Deming's "Red Bead" Experiment in action. Drivers are being
blamed for conditions that are not under their control. The problem could be addressed
by process measurement, eliminating "special causes" and reducing common causes.
2.
A run chart would appear to be a way to begin to understand the process and to
determine if it is in control or not. Based on the available data, we have:
Center Line (average) for the chart = 240/40 = 6.0 mistakes
The data show that 12 drivers have exceeded the average. Also, 6 drivers had "no
defects". A Pareto chart (see spreadsheet disccase.xls for details) shows that only 8
drivers have more than 10 errors. The characteristics of drivers who are having difficulty
should be examined to explain their higher error rates. Are their errors far above normal,
or just a little above? Are they well trained? Are they overworked, with more than the
average number of difficult orders? Do they have poor equipment? A useful control chart
cannot be established, unless "special causes" are dealt with. Analysis should also be
done to determine what the good drivers are doing right. Are they more experienced,
drive newer cars, have better hearing and vision, etc.?
After corrections, a new chart (called a "c-chart" and discussed in Chapter 12) on the
stable process can be set up. Then those who consistently do well can be rewarded and
the performance of those who have an unsatisfactory level of errors can be improved.
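As a sketch of what that Chapter 12 c-chart would look like, the conventional 3-sigma c-chart limits can be computed from the 6.0-error average calculated above; the limit formula is the standard one for a c-chart, assumed here rather than taken from this chapter.

```python
# Sketch: standard c-chart control limits for the driver-error data.
c_bar = 240 / 40                          # average errors per driver = 6.0
ucl = c_bar + 3 * c_bar ** 0.5            # upper control limit, about 13.35
lcl = max(0.0, c_bar - 3 * c_bar ** 0.5)  # lower limit, floored at zero
```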
[Chart: Package Errors by Driver — errors per driver plotted against driver number, with
the average (6.0 errors) shown as a reference line.]
Understanding processes provides the context for determining the effects of variation
and the proper type of managerial action to be taken. By viewing work as a process, we
can apply statistical tools to establish consistent, predictable processes, study them, and
improve them. While variation exists everywhere, many business decisions do not often
account for it, and managers frequently confuse common and special causes of variation.
Those inside and outside the pharmacy must understand the nature of variation, before
they can focus on reducing it.
The complex interactions of these variations in drugs, equipment, computer systems,
professional, clerical, and technical staff, and the environment are not easily understood.
Variation due to any of these individual sources could be random; individual sources may
not be identified or explainable. However, their combined effect in the pharmaceutical
system is probably stable and might be predicted statistically. These common causes of
variation that are present as a natural part of the process need to be understood before
special causes can be separated and eliminated.
To address the problem, Dover should consider using the following steps: