ANOVA stands for 'Analysis of Variance'. It analyzes the variation in the means of different groups of a population, or of different populations. It is an extension of the t-test: while the t-test compares two means, ANOVA can compare more than two.
What does an ANOVA do?
It studies whether the variation between group means is due to an effect/treatment or is just chance variation. It compares the 'Between Group Variation' with the 'Within Group Variation'. If the treatment has a significant effect, the 'Between Group Variation' will be significantly higher than the 'Within Group Variation'.
How to do an ANOVA test?
Assume an educational institute wants to check whether different modes of education (visual-aided teaching, practical learning, and self-learning through the library and internet) have an impact on students' performance. The management decides to assign 20 students to each teaching method. Their performance is evaluated with an examination at the end of the treatment; the scores are collected and the mean score for each method is computed.
The ANOVA method is used to find out whether there is a difference between the mean values of the three groups.
As in any other hypothesis test, the hypotheses for ANOVA are:
Null hypothesis: The means of all three methods are equal.
Alternate hypothesis: The mean of at least one method differs significantly.
1. Calculate the Sum of Squares 'Between the Groups', SSB.
2. Calculate the Sum of Squares 'Within the Groups', SSW.
3. Find the degrees of freedom 'Between the Groups': df1 = Number of groups - 1
4. Find the degrees of freedom 'Within the Groups': df2 = Number of groups x (Number of observations - 1)
5. Calculate the Mean Square value 'Between the Groups': MSB = SSB / df1
6. Calculate the Mean Square value 'Within the Groups': MSW = SSW / df2
7. Calculate the F value: F(actual) = MSB / MSW
8. Find from the F-table the F(critical) value for the given degrees of freedom.
9. Find the significance value, the 'p' value.
The table below summarizes how the calculations are performed and interpreted (k = number of groups, n = observations per group):

Source of Variation | Sum of Squares | Degrees of Freedom | Mean Square | F
Between groups | SSB | df1 = k - 1 | MSB = SSB / df1 | F = MSB / MSW
Within groups | SSW | df2 = k(n - 1) | MSW = SSW / df2 |
Total | SSB + SSW | df1 + df2 | |

If F(actual) exceeds F(critical), or equivalently if p is below the chosen significance level, the null hypothesis of equal means is rejected.
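As an illustration, the nine steps can be carried out in a few lines of Python; the exam scores below are invented for the three teaching methods, not data from the example:

import numpy as np
from scipy import stats

# One array of exam scores per teaching method (illustrative values only).
groups = [
    np.array([72, 75, 78, 71, 74]),  # visual-aided teaching
    np.array([80, 83, 79, 85, 82]),  # practical learning
    np.array([70, 68, 73, 69, 71]),  # self-learning
]

k = len(groups)                      # number of groups
n = len(groups[0])                   # observations per group (equal sizes)
grand_mean = np.mean(np.concatenate(groups))

# Steps 1-2: sums of squares between and within the groups.
ssb = sum(n * (g.mean() - grand_mean) ** 2 for g in groups)
ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)

# Steps 3-4: degrees of freedom.
df1 = k - 1
df2 = k * (n - 1)

# Steps 5-7: mean squares and the F value.
msb, msw = ssb / df1, ssw / df2
f_actual = msb / msw

# Steps 8-9: critical F at the 5 percent level, and the p-value.
f_critical = stats.f.ppf(0.95, df1, df2)
p_value = stats.f.sf(f_actual, df1, df2)

print(f"F = {f_actual:.2f}, F critical = {f_critical:.2f}, p = {p_value:.4f}")
print(stats.f_oneway(*groups))       # cross-check with SciPy's built-in ANOVA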
Most fairly accurate descriptions of equipment and/or process lifetimes assume that failure rates follow a three-period (I, II, III) "bathtub-curve pattern" where failures/errors:
I – Start high and decline during the early (infant-mortality or burn-in) period.
II – Remain relatively constant and at their lowest levels during the normal equipment or process operating period.
III – Rise again during the wear-out period as the equipment or process deteriorates.
Scientific studies of limit-based natural or complex growth patterns also suggest that many processes are well described by simple non-linear growth models of the kind discussed below.
The values of R in Table 2 are obtained by scaling the R values of Table 1 by 1/Vm = 1/9. For example, R = 2/9 = .222 is the super-stable growth factor and Rcr = 3.24/9 = .36 is the critical factor. In the case of Poisson-distributed processes, the expected number of occurrences C = NP (large N, small fraction P of occurrence) is both the variance and the mean of the distribution. A conditional Poisson process that conformed to this simple non-linear model would have the variance C(t+1) = R·Ct·(Cm - Ct) and would be stable in the growth rate range 1/Cm < R < 3/Cm, where Cm is the specified maximum number of occurrences. When Ct = Cm/2 and R = 2/Cm, the process is super-stable[1] and ideally Poisson, because the expected numbers of occurrences C0 = C1 = C2 = ... = Ct = C(t+1) remain constant and time-independent over the operating lifetime of the process. This condition of super-stability is analogous to "states of equilibrium" in statistical mechanics[2] and is illustrated by the line C(t+1) = Ct intersecting the quadratic map in Figure 1 above.
The hypothetical model is suggestive of an ideal, super-stable six sigma process with an expected Poisson failure number of C = 1.7 PPM (N = 10^6, P = 1.7 x 10^-6), a maximum failure number of Cm = 3.4 PPM and a growth factor of R = .60.
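A minimal sketch of iterating the model makes the super-stable case concrete; the values of Cm, R and the starting point follow the six sigma example above (R = 2/3.4 is about .59, quoted as .60):

# Iterate the non-linear variance model C(t+1) = R * Ct * (Cm - Ct).
Cm = 3.4          # specified maximum number of occurrences (PPM)
R = 2 / Cm        # super-stable growth factor, about .59
C = Cm / 2        # starting at Cm/2 = 1.7 gives the super-stable case

for t in range(5):
    C = R * C * (Cm - C)   # stays fixed at 1.7: C0 = C1 = C2 = ...
    print(f"C{t + 1} = {C:.4f}")

# For R outside the stable range 1/Cm < R < 3/Cm the iterates no longer
# settle; oscillation is the deterministic-chaos pattern described below.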
A real-world stable process would of course exhibit random fluctuations in variance and would not be strictly deterministic. However, as it ages or deteriorates and becomes unstable, some deterministic chaos may be present, evident in an oscillatory pattern of variance (e.g., machine tool wear). If a process is stable with a relatively constant variance and it meets requirements, then (in my opinion) it does not need to be fixed.
From <https://www.isixsigma.com/tools-templates/variation/simple-model-variance-stable-process/>
For several years, a fully-automated plastic drinking cup production line used excessive amounts of raw materials
(plastic PET pellets) due to a wide distribution in the weight of the formed cups. When process operators and
engineers had tried to reduce the plastic pellet usage by reducing the average formed cup weight, many cups –
because of the wide variation – fell below the customer-specified minimum weight. The process thus had to be
reset to a higher weight target in order to avoid those out-of-specification cups. A previous process improvement
team attempted to find the sources of variation through some data collection and a couple of two-factor/two-level
full-factorial experiments. They were unsuccessful, however, as the factors used in the experiments did not
explain the response variation.
The Problem
The automated cup line has an average cup weight of 24.5 grams, which is 1.5 grams higher than the target of 23
grams (also the lower specification limit [LSL]) for an individual cup's weight. To avoid low-weight cup failures, the
operators usually raise the target cup weight average, increasing the amount of resin use. An additional 260,198
pounds of resin is used annually with a cost of poor quality (COPQ) of $195,148. Figure 1 shows the current
output of cup weights over 30 days (3 shifts per day).
As part of the Define phase, a SIPOC (suppliers, input, process, output, customers) map was created (Figure 2). The production line consists of:
• A plastic-pellet extruder fed with virgin PET resin pellets, colorant and regrind (scrapped plastic cups that are reground and fed back into the process). The extruder mixes all of them and supplies a constant plastic paste.
• A chilled stainless steel hard-chromed roller system that creates a wide plastic sheet.
• A beta-ray scanner that continuously monitors the thickness of the plastic sheet and also provides a closed-loop
control to the extruder and roller system.
• A wide, flat infrared oven that reheats the plastic sheet to a specific target temperature.
• A 72-cavity thermoforming mold that receives the heated plastic sheet and stamps out 72 cups at each press
stroke (also known as a mold shot).
• A 72-position puncher that cuts the cups from the formed plastic sheet (called webbing) and presents the separated cups in stacks to a conveyor system.
• An automatic box filler that takes the stacks of cups from the conveyor and fills up boxes with stacks of 20 cups.
During the early brainstorming sessions of the project team, changes such as mold temperature increases and mold cavity (plug assist) replacements were suggested – and implemented – but the cup-weight distribution remained the same. The team decided to get back to basics, and a multi-vari study was initiated.
Multi-Vari Studies
Multi-vari studies make no changes to the process being studied; they do, however, require the use of detailed process and product data in order to distinguish, by category, the source or sources of variation. As graphical tools, multi-vari studies help identify where or when the biggest source of variation occurs. The variation categories can be grouped as: time to time, lot to lot, piece to piece, within piece, shift to shift, operator to operator, etc.
Data collection is designed to include all the suspected sources of variation, graphed against the output variable Y.
The graph shown in Figure 3 is an example of a multi-vari study with group-to-group (A, B or C) variation.
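As a hedged sketch of the idea (the column names and weights are invented, not the article's data), the category breakdown can be tabulated directly from the collected samples:

import pandas as pd

# Each row is one sampled cup, tagged with its suspected variation categories.
df = pd.DataFrame({
    "weight": [24.1, 24.6, 23.9, 25.0, 23.2, 24.8, 23.5, 24.9],
    "shift":  ["A", "A", "B", "B", "A", "A", "B", "B"],
    "cavity": [1, 2, 1, 2, 1, 2, 1, 2],
})

# A category whose group means differ far more than the within-group
# spread is a likely "where or when" of the biggest source of variation.
for factor in ["shift", "cavity"]:
    print(df.groupby(factor)["weight"].agg(["mean", "std"]), "\n")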
In the example of the plastic cups, the analysis of the sampled data showed that the high variation was always present, with no correlation to time-related categories. Next, the team looked toward positional variation, a particular type of multi-vari study.
After a few samples, it was clear that the cavity position inside the mold was the biggest source of variation, with product coming from one side of the mold running consistently below the shot average.
By creating a surface map using the average weights produced in each individual cavity, the row-to-row differences
across the mold were clear. As displayed in Figure 7, rows 1 through 4 have higher average weights than rows 7
through 9.
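A surface map of this kind can be sketched as below; the 9-row-by-8-column cavity layout and the weight values are assumptions for illustration (the article states only that there are 72 cavities and that rows 7 through 9 run lighter):

import numpy as np
import matplotlib.pyplot as plt

# Simulated cavity-average weights: later rows run lighter, as in the study.
rng = np.random.default_rng(0)
rows, cols = 9, 8   # assumed layout giving 72 cavities
cavity_means = (24.5 - 0.15 * np.arange(rows)[:, None]
                + rng.normal(0, 0.05, (rows, cols)))

plt.imshow(cavity_means, cmap="viridis")
plt.colorbar(label="average cup weight (g)")
plt.xlabel("cavity column")
plt.ylabel("mold row")
plt.title("Cavity-average weight surface map")
plt.show()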
Team members were left to examine the infrared oven and the mold itself. Previous work had included looking at the thermoforming electrical heaters and thermocouples, but these had shown no critical issues. This time, the team decided to disassemble and inspect the infrared oven entirely, looking for any clues as to the cause of the uneven weight distribution.
Upon inspecting the electrical components, team members found nothing wrong. The physical review, however,
found that a section of the oven had a gap between the oven and the mold interface, which allowed heat to
escape.
Figure 11: Cup Gram Weight (with Cp) After Flushing Vacuum Lines
A standard three-factor full-factorial design of experiments (DOE) was set up with the factors of oven temperature, plastic sheet thickness and plastic pellet regrind. The analysis of the experiment data resulted in a good R-squared of 97.87 percent, with sheet thickness and regrind set point as strong contributors to the overall variation.
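A 2^3 full factorial of the kind described can be set up and fit in a few lines; the run order, coding and response values here are invented, not the team's data:

import itertools
import numpy as np

# Eight runs of a 2^3 full factorial, factors coded -1/+1:
# oven temperature, sheet thickness, regrind level.
runs = np.array(list(itertools.product([-1, 1], repeat=3)))
temp, thick, regrind = runs.T
weight = np.array([23.1, 23.3, 24.2, 24.6, 23.2, 23.4, 24.4, 24.9])

# Ordinary least squares fit of the main-effects model.
X = np.column_stack([np.ones(len(runs)), temp, thick, regrind])
coef, resid, *_ = np.linalg.lstsq(X, weight, rcond=None)

r_squared = 1 - resid[0] / ((weight - weight.mean()) ** 2).sum()
print("effects (temp, thickness, regrind):", coef[1:].round(3))
print(f"R-squared = {r_squared:.4f}")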
Further DOE work, holding the oven temperatures fixed and working with sheet thickness and regrind levels, allowed the team to establish optimal input control parameters. The average weight was reduced to the 24-gram target with none (or very few) cups going under the 23-gram LSL. The original project goal of reducing raw material usage was achieved, with a savings of $100,000. Accordingly, the process Cp was increased to greater than the targeted minimum of 1.5.
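For reference, with only the 23-gram lower limit stated here, a one-sided capability index (Cpk against the LSL, rather than the two-sided Cp the article reports) can be checked like this on simulated data:

import numpy as np

# Simulated post-improvement cup weights centered on the 24 g target.
weights = np.random.default_rng(1).normal(24.0, 0.2, size=300)

# Lower-sided capability: Cpk = (mean - LSL) / (3 * sigma).
cpk_lower = (weights.mean() - 23.0) / (3 * weights.std(ddof=1))
print(f"Cpk (lower) = {cpk_lower:.2f}")  # above 1.5 indicates a capable process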
Today the cup-weight surface map is more even across the mold, with a tighter distribution. The low points, located at the front corners (Figure 12), cannot be improved without redesigning the mold and reducing its size from 72 to 60 cavities. This change was considered, but it would have resulted in a 14 percent reduction in productivity; it was not pursued.
From <https://www.isixsigma.com/tools-templates/variation/reduce-special-cause-variation-before-experimentation/>