INDEX

Literature Review
1 Introduction
1.1 Definitions of Quality
1.2 Taguchi Definition of Quality
2 Taguchi Loss Function
2.1 Taguchi's Quality Philosophy
3 Objective of Taguchi Methods
3.1 Eight Steps in Taguchi Methodology
4 Quadratic Loss Function
4.1 Variation of Quadratic Loss Function
5 Signal to Noise Ratio
6 Case Study I
7 Orthogonal Arrays
8 Case Study II
9 Conclusion and Future Scope
9.1 Conclusion
9.2 Future Scope
10 References
LITERATURE REVIEW

5. K. Sudhakar, P. M. Mujumdar, "Robust Design", 2002
Table/Graph Number: Title of Table/Graph

G1: Traditional Quality Definition
G8: Asymmetric Loss Function
T3: Orthogonal Array
T4: Orthogonal Arrays
T5: 2³ Factorial Design
T6: 2⁴ Factorial Design
1. INTRODUCTION
When Japan began its reconstruction efforts after World War II, Japan faced an acute
shortage of good quality raw material, high quality manufacturing equipment and skilled
engineers. The challenge was to produce high quality products and continue to improve
quality product under these circumstances.
The task of developing a methodology to meet the challenge was assigned to Dr. Genichi Taguchi, who at that time was the manager in charge of developing certain telecommunication products at the Electrical Communication Laboratories (ECL) of the Nippon Telegraph and Telephone Company (NTT).
Through his research in the early 1960s, Dr. Taguchi developed the foundations of orthogonal arrays and robust design, and validated his basic philosophies by applying them in the development of many products. He developed the concept of the quality loss function in the 1970s, and also published the third and most current edition of his book on experimental designs. Genichi Taguchi made many important contributions to the field of quality control during his lifetime, and the term "Taguchi methods" is coined after him. Dr. Taguchi received the individual Deming Award in 1962, one of the highest recognitions in the quality field.
Taguchi's techniques are very useful when a process must be so robust that it not only stays within the specifications but stays centred on the target. The underlying belief is that every time a process moves away from the target there is a loss to the customer, even if the process remains within the specifications. The method therefore focuses not only on the controllable factors, as in conventional design of experiments, but also considers the noise factors.
Although these definitions are all different, some common threads run through them.

Taguchi Definition: Quality is the loss imparted to society from the time a product is shipped. (G2)

This loss is quantified by the quadratic loss function

L(y) = k(y – m)²

where k = A/Δ², A being the loss incurred when the characteristic deviates from the target m by the tolerance Δ.
5. The final quality and cost of a manufactured product are determined to a large extent by the engineering designs of the product and its manufacturing process. This is so simple, and so true. The future belongs to companies that, once they understand the variability of their manufacturing processes using statistical process control, move their quality-improvement efforts upstream to product and process design.
The objective of Taguchi's efforts is process- and product-design improvement through the identification of easily controllable factors and their settings which minimize the variation in product response while keeping the mean response on target. By setting those factors at their optimal levels, the product can be made robust to changes in operating and environmental conditions. Thus, more stable and higher-quality products can be obtained, and this is achieved during Taguchi's parameter-design stage by removing the bad effect of the cause rather than the cause of the bad effect. Furthermore, since the method is applied in a systematic way at a pre-production stage (off-line), it can greatly reduce the number of time-consuming tests needed to determine cost-effective process conditions, thus saving costs and wasted products.
4. QUADRATIC LOSS FUNCTION

Since the financial loss is at a minimum at the target value m, the first derivative of the loss function with respect to y at this point must be zero:

L′(m) = 0 (2.2)

If one expands the loss function L(y) in a Taylor series around the target value m and takes Equations (2.1) and (2.2) into consideration, one gets the following:

L(y) = L(m) + L′(m)(y – m) + L″(m)(y – m)²/2! + …
     = L″(m)(y – m)²/2! + …

This result is obtained because the constant term L(m) and the first-derivative term L′(m) are both zero. In addition, the third-order and higher-order terms are assumed to be negligible. Thus, one can express the loss function as a squared term multiplied by a constant k:

L(y) = k(y – m)² (2.3)
When the deviation of the objective characteristic from the target value increases, the corresponding quality loss also increases. When the magnitude of the deviation is outside the tolerance specifications, the product should be considered defective.
Let the cost due to a defective product be A and the corresponding magnitude of the deviation from the target value be Δ. Substituting these into the right-hand side of Equation (2.3), one can determine the value of the constant k:

k = A/Δ²
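As a quick illustration with hypothetical numbers (not from the report): suppose a part must be scrapped at a cost of A = $4.00 once it deviates Δ = 0.5 mm from target. Then k = 4/(0.5)² = $16 per mm², and a part that is 0.25 mm off target carries an estimated loss of L = 16 × (0.25)² = $1.00.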
Consider the visitor to the BHEL Heavy Electric Equipment Company in India who was
told, “In our company, only one unit of product needs to be made for our nuclear power
plant. In fact, it is not really necessary for us to make another unit of product. Since the
sample size is only one, the variance is zero. Consequently, the quality loss is zero and it
is not really necessary for us to apply statistical approaches to reduce the variance of our
product.”
However, the quality loss function [L = k(y – m)²] is defined as the mean square deviation of objective characteristics from their target value, not the variance of products. Therefore, even when only one product is made, the corresponding quality loss can still be calculated by Equation (2.3).
calculated by Equation (2.3). Generally, the mean square deviation of objective
characteristics from their target values can be applied to estimate the mean value of
quality loss in Equation (2.3). One can calculate the mean square deviation from the target, σ² (σ² in this equation is not the variance about the mean; it is also called the mean square error), by the following equation:

σ² = [(y₁ – m)² + (y₂ – m)² + … + (yₙ – m)²]/n
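A minimal Python sketch (not from the report) of this calculation; the measurements, target and k below are made-up values, with k assumed to come from k = A/Δ²:

    def quality_loss(y, m, k):
        # Taguchi quadratic loss L(y) = k * (y - m)^2 for a single unit.
        return k * (y - m) ** 2

    def mean_square_deviation(ys, m):
        # Mean square deviation from the target m (not the variance about the mean).
        return sum((y - m) ** 2 for y in ys) / len(ys)

    ys = [10.2, 9.9, 10.1, 9.8]   # hypothetical measurements
    m, k = 10.0, 16.0             # hypothetical target and loss coefficient
    sigma2 = mean_square_deviation(ys, m)
    average_loss = k * sigma2     # average quality loss per unit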
(G5)

Nominal-the-best type:
Colour density of a television set and the output voltage of a power supply circuit are examples of the nominal-the-best type of quality characteristic.
Smaller-the-better type:
Some characteristics, such as the radiation leakage from a microwave oven, can never take negative values. Their ideal value is zero, and as their value increases the performance becomes progressively worse. Such characteristics are called smaller-the-better type quality characteristics. The response time of a computer, leakage current in electronic circuits, and pollution from an automobile are further examples of this type. The quality loss in such situations can be approximated by the following function, obtained from Equation (2.3) by substituting m = 0:

L(y) = ky²

This is a one-sided loss function because y cannot take negative values.
(G6)
Larger-the-better type:
Some characteristics, such as the bond strength of adhesives, also do not take negative values. But zero is their worst value, and as the value becomes larger the performance becomes progressively better; that is, the quality loss becomes progressively smaller. The ideal value is infinity, and at that point the loss is zero. Such characteristics are called larger-the-better type characteristics. Clearly, the reciprocal of such a characteristic has the same qualitative behaviour as a smaller-the-better characteristic. Thus, we approximate the loss function for a larger-the-better characteristic by substituting 1/y for y in the smaller-the-better loss function:

L(y) = k(1/y²)
(G7)
Asymmetric loss function:
In certain situations, deviation of the quality characteristic in one direction is much more harmful than in the other direction. In such cases, one can use a different coefficient k for each direction, and the quality loss is approximated by the following asymmetric loss function:

L(y) = k₁(y – m)², y > m
L(y) = k₂(y – m)², y ≤ m
(G8: Asymmetric loss function)
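A minimal Python sketch (not from the report) of the four loss-function variants above; the coefficients k, k1, k2 and the target m are assumed to be known, e.g. from k = A/Δ²:

    def nominal_the_best(y, m, k):
        # Loss grows quadratically with deviation from the target m.
        return k * (y - m) ** 2

    def smaller_the_better(y, k):
        # Special case of the above with m = 0; y is never negative.
        return k * y ** 2

    def larger_the_better(y, k):
        # Substitute 1/y for y in the smaller-the-better function.
        return k / y ** 2

    def asymmetric(y, m, k1, k2):
        # Different coefficient on each side of the target.
        return (k1 if y > m else k2) * (y - m) ** 2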
5. SIGNAL TO NOISE RATIO

In Taguchi designs, the signal-to-noise ratio is a measure of robustness used to identify control factors that reduce variability in a product or process by minimizing the effects of uncontrollable factors (noise factors). Control factors are those design and process parameters that can be controlled. Noise factors cannot be controlled during production or product use, but they can be controlled during experimentation. In a Taguchi designed experiment, you manipulate the noise factors to force variability to occur and, from the results, identify optimal control-factor settings that make the process or product robust, or resistant to variation from the noise factors. Higher values of the signal-to-noise ratio (S/N) identify control-factor settings that minimize the effects of the noise factors.
Taguchi experiments often use a two-step optimization process. In step 1, use the signal-to-noise ratio to identify those control factors that reduce variability. In step 2, identify the control factors that move the mean to target while having little or no effect on the signal-to-noise ratio.
The signal-to-noise ratio measures how the response varies relative to the nominal or target value under different noise conditions. You can choose from different signal-to-noise ratios depending on the goal of your experiment. For static designs, Minitab offers four signal-to-noise ratios, as shown in the figure.
The Nominal is Best (default) signal-to-noise ratio is useful for analyzing or identifying
scaling factors, which are factors in which the mean and standard deviation vary
proportionally. Scaling factors can be used to adjust the mean on target without affecting
signal-to-noise ratios.
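A minimal Python sketch (not from the report) of three commonly used static S/N ratios, following the standard Taguchi definitions:

    import math
    import statistics

    def sn_smaller_is_better(ys):
        # S/N = -10 log10(mean of y^2); a higher ratio means less quality loss.
        return -10 * math.log10(sum(y ** 2 for y in ys) / len(ys))

    def sn_larger_is_better(ys):
        # S/N = -10 log10(mean of 1/y^2).
        return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

    def sn_nominal_is_best(ys):
        # S/N = 10 log10(ybar^2 / s^2); rewards low variability relative to the mean.
        ybar = statistics.mean(ys)
        s2 = statistics.variance(ys)
        return 10 * math.log10(ybar ** 2 / s2)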
6. CASE STUDY I
Loss(y) = k(y – m)²

L = 40,000 (y – 0.0)²
Now the loss associated with any part can be computed depending on the value of its
diameter.
L = 40,000 (0.003 – 0.0)²
L = $0.36
This is the loss per unit for each part shipped with an outer diameter of +0.003 in.
Similarly, for the 11 parts in the sample with a diameter of –0.002 in, the cost incurred would be:
L(–0.002) = $40,000 (–0.002 – 0.0)²
          = $0.16 × 11
          = $1.76

Summing the losses for all parts in the sample and dividing by the sample size gives the average loss per part:

L = $10.56 / 100 parts = $0.106 per part
A second method of estimating the average loss per part uses the loss equation in a slightly modified form. Mathematically, this calculation is equivalent to using the average value of the (y – m)² portion of the loss equation:

L = k[(y₁ – m)² + (y₂ – m)² + … + (yₙ – m)²] / N

where N is the number of parts sampled.
If all the (y – m) values are squared, added together and divided by the number of items, the result is the desired value. For a large number of parts, the average loss per part is equivalent to

L = k[S² + (ȳ – m)²]

With ȳ = 0.0 (the distribution is centred on the target) and S² = 2.64 × 10⁻⁶, this gives

L = 40,000 [2.64 × 10⁻⁶ + (0.0 – 0.0)²] = $0.106 per part

This second method uses the general loss function for a nominal-is-best situation.
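A minimal Python sketch (not from the report) reproducing these numbers; the full 100-part sample is not given, so only the documented values are used:

    k = 40_000   # $ per in^2, from the case study
    m = 0.0      # target diameter deviation, in

    def loss(y):
        # Quadratic loss for a single part with measured deviation y.
        return k * (y - m) ** 2

    print(loss(0.003))         # ~= 0.36 -> $0.36 for a part at +0.003 in
    print(loss(-0.002) * 11)   # ~= 1.76 -> 11 parts at -0.002 in

    # Second method: average loss from the variance and offset of the distribution.
    S2, ybar = 2.64e-6, 0.0
    average_loss = k * (S2 + (ybar - m) ** 2)   # ~= $0.106 per part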
This example demonstrates the loss associated with a distribution that has a very good process-capability ratio and is centred on the target value of 0.0 in. Traditional quality control, which treats every part within the specifications as equally acceptable, would allow the size to vary across the whole tolerance range.
The loss function uses the average and variance of a group of parts to calculate loss. For a group of parts whose distribution average is continually increasing (as with tool wear) across a span W, the resultant variance is

S²_Resultant = S²_Original + W²/12

With W = 0.010 in:

S_R² = 2.64 × 10⁻⁶ + (0.010)²/12
     = 1.10 × 10⁻⁵

L = k[S² + (ȳ – m)²]
  = $0.44
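A short Python check (not from the report) of the tool-wear calculation; the W²/12 term is the variance of a uniform drift across the wear span, as the formula above implies:

    k, S2_original = 40_000, 2.64e-6

    def resultant_variance(W):
        # Uniform tool-wear drift over a span W adds W^2 / 12 to the variance.
        return S2_original + W ** 2 / 12

    W = 0.010                              # in
    S2_resultant = resultant_variance(W)   # ~= 1.10e-5
    average_loss = k * S2_resultant        # ~= $0.44 per part (mean on target)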
It is clear that the loss of a centred distribution is less than that of a distribution which traverses the specification range:

L1 < L2

But to obtain such a low loss, the machine would have to be adjusted after each part; in the second situation, the machine would not have to be adjusted nearly as often. Therefore, there must be a compromise between the loss function and adjustment costs.
The true optimum is at a tool-wear span of 0.005 in, but the additional loss of going up to a total span of 0.006 in is very small ($0.003), which is beyond the accuracy of the loss function. The adjustment width is therefore 0.006 in, i.e. +0.003 to –0.003 about the centred nominal value of 0.0 in; the optimum adjustment interval depends upon the scrap loss of a part.
This case study shows the economic penalty of allowing excessive variation in a product
or a process. Taguchi uses a very comprehensive economic approach to on-line
production quality control.
7. ORTHOGONAL ARRAYS
An array's name indicates the number of rows and columns it has, and also the number of levels in each of the columns. Thus the array L4 (2³) has four rows and three 2-level columns. Historically, related methods were developed for agriculture, largely in the UK, before the Second World War; Sir R. A. Fisher was particularly associated with this work.

Here the field area has been divided into rows and columns, and four fertilizers (F1–F4) and four irrigation levels are represented. Since all combinations are taken, sixteen "cells" or "plots" result.
The Fisher field experiment is a full factorial experiment, since all 4 × 4 = 16 combinations of the experimental factors, fertilizer and irrigation level, are included.
The number of combinations required may not be feasible or economic. To cut down the number of experimental combinations included, a Latin square design of experiment may be used. Here there are three fertilisers, three irrigation levels and three alternative additives (A1–A3), but only nine of the 3 × 3 × 3 = 27 combinations of the full factorial are included.
These are "pivotal" combinations, however, that still allow the identification of the best fertiliser, irrigation level and additive, provided that there are no serious non-additivities or interactions in the relationship between yield and these control factors. The property of Latin squares that corresponds to this is that each of the labels A1, A2, A3 appears exactly once in each row and column.

Orthogonality is represented as Σ xᵢ·xⱼ = 0 for every pair of columns i, j, with the levels coded high and low (+1, –1).
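A minimal Python sketch (not from the report) that checks this condition for the L4 (2³) array mentioned above, with levels coded as +1/–1:

    # One common form of the L4 (2^3) orthogonal array, levels coded +1 / -1.
    L4 = [
        [-1, -1, -1],
        [-1, +1, +1],
        [+1, -1, +1],
        [+1, +1, -1],
    ]

    def is_orthogonal(array):
        cols = list(zip(*array))
        # Every pair of columns must have a zero dot product.
        return all(
            sum(a * b for a, b in zip(cols[i], cols[j])) == 0
            for i in range(len(cols))
            for j in range(i + 1, len(cols))
        )

    print(is_orthogonal(L4))   # True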
8. CASE STUDY II
Consider a process for producing steel springs that is generating considerable scrap due to cracking after heat treatment. A study is planned to determine better operating conditions to reduce the cracking problem.
Experiment: Four runs at each level of T with C and O at their low levels
Factorial Approach:
- Include all factors in a balanced design: to increase the generality of the conclusions, use a design that involves all eight combinations of the three factors.
The treatments for the eight runs are given as under:
The main effect of each factor can be estimated as the difference between the average of the responses at its high level and the average of the responses at its low level. For example, for the O main effect:

Effect of O = (59 + 52 + 90 + 87)/4 – (67 + 61 + 79 + 75)/4 = 72.0 – 70.5 = 1.5

The apparent conclusion is that changing the oil temperature from 70 to 120 has little effect.
The use of the factorial approach also allows examination of two-factor interactions. For example, we can estimate the effect of factor O at each level of T.
At T = 1450
Avg. of responses with O as 70 = 64.0 = (67 + 61) / 2
Avg. of responses with O as 120 = 55.5 = (59 + 52) / 2
So the effect of O is 55.5 – 64 = -8.5
At T = 1600
Avg. of responses with O as 70 = 77.0 = (79 + 75) / 2
Avg. of responses with O as 120 = 88.5 = (90 + 87) / 2
So the effect of O is 88.5 – 77 = 11.5
The conclusion is that at T = 1450, increasing O decreases the average response by 8.5
whereas at T = 1600, increasing O increases the average response by 11.5.
That is, O has a strong effect but the nature of the effect depends on the value of T.
This is called interaction between T and O in their effect on the response.
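A small Python sketch (not from the report) reproducing the interaction calculation from the responses given above:

    # Responses from the case study, grouped by (T, O) combination.
    responses = {
        (1450, 70):  [67, 61],
        (1450, 120): [59, 52],
        (1600, 70):  [79, 75],
        (1600, 120): [90, 87],
    }

    def cell_mean(T, O):
        ys = responses[(T, O)]
        return sum(ys) / len(ys)

    for T in (1450, 1600):
        # Effect of O at this level of T: high-O mean minus low-O mean.
        effect_of_O = cell_mean(T, 120) - cell_mean(T, 70)
        print(T, effect_of_O)   # 1450 -> -8.5, 1600 -> 11.5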
It is convenient to summarize the four averages corresponding to the four combinations of
T and O in a table:
When an interaction is present the lines on the plot will not be parallel. When an
interaction is present the effect of the two factors must be considered simultaneously.
The lines are added to the plot only to help with the interpretation; we cannot assume that the response increases linearly between the levels.
Two-way tables of averages and plots for the other factor pairs are:
Result:
Recommendations:
- Run the process with T and O at their high levels to produce about 90% crack-free product (further investigation at other levels might produce more improvement).
- Choose the level of C so that the lowest cost is realized.
9. CONCLUSION AND FUTURE SCOPE

9.1 CONCLUSION
The Taguchi method is a very popular off-line quality method. Many Taguchi practitioners have used engineering knowledge to resolve multi-response optimization problems. However, the method itself cannot solve the multi-response problem, which occurs often in today's environment. Research shows that the multi-response problem is still an issue with the Taguchi method. In this work, two different approaches to overcome these problems are presented. The proposed approaches use the weight-based desirability method and weight-based grey relational analysis to optimize the parameter design. Three existing industrial cases from the literature demonstrate the effectiveness of the proposed approaches, which possess the following merits:
- They are applicable for solving both quantitative and qualitative parameters.
- They are simple and easy to apply to real cases in manufacturing, especially when dealing with multiple responses.
- Researchers with a limited mathematical background can also apply these approaches.
The Taguchi approach to quality engineering places a great deal of emphasis on
minimizing variation as the main means of improving quality. The idea is to design
products and processes whose performance is not affected by outside conditions and to
build this in during the development and design stage through the use of experimental
design. The method includes a set of tables that enable main variables and interactions to
be investigated in a minimum number of trials.
10. REFERENCES
3. Park, Sung H., "Robust Design and Analysis for Quality Engineering", Chapman & Hall, London, 1996.
7. Wei Chen, Kemper Lewis, "A Robust Design Approach for Achieving Flexibility in Multidisciplinary Design", http://www.uic.edu/labs/ideal/pdf/Chen-Lewis.pdf
Luc Huyse, "Solving Problems of Optimization Under Uncertainty as Statistical Decision Problems", AIAA-2001-1519, 2001.
10. Brent A. Cullimore, "Reliability Engineering & Robust Design: New Methods for Thermal/Fluid Engineering", C&R White Paper, Revision 2, May 15, 2000, http://www.sindaworks.com/docs/papers/releng1.pdf
11. Luc Huyse, R. Michael Lewis, "Aerodynamic Shape Optimization of Two-dimensional Airfoils Under Uncertain Conditions", NASA/CR-2001-210648.
12. Timothy W. Simpson, Jesse Peplinski, Patrick N. Koch, and Janet K. Allen, "On the Use of Statistics in Design and the Implications for Deterministic Computer Experiments", ASME Design Engineering Technical Conferences, California, 1997.