

CHAPTER 4
TOOLS AND TECHNIQUES USED IN THE
PRESENT STUDY

4.1 INTRODUCTION
In the near-dry WEDM study, the Taguchi approach and the Response
Surface Method (RSM) have been used for the systematic design of
experiments. The data collected from the experiments have been analysed
by statistical methods. A Multi-Objective Evolutionary Algorithm (MOEA)
has been used to predict the optimum solutions by solving two conflicting
objective functions obtained from the regression analysis. In this chapter,
the fundamentals of these three techniques are discussed.
4.2 TAGUCHI METHOD
Genichi Taguchi, in the 1980s, developed the fractional factorial design
concept to optimize the process of engineering experimentation. Taguchi
espoused an influential philosophy for quality control in the manufacturing
industries. This philosophy was first applied in the Ford Motor Company to train
engineers for quality improvement. It is founded on three simple and
fundamental concepts (Ross 1988 and Roy 1990).
Quality should be designed into the product and not inspected
into it.


The cost of quality should be measured as a function of the
deviation from the standard, and the losses should be measured
system-wide.
Best quality is achieved by minimizing the deviation from the
target. The product or process should be designed so that it is
immune to uncontrollable environmental variables.
The above principles are the guidelines for developing systems,
specifying parameter values and testing the factors affecting quality
improvement, in place of an attempt to inspect the quality of a product on the
production line. Taguchi observed that poor quality cannot be improved by
inspection, screening and salvaging. He recommends a three-stage
process in engineering applications to achieve the desirable product
quality by design (Ross 1988 and Roy 1990).
System design is used to identify the working levels of the
design factors.
Parameter design seeks to determine the factor levels that
produce the best performance of the product/process under
study, i.e. the optimum condition at which the uncontrolled
(noise) factors cause the minimum variation of the system
performance.
Tolerance design is used to fine-tune the results of parameter
design by tightening the tolerances of the factors with a
significant influence on the product.


A number of parameters can influence the quality characteristic or
the response of the product. These parameters can be classified into the
following two classes (Phadke 1989).
Control factors: parameters that can be specified freely by the
designer/operator. The multiple values selected for each control
factor are called its levels.
Noise factors: parameters that cannot be controlled by the
designer/operator. It is difficult to control these factors and to set their levels.
4.2.1 Loss Function
Taguchi defines a loss function which is proportional to the square of the
deviation from the target quality characteristic. At zero deviation, the
performance is on target and the loss is zero. The following equation
represents the quality loss function (Ross 1996 and Roy 2001).

L(Y) = k (Y − Y0)²        (4.1)

where (Y − Y0) is the deviation of the quality characteristic Y from the target value,
Y0 is the target value of the quality characteristic, and k is a constant which
depends upon the cost structure of the manufacturing process or the organization.
The graphical representation of the loss function is shown in Figure 4.1.
When the quality characteristic of a product meets its target value, the loss
is zero. The magnitude of the loss increases rapidly as the quality
characteristic deviates from the target value. The loss function is therefore a
continuous, second-order function of the deviation from the target value
(Ross 1988, Phadke 1989 and Roy 1990).
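As a quick illustration of Equation (4.1), the following Python sketch evaluates the quadratic loss; the cost constant k and the response values are hypothetical.

```python
def quality_loss(y, target, k=1.0):
    """Taguchi quadratic loss L(Y) = k * (Y - Y0)**2 for a single response."""
    return k * (y - target) ** 2

# Hypothetical example: target kerf width 0.30 mm, cost constant k = 50
print(quality_loss(0.33, target=0.30, k=50.0))  # loss grows with the square of the deviation
print(quality_loss(0.30, target=0.30, k=50.0))  # zero loss exactly on target
```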




Figure 4.1 Taguchi loss function (Ross 1988, Phadke 1989 and Roy 1990)
4.2.2 Signal to Noise Ratio
Taguchi derived a transform of the loss function and named it the
Signal-to-Noise (S/N) ratio (Phadke 1989 and Barker 1990). The S/N ratio was
originally conceived as a concurrent statistic that can look at two or more
characteristics of a distribution and roll them into a single number. It
combines both the mean level of the quality characteristic and the variance
about the mean into a single metric (Barker 1990). Thus, the S/N ratio
consolidates several replications (at least two data points are required) into
one value. A high S/N value indicates that the signal is much stronger than
the effects of the noise factors. The equations for calculating the S/N ratio for
the higher-the-better (HB), nominal-the-best (NB) and lower-the-better (LB)
quality characteristics are:


S/N_HB = −10 log10 (MSD_HB)  dB        (4.2)

S/N_NB = −10 log10 (MSD_NB)  dB        (4.3)

S/N_LB = −10 log10 (MSD_LB)  dB        (4.4)

and

MSD_HB = (1/n) Σ_{i=1}^{n} (1 / Y_i²)

MSD_NB = (1/n) Σ_{i=1}^{n} (Y_i − Y0)²

MSD_LB = (1/n) Σ_{i=1}^{n} Y_i²

where MSD = mean squared deviation,
Y_i = response value of the i-th experiment,
Y0 = target response value,
n = number of replications.
MSD is a statistical quantity that reflects the deviation of each observed value
from the target value. The expression for MSD varies with the quality
expectation of the response characteristic. For the lower-the-better
characteristic, the target value is zero and the MSD is the mean of the squared
responses. For the nominal-the-best characteristic, the mean squared deviation
from the target value Y0 is used as the MSD. For the higher-the-better
characteristic, the reciprocal of each response is taken, so that each large value
becomes a small value and the unstated target value is again zero. Thus, for all
three expressions, the smallest magnitude of MSD is desired (Ross 1996).
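As a small illustration of Equations (4.2) to (4.4), the following Python sketch computes the MSD and the corresponding S/N ratio for the three quality expectations; the replicate values and the target are hypothetical.

```python
import numpy as np

def sn_ratio(y, kind="LB", target=None):
    """Taguchi S/N ratio (dB) for replicated responses y.

    kind: "HB" (higher-the-better), "NB" (nominal-the-best, needs target),
          "LB" (lower-the-better).
    """
    y = np.asarray(y, dtype=float)
    if kind == "HB":
        msd = np.mean(1.0 / y**2)
    elif kind == "NB":
        msd = np.mean((y - target) ** 2)
    elif kind == "LB":
        msd = np.mean(y**2)
    else:
        raise ValueError("kind must be 'HB', 'NB' or 'LB'")
    return -10.0 * np.log10(msd)

# Hypothetical surface-roughness replicates (lower-the-better)
print(sn_ratio([2.1, 2.3, 2.0], kind="LB"))
# Hypothetical MRR replicates (higher-the-better)
print(sn_ratio([8.5, 9.1, 8.8], kind="HB"))
```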
4.2.3 Steps of Taguchi Design of Experiments
In the present investigation, the L27 orthogonal array has been used to
conduct all the near-dry WEDM experiments, and both raw data analysis and
S/N data analysis have been performed. The effects of the selected process
parameters on the selected quality characteristics have been investigated
through the main-effects plots based on the raw data. The optimum condition
for each quality characteristic has been established through the S/N data
analysis, aided by the raw data analysis. Since no outer array has been used,
the experiments have been replicated three or more times at each
experimental condition. The flow diagram of the Taguchi experimental
design and analysis is shown in Figure 4.2.
4.2.4 Selection of Orthogonal Array
An appropriate orthogonal array is selected based on the number of
parameters, their levels and the particular interactions to be studied. The
selection of parameters and levels is very important for the Taguchi design of
experiments. Brainstorming, flowcharting and cause-and-effect methods were
suggested by Taguchi to identify the parameters which influence the output
responses (Ross 1988 and Roy 1990). The levels of each parameter are
selected based on its feasible range. In this study, the parameters and their
levels were selected from exploratory experiments. The standard two- and
three-level arrays are given below (Phadke 1989).
Two-level arrays : L4, L8, L12, L16, L32
Three-level arrays : L9, L18, L27
In this study, three levels have been selected for each of the five parameters
based on the exploratory experiments. If a higher-order polynomial
relationship between the parameters and the response is expected, at least
three levels of each parameter must be considered (Barker 1990). Thus, the
L27 orthogonal array (Table 4.1) was selected to conduct the near-dry
WEDM experiments, in line with several published studies (Mahapatra and
Patnaik 2007, Govindan and Joshi 2010 and Shah et al 2011). When a particular


orthogonal array is selected for an experiment, the following inequality must
also be satisfied (Ross 1988):

Total DOF of the orthogonal array ≥ total DOF required for the parameters
and interactions of interest



Figure 4.2 Steps of Taguchi design of experiment and analysis (Roy 1990)
[Figure 4.2 flowchart summary: selection of the orthogonal array (based on the number of factors, the number of levels of each factor, the interactions of interest and the degrees of freedom); assignment of parameters and interactions using linear graphs / triangular tables; number of repetitions of the experiments (more than two, or based on the noise factors considered); ANOVA of the raw data to identify parameters affecting the mean of the response and ANOVA of the S/N ratios to identify parameters affecting the mean and variation; identification of significant parameters (insignificant parameters are pooled); prediction of the mean response and its optimal range based on the Confidence Interval (CI); confirmation experiments using the combination of optimum levels of the significant parameters; results and discussion.]



Table 4.1 Taguchi's L27 standard orthogonal array
Trial No. \ Column No.:  1  2  3  4  5  6  7  8  9  10  11  12  13
1 1 1 1 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 2 2 2 2 2 2 2 2 2
3 1 1 1 1 3 3 3 3 3 3 3 3 3
4 1 2 2 2 1 1 1 2 2 2 3 3 3
5 1 2 2 2 2 2 2 3 3 3 1 1 1
6 1 2 2 2 3 3 3 1 1 1 2 2 2
7 1 3 3 3 1 1 1 3 3 3 2 2 2
8 1 3 3 3 2 2 2 1 1 1 3 3 3
9 1 3 3 3 3 3 3 2 2 2 1 1 1
10 2 1 2 3 1 2 3 1 2 3 1 2 3
11 2 1 2 3 2 3 1 2 3 1 2 3 1
12 2 1 2 3 3 1 2 3 1 2 3 1 2
13 2 2 3 1 1 2 3 2 3 1 3 1 2
14 2 2 3 1 2 3 1 3 1 2 1 2 3
15 2 2 3 1 3 1 2 1 2 3 2 3 1
16 2 3 1 2 1 2 3 3 1 2 2 3 1
17 2 3 1 2 2 3 1 1 2 3 3 1 2
18 2 3 1 2 3 1 2 2 3 1 1 2 3
19 3 1 3 2 1 3 2 1 3 2 1 3 2
20 3 1 3 2 2 1 3 2 1 3 2 1 3
21 3 1 3 2 3 2 1 3 2 1 3 2 1
22 3 2 1 3 1 3 2 2 1 3 3 2 1
23 3 2 1 3 2 1 3 3 2 1 1 3 2
24 3 2 1 3 3 2 1 1 3 2 2 1 3
25 3 3 2 1 1 3 2 3 2 1 2 1 3
26 3 3 2 1 2 1 3 1 3 2 3 2 1
27 3 3 2 1 3 2 1 2 1 3 1 3 2
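The balance properties of the array in Table 4.1 can be checked numerically. The following Python sketch constructs a 27-run, 13-column, three-level orthogonal array from GF(3) column generators and verifies that every pair of columns contains each of the nine level combinations exactly three times; the column ordering is a construction detail and may differ from Table 4.1.

```python
import itertools
import numpy as np

# Columns are the 13 distinct nonzero GF(3) combinations of three base
# factors (leading coefficient 1), giving an L27-type layout.
base = np.array(list(itertools.product(range(3), repeat=3)))          # 27 x 3 base runs
coeffs = [c for c in itertools.product(range(3), repeat=3)
          if c != (0, 0, 0) and c[next(i for i, v in enumerate(c) if v)] == 1]
L27 = (base @ np.array(coeffs).T) % 3 + 1                             # 27 x 13, levels 1..3

# Orthogonality check: every pair of columns shows each of the 9
# level combinations exactly 27 / 9 = 3 times.
for i, j in itertools.combinations(range(13), 2):
    pairs, counts = np.unique(L27[:, [i, j]], axis=0, return_counts=True)
    assert len(pairs) == 9 and np.all(counts == 3)
print("27 runs x 13 columns, all column pairs balanced")
```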



The total degrees of freedom (DOF) and the DOF of each parameter depend
upon the number of levels. As the number of parameters and levels increases,
the number of experimental trials also increases. In the Taguchi experimental
design concept, the DOF assigned to each three-level process parameter is
two. The total DOF is equal to the number of trials minus one (Ross 1988);
thus, the total DOF of the L27 array is 26 (= 27 − 1). For a three-level
parameter, the DOF is 2 (= 3 − 1), giving a total of 10 DOF for the five
process parameters selected in this study. The DOF of a two-factor
interaction between three-level factors is 4 (= (3 − 1) × (3 − 1)). Thus, the
L27 orthogonal array satisfies the above-mentioned DOF criterion and was
selected.
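The degrees-of-freedom bookkeeping described above can be summarised in a short sketch; the five parameters and the three two-factor interactions (A×B, B×C and A×C, as used later in Table 4.2) require 22 DOF, within the 26 DOF offered by the L27 array.

```python
n_runs = 27
levels = 3
n_params = 5
n_interactions = 3                               # A x B, B x C, A x C in this study

dof_array = n_runs - 1                           # 26 DOF available in L27
dof_param = levels - 1                           # 2 DOF per three-level parameter
dof_interaction = dof_param * dof_param          # 4 DOF per two-factor interaction
dof_required = n_params * dof_param + n_interactions * dof_interaction   # 10 + 12 = 22

assert dof_required <= dof_array                 # the selection inequality is satisfied
print(dof_array, dof_required)
```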
4.2.5 Assignment of Parameters to the Orthogonal Array
The assignment of parameters to the orthogonal array columns is mainly
governed by the number of parameters and the desired interactions. Taguchi
gave two tools for the assignment of parameters and interactions in orthogonal
arrays (Ross 1988, Phadke 1989 and Roy 1990).
Triangular tables
Linear graphs
Figure 4.3 Linear graphs for L27 orthogonal array


Each orthogonal array has a particular set of linear graphs and a triangular
table associated with it. The linear graphs for the L27 orthogonal array with
three and four interactions are given in Figure 4.3. In a linear graph, each node
indicates the allocation of a main parameter and each connecting line indicates
an interaction term. If a node is not assigned a parameter, the columns
corresponding to that node are not included in the experimentation and
analysis.
4.2.6 Data Analysis
The appropriate response data have been collected during the
experimentation. The collected raw data and S/N data have been analysed
through the Analysis of Variance (ANOVA) test. The formulae of the
ANOVA test for the L27 design of experiments with five parameters and
three interactions are shown in Table 4.2 (Roy 1990).
A pictorial representation of the influence of each parameter and its levels on
the responses is plotted based on the mean values. The change in the response
with respect to the levels of each parameter can easily be visualized from
these plots.
The S/N ratio is used as a measure of the variation within replications when
noise factors are present. ANOVA tests of the S/N ratios and of the raw data
have been conducted to identify the significant process parameters which
affect the variance and the mean of the response characteristics (Ross 1988).
Interaction graphs were plotted to find the combined effect of two or more
parameters on the responses (Peace 1993). If there is an interaction effect on
the response, the lines of the plot intersect; otherwise they remain parallel.
Residual plots are used to validate the accuracy of the observed data. The
approximately symmetric shape of the histogram and the straight-line trend
of the normal probability plot confirm that the residuals are normally
distributed; when the residuals are scattered randomly around zero, there is
no systematic error in the data collection.
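As a small illustration of the raw-data analysis, the following Python sketch computes the level means used for the main-effects plots; the factor columns and response values are hypothetical.

```python
import pandas as pd

# Hypothetical results: coded factor levels (1-3) and a measured response
df = pd.DataFrame({
    "A":   [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "B":   [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "MRR": [4.1, 4.6, 5.0, 4.8, 5.4, 5.9, 5.2, 5.7, 6.3],
})

# Level means used for the main-effects plots: the change in the mean
# response between levels indicates the effect of that factor.
for factor in ["A", "B"]:
    print(df.groupby(factor)["MRR"].mean())
```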


Table 4.2 ANOVA formulae for the L27 orthogonal array with interactions (Roy 1990)

Parameter A (similarly for B, C, D and E):
  Degrees of freedom:  f_A = N_L − 1
  Sequential SS:       SS_A = A1²/N_A1 + A2²/N_A2 + A3²/N_A3 − CF
  Variance (MS):       MS_A = SS_A / f_A
  F-test:              F_A = MS_A / MS_error

Interaction A×B (similarly for B×C and A×C):
  Degrees of freedom:  f_(A×B) = f_A × f_B
  Sequential SS:       SS_(A×B) = Σ [ (A×B)_i² / N_(A×B)i ] − CF − SS_A − SS_B
  Variance (MS):       MS_(A×B) = SS_(A×B) / f_(A×B)
  F-test:              F_(A×B) = MS_(A×B) / MS_error

Error:
  Degrees of freedom:  f_error = f_T − (f_A + f_B + f_C + f_D + f_E + f_(A×B) + f_(B×C) + f_(A×C))
  Sequential SS:       SS_error = SS_T − (SS_A + SS_B + SS_C + SS_D + SS_E + SS_(A×B) + SS_(B×C) + SS_(A×C))
  Variance (MS):       MS_error = SS_error / f_error

Total:
  Degrees of freedom:  f_T = N − 1
  Sequential SS:       SS_T = Σ_{i=1}^{27} Y_i² − CF

where
A, B, C, D and E = the five parameters,
A×B, B×C and A×C = the parameter interactions,
CF = correction factor = ( Σ_{i=1}^{27} Y_i )² / N,
N = total number of experiments,
A1, A2, A3 = sums of the responses at the first, second and third level of parameter A (similarly for B, C, D, E and for the interaction groupings),
N_A1, N_A2, N_A3 = number of experiments at the first, second and third level of parameter A (similarly N_B1, N_B2, N_B3, N_C1, N_C2, N_C3, ...),
N_L = number of levels of each factor,
Y_i = response value of the i-th experiment.
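A minimal sketch of the Table 4.2 calculations for a single factor is given below; it computes the correction factor, the sequential sum of squares and the mean square from the level totals, using a hypothetical response vector.

```python
import numpy as np

def ss_for_factor(y, levels):
    """Sequential sum of squares for one three-level factor (Table 4.2)."""
    y = np.asarray(y, dtype=float)
    cf = y.sum() ** 2 / len(y)                       # correction factor CF
    ss = sum(y[levels == lv].sum() ** 2 / np.sum(levels == lv)
             for lv in np.unique(levels)) - cf       # level totals squared over counts, minus CF
    return ss

# Hypothetical illustration: 9 responses with factor A at three levels
y = np.array([4.1, 4.6, 5.0, 4.8, 5.4, 5.9, 5.2, 5.7, 6.3])
A = np.array([1, 1, 1, 2, 2, 2, 3, 3, 3])
ss_a = ss_for_factor(y, A)
ms_a = ss_a / (3 - 1)                                # variance (mean square) of A
print(ss_a, ms_a)                                    # F_A = ms_a / ms_error in the full table
```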



4.2.7 Estimation of Optimum Values
The mean response at the optimum condition is predicted after the optimum
levels of the parameters have been determined. The mean is estimated from
the significant parameters and interaction terms. For example, suppose
parameters A, B and C are significant and A1, B3 and C2 (first level of A,
third level of B and second level of C) form the optimal treatment condition.
Then the optimal value of the response characteristic (μ_opt) is estimated as
(Phadke 1989 and Roy 1990)

μ_opt = T + (A1 − T) + (B3 − T) + (C2 − T)        (4.5)

where T is the overall mean of the response, and A1, B3 and C2 denote the
average response values at the first level of A, the third level of B and the
second level of C, respectively.
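Equation (4.5) can be evaluated directly once the overall mean and the level averages of the significant parameters are known; the numbers below are hypothetical.

```python
def predicted_optimum(overall_mean, level_means):
    """Eq. (4.5): mu_opt = T + sum of (level mean - T) over the significant parameters."""
    return overall_mean + sum(m - overall_mean for m in level_means)

# Hypothetical overall mean T and level averages for A1, B3 and C2
T = 5.2
print(predicted_optimum(T, level_means=[5.9, 5.6, 5.4]))   # predicted optimum response
```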
4.2.8 Confidence Interval
The estimated optimal value of the response (μ_opt) is only an estimate based
on the average of the results obtained from the experiments. Statistically, it
provides a 50% chance of the true average being greater than μ_opt (Roy 1990).
It is therefore customary to represent the value of a statistical parameter as a
range within which it is likely to fall for a given level of confidence (Ross
1988). This range is called the Confidence Interval (CI). The CI gives the
maximum and minimum values between which the true average should fall at
the stated confidence level (Ross 1988). Taguchi recommends the following
two types of confidence interval for the estimated mean of the optimal
treatment condition (Ross 1988).



The confidence interval for the sample group (CI_E): around the
estimated average of a treatment condition used in a confirmation
experiment to verify the predictions. It applies only to the sample
group made under the specified conditions.
The confidence interval for the population (CI_P): around the
estimated average of a treatment condition predicted from the
experiment. It applies to the entire population, i.e. all parts ever
made under the specified conditions.
In this study, a 95% confidence level has been considered. The 95%
confidence intervals of the confirmation experiments (CI_E) and of the
population (CI_P) are calculated using the following equations (Roy 1990):

CI_E = sqrt[ F(1, f_e) · V_e · (1/n_eff + 1/R) ]        (4.6)

CI_P = sqrt[ F(1, f_e) · V_e / n_eff ]        (4.7)

and

n_eff = N / (1 + total DOF associated with the estimate of the mean response)

where N = total number of results,
R = sample size for the confirmation experiment,
V_e = error variance,
f_e = error degrees of freedom,
F(1, f_e) = F-ratio at a confidence level of (1 − α) for DOF 1 and error DOF f_e (from Table A).
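A sketch of Equations (4.6) and (4.7) using the F-quantile from scipy is given below; the error variance, degrees of freedom and sample sizes are hypothetical placeholders.

```python
from scipy.stats import f as f_dist

def taguchi_ci(alpha, f_error, v_error, n_eff, r=None):
    """Confidence intervals of Eqs. (4.6) and (4.7)."""
    F = f_dist.ppf(1.0 - alpha, 1, f_error)          # F(1, f_e) at confidence level (1 - alpha)
    ci_p = (F * v_error / n_eff) ** 0.5              # population CI, Eq. (4.7)
    if r is None:
        return ci_p
    ci_e = (F * v_error * (1.0 / n_eff + 1.0 / r)) ** 0.5   # confirmation-experiment CI, Eq. (4.6)
    return ci_e, ci_p

# Hypothetical values: 27 results, 6 DOF in the mean estimate (three significant
# three-level parameters), error variance 0.08 with 12 error DOF, 3 confirmation runs
n_eff = 27 / (1 + 6)
print(taguchi_ci(alpha=0.05, f_error=12, v_error=0.08, n_eff=n_eff, r=3))
```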


4.2.9 Confirmation Experiments
The final step of the Taguchi design is the confirmation experiment, in which
the estimated optimum response is evaluated using the significant parameters.
More than two experiments have been conducted under the specified
conditions. The average of the confirmation experiment results is compared
with the anticipated average based on the parameters and levels tested. The
confirmation experiment is an important step and is highly recommended to
verify the results predicted by the Taguchi method (Ross 1988 and Roy 1990).
4.3 RESPONSE SURFACE METHOD
Response Surface Method (RSM) is a combination of statistical and
mathematical techniques useful for modeling and optimizing engineering
problems in which the relationship between several input parameters and the
dependent response variables is sought (Montgomery 2005).

The relationship between the response y and the input (independent)
variables x1, x2, ..., xk is generally approximated by a lower-order polynomial
model, as shown in Equation (4.8):

y = β0 + β1 x1 + ... + βk xk + ε        (4.8)

where β0, β1, ..., βk are the unknown coefficients, which are estimated by
statistical analysis of the data collected from the experiments, and ε is the
residual (experimental error). The first-order model is an approximating
function that gives a linear relationship between the dependent and
independent variables. However, the linear model may not fit the response
surfaces of many engineering problems. A higher-order polynomial model
gives a better correlation between the input and output variables and a better
fit to the response surface. The second-order polynomial model is widely
used and is shown in Equation (4.9) (Montgomery 2005):

y = β0 + Σ_{i=1}^{k} βi xi + Σ_{i=1}^{k} βii xi² + Σ Σ_{i<j} βij xi xj + ε        (4.9)
Equation (4.9) contains the linear, squared and cross-product terms of the
variables xi and xj. Design of experiments techniques are used to estimate the
regression coefficients (β0, βi, βii and βij). In the present work, the
second-order response surface model is used to build the correlations and to
analyse the data.
4.3.1 Central Composite Design
The most popular class of second-order designs in the response surface
method is the central composite rotatable design. The CCD comprises three
essential elements, which are described below.
Two-level factorial points, corresponding to a 2^k design, where
k is the number of parameters and 2 is the number of levels at
which each parameter is kept during the experiments; these
points contribute to the estimation of the interaction terms.
Axial points, also called star points, positioned on the
coordinate axes to form a central composite design with arm
length α. The number of axial points is 2k.
Center points, which are a few additional points added at the
center of the design and used to estimate the pure error. The
number of replicated center runs depends upon the number of
parameters; a higher number of center runs is preferable as the
number of parameters increases.


Table 4.3 Half-fractional central composite second-order rotatable
design for five parameters
Std. No.    A    B    C    D    E    Comment
 1         -1   -1   -1   -1    1    Factorial points
 2          1   -1   -1   -1   -1
 3         -1    1   -1   -1   -1
 4          1    1   -1   -1    1
 5         -1   -1    1   -1   -1
 6          1   -1    1   -1    1
 7         -1    1    1   -1    1
 8          1    1    1   -1   -1
 9         -1   -1   -1    1   -1
10          1   -1   -1    1    1
11         -1    1   -1    1    1
12          1    1   -1    1   -1
13         -1   -1    1    1    1
14          1   -1    1    1   -1
15         -1    1    1    1   -1
16          1    1    1    1    1
17         -2    0    0    0    0    Star points
18          2    0    0    0    0
19          0   -2    0    0    0
20          0    2    0    0    0
21          0    0   -2    0    0
22          0    0    2    0    0
23          0    0    0   -2    0
24          0    0    0    2    0
25          0    0    0    0   -2
26          0    0    0    0    2
27          0    0    0    0    0    Center points
28          0    0    0    0    0
29          0    0    0    0    0
30          0    0    0    0    0
31          0    0    0    0    0
32          0    0    0    0    0





The star-point distance α depends on the number of factorial points in the
CCD. When the number of factors is greater than or equal to five, the
experimental size can be reduced by using a half replication of the 2^k
factorial design (Akhanazarova and Kafarov 1982). The design matrix of the
half-fractional central composite design for five independent variables is
shown in Table 4.3. In this study, five process parameters have been
considered in total. The estimation of the number of runs is described as
follows.
Star-point position, α = (2^(k−1))^(1/4) = 2^(4/4) = 2
Half-factorial points = 2^k / 2 = 16
Star points = 2k = 2 × 5 = 10
Center points, n_c = 6
The total number of points (experiments) in the five-parameter half-factorial
central composite design is therefore 32: 16 corner points, 10 star points and
6 center points at the zero level. In total, 32 sets of experiments were
conducted in random order to study the oxygen-mist near-dry WEDM
process. The coded half-fractional central composite design used in this
study is shown in Table 4.3.
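The 32-run layout of Table 4.3 can be reproduced with a few lines of Python; this is a sketch of the construction (half fraction with E = A·B·C·D, axial distance α = 2 and six center points), and the run order differs from the table.

```python
import itertools
import numpy as np

k = 5
alpha = (2 ** (k - 1)) ** 0.25                # star-point distance, equals 2 for the half fraction
n_center = 6

# Half fraction of the 2^5 factorial: 16 corner points with E = A*B*C*D
corners = np.array([row + (np.prod(row),) for row in itertools.product((-1, 1), repeat=k - 1)])

# 2k axial (star) points at +/- alpha on each coordinate axis
stars = np.vstack([alpha * np.eye(k), -alpha * np.eye(k)])

# Replicated center points used to estimate the pure error
centers = np.zeros((n_center, k))

ccd = np.vstack([corners, stars, centers])
print(ccd.shape)                              # (32, 5): 16 + 10 + 6 runs, as in Table 4.3
```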
4.3.2 Estimation of the Model Coefficients
The coefficients of the second-order polynomial model can be obtained by the
least squares method (Montgomery 2005). The second-order response surface
model can be written in matrix form as

[Y] = [X][β] + [ε]        (4.10)

with

[Y] = [y1, y2, ..., yN]'        (N × 1) vector of responses,
[β] = [β0, β1, ..., β(C−1)]'        (C × 1) vector of regression coefficients,
[ε] = [ε1, ε2, ..., εN]'        (N × 1) vector of random errors,
[X] = (N × C) design matrix in which each row contains 1 followed by the
linear, squared and cross-product terms of the coded variables x1, x2, ..., xk
for that experiment,

where
N = total number of experiments,
C = total number of coefficients in the model,
[X]' = transpose of the matrix [X].

The regression coefficients have been estimated by the least squares method
as

[β] = ([X]'[X])^(−1) [X]'[Y]        (4.11)
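A sketch of Equations (4.10) and (4.11) in Python is given below: it builds the second-order design matrix and estimates the coefficient vector by least squares (numpy's solver is used rather than the explicit inverse for numerical stability); the coded settings and responses are hypothetical.

```python
import numpy as np

def second_order_design_matrix(x):
    """Rows of [X]: 1, linear, squared and cross-product terms of the k factors."""
    n, k = x.shape
    cols = [np.ones(n)]
    cols += [x[:, i] for i in range(k)]                                     # linear terms
    cols += [x[:, i] ** 2 for i in range(k)]                                # squared terms
    cols += [x[:, i] * x[:, j] for i in range(k) for j in range(i + 1, k)]  # interactions
    return np.column_stack(cols)

def fit_coefficients(x, y):
    """Eq. (4.11): beta = (X'X)^-1 X'Y, solved via least squares."""
    X = second_order_design_matrix(x)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Hypothetical coded settings (two factors only, for brevity) and responses
x = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1], [0, 0], [0, 0],
              [1.41, 0], [-1.41, 0], [0, 1.41], [0, -1.41]])
y = np.array([3.1, 4.0, 3.6, 5.2, 4.1, 4.0, 4.6, 3.0, 4.4, 3.3])
print(fit_coefficients(x, y))
```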


4.3.3 Analysis of Variance Test
Analysis of variance is carried out through the total sum of squares, which is
partitioned into the first-order and second-order terms, a lack-of-fit term and
the experimental (pure) error. To test the significance of the individual
coefficients, an F-test with 1 and (n0 − 1) degrees of freedom has been used,
where n0 is the number of replicated center-point runs. The calculated
F-value of each coefficient is compared with the theoretical value at the 95%
confidence level; if the calculated value is greater than the theoretical value,
the corresponding terms are significant. Insignificant interaction and
higher-order terms can be pooled from the equations and the remaining
coefficients recalculated, after which the adequacy of the reduced model is
checked again. The F-test formula for the coefficients of the regression
model is given below:

F(1, n0 − 1) = βi² / (cii · Se²)        (4.12)

where

Se² = (1 / (n0 − 1)) Σ_{s=1}^{n0} (ys0 − ȳ0)²   and   ȳ0 = (1 / n0) Σ_{s=1}^{n0} ys0,

Se = standard deviation of the experimental error, calculated from the replicated observations at the zero (center) level,
cii = the i-th diagonal element of the error matrix ([X]'[X])^(−1),
βi = regression model coefficient,
ys0 = the s-th response at the center point.


The sequential sum of squares test, the lack-of-fit test and the model
summary statistics have been used to select the appropriate model to be
fitted. Linear, two-factor interaction, quadratic and cubic models were
compared to select the most adequate model. The contribution of adding
terms to the model has been estimated by the sequential sum of squares test.
The lack-of-fit test uses the residual error estimated from the replicated
experiments at the central design points. The ANOVA test results include the
p-value, the coefficient of determination (R²), the adjusted R², the predicted
R² and the Adequate Precision (AP). The p-value indicates the significance
of the regression model and of its linear, quadratic and interaction terms.
Generally, the confidence level is set at 95%; if the p-value is less than 0.05,
the regression models and their terms are considered to be statistically
significant. R² is defined as the ratio of the explained variation to the total
variation and is used to measure the degree of fit; when R² approaches unity,
the response model fits the actual data well. The adjusted R² measures the
amount of variation about the mean explained by the model, adjusted for the
number of terms in the model; it is also used to decide whether terms can be
dropped from the model. The AP value compares the range of the predicted
values at the design points with the average prediction error; its value should
be greater than 4, which indicates adequate model discrimination. The
analysis of the regression model obtained from the CCD has been carried out
using Design-Expert 7 software.
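The goodness-of-fit statistics mentioned above can be computed directly from the fitted model; the sketch below evaluates R², adjusted R² and a PRESS-based predicted R² (the Adequate Precision statistic is specific to Design-Expert and is not reproduced here). The design matrix X, responses y and coefficients beta are assumed to come from a least-squares fit such as the one sketched in Section 4.3.2.

```python
import numpy as np

def adequacy_metrics(X, y, beta):
    """R^2, adjusted R^2 and PRESS-based predicted R^2 for a fitted model."""
    n, p = X.shape
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - p)           # penalises extra model terms
    h = np.diag(X @ np.linalg.pinv(X.T @ X) @ X.T)          # leverages of the hat matrix
    press = np.sum((resid / (1.0 - h)) ** 2)                # leave-one-out prediction errors
    r2_pred = 1.0 - press / ss_tot
    return r2, r2_adj, r2_pred

# Hypothetical quadratic fit in one coded variable
x = np.linspace(-2, 2, 9)
X = np.column_stack([np.ones_like(x), x, x**2])
y = 3.0 + 1.2 * x + 0.8 * x**2 + np.random.default_rng(1).normal(0, 0.1, x.size)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(adequacy_metrics(X, y, beta))
```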
4.4 MULTI-OBJECTIVE OPTIMIZATION USING EVOLUTIONARY
ALGORITHM
Multi-objective optimization is usually applied to optimize two or more
response functions simultaneously. It has been applied in many fields of
science, including engineering, economics and logistics, where


optimal decisions need to be taken in the presence of trade-offs between two
or more conflicting objectives (Coello and Toscano 2005). Such problems
generally consist of conflicting objectives, so it is not possible to obtain a
single solution that is optimal for all objectives. Instead of a single optimal
solution, a set of optimum solutions, called the Pareto-optimal set, exists in
such cases. The members of the Pareto set are such that no solution in the set
is better than the others in all the objectives, and no solution in the set is
worse than the others in all the objectives. By using evolutionary algorithms,
several Pareto-optimal points can be obtained simultaneously with an even
distribution of the solutions. In this study, the multi-objective optimization of
the near-dry WEDM process has been carried out by considering the
maximization of MRR and the minimization of Ra as two distinct objectives.

Evolutionary Algorithms (EAs) are stochastic optimization methods that
simulate the process of natural evolution. The origins of EAs can be traced
back to the late 1950s, and since the 1970s several evolutionary
methodologies have been proposed, mainly genetic algorithms, evolutionary
programming and evolution strategies (Back et al 1997 and Zhou et al 2011).
The first implementation of a multi-objective evolutionary optimization
algorithm dates back to the middle of the 1980s (Schaffer 1985). Many
practical engineering optimization problems involve multiple objectives
which are often conflicting.
Multi-Objective Evolutionary Algorithms (MOEAs) are particularly
suitable for solving such problems because they work with a population of
vectors or solutions rather than a single point (Schaffer 1985, Fonseca and
Fleming 1993, Srinivas and Deb 1994, Horn et al 1994, Deb et al 2002,
Zitzler et al 2003, Knowles and Corne 2000). This feature enables the creation of Pareto


frontiers representing the trade-off between the criteria, simultaneously
providing a link with the decision variables (Deb 2001).
An elitist multi-objective evolutionary algorithm based on the concept of
epsilon dominance, built on a Genetic Algorithm (GA), has been used for the
near-dry WEDM process. It was observed from the literature that the genetic
algorithm is a better developed and more established optimization technique
than other, more recently developed algorithms, and that GA-based MOEAs
give better solutions for non-linear multi-objective problems. The MATLAB
code of the MOEA developed by Herrero et al (2007 and 2008) has been used
to obtain the Pareto set and the Pareto front of MRR versus Ra. The algorithm
aims to obtain a good approximation of the Pareto front in a smartly
distributed manner with limited memory resources, and it also adjusts the
limits of the Pareto front dynamically. Its advantage is the reduced
computation time required when the objective functions involve a large
number of variables.
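As an illustration of the Pareto concept used here, the following Python sketch filters a set of candidate operating points with a simple epsilon-dominance rule; it is a minimal sketch, not the ev-MOGA implementation of Herrero et al, and the candidate MRR/Ra values are hypothetical.

```python
import numpy as np

def eps_pareto_front(points, eps):
    """Keep epsilon-nondominated points; all objectives are to be minimised.

    points: (n, m) array of objective vectors; eps: (m,) grid widths.
    A point is discarded if another point's epsilon-box is no worse in every
    objective and strictly better in at least one.
    """
    boxes = np.floor(points / eps)
    keep = []
    for i, b in enumerate(boxes):
        dominated = any(
            np.all(boxes[j] <= b) and np.any(boxes[j] < b)
            for j in keep + list(range(i + 1, len(boxes)))
        )
        if not dominated:
            keep.append(i)
    return points[keep]

# MRR is to be maximised and Ra minimised, so MRR is negated before filtering
candidates = np.array([[8.2, 2.4], [9.5, 3.1], [7.0, 1.9], [9.1, 2.2], [8.9, 2.9]])
objs = np.column_stack([-candidates[:, 0], candidates[:, 1]])
front = eps_pareto_front(objs, eps=np.array([0.2, 0.1]))
print(np.column_stack([-front[:, 0], front[:, 1]]))   # approximate Pareto front (MRR, Ra)
```

Negating MRR turns the mixed maximize/minimize problem into a pure minimization, mirroring the MRR-versus-Ra trade-off described above.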
4.5 SUMMARY
The fundamental concepts of the Taguchi method and the Response Surface
Method have been discussed in this chapter. Near-dry WEDM experiments
have been conducted using the Taguchi design of experiments to study the
effect of the process parameters on the output responses in Chapter 5.
The performance experiments of the oxygen-mist near-dry WEDM process
have been conducted using the central composite design of the response
surface method, which is used to develop the quadratic models for the
material removal rate and surface roughness. In Chapter 6, the
multi-objective optimization of the oxygen-mist near-dry WEDM process is
performed to solve the two conflicting objectives using the MOEA.
