
6. Experimental Design

6.1 Purpose

Optimization is a common goal of many engineering studies. Improvements in products or processes are
best made using theory and experience supported by experimentation. For an improvement to make a
difference it must ultimately be put to the test. Proper analysis of such results involves Hypothesis Testing
and Analysis of Variance as described in Chapter 5 wherein claims are tested and conclusions drawn based
on the data. Experimental design is a tool used to make the most of experimental work by balancing the
number of experiments with the amount of detail required to support or refute statistical hypotheses.
Designed experiments can take different formats, but all involve fixing the factors of an experiment at prescribed levels during each run while measuring one or more outcomes, or responses, of the study. Designed experiments vary the factors over pre-defined levels in a way that simplifies data analysis and minimizes the number of experiments required to test the hypothesis put forth for the experiment. In this
way meaningful conclusions can be drawn with minimal effort and expense.

6.2 Steps Involved

A typical experimental design involves several steps performed with the aim of limiting the number of
experiments required to achieve the purpose of the study. These steps include:

(i) Screening Experiments to identify main factors,


(ii) Factorial or full-factorial experiments at 2 or more levels of the main factors to develop response
surface maps,
(iii) Hypothesis testing of the optimal factor levels to verify the new product or process is indeed an
improvement over the status quo.

No matter the type of experimental design, any experimental program involves key steps that we will
illustrate using the following example. The Acme R&D staff wants to improve the strength of the metal sheet
sold by the Acme company. They have identified a list of factors that might impact the strength of their
metal sheets including the amount of chemical A, the amount of chemical B, the annealing temperature,
the time to anneal and the thickness of the sheet casting. The first step is to run a screening design to
identify the main factors influencing the metal sheet strength. Correspondingly, those factors that are not
important contributors to the metal sheet strength are eliminated from further study.

Screening Design Goal: identify main factors from multiple potential factors.
(1) Pick high and low levels for each factor,
(2) Generate a screening design using a generator like the Plackett-Burman generator,
(3) Perform experiment,
(4) Analyze data,
(5) Draw conclusions.
For argument's sake, let us conclude that the levels of chemicals A and B are the main factors that survive
the screening design. To optimize the mechanical strength of the metal sheets it is important to understand
the relationship between the strength of the metal sheet and the amount of chemicals A and B in the
formula. Response-surface, factorial and full-factorial designs can be used to generate the experimental
design.

Response-Surface Design Goal: Identify the optimum level of Chemicals A and B that maximize the metal
sheet strength.
(1) Identify the levels of the amount of chemicals A and B to study. Use 2 levels for linear relationships and
3 or more levels for non-linear relationships,
(2) Generate the experimental design using RSD generators such as Box-Behnken or Central-Composite
generators or factorial or full-factorial methods,
(3) Run the experiment,
(4) Analyze the data using ANOVA (See Section 8),
(5) Draw conclusions and develop a model of the response = f(factors).

Once the optimum has been identified the R&D staff wants to verify, with a high level of confidence, that the
new, improved metal sheets have higher strength. Here they run experiments to support the alternate
hypothesis that the strength of the new, improved metal sheet is greater than the strength of the existing
metal sheet: H0: µ = µ0 versus Ha: µ > µ0.

Hypothesis Testing Experiment Goal: Validate the claim that the new, improved metal sheet is stronger than
the current metal sheet.
(1) Run multiple experiments at the optimum levels of chemicals A and B,
(2) Compare the strength of the new metal sheet with multiple samples of the existing metal sheet,
(3) Run the Hypothesis Test and draw conclusions accordingly.
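Step (3) can be sketched as a pooled two-sample t test of H0: µ = µ0 against Ha: µ > µ0. The sketch below uses only the Python standard library; the strength values in the usage example are hypothetical illustrations, not measured Acme data.

```python
import statistics

def two_sample_t(new, old):
    """Pooled two-sample t statistic for H0: mu_new = mu_old vs Ha: mu_new > mu_old."""
    n1, n2 = len(new), len(old)
    # pooled estimate of the common variance
    sp2 = ((n1 - 1) * statistics.variance(new) +
           (n2 - 1) * statistics.variance(old)) / (n1 + n2 - 2)
    t = (statistics.mean(new) - statistics.mean(old)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
    return t, n1 + n2 - 2  # statistic and its degrees of freedom

# hypothetical sheet strengths (new vs. existing) for illustration only
t, df = two_sample_t([312.0, 315.5, 318.1, 314.2], [305.1, 308.4, 306.9, 304.7])
```

H0 is rejected in favor of Ha when t exceeds the upper-tail critical value from a t table at the chosen significance level, as in Chapter 5.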

6.3 Screening Design

Screening Designs are used to identify a small number of dominant factors, often with the intent to conduct a
more extensive investigation later on. Table 6.1 is a representation of a 12 run Plackett-Burman screening
design. A rule of thumb for the total number of runs in a good screening design is 6
more than the number of factors; a 6 factor study therefore needs 12 runs. The first row is unique; each of rows 2
through 11 is obtained from the previous row by moving the entry in column 1 to the last column and shifting
the remaining entries one column to the left (the + under Factor 1 of Row 1 moves to Factor 11 of Row 2 in
Table 6.1). Row 12, the last row, is all negative. The plus and minus signs represent
the high and low levels of each factor. Screening designs only look for linear
relationships between the response and the factors.
Table 6.1: 12 Run Plackett-Burman Screening Design.

Factor
Run 1 2 3 4 5 6 7 8 9 10 11
1 + + - + + + - - - + -
2 + - + + + - - - + - +
3 - + + + - - - + - + +
4 + + + - - - + - + + -
5 + + - - - + - + + - +
6 + - - - + - + + - + +
7 - - - + - + + - + + +
8 - - + - + + - + + + -
9 - + - + + - + + + - -
10 + - + + - + + + - - -
11 - + + - + + + - - - +
12 - - - - - - - - - - -
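The cyclic construction just described can be sketched in a few lines of Python. The first row below is copied from Table 6.1; this is a sketch for the 12-run case only, not a general Plackett-Burman generator.

```python
def plackett_burman_12():
    """Build the 12-run, 11-factor Plackett-Burman design by cyclic shifts."""
    row = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1]  # Row 1 of Table 6.1
    design = [row]
    for _ in range(10):
        # the entry in column 1 moves to the last column; the rest shift left
        row = row[1:] + row[:1]
        design.append(row)
    design.append([-1] * 11)  # Row 12 is all low levels
    return design

design = plackett_burman_12()
```

Each column of the resulting design is balanced: six runs at the high level and six at the low level.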

Example 6.1: Defining a Screening Design.


Consider a condensation polymerization screening design. The researchers believe the following factors
may be important in determining the cycle time to reach complete conversion of the acid functionality in the
reactor: (1) temperature, (2) catalyst type, (3) catalyst amount, (4) initial diol concentration, (5) initial diacid
concentration and (6) reactor design. Design a screening experiment to identify which of these factors are
significant.

Step 1: A 12 run design will be adequate, since 6 factors + 6 = 12 runs are needed for satisfactory results. Build
the design using Table 6.1:

Run:      1  2  3  4  5  6  7  8  9  10 11 12

Factor 1  +  +  -  +  +  +  -  -  -  +  -  -
Factor 2  +  -  +  +  +  -  -  -  +  -  +  -
Factor 3  -  +  +  +  -  -  -  +  -  +  +  -
Factor 4  +  +  +  -  -  -  +  -  +  +  -  -
Factor 5  +  +  -  -  -  +  -  +  +  -  +  -
Factor 6  +  -  -  -  +  -  +  +  -  +  +  -

Step 2: Define + and – for each of the factors. For example, you may hold temperature at 180 ºC and 220
ºC as the two levels.

Step 3: Carry out the 12 experiments using the + and – grid shown and carry out ANOVA analysis to
complete the Hypothesis Test:
H0: α1 = α2 = ... = α6 = 0
Ha: at least one αi ≠ 0

Step 4: Construct an ANOVA table to complete the analysis:

Source   Degrees of Freedom   Sum of Squares   Mean Square   f
a        1                    SSa              SSa/1         MSa/MSE
b        1                    SSb              SSb/1         MSb/MSE
c        1                    SSc              SSc/1         MSc/MSE
d        1                    SSd              SSd/1         MSd/MSE
e        1                    SSe              SSe/1         MSe/MSE
f        1                    SSf              SSf/1         MSf/MSE
error    5                    SSE              SSE/5
total    11                   SST
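A common screening summary, before or alongside the ANOVA above, is to estimate each main effect as the average response at the high level minus the average at the low level; factors whose effects are small relative to the error are dropped. A minimal sketch (the tiny 2-factor design and responses in the usage example are hypothetical, chosen only to make the arithmetic obvious):

```python
def main_effects(design, y):
    """Main effect of each factor: mean response at '+' minus mean response at '-'."""
    effects = []
    for f in range(len(design[0])):
        hi = [y[r] for r, run in enumerate(design) if run[f] > 0]
        lo = [y[r] for r, run in enumerate(design) if run[f] < 0]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects

# hypothetical 2-factor illustration: factor 1 has a large effect, factor 2 a small one
effects = main_effects([[+1, +1], [+1, -1], [-1, +1], [-1, -1]],
                       [10.0, 8.0, 6.0, 4.0])
```

The same function applies directly to the 12-run design and its measured cycle times.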

6.4 Response Surface Designs – Full-Factorial with Replicates

This type of experimental design represents the classic experimental method with one factor varied over
several levels. The underlying concept is illustrated with the following example. In a field experiment, an
experimenter is interested in the amount of fertilizer needed for optimizing the yield of a certain crop. Five
amounts of fertilizer (in pounds per acre) are applied to different plots of ground. In this experiment, the
amount of fertilizer is the factor at five different levels with each plot of land being a separate experimental
unit. If each fertilizer amount is applied to one separate field, then every five fields constitute one
replicate of the experiment. Most experiments involve several observations on each treatment. When a
treatment is applied to more than one experimental unit, we say the treatments are replicated. The method
used for randomly assigning the treatments to the experimental units is called the randomization
procedure; it eliminates the bias that could result from systematic assignment.

Example 6.2: Consider an experiment involving yield from fields planted with three varieties of wheat (the
treatments), labeled A, B and C. It was decided that the treatments should be replicated four times. Thus,
twelve fields (the experimental units) are needed to carry out the study shown below in random order:

Table 6.2: Completely Randomized Design


C B C A
B A C B
C A B A

There are several combinations possible, and the above table illustrates one such combination. This is an
example of randomization without restriction.
Now, suppose the twelve fields are grouped into four experimental stations, each in a different location, with
three fields per station. Because of the potential for differences in weather and soil between locations, it is
logical to insist that each variety is grown at each station. This imposes a restriction on the randomization,
namely that each variety must occur once and only once in each column (since each column designates an
experimental station). Such an experimental design allows some systematic control of the variability due to
known "nuisance" sources and tends to reduce the experimental error.

A possible result of such a restricted randomization is shown below:

Table 6.3: Randomized Block Design


          Station 1   Station 2   Station 3   Station 4
Field 1   A           A           B           C
Field 2   B           C           C           A
Field 3   C           B           A           B

The columns in Table 6.3 are called blocks, and hence the name of the method. The randomized block
design is one of the most widely used experimental designs since blocking reduces the effective
experimental error due to one known source of variation. If an experiment such as the above is performed,
the response data recorded can be generalized in a table of k treatments assigned to b blocks as shown:

Table 6.3: (k × b) Array for Randomized Block Design

Treatment   B1    B2    ...   Bj    ...   Bb    Total   Mean
1           y11   y12   ...   y1j   ...   y1b   T1.     ȳ1.
2           y21   y22   ...   y2j   ...   y2b   T2.     ȳ2.
...
i           yi1   yi2   ...   yij   ...   yib   Ti.     ȳi.
...
k           yk1   yk2   ...   ykj   ...   ykb   Tk.     ȳk.
Total       T.1   T.2   ...   T.j   ...   T.b   T..
Mean        ȳ.1   ȳ.2   ...   ȳ.j   ...   ȳ.b           ȳ..

where
Ti. = sum of the observations for treatment i
T.j = sum of the observations in block j
T.. = sum of all (b·k) observations
ȳi. = mean of the observations for treatment i
ȳ.j = mean of the observations in block j
ȳ.. = mean of all (b·k) observations

Further, let µi. be the average of the b population means for treatment i,
µ.j the average of the k population means in block j,
and µ the average of all (b·k) population means.

To determine if part of the variation in our observations is due to differences among the treatments, we
consider the test:

H0: µ1. = µ2. = ... = µk. = µ        6.1

Ha: the µi. are not all equal

Just as in Chapter 5, an ANOVA analysis is carried out to determine whether the treatment or the block has a
significant influence on the response. The preferred form of the model, assuming that the treatment and the
block effects are additive, is:

yij = µ + αi + βj + εij        6.2

where αi is the effect of treatment i, and βj is the effect of block j. The basic concept is much like that of

the one-way classification except that we must now account in the analysis for the additional effect due to
blocks since variation in two directions is being systematically controlled. If we impose the restrictions that:
∑i αi = 0 (i = 1, ..., k)   and   ∑j βj = 0 (j = 1, ..., b)        6.3

then

µi. = ∑j (µ + αi + βj) / b = µ + αi   and   µ.j = ∑i (µ + αi + βj) / k = µ + βj        6.4
Testing the null hypothesis that the k treatment means µi. all equal µ is now equivalent to testing the hypothesis:

H0: α1 = α2 = ... = αk = 0        6.5

Ha: at least one αi is not equal to zero.

The analysis of variance is done as in Chapter 5:

Source of variation   Sum of squares   Degrees of freedom   Mean square        Computed f
Treatments            SSA              k-1                  SSA/(k-1)          FA = MSA/MSE
Blocks                SSB              b-1                  SSB/(b-1)          FB = MSB/MSE
Error                 SSE              (k-1)(b-1)           SSE/[(k-1)(b-1)]
Total                 SST              bk-1

Example 6.3: The performance of four different machines M1, M2, M3 and M4 are to be evaluated. It is
decided that the same product will be manufactured on these machines by six different machinists in a
randomized block experiment. The machines are assigned in a random order to each operator. Since
dexterity is involved, there will be a difference among the operators in the time needed to make the product.

The following table gives the times, in seconds, needed to assemble the products:

                              Operator
Machine   1       2       3       4       5       6       Total
1         42.5    39.3    39.6    39.9    42.9    43.6    247.8
2         39.8    40.1    40.5    42.3    42.5    43.1    248.3
3         40.2    40.5    41.3    43.4    44.9    45.1    255.4
4         41.3    42.2    43.5    44.2    45.9    42.3    259.4
Total     163.8   162.1   164.9   169.8   176.2   174.1   1010.9

Test the hypothesis at the 0.05 level of significance that the performance of the machines is identical.

The ANOVA table is shown below:

Source of variation   Sum of squares   Degrees of freedom   Mean square   Computed f
Machines              15.93            3                    5.31          3.34
Operators             42.09            5                    8.42          5.30
Error                 23.84            15                   1.59
Total                 81.86            23

The value of f = 3.34 is significant at P = 0.048. One would conclude that the performances of the machines
are not identical.
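The sums of squares above can be reproduced directly from the data table. The sketch below computes the randomized-block decomposition with plain list arithmetic:

```python
# assembly times (s) from Example 6.3: rows = machines (treatments), columns = operators (blocks)
times = [
    [42.5, 39.3, 39.6, 39.9, 42.9, 43.6],
    [39.8, 40.1, 40.5, 42.3, 42.5, 43.1],
    [40.2, 40.5, 41.3, 43.4, 44.9, 45.1],
    [41.3, 42.2, 43.5, 44.2, 45.9, 42.3],
]

def rbd_anova(y):
    """Sum-of-squares decomposition for a randomized block design (k treatments, b blocks)."""
    k, b = len(y), len(y[0])
    total = sum(map(sum, y))
    cf = total ** 2 / (k * b)                                # correction factor T..^2 / (bk)
    sst = sum(v * v for row in y for v in row) - cf
    ssa = sum(sum(row) ** 2 for row in y) / b - cf           # treatments (rows)
    ssb = sum(sum(col) ** 2 for col in zip(*y)) / k - cf     # blocks (columns)
    sse = sst - ssa - ssb
    f_a = (ssa / (k - 1)) / (sse / ((k - 1) * (b - 1)))      # treatment F ratio
    return ssa, ssb, sse, sst, f_a

ssa, ssb, sse, sst, f_a = rbd_anova(times)
```

Up to rounding, this returns the table entries: SSA ≈ 15.92, SSB ≈ 42.09, SSE ≈ 23.85, SST ≈ 81.86 and f ≈ 3.34.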
Graphical diagnostic methods
Graphical displays of the data can provide useful diagnostic insights in ANOVA-type problems as well. For
example, simply plotting the raw observations around each treatment mean gives a feel for the variability
between sample means and within samples. Figure 6.1 depicts such plots for the above example.
Fig. 6.1(a), the residual plot for the different treatments (i.e., machines), reveals that the error variance is
not the same for all machines. The same effect is noted in Fig. 6.1(b), which shows the residuals for the
different blocks (i.e., operators). The model residual plot in Fig. 6.1(c) shows two unusually large
residuals which stand out; it may be wise to go back and study the experimental conditions which
produced these results.

Figure 6.1 Graphical plots for example (Walpole, Myers and Myers, 1998)
(a) Residual plots for the four machines. * indicates multiple residuals
(b) Residual plots for the six operators
(c) Residuals plotted against fitted values

Interaction between blocks and treatments


An implicit and important assumption in the above model design is that the treatment and block effects are
additive. Using Example 6.3 above, it means that if Operator 3 is 0.5 sec faster on the average than
Operator 2 on machine 1, the same difference also holds for machines 2, 3, and 4. This pattern is depicted
in Fig. 6.2(a), where the mean responses of different blocks differ by the same amount from one treatment to
the next. In many experiments, this assumption of additivity does not hold, and the treatment and block
effects interact (see Fig. 6.2-b). For example, operator 3 may be faster by 0.5 seconds on the average than
operator 2 when machine 1 is used, but slower by 0.5 seconds on the average than operator 2
when machine 2 is used. The operators and the machines are now interacting.

Figure 6.2 Population means for (a) additive effects, and (b) interacting effects
(Walpole, Myers and Myers, 1998)

6.5 Incomplete Designs – Latin Squares

When multiple factors are studied at multiple levels the number of experiments required for full-factorial
design becomes large:

Number of Experiments = ∏(i = 1 to I) Levels_i

Consider a 3 factor experiment at 4 different levels each. The number of experiments required to map out
the entire experimental space is given by 4 × 4 × 4 = 64. This number quickly grows as the number of factors
and levels increases. The Latin square is a special design that allows for direct analysis of experimental
data. It is only applicable to studies where the number of levels is the same for each factor, and the
analysis does not allow for the inclusion of any two-factor or higher-order interaction terms. Consider the
3 × 3 Latin square below:

              B
          1   2   3
      1   3   2   1
  A   2   2   1   3
      3   1   3   2

(the table entries are the levels of factor C)

The unique feature of a Latin square is that every level of factor C appears once in each row and once in
each column. There are 12 possible 3 × 3 Latin squares. Latin square designs reduce the required number
of experiments from N^3 to N^2, thereby saving cost and time at the expense of the interaction terms.
Suppose we are interested in the yields of 4 varieties of wheat using 4 different fertilizers over a period of
4 years. The total number of treatment combinations for a completely randomized design would be 64. By
selecting the same number of categories for all three criteria of classification, we may select a Latin square
design and perform the analysis of variance using the results of only 16 treatment combinations. A typical
Latin square, selected at random from all possible 4 × 4 squares, is given below:

Fertilizer Year
1 2 3 4
1 A B C D
2 D A B C
3 C D A B
4 B C D A

The four letters A, B, C and D represent the 4 varieties of wheat that are referred to as treatments. The
rows and columns represent the two sources of variation we wish to control. We note that in this design,
each treatment occurs exactly once in each row and in each column. Such a balanced arrangement allows
the effect of the fertilizer to be separated from that of the year. If interaction between the sources of variation
is present, the Latin square model cannot be used.
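The balance condition (each treatment exactly once per row and per column) is easy to verify mechanically. A small sketch:

```python
def is_latin_square(square):
    """True if every symbol occurs exactly once in each row and each column."""
    n = len(square)
    symbols = set(square[0])
    return (len(symbols) == n
            and all(set(row) == symbols and len(row) == n for row in square)
            and all(set(col) == symbols for col in zip(*square)))

# the 4 x 4 square from the text
wheat = [["A", "B", "C", "D"],
         ["D", "A", "B", "C"],
         ["C", "D", "A", "B"],
         ["B", "C", "D", "A"]]
```

Here `is_latin_square(wheat)` is true, while a grid repeating a letter in any column fails the check.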

The sum of squares identity is:

SST = SSR + SSC + SSTr + SSE        6.6

where SSR and SSC are the row and column sums of squares respectively:

SSR = r ∑i (ȳi.. − ȳ...)²   and   SSC = r ∑j (ȳ.j. − ȳ...)²

SSTr is the treatment sum of squares:

SSTr = r ∑k (ȳ..k − ȳ...)²

and SSE is the error sum of squares.

Table 6.4. The analysis of variance for an (r × r) Latin square design

Source of variation   Sum of squares   Degrees of freedom   Mean square        Computed f
Row                   SSR              r-1                  SSR/(r-1)          FR = MSR/MSE
Column                SSC              r-1                  SSC/(r-1)          FC = MSC/MSE
Treatment             SSTr             r-1                  SSTr/(r-1)         FTr = MSTr/MSE
Error                 SSE              (r-1)(r-2)           SSE/[(r-1)(r-2)]
Total                 SST              r²-1


Example 6.4: Let us reconsider an experiment with the four varieties of wheat denoted by A, B, C and D.
The following table summarizes the data collected in such an experiment, where each entry is the yield of
wheat in kg/plot.

Fertilizer treatment   1981    1982    1983    1984
t1                     A 70    B 75    C 68    D 81
t2                     D 66    A 59    B 55    C 63
t3                     C 59    D 66    A 39    B 42
t4                     B 41    C 57    D 39    A 55

Assuming that the various sources of variation do not interact, test the hypothesis, at the 0.05 significance
level that there is no difference in the average yields of the four varieties of wheat.

The ANOVA table for this case is shown below:


Source of variation   Sum of squares   Degrees of freedom   Mean square   Computed f
Fertilizer            1557             3                    519.000       11.93
Year                  418              3                    139.333       3.20
Treatment             264              3                    88.000        2.02
Error                 261              6                    43.500
Total                 2500             15

For the wheat varieties (treatments), the computed f = 2.02 at 3 and 6 degrees of freedom gives p = 0.2. This
is too large to conclude that the wheat varieties significantly affect the wheat yield.
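The ANOVA entries can be recomputed from the raw yields. The sketch below forms the row, column and treatment totals and the corresponding sums of squares:

```python
# Example 6.4: treatment letters and yields (kg/plot); rows = fertilizer, columns = years
letters = [["A", "B", "C", "D"],
           ["D", "A", "B", "C"],
           ["C", "D", "A", "B"],
           ["B", "C", "D", "A"]]
yields_ = [[70, 75, 68, 81],
           [66, 59, 55, 63],
           [59, 66, 39, 42],
           [41, 57, 39, 55]]

def latin_square_anova(trt, y):
    """Sum-of-squares decomposition for an r x r Latin square design."""
    r = len(y)
    total = sum(map(sum, y))
    cf = total ** 2 / (r * r)
    sst = sum(v * v for row in y for v in row) - cf
    ssr = sum(sum(row) ** 2 for row in y) / r - cf           # rows (fertilizer)
    ssc = sum(sum(col) ** 2 for col in zip(*y)) / r - cf     # columns (years)
    totals = {}                                              # treatment totals by letter
    for i in range(r):
        for j in range(r):
            totals[trt[i][j]] = totals.get(trt[i][j], 0) + y[i][j]
    sstr = sum(v * v for v in totals.values()) / r - cf      # treatments (varieties)
    sse = sst - ssr - ssc - sstr
    f_tr = (sstr / (r - 1)) / (sse / ((r - 1) * (r - 2)))    # treatment F ratio
    return ssr, ssc, sstr, sse, f_tr

ssr, ssc, sstr, sse, f_tr = latin_square_anova(letters, yields_)
```

Up to rounding, this reproduces the table: SSR ≈ 1557, SSC ≈ 418, SSTr ≈ 264, SSE ≈ 261 and f ≈ 2.02.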

6.6 Factorial design

In this section, we shall extend the concepts relating to randomized design to factorial experiments in
general, with multiple factors as well as interaction effects present. Consider an experiment in which the
effects of two factors, A and B, on some response are being investigated. For example, in a biological
experiment, we would like to study the effect of drying time and temperature on the weight of solids left in
the samples of yeast. This is an example of a two-way classification or a two-factor experiment. The
experimental design may, as discussed previously, be:

(a) a completely randomized design, with the various treatment combinations assigned randomly to
all the experimental units, or
(b) randomized complete block design, with the factor combinations assigned randomly to blocks.
The main focus in this section is the use of a completely randomized design with a factorial experiment, i.e.,
one involving experimental trials at all factor combinations. For example, if in the temperature-drying time
example we assume three levels of each factor and n = 2 runs at each of the nine combinations, there would be
18 experimental units (physical samples of material). This is an example of a two-factor
factorial in a completely randomized design.

There is a difference between this case and the one-factor experiment, as demonstrated by looking at the
degrees of freedom. If the yeast experiment were treated as a one-factor problem with nine levels, we would have:

Treatment combinations = 8 degrees of freedom; Error = 9 degrees of freedom.

For the two-factor case,

Combinations = 8 (temperature = 2, drying time = 2, and interaction = 4); Error = 9.

The simplest factorial design example is the two-factor analysis of variance. Consider the case of n
replications of the treatment combinations determined by a levels of factor A and b levels of factor B. The
observations can be classified by means of a rectangular array where the rows represent the levels of factor
A and the columns represent the levels of factor B. Each treatment combination defines a cell in the array.
Thus we have (a·b) cells, each cell containing n observations. We denote the k-th observation taken at the
i-th level of factor A and the j-th level of factor B by yijk.

Note that we will assume during this analysis that all (a.b) populations have the same variance.
Table 6.5. Array for the two-factor factorial design

                        B
A        1        2        ...      b        Total    Mean
1        y111     y121     ...      y1b1     Y1..     ȳ1..
         y112     y122              y1b2
         ...      ...               ...
         y11n     y12n              y1bn
2        y211     y221     ...      y2b1     Y2..     ȳ2..
         y212     y222              y2b2
         ...      ...               ...
         y21n     y22n              y2bn
...
a        ya11     ya21     ...      yab1     Ya..     ȳa..
         ya12     ya22              yab2
         ...      ...               ...
         ya1n     ya2n              yabn
Total    Y.1.     Y.2.     ...      Y.b.     Y...
Mean     ȳ.1.     ȳ.2.     ...      ȳ.b.              ȳ...

Each observation in the table can be written as:

yijk = µij + εijk        6.9

where εijk measures the deviation of the observed yijk value in the (ij)-th cell from the population
mean µij. If (αβ)ij denotes the interaction effect of the i-th level of factor A and the j-th level of
factor B, αi the effect of the i-th level of factor A, βj the effect of the j-th level of factor B, and
µ the overall mean, we have:

µij = µ + αi + βj + (αβ)ij        6.10

on which we impose the restrictions:

∑i αi = 0,   ∑j βj = 0,   ∑i (αβ)ij = 0,   ∑j (αβ)ij = 0        6.11

The three hypotheses to be tested are:

- H0: α1 = α2 = ... = αa = 0 against Ha: at least one of the αi is not equal to zero,
- H0: β1 = β2 = ... = βb = 0 against Ha: at least one of the βj is not equal to zero,
- H0: all (αβ)ij = 0 against Ha: at least one of the (αβ)ij is not equal to zero.
Each of the tests will be based on a comparison of independent estimates of the variance provided
by splitting of the total sum of squares of the data into 4 components:

SST = SSA + SSB + SS(AB) + SSE 6.12


where

SSA = sum of squares of the main effect of A = bn ∑i (ȳi.. − ȳ...)²

SSB = sum of squares of the main effect of B = an ∑j (ȳ.j. − ȳ...)²

SS(AB) = interaction sum of squares for A and B = n ∑i ∑j (ȳij. − ȳi.. − ȳ.j. + ȳ...)²

Yij. = sum of the observations in the (ij)-th cell
Yi.. = sum of the observations for the i-th level of factor A
Y.j. = sum of the observations for the j-th level of factor B
Y... = sum of all (a·b·n) observations
ȳij. = mean of the observations in the (ij)-th cell
ȳi.. = mean of the observations for the i-th level of factor A
ȳ.j. = mean of the observations for the j-th level of factor B
ȳ... = mean of all (a·b·n) observations

Table 6.6. Analysis of variance for the two-factor experiment with n replications

Source of variation       Sum of squares   Degrees of freedom   Mean square           Computed f
Main effects
  A                       SSA              a-1                  SSA/(a-1)             FA = MSA/MSE
  B                       SSB              b-1                  SSB/(b-1)             FB = MSB/MSE
Two-factor interaction
  AB                      SS(AB)           (a-1)(b-1)           SS(AB)/[(a-1)(b-1)]   FAB = MSAB/MSE
Error                     SSE              ab(n-1)              SSE/[ab(n-1)]
Total                     SST              abn-1

Example 6.5
An experiment is conducted to determine which of 3 different missile systems is preferable. The
propellant burning rate for 24 static firings was measured. Four different propellant types were
used. The experiment yielded duplicate observations of burning rates at each combination of the
treatments. The data, after coding, are given in the following table:

Missile system Propellant type


b1 b2 b3 b4
a1 34.0 30.1 29.8 29.0
32.7 32.8 26.7 28.9
a2 32.0 30.2 28.7 27.6
33.2 29.8 28.1 27.8
a3 28.4 27.3 29.7 28.8
29.3 28.9 27.3 29.1

The following hypotheses tests are to be studied:


(a) there is no difference in the mean propellant burning rates when different missile systems are
used, i.e., α1 = α2 = α3 = 0,

(b) there is no difference in the mean propellant burning rates of the 4 propellant types, i.e.,
β1 = β2 = β3 = β4 = 0,

(c) there is no interaction between the different missile systems and the different propellant types,
i.e., (αβ)11 = (αβ)12 = ... = (αβ)34 = 0.


The results of the analysis of variance are tabulated below:

Source of variation   Sum of squares   Degrees of freedom   Mean square   Computed f   P value
Missile system        14.52            2                    7.26          5.85         0.0170
Propellant type       40.08            3                    13.36         10.77        0.0010
Interaction           22.17            6                    3.70          2.98         0.0512
Error                 14.91            12                   1.24
Total                 91.68            23

Note that the model (11 degrees of freedom) is tested first; the system, type, and system-by-type
interaction are then tested separately. The f-test on the model yields P = 0.0030, which tests the
accumulation of the two main effects and the interaction.

We conclude that:

(a) Reject the null hypothesis, since P = 0.017. The different missile systems result in different mean
propellant burning rates.

(b) Reject the null hypothesis, since P = 0.0010. We conclude that the mean propellant burning rates
are not the same for the four propellant types.

(c) The interaction is barely insignificant at the 0.05 level, since P = 0.0512. One should consider
interaction effects to be possibly present.
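The table entries can be verified directly from the coded burning-rate data. The sketch below carries out the two-factor decomposition with n replicates per cell:

```python
# Example 6.5: burning rates for a = 3 missile systems, b = 4 propellant types, n = 2 replicates
cells = {  # cells[(i, j)] = the n observations at level i of A and level j of B
    (0, 0): [34.0, 32.7], (0, 1): [30.1, 32.8], (0, 2): [29.8, 26.7], (0, 3): [29.0, 28.9],
    (1, 0): [32.0, 33.2], (1, 1): [30.2, 29.8], (1, 2): [28.7, 28.1], (1, 3): [27.6, 27.8],
    (2, 0): [28.4, 29.3], (2, 1): [27.3, 28.9], (2, 2): [29.7, 27.3], (2, 3): [28.8, 29.1],
}

def two_factor_anova(cells, a, b, n):
    """SST = SSA + SSB + SS(AB) + SSE for an a x b factorial with n replicates."""
    grand = sum(sum(v) for v in cells.values())
    cf = grand ** 2 / (a * b * n)
    sst = sum(x * x for v in cells.values() for x in v) - cf
    ssa = sum(sum(sum(cells[i, j]) for j in range(b)) ** 2 for i in range(a)) / (b * n) - cf
    ssb = sum(sum(sum(cells[i, j]) for i in range(a)) ** 2 for j in range(b)) / (a * n) - cf
    ss_cells = sum(sum(v) ** 2 for v in cells.values()) / n - cf
    ssab = ss_cells - ssa - ssb       # interaction
    sse = sst - ss_cells              # within-cell (error)
    return ssa, ssb, ssab, sse, sst

ssa, ssb, ssab, sse, sst = two_factor_anova(cells, 3, 4, 2)
```

Up to rounding this reproduces SSA ≈ 14.52, SSB ≈ 40.08, SS(AB) ≈ 22.17, SSE ≈ 14.91 and SST ≈ 91.68.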

6.7 Response Surface Designs


A special class of statistically designed experiments are those aimed at developing fitted models
describing the relationship between a response and a set of factors. These models can be linear or
non-linear depending on the number of levels studied during experimentation. For example, if only
2 levels are considered for each factor, then only linear relationships can be developed; the
limitation is due to the limited degrees of freedom left for the ANOVA analysis. A 3 level
experiment can account for curvature. A priori understanding of the underlying theory can suggest the
proper form for the model to take, including the nature of the non-linearity. The table below shows a
3^k experimental design for a two factor experiment (k = 2), requiring a total of 9 experiments:

A   B
-   -
-   0
-   +
0   -
0   0
0   +
+   -
+   0
+   +
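The grid above is just the Cartesian product of the factor levels, so for general k it can be generated with itertools (a sketch):

```python
from itertools import product

def full_factorial(k, levels=(-1, 0, +1)):
    """All level combinations for k factors: the 3^k grid when three levels are given."""
    return list(product(levels, repeat=k))

grid = full_factorial(2)  # the 9-run, two-factor design shown above
```

Passing `k=3` or `k=4` gives the 27- and 81-run grids discussed next.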

For k = 3, 27 experiments are required, and for k = 4, 81. The number quickly grows, and 3^k
designs become impractical as k gets much above 3. Consider instead a rotatable design, where the fitted
model estimates the response with equal precision at all points in the factor space at equal
distance from the center of the design. These designs require fewer experiments:

Number of Runs = n = 2^k + 2k + m,   n < 3^k

where m is the number of repeats at the center point. For k = 3 and m = 3 we have n = 17
experiments, which is less than the 27 required for a 3^k experiment. Central-Composite and Box-
Behnken designs are two types of rotatable designs allowing Response Surface models to be developed with
fewer experiments. Figure 6.5 shows the 3 factor grid of the Box-Behnken design:

A       B       C
+/-1    +/-1    0
+/-1    0       +/-1
0       +/-1    +/-1
0       0       0

[Figure 6.5: the Box-Behnken design points on the (A, B, C) cube, each axis running from -1 to +1.]

The three center-point repeats are not shown. Comparing the differences of the planar averages
for each of the three factors with the standard deviation of the center-point experiments gives a rough
measure of the significance of each factor.
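The run-count comparison between a full 3^k grid and a central-composite rotatable design can be tabulated quickly (a sketch; m is the number of center-point repeats):

```python
def full_factorial_runs(k, levels=3):
    """Runs for a full factorial with the same number of levels per factor."""
    return levels ** k

def central_composite_runs(k, m):
    """Runs for a central-composite design: 2^k corner points,
    2k axial (star) points, and m center-point repeats."""
    return 2 ** k + 2 * k + m

# for k = 3 factors with m = 3 center repeats, the rotatable design needs
# fewer runs than the 3^3 = 27 full factorial
saving = full_factorial_runs(3) - central_composite_runs(3, 3)
```

The saving widens rapidly with k, which is why rotatable designs are preferred once more than a few factors must be mapped.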
