
Design of Experiments - An Introduction

Statistical design of experiments can be used very fruitfully in finding empirical solutions to industrial problems. The areas of application include research, product design, process design, production troubleshooting, and production optimization. The response or objective in an industrial study is usually a function of many interrelated factors, so solving these problems is typically not straightforward. There are two broad categories of approaches to these problems: (1) finding solutions by invoking known theory or facts, including experience with similar problems or situations, and (2) finding solutions through trial and error or experimentation. Statistical design and analysis methods are very useful for the second approach, and they are much more effective than any traditional, non-statistical method.

Classical Versus Statistical Approaches to Experimentation:


A typical strategy for an industrial study used by someone unfamiliar with statistical plans is the one-at-a-time design. In this approach, a solution is sought by methodically varying one factor at a time (usually performing numerous experiments at many different values of that factor), while all other factors are held constant at some reasonable values. This process is then repeated for each of the other factors, in turn. One-at-a-time is a very time-consuming strategy, but it has at least two good points. First, it is simple, which is not a virtue "to be sneezed at". And second, it lends itself easily to a graphical display of results. Since people think best in terms of pictures, this is also a very important benefit. Unfortunately, one-at-a-time usually results in a less than optimal solution to a problem, despite the extra work and consequent expense. The reason is that one-at-a-time experimentation is only a good strategy under what we consider to be unusual circumstances: (1) it is a good strategy if the response is a complicated function of the factor (perhaps multi-modal), which requires many levels of X to elucidate, and (2) it is a good strategy only if the effect of the factor being studied (and therefore its optimum value) is not changed by the level of any of the other factors. That is, the one-at-a-time strategy works only if the effects are strictly additive and there are no interactions. As was just stated, but is worth repeating, these circumstances do not typically exist in the real world, and so one-at-a-time ends up being an exceedingly poor approach to problem solving.

A more typical set of circumstances is assumed by statistical designs: (1) over the experimental region, the response is smooth with, at most, some curvature but no sharp kinks or inflection points, and (2) the effect of one factor can depend on the level of one or more of the other factors; in other words, there may be some interactions between the factors. If these two assumptions hold, the classical one-at-a-time approach could do very badly. If the first assumption holds, one-at-a-time would require many more experiments to do the same job, because a smaller number of levels of a factor is needed to fit a smooth response than to fit a complicated one. And if the second assumption is true, one-at-a-time could lead to completely wrong conclusions. For example, if we look at the effect of time and temperature on yield in one-at-a-time fashion, we would get the results shown in Figure 7.1 (top and middle). We would think that the best we can do is about 86% yield. But that is, in fact, very far from the true optimum shown in Figure 7.1 (bottom). The reason for the failure of one-at-a-time experimentation is the interaction that exists between time and temperature. In other words, the best time depends on the temperature (or vice versa). It cannot be repeated too often that these interactions are the rule, not the exception. A statistical strategy (i.e., a factorial design) would not have led us astray. Incidentally, the potential for failing with a one-at-a-time strategy gets even stronger in the real world, where experimental errors cloud the issue further.

[Figure 7.1 One-at-a-time experimentation on the yield of a reaction. Top: yield (70-90%) versus temperature (200-250 °C) with time held constant at 10 hrs. Middle: yield (70-90%) versus time (2-12 hrs) with temperature held constant at 220 °C. Bottom: contour plot of yield (65-95%) over time (2-12 hrs) and temperature (200-250 °C) when both factors are varied, showing the true optimum that the one-at-a-time runs miss.]
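The same failure mode can be demonstrated numerically. The short Python sketch below uses a made-up yield function with a time-temperature interaction (purely illustrative, not the data behind Figure 7.1): the one-at-a-time search stalls at a lower yield than a simple grid over both factors.

```python
# Illustrative sketch (made-up yield surface, not the data behind Figure 7.1):
# a time-temperature interaction makes one-factor-at-a-time stop short of the true optimum.

def yield_pct(time_hr, temp_c):
    """Hypothetical yield with an interaction between time and temperature."""
    return 90 - 0.08 * (temp_c - 230 + 2.5 * (time_hr - 7)) ** 2 - 0.5 * (time_hr - 7) ** 2

temps = range(200, 251, 10)
times = range(2, 13, 2)

# One-at-a-time: vary temperature at a fixed 10-hour run, then vary time at the best temperature.
best_temp = max(temps, key=lambda t: yield_pct(10, t))
best_time = max(times, key=lambda h: yield_pct(h, best_temp))
print("one-at-a-time:", best_time, best_temp, round(yield_pct(best_time, best_temp), 1))

# A factorial-style grid over both factors finds a distinctly better combination.
best = max(((h, t) for h in times for t in temps), key=lambda p: yield_pct(p[0], p[1]))
print("grid search:  ", best, round(yield_pct(best[0], best[1]), 1))
```

On this made-up surface the one-at-a-time route settles at about 85% yield, while the grid finds roughly 89%, mirroring the situation sketched in Figure 7.1.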

But one-at-a-time is still far better than another common strategy used in industrial research, namely sheer guesswork. Using this strategy, researchers assume that they know the solution to a problem, as if they had the appropriate theory or experience to guide them. Then they run a few confirmatory experiments for validation. If the confirmatory experiments do not bear out the guess, another guess is made and tested. The result is a very haphazard approach. These poor strategies for studying many factors simultaneously in industrial research often deplete all research funds before a satisfactory solution can be reached. This, in turn, leads to less than optimal product designs, inefficient production processes, and the complete abandonment of many promising ideas. Studying the effects of many factors simultaneously in an efficient way, while at the same time allowing valid conclusions to be drawn, is the purpose of experimental design techniques.

Some Definitions:
Experiment (also called a Run) - an action in which the experimenter changes at least one of the factors being studied and then observes the effect of his/her action(s). Note that the passive collection of historical data is not experimentation.

Experimental Unit - the item under study upon which something is changed. In a chemical experiment it could be a batch of material that is made under certain conditions. The conditions would be changed from one unit (batch) to the next. In a mechanical experiment it could be a prototype model or a fabricated part.

Factor (also called an Independent Variable and denoted by the symbol X) - one of the variables under study, which is being deliberately controlled at or near some target value during any given experiment. Its target value is changed in some systematic way from run to run in order to determine what effect it has on the response(s).

Background Variable (also called a Lurking Variable) - a variable of which the experimenter is unaware or cannot control, and which could have an effect on the outcome of an experiment. The effects of these lurking variables should be given a chance to "balance out" in the experimental pattern. Later in this book we will discuss techniques (specifically randomization and blocking) to help ensure that goal.

Response (also called a Dependent Variable and denoted by the symbol Y) - a characteristic of the experimental unit, which is measured during and/or after each run. The value of the response depends on the settings of the independent variables (X's).

Experimental Design (also called Experimental Pattern) - the collection of experiments to be run. We have also been calling this the experimental strategy.

Experimental Error - the difference between any given observed response, Y, and the long-run average (or "true" value of Y) at those particular experimental conditions. This error is a fact of life. There is variability (or imprecision) in all experiments. The fact that it is called "error" should not be construed as meaning it is due to a blunder or mistake. Experimental errors may be broadly classified into two types: bias errors and random errors. A bias error tends to remain constant or follow a consistent pattern over the course of the experimental design. Random errors, on the other hand, change value from one experiment to the next with an average value of zero. The principal tools for dealing with bias errors are blocking and randomization (of the order of the experimental runs). The tool to deal with random error is replication. All of these will be discussed in detail later.

Some of the terms are illustrated in Figure 7.2. It is a study of a chemical reaction (A + B → P) consisting of nine runs to determine the effects of two factors (time and temperature) on one response (yield). One run (or experiment) consists of setting the temperature, allowing the reaction to go on for the specified time, and then measuring the yield. The design is the collection of all runs that will be (or were) made.

FACTORS                           RESPONSE
Time, minutes   Temperature, °C   Yield, %
 75             220               61.3
150             220               75.4
300             220               81.3
 60             230               71.6
120             230               79.6
240             230               86.4
 45             240               77.2
 90             240               85.3
180             240               87.4

(Each row is one EXPERIMENT; the full set of nine rows is the DESIGN.)

Figure 7.2 Example of an Experimental Study with Terminology Illustrated.

Why Experiment? (Analysis of Historical Data)


Frequently in industrial settings, there is a wealth of data already available about the process of interest in the form of process logs and, more recently, computer databases. It is tempting, therefore, to abandon an (expensive) experimental study in favor of a (cheap) analysis of the historical data on hand. On the surface, this looks very reasonable. After all, there has certainly been variation in all of the factors of interest over the life of the process. And there is certainly much more data available from the logs than could ever be collected in an experimental program, so the precision should be excellent. Why then ever bother with experiments at the plant level? The reasons are many. All of them have to do with the inadequacies inherent in unplanned data:

Correlation ≠ Causation
If a significant correlation is found between the response and an independent variable (factor), it may be interesting, but it does not prove causation. For example, let us say that someone who had hip surgery years ago noticed that when their hip ached it was a sure sign that it would rain within 12 hours. It may be a perfect correlation, but it does not mean that they should bruise their hip to make it ache if their lawn needs rain. Notice that there may be value in the correlation as a predictor of what will happen. But to use it to try to control the system is very likely folly.

Correlated Factors
Over time, there may have been quite a bit of variation in the factors of interest, but often they move up and down together. It is then impossible to sort out which of the factors, if any, is having an impact on the response. For example, in the gasification of coal, it may be useful to know which impurities in the coal are deleterious to yield. The amounts of the impurities may change quite a bit, but if they stay in roughly the same proportions, it is impossible to sort out which impurity is the culprit, if any.

No Randomization - Bias
Since there is no randomization in the variations of the factors with time, the door is wide open for biases to influence the results. For example, if a plant switched from one supplier of raw materials to another and the yield then dropped, one explanation is that the new raw material is worse than the old one. However, another explanation is that normal drifting of the process, due to other things changing, gave a worse yield after the switch. So the raw material source may have had nothing to do with the drop in yield. Note: a valid test would be to randomly switch back and forth several times between suppliers.

Tight Control of Factors


If an independent variable is known (or thought) to be important, then it will be controlled tightly if possible. Therefore, there will not be enough variation in that factor to get a measurable effect on the response. The few data points that may exist in which these factors do vary more widely will have occurred during plant "upsets". That is not the kind of data that one wants to use to draw sound conclusions.

Incomplete Data
Plants are quite different from laboratories and pilot plants, in that as few variables are measured and recorded as possible, consistent with being able to control the process. Therefore, even in the case when all important variables are known, which is unusual, they are not usually recorded. And to make matters worse, in older plants where much of the data was/is recorded by hand, even the values that were supposed to be measured and recorded may be missing or suspect. Therefore, the analysis of historical data should be done with great care. This does not mean that it is a totally worthless exercise. Some interesting correlations may emerge. These can be used for predictive purposes, or they may point to some factors that were previously thought to be unimportant that should be studied in an experimental program. But this analysis should be undertaken with the knowledge that the chances of gleaning any worthwhile information from the data are quite low (<10%). And it must be kept in mind that an analysis of historical data is NEVER a replacement for an experimental program if one needs to determine causation between the factors and the response(s). "To find out what happens to a system when you interfere with it, you have to interfere with it (not just passively observe it)."2

Diagnosing the Experimental Environment


There is no single experimental design that is best in all possible cases. The best design depends very much on the environment in which the experimental program will be carried out3. Figure 7.3 on the following page shows different possibilities.

Number of Factors
The single most important characteristic of the experimental environment is how many independent variables are to be studied. If the number is small (say three or less), then a design giving fairly complete information on all of them may be reasonable "right off the bat". However, if there are many variables, it is usually more reasonable to proceed in stages - first sifting out the variables of major importance, and then following up with more effort on them.

Prior Knowledge
The amount of prior knowledge also shapes the experimental program to a very large degree. When the area to be studied is new, there are generally a large number of potential variables that may have an effect on the responses. However, when the area has been studied extensively in the past, the scope of the experiments is generally to further elucidate in detail the effects of a few of the key variables. If theory is available, a mechanistic model may be desirable, and experiments can be set up for determining the unknown parameters in the model. The best experiments to run are specific to the model, but often experimental designs used for empirical models are good as a starting point.4

Cost of an Experiment
The size of a reasonable experimental program is, of course, dictated by the cost of an experimental run versus the potential benefits. The cheaper an experiment is, the more thoroughly we can study the effects of the independent variables for a reasonable total cost.

Precision
Generally speaking, the reason for experimenting is to be able to make predictions about what will happen if you take similar actions in the future. For example, the reason for studying the effect of pH on the yield of a chemical reaction is to be able to say that the yield is 4% higher at a pH of 9 than it is at a pH of 7. The more precise you want your predictions to be, and the less precise your individual data are, the greater the number of experiments required.
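For a balanced two-level design, the standard error of an estimated effect is roughly 2σ/√N, where σ is the run-to-run standard deviation and N is the total number of runs, so halving the uncertainty requires about four times as many runs. The small Python sketch below turns that rule of thumb into a run-count estimate (the σ and target values are purely illustrative):

```python
# Sketch: how many runs of a two-level design are needed for a target precision?
# For a balanced two-level design the standard error of an effect is about 2*sigma/sqrt(N).
from math import ceil

def runs_needed(sigma, target_se):
    """Smallest run count N with 2*sigma/sqrt(N) <= target_se (illustrative rule of thumb)."""
    return ceil((2 * sigma / target_se) ** 2)

print(runs_needed(sigma=3.0, target_se=2.0))   # 9 -> in practice round up to a design size, e.g. 12
print(runs_needed(sigma=3.0, target_se=1.0))   # 36: halving the standard error quadruples the runs
```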

Iteration Possible
If the duration of an experiment is relatively short, it is usually reasonable to experiment in small, bite-sized stages and iterate toward your final goal. If, on the other hand, the time for an experiment to be completed is long (such as stability testing), it would be necessary to initially lay out a fairly extensive pattern of experiments. When it is possible to iterate, the first stage of an experimental program should usually be a set of screening experiments. At this stage, all of the factors that could conceivably be important are examined. Since the cost of looking at an extra five or ten variables is relatively low with a screening design, it is much cheaper in the long run to consider some extra variables at this stage than to find out later that you neglected an important variable. The next stage is generally a constrained optimization design in which the major variables are examined for interactions and better estimates are obtained of their linear effects. The minor variables are dropped from consideration after the screening stage (i.e., they are held at their most economical values). If further optimization is necessary, a full unconstrained optimization (response surface) design may now be run to allow for curvature in the effects of the factors on the responses.

Generally, the constrained optimization design builds upon the screening design, and the unconstrained optimization design just adds more points to the constrained optimization design, so that no points are wasted going from one stage to the next.

If it is necessary to go further and use a theoretical model, the full unconstrained optimization design is usually a good starting point for the estimation of parameters in the model. Additional runs would have to be designed via computer. This topic is beyond the scope of this text, and so it will not be discussed any further.

Figure 7.3 The Objective of Experimentation is to Increase our Knowledge

The figure depicts knowledge of the system increasing from its present level (near 0%) toward the goal (100%) as the program moves through the stages summarized below.

Objective: Screening
No. of Factors: 5-20, continuous and/or discrete
Model: Linear
Information: Identify important variables; crude predictions of effects
Designs: Fractional factorial or Plackett-Burman

Objective: Constrained Optimization
No. of Factors: 3-6, continuous and/or discrete
Model: Linear + cross-products (interactions)
Information: Good predictions of effects and interactions
Designs: Two-level factorial (+ center points)

Objective: Unconstrained Optimization
No. of Factors: 2-4, continuous only
Model: Linear + cross-products + quadratics
Information: Good predictions of effects, interactions and curvature
Designs: Central composite or Box-Behnken

Objective: Extrapolation or Optimization
No. of Factors: 1-5
Model: Mechanistic model
Information: Estimate parameters in theoretical model
Designs: Special (computer generated)

Example of a Complete Experimental Program - Chemical Process System:


An example may be useful at this point to illustrate the typical steps involved in solving a specific industrial problem experimentally. Let us say we have a chemical process under development. It consists of a batch reaction to make a product, and the objective is to find the conditions which give the best yield. The process involves charging a stirred tank reactor with solvent, catalyst, and an expensive reactant (Reactant 1). A second reactant (Reactant 2), which is inexpensive, is added slowly. Reactant 2 is added in excess to ensure that all of Reactant 1 is consumed. Some yield is lost due to the formation of byproducts. After all Reactant 2 is added, the reaction is quenched by adding cold solvent, and then the product is separated by distillation. Several variables were thought to possibly have an important influence on yield. They are listed in Table 7.1 along with a reasonable range of values for each factor.

Table 7.1 Variables to be Studied for Chemical Reaction Example

Variable   Definition                       Label   Low Level (-1)   High Level (+1)
A          Temperature, °C                  X1      75               85
B          Reactant 2, % excess             X2      4                8
C          Time to Add Reactant 2, min      X3      10               20
D          Agitation, rpm of stirrer        X4      100              200
E          Solvent : Reactant 1 Ratio       X5      1:1              2:1
F          Catalyst Concentration, mg/l     X6      20               40

Step One: Screening


Since there were six factors to be investigated, the first step in the experimental program was to find out which variables were the most important and focus the most attention on them. It is simply too expensive to study every factor thoroughly, and it is a waste of resources to lavish attention on a variable that has a minor impact. Therefore, we start with screening experiments, which corresponds to the first step in increasing our knowledge. This step is shown in Figure 7.3 semi-graphically as the first column in the figure.

In our example, we used a twelve-run Plackett-Burman design. The design is shown in Table 7.2 along with the yield data that were collected for each run. Some details of the analysis of the data are shown in the table, but will not be discussed here. The main object of the experiments was to find out which variables were important, that is, which variables had the greatest impact on yield when they were changed. The measure of that importance is the effect of the variable, which is how much the yield changed (on average) when the variable was changed from its low value to its high value. These effects are shown near the bottom of Table 7.2. For example, when the reaction temperature, Factor A, was increased from 75 °C to 85 °C, the yield increased by 24.1% on average. When the variability of the measurements was taken into account, only three variables were found to be significant: A, C, and F. They are the reaction temperature, the addition time of Reactant 2, and the catalyst concentration. Furthermore, they all had positive effects on the yield, which means that the higher the variable value, the higher the yield. The best yield obtained in this set of experiments can be seen to be at the high values of the three important factors (Run 10).

Table 7.2 Experiments Run to Determine Which Variables Are Important (Screening Design - Plackett-Burman)

Run      A     B     C     D     E     F     --------- Unassigned ---------    Yield
        X1    X2    X3    X4    X5    X6     X7    X8    X9    X10   X11
  1     +1    +1    -1    +1    +1    +1     -1    -1    -1    +1    -1        62.7
  2     +1    -1    +1    +1    +1    -1     -1    -1    +1    -1    +1        74.9
  3     -1    +1    +1    +1    -1    -1     -1    +1    -1    +1    +1        44.9
  4     +1    +1    +1    -1    -1    -1     +1    -1    +1    +1    -1        72.1
  5     +1    +1    -1    -1    -1    +1     -1    +1    +1    -1    +1        61.3
  6     +1    -1    -1    -1    +1    -1     +1    +1    -1    +1    +1        54.1
  7     -1    -1    -1    +1    -1    +1     +1    -1    +1    +1    +1        43.2
  8     -1    -1    +1    -1    +1    +1     -1    +1    +1    +1    -1        79.8
  9     -1    +1    -1    +1    +1    -1     +1    +1    +1    -1    -1         8.6
 10     +1    -1    +1    +1    -1    +1     +1    +1    -1    -1    -1        84.2
 11     -1    +1    +1    -1    +1    +1     +1    -1    -1    -1    +1        77.4
 12     -1    -1    -1    -1    -1    -1     -1    -1    -1    -1    -1        10.8

Effect 24.1 -3.35 32.08 -6.13  6.85  23.88  0.859 -1.37 0.974  6.6   6.237
t      7.56 -1.05 10.08 -1.93  2.152 7.502  0.27  -0.43 0.306  2.073 1.959

sE = 3.183 (estimated from the unassigned columns); t*(5) = 2.571. Important variables are X1, X3 and X6 (A, C and F).

The other important conclusion at the end of our screening experiments was that the remaining factors were not of major importance, and therefore they were set to some reasonable values and ignored for the rest of the experimental program. This means that factor B, the excess of Reactant 2, was set to 4% excess; factor D, the speed of the agitator, was set to 100 rpm; and factor E, the solvent : Reactant 1 ratio, was set to 1:1. All were the low values of the factors, which were picked to minimize cost.
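To make the effect calculation above concrete, here is a minimal Python sketch (illustrative only; the chapter itself relies on a statistical package) that computes the main effect of factor A from the design column and yields of Table 7.2:

```python
# Illustrative sketch: computing a main effect from a two-level screening design.
# Design column and yields are taken from Table 7.2; the helper name is our own.

def main_effect(column, yields):
    """Average response at the high (+1) level minus the average at the low (-1) level."""
    high = [y for x, y in zip(column, yields) if x == +1]
    low = [y for x, y in zip(column, yields) if x == -1]
    return sum(high) / len(high) - sum(low) / len(low)

# Factor A (X1) settings and observed yields for the twelve Plackett-Burman runs
A = [+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1, -1]
yields = [62.7, 74.9, 44.9, 72.1, 61.3, 54.1, 43.2, 79.8, 8.6, 84.2, 77.4, 10.8]

print(round(main_effect(A, yields), 1))  # about 24.1, matching the Effect row in Table 7.2
```

Repeating the same calculation column by column reproduces the full Effect row of the table.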

Step Two: Crude Optimization

Follow-up experiments were then conducted on the three important factors to determine optimum operating conditions. The strategy used was based upon a constrained optimization design called a factorial design, which makes sure we are in the appropriate region. The factorial design is outlined in the second column of Figure 7.3. An important feature of this design is that it can be augmented easily to find an unconstrained optimum, which is our ultimate goal.
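As a sketch of how the runs in Table 7.3 can be laid out (the levels and center points are those used in the chapter; the variable names are our own), a two-level factorial with replicated center points is easy to enumerate:

```python
# Minimal sketch: laying out a two-level factorial in three factors plus center points.
# Levels match those used in Table 7.3; the coding follows the usual -1/0/+1 convention.
from itertools import product

center = {"A": 85, "C": 20, "F": 40}      # best conditions from the screening stage
half_width = {"A": 5, "C": 5, "F": 10}    # half the range spanned by each factor

runs = []
for xa, xc, xf in product((-1, +1), repeat=3):           # the 8 factorial corners
    runs.append({"A": center["A"] + xa * half_width["A"],
                 "C": center["C"] + xc * half_width["C"],
                 "F": center["F"] + xf * half_width["F"]})
runs += [dict(center) for _ in range(3)]                  # 3 replicated center points

for i, run in enumerate(runs, 1):
    print(i, run)
```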

Table 7.3 Experiments Run to Optimize Important Variables (Factorial Design)

Run     A    C    F     X1    X3    X6    X1X3   X1X6   X3X6    Yield
 1     80   15   30     -1    -1    -1     +1     +1     +1     70.2
 2     90   15   30     +1    -1    -1     -1     -1     +1     71.1
 3     80   25   30     -1    +1    -1     -1     +1     -1     74.6
 4     90   25   30     +1    +1    -1     +1     -1     -1     55.3
 5     80   15   50     -1    -1    +1     +1     -1     -1     69.5
 6     90   15   50     +1    -1    +1     -1     +1     -1     50.1
 7     80   25   50     -1    +1    +1     -1     -1     +1     75.5
 8     90   25   50     +1    +1    +1     +1     +1     +1     29.8
 9     85   20   40      0     0     0      0      0      0     78.8
10     85   20   40      0     0     0      0      0      0     80.3
11     85   20   40      0     0     0      0      0      0     81.0

Effect              -20.8  -6.42 -11.5  -11.7  -11.7   -0.7     Curvature = 18.01 (center mean 80.0 - corner mean 62.0)
t                   -26.4  -8.13 -14.6  -14.8  -14.8   -0.89    t = 23.82 (SC = 0.756)

s = 1.117   SE = 0.79   t*(2) = 4.303

The factorial design used in this study is shown in Table 7.3. It should be noted that the experiments were centered around the conditions found to be best up to that point: a temperature (A) of 85 °C, an addition time (C) of 20 minutes, and a catalyst concentration (F) of 40 mg/l. The resulting yield data are also given in the table. An analysis of the data showed that all the variables continued to be important, and they also had some interactions. However, the main point that was learned from the data was that the response could not be described by a straight-line model; a quadratic equation was needed. This can be seen directly from the data: the average yield in the center of the experimental region was 80%, while the average response at the corners was only 62%. This difference is extremely significant, and it means that the yield is a curved function of the three important factors. It also means that we are in the vicinity of the optimum and can move on to the final optimization phase.
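The curvature check just described is simple enough to reproduce directly; the following Python sketch uses the yields from Table 7.3 (the variable names are our own):

```python
# Sketch of the curvature check: compare the mean of the center points with the
# mean of the factorial corner points (data from Table 7.3).
from statistics import mean, stdev
from math import sqrt

corner_yields = [70.2, 71.1, 74.6, 55.3, 69.5, 50.1, 75.5, 29.8]
center_yields = [78.8, 80.3, 81.0]

curvature = mean(center_yields) - mean(corner_yields)                 # about 18.0
s = stdev(center_yields)                                              # about 1.1, from replication
se_curv = s * sqrt(1 / len(center_yields) + 1 / len(corner_yields))   # about 0.76

print(round(curvature, 1), round(curvature / se_curv, 1))             # t of roughly 24 >> t*(2) = 4.3
```

A t-ratio this large confirms that the center points sit well above the plane through the corners, which is exactly the evidence of curvature cited above.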
Step Three: Final Optimization

Curved (e.g., quadratic) functions cannot be elucidated by data from a factorial design alone, so the data were augmented with extra points to form a complete central composite design, which is outlined in the third column of Figure 7.3. These extra points are shown in the bottom half of the design in Table 7.4 (called Block 2), along with the associated yield data. Notice that Block 1 consisted of the factorial design that was already in hand. This composite set of data was used to complete the optimization.

The procedure used was to fit a full quadratic equation to the twenty data points, and then use the equation to predict the best operating conditions. Fitting the equation is done using regression analysis, which is a tool included in spreadsheet programs as well as statistical software. Table 7.5 shows the output from MINITAB, a common statistical software package.

Table 7.4 Experiments Run to Complete Optimization of Important Variables (Central Composite Design)

Run     A      C      F       X1      X3      X6     Block   Yield
 1     80     15     30      -1      -1      -1        1     70.2
 2     90     15     30      +1      -1      -1        1     71.1
 3     80     25     30      -1      +1      -1        1     74.6
 4     90     25     30      +1      +1      -1        1     55.3
 5     80     15     50      -1      -1      +1        1     69.5
 6     90     15     50      +1      -1      +1        1     50.1
 7     80     25     50      -1      +1      +1        1     75.5
 8     90     25     50      +1      +1      +1        1     29.8
 9     85     20     40       0       0       0        1     78.8
10     85     20     40       0       0       0        1     80.3
11     85     20     40       0       0       0        1     81.0
12     76.4   20     40      -1.73    0       0        2     82.6
13     93.7   20     40      +1.73    0       0        2     44.6
14     85     11.4   40       0      -1.73    0        2     67.0
15     85     28.7   40       0      +1.73    0        2     51.2
16     85     20     22.7     0       0      -1.73     2     77.2
17     85     20     57.3     0       0      +1.73     2     58.2
18     85     20     40       0       0       0        2     80.9
19     85     20     40       0       0       0        2     77.7
20     85     20     40       0       0       0        2     83.5
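The axial ("star") runs added in Block 2 sit a coded distance of 1.73 from the center along each factor axis. Here is a small Python sketch of how that block can be generated (the level names and structure are ours, chosen to match Table 7.4):

```python
# Sketch: augmenting the factorial with axial ("star") points and extra center points
# to form a central composite design like Table 7.4 (axial distance 1.73 in coded units).
center = {"A": 85, "C": 20, "F": 40}
half_width = {"A": 5, "C": 5, "F": 10}   # half the factorial range of each factor
alpha = 1.73

axial_block = []
for name in ("A", "C", "F"):
    for sign in (-1, +1):
        run = dict(center)
        run[name] = center[name] + sign * alpha * half_width[name]
        axial_block.append(run)
axial_block += [dict(center) for _ in range(3)]   # replicated center points in Block 2

for run in axial_block:
    print(run)   # axial settings close to those listed in Table 7.4, e.g. F of 22.7 and 57.3
```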

Table 7.5 MINITAB Regression Analysis of Data from the Central Composite Design

Response Surface Regression

The analysis was done using coded units.
Estimated Regression Coefficients for Yield

Term        Coef        StDev      T         P
Constant    80.3670     0.7726     104.02    0.000
Block        0.5927     0.4263       1.39    0.198
A          -10.6708     0.5060     -21.09    0.000
C           -3.7920     0.5060      -7.49    0.000
F           -5.6607     0.5060     -11.19    0.000
A*A         -5.8437     0.4782     -12.22    0.000
C*C         -7.3472     0.4782     -15.37    0.000
F*F         -4.4738     0.4782      -9.36    0.000
A*C         -5.8125     0.6691      -8.67    0.000
A*F         -5.8375     0.6691      -8.72    0.000
C*F         -0.3625     0.6691      -0.54    0.601

S = 1.892   R-Sq = 99.2%   R-Sq(adj) = 98.4%

The quadratic equation can then be used to find the maximum yield analytically, but a more common approach is to plot the equation, since people tend to think pictorially. Plots for this example are shown in Figure 7.4. The optimum conditions can be seen to be a reaction temperature (Variable A) of 80 °C, an addition time for Reactant 2 (Variable C) of 20 minutes, and a catalyst concentration (Variable F) of 40 mg/l.
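The analytical route can also be sketched in a few lines. Using the coded-unit coefficients reported in Table 7.5, the stationary point of the fitted quadratic solves a small linear system; converting back to natural units lands close to the stated optimum (this is an illustrative sketch, not the chapter's own calculation):

```python
# Sketch: locate the stationary point of the fitted quadratic y = b0 + b'x + x'Bx
# using the coded-unit coefficients from Table 7.5. The stationary point solves 2Bx + b = 0.
import numpy as np

b = np.array([-10.6708, -3.7920, -5.6607])               # linear terms for A, C, F
B = np.array([[-5.8437,     -5.8125 / 2, -5.8375 / 2],   # diagonal: A*A, C*C, F*F coefficients
              [-5.8125 / 2, -7.3472,     -0.3625 / 2],   # off-diagonal: half of each
              [-5.8375 / 2, -0.3625 / 2, -4.4738]])      # interaction coefficient

x_coded = np.linalg.solve(-2 * B, b)                     # stationary point in coded units

center = np.array([85.0, 20.0, 40.0])                    # A in degC, C in min, F in mg/l
half_width = np.array([5.0, 5.0, 10.0])
print(np.round(center + x_coded * half_width, 1))        # roughly 80 degC, 20-21 min, 40 mg/l
```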

Good Design Requirements


What should an ideal experimental design do for us? The following are some attributes of a good design:

Defined objectives
The experimenter should clearly set forth the objectives of the study before deciding on an experimental design and proceeding with the experiments. Generally this takes the form of what model will be fit to the data: a simple linear model, a linear model with interactions, or a full quadratic model. In addition, the desired precision of the conclusions needs to be specified. A good design must meet all of the objectives. Once a design is selected, the experimenter can and should detail (for the sponsor of the work) not only what information will be obtained from the experimental data, but also what information will not be learned, so that there are no misunderstandings.

Unobscured Effects
The effects of each of the factors in the experimental program should not be obscured by the other variables, as far as possible.

Free of Bias
As far as possible, the experimental results should be free of bias, conscious or unconscious. The first step in assuring this is to carefully review the experimental setup and procedure. However, some statistical tools are helpful in this regard: BLOCKING (planned grouping of runs) lets us take some lurking variables into account. RANDOMIZATION of the run order within each block enables us to minimize the confounding (biasing) of factor effects with background variables. REPLICATION helps randomization to do a better job, as well as giving more precision.
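As a simple sketch of randomization within blocks (the block membership and seed below are illustrative, not taken from the chapter):

```python
# Sketch: randomizing the run order separately within each block so that lurking
# variables that drift over time are not confounded with the factor effects.
import random

blocks = {
    1: list(range(1, 12)),    # e.g. the factorial plus center runs
    2: list(range(12, 21)),   # e.g. the axial plus center runs
}

random.seed(7)                # fixed seed only so the example is reproducible
for block, runs in blocks.items():
    order = runs[:]
    random.shuffle(order)
    print(f"Block {block} run order: {order}")
```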

Variability Estimated
In order to be able to decide whether the effects of factors that were found in the experimental program are real, or whether they could just be due to the variability in the data, the experimental design should provide for estimating the precision of the results. The only time this is not needed is when there is a well-known history of the process or system being studied, with quantitative estimates of the process standard deviation, σ, from process capability studies. Replication provides the estimate of precision, while randomization assures that the estimate is valid.

Design Precision
The precision of the total experimental program should be sufficient to meet the objectives. In other words, enough data should be taken so that effects which are large enough to have practical significance will also be statistically significant. Greater precision can sometimes be achieved by refinements in experimental technique. Blocking can also sometimes help improve precision a great deal. However, the main tool at our disposal to increase precision is replication.

Courtesy: John Lawson & John Erjavec, Modern Statistics for Engineering and Quality Improvement, Thomson Duxbury, 2000, Indian EPZ edition $9.00, Chapter 7, pp. 181-195.

Conclusion:

We have discussed the deficiencies of classical one-at-a-time experiments, and why it is not enough to analyse historical data. An example of a complete experimental program was given, and the characteristics of a good design were also enumerated. This is only the tip of the iceberg, and there is extensive literature available for perusal. With the increasing availability of computer software, application of design of experiments has become just a few mouse clicks away. I hope this brief introduction will motivate you to at least attempt to use this very powerful tool.
