
BB Program

Improve Phase

Five Broad Improvement Approaches


[Flowchart: exit from the Analyze phase]
- Check availability of a useful Y = f(X). Available? Yes: A1 - optimize using graphical or quantitative optimization techniques
- If not available, examine the status of the Xs. Effect established?
- If No: examine the feasibility of experimentation. Feasible? Yes: A3 - conduct DOE. No: A4 - adopt a data-based innovative approach
- If Yes: examine the nature of the Xs. Control factors? No: A2 - adopt the Innovation-Prioritization approach. Yes: examine the nature of the solution. Known? Yes: A5 - implement the obvious solution

Innovation-Prioritization Approach (A2)

The most popular approach, frequently used for improving service processes.

But the approach is risky, since we are dealing with a poorly characterized process. So, even if the problem is solved successfully, usually:

- It becomes difficult to hold on to the gains
- Very little is learned about the process
- It often requires a sizeable capital investment, or the solution involves control procedures that are difficult to follow on a routine basis

Innovative (A4) and Obvious (A5) Approaches

In many cases it is found that the root cause of the problem is a control factor whose optimal level is either known (a specified standard) or obvious. Implementation of the known or obvious solution will solve the problem. Note, however, that even though the solution is obvious, identification of the root cause may not be so. But what should be done when the root causes identified are found to be within their specified operating tolerances, or if the problem remains unsolved even after maintaining the Xs within their operating tolerances? If the process is stable, then tightening the operating tolerances may solve the problem. However, in the case of an unstable process, tightening of tolerances is not likely to be effective. So what do we do? The answer to this question is discussed in the next section.

A Difficult Problem

Consider a highly unstable continuous chemical process. The team takes a 30,000 ft view of the process, somehow estimates the approximate sigma level, and proceeds to the Analyze phase. The control factors are identified easily. However, the team now finds it extremely difficult to establish the effects of the control factors. The team also observes that the levels of some of the control factors are adjusted quite frequently in response to the unstable behavior of the process. Although standard operating ranges are available for the control factors, at times these limits are violated. More surprisingly, such violations are found to have no bearing on process performance. In fact, in many cases the process performance did not improve even after making the necessary process adjustments. Given this background, what should be the improvement approach of the team?

A Difficult Problem (Contd.)

The above situation indicates the presence of strong interactions in the system. So we have a situation where the root causes have been identified but their effects are yet to be established. Also, experimentation is not feasible since we are dealing with a continuous chemical process. The team needs to be innovative enough to take care of the interactions. Note that there is no point in rejecting such problems on the ground that they are primarily control problems (as identified during the Measure phase) and hence not ideal for breakthrough. This is because one finds it hard to control such processes, particularly when the inputs vary a great deal due to their natural or agricultural origin.

Improve Phase Road Map


[Flowchart: Improve Phase road map]
- A1: Find the optimal solution
- A2: Brainstorm to generate potential solutions -> evaluate the potential solutions and select the optimum -> conduct confirmatory trials -> Solution confirmed? If No, go back to the appropriate step of the Analyze or Improve phase; if Yes, proceed
- A3: Plan and conduct DOE -> find the optimal solution
- A4: Formulate an innovative data collection and analysis plan -> find the optimal solution
- All routes then converge: Compare expected benefits with project goals (continued at B)

Improve Phase Road Map (Contd.)


[Flowchart: Improve Phase road map, continued]
- B: Results acceptable? If No, go back to the appropriate step of the Analyze or Improve phase. If Yes, set operating tolerances and assess risks
- Risk acceptable? If No, go back to the Define phase, or close the project and document the failure. If Yes, pilot the solutions
- Pilot results satisfactory? If No, go back to the appropriate step of the Analyze or Improve phase. If Yes, develop the implementation plan (A5, the obvious-solution route, enters the road map here) and exit to the Control phase

Learning and Experimentation


Chinese Proverb
I hear and I forget.
I see and I remember.
I do and I understand.

Theory and Experiment

Experiments complement, supplement and stimulate theoretical research


Theory: propositions -> hypotheses -> observations (data). Conceptualization; no sense of measurement; thought experiments.
Experimentation: creation of instruments; experimental set-up (hardware and software); measurement.

Experiments may be triggered by a specific societal need. There may be no concern for or relevance to theory

Role of Experimentation

Three Broad Objectives

- Exploration/Discovery: results are compared against the whole body of knowledge
- Investigation/Characterization/Optimization: comparison of a set of treatment results
- Verification/Confirmation/Demonstration: results are compared against standard/known/predicted values

Black Box Model of Experimentation


[Diagram: black box model of a process]
- Control factors X1, X2, ..., Xp (set by the experimenter)
- Noise factors Z1, Z2, ..., Zq
- Responses Y1, Y2, ..., Yr

(Y1, Y2, ..., Yr) = f(X1, X2, ..., Xp; Z1, Z2, ..., Zq)

How to Experiment: An Example

Consider a simple experiment to find the soaking temperature that will maximize the hardness of a steel component. Assume that the experimenter selects three levels of soaking temperature for experimentation: 950°C, 1000°C and 1050°C. Comparing with our black box model:

Control factor: Soaking temperature
Response: Hardness
Noise factors: Soaking time, chemical composition, heating curve, ambient condition, measurement, and a host of other factors

Even in case of such a simple experiment, it is necessary to plan the experiment carefully

Heat Treatment Experiment: Planning Questions

- How many trials at each soaking temperature?
- How many test pieces from each trial?
- How many measurements on each test piece?
- Should the number of trials/test pieces/measurements be the same for each level of soaking temperature?
- How to deal with a potential noise factor like chemical composition?
- How to deal with potential noise factors like soaking time and heating curve?
- How to deal with variation in ambient temperature?
- Assuming three trials per level, what should be the order of the nine trials?

Answers are by no means trivial!

Three Principles of Experimentation

These answer the planning questions:

REPLICATION, RANDOMIZATION, BLOCKING (LOCAL CONTROL)

- Common to all experiments, whether single-factor or multifactor
- Multifactor experiments raise additional design questions

Replication

Replication means repetition of a trial.
- Why replicate? To obtain an estimate of experimental error.
- Why estimate experimental error? To test the statistical significance of the observed effects.
- Why test? To gain confidence that the predicted effects will be realized in practice.

Replication: Primary, Secondary, ...


Heat Treatment Example
Heat Treatment Example: at 950°C there are two trials. Trial 1 produces test pieces TP1, TP2, TP3 and trial 2 produces TP4, TP5, TP6; each test piece is measured twice (measurements M1 to M12). A similar set-up applies at 1000°C and 1050°C. The trials are genuine repetitions.

Source | df
Soaking temperature | 2
Trial, primary or experimental error (e1) | 1 x 3 = 3
Test piece, secondary error (e2) | 4 x 3 = 12
Measurement, tertiary error (e3) | 6 x 3 = 18
TOTAL | 35
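The degrees-of-freedom breakdown for this nested replication structure can be checked with a few lines of plain Python (a sketch; no statistics library needed):

```python
# Nested structure: 3 temperatures, 2 trials per temperature,
# 3 test pieces per trial, 2 measurements per test piece
temps, trials, pieces, meas = 3, 2, 3, 2

df_temp = temps - 1                              # 2
df_trial = temps * (trials - 1)                  # primary error e1: 3
df_piece = temps * trials * (pieces - 1)         # secondary error e2: 12
df_meas = temps * trials * pieces * (meas - 1)   # tertiary error e3: 18
df_total = temps * trials * pieces * meas - 1    # 36 observations - 1 = 35

# The component df must add up to the total df
assert df_temp + df_trial + df_piece + df_meas == df_total
print(df_temp, df_trial, df_piece, df_meas, df_total)  # 2 3 12 18 35
```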

Randomization - 1
Heat Treatment Example
The six trials: trials 1 and 2 at 950°C, trials 3 and 4 at 1000°C, trials 5 and 6 at 1050°C, each with its test pieces and measurements. Assume eighteen test pieces are cast from the same heat.

- How should we allocate the test pieces to the trials? Randomly! To protect against any unforeseen bias resulting from the condition of the test pieces.
- Having made the allocation, in what order should we conduct the trials? Randomly! To protect against any unforeseen bias resulting from a host of uncontrolled factors like furnace and environmental condition.

Randomization - 2
Heat Treatment Example
A randomized trial order and allocation of test pieces might look like: trial order 4, 2, 5, 1, 6, 3, with randomly assigned test pieces (TP10, TP7, TP16, ...) and measurements M1 to M6 within each trial.

How to randomize?
- Mechanical procedure: drawing of numbered chips from a lot
- Published random number table
- Mathematical: the RAND function in Excel

Avoid mental randomization!
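The same randomization can, of course, be done in any software rather than Excel. A minimal sketch using Python's random module (the seed and trial structure are illustrative):

```python
import random

random.seed(7)  # fixed seed only so the sketch is reproducible

# Six trials: two replicates at each soaking temperature (deg C)
trials = [950, 950, 1000, 1000, 1050, 1050]

# Eighteen test pieces cast from the same heat
test_pieces = [f"TP{i}" for i in range(1, 19)]

# 1) Random allocation of test pieces to trials (three per trial)
random.shuffle(test_pieces)
allocation = {t: test_pieces[3 * t: 3 * t + 3] for t in range(6)}

# 2) Random run order for the six trials
run_order = list(range(6))
random.shuffle(run_order)

for run, t in enumerate(run_order, start=1):
    print(f"Run {run}: trial {t + 1} at {trials[t]} C with {allocation[t]}")
```

Both the allocation and the run order are shuffled, mirroring the two randomization questions above.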

Randomization - 3
Gain and loss from randomization

Gain
- Protection from unforeseen bias
- A valid estimate of experimental error, if more than one replicate is available
- The higher the non-homogeneity of the experimental material (e.g. one test piece from each heat), the larger the gain

Loss
- Simplicity of logistics
- Protection from systematic bias only: despite our best effort, outliers may creep in!

Blocking Or Local Control - 1


Heat Treatment Example

How should we deal with the noise factors like soaking time and heating curve?

Of course, as far as possible these are to be kept fixed at their standard operating levels. Care should be exercised so that they do not vary from trial to trial. This is called local control; the objective is to block the effect of the noise factors. Note that these two can be control factors in other experiments.

Blocking Or Local Control - 2


Heat Treatment Example

How should we deal with the noise factor chemical composition?


We need 6 x 3 = 18 test pieces. How do we select them?

- All eighteen test pieces from the same heat? This amounts to complete local control of the noise factor and should be avoided: reproducibility of the results may be compromised, and all the test pieces may not be obtainable from a single heat anyway.
- One piece each from eighteen different heats? Results may vary too much on account of variation in chemical composition itself, and the effect of soaking temperature, the control factor of interest, may not be detected.

A solution to both problems follows.

Blocking Or Local Control - 3


Heat Treatment Example: select three different heats (H1, H2, H3) covering as much material variation as possible. Select six test pieces randomly from each heat (TP11, ..., TP16, ..., TP31, ..., TP36) and allocate them to the trials as follows:

Replicate 1:
Temp.  | H1   | H2   | H3
950°C  | TP12 | TP24 | TP36
1000°C | TP13 | TP21 | TP32
1050°C | TP15 | TP22 | TP35

Replicate 2:
Temp.  | H1   | H2   | H3
950°C  | TP14 | TP23 | TP31
1000°C | TP11 | TP26 | TP34
1050°C | TP16 | TP25 | TP33

A fair comparison of the temperature levels is possible - WHY? (Each heat supplies every temperature level, so heat-to-heat differences do not contaminate the comparison.) H1, H2 and H3 are called BLOCKS; hence the name blocking.

Blocking Or Local Control - 3 (Contd.)


How does blocking affect data analysis? For simplicity, consider only one replicate and one measurement per test piece in our heat treatment example:

Heat | 950°C      | 1000°C     | 1050°C
H1   | Y11 (TP12) | Y12 (TP13) | Y13 (TP11)
H2   | Y21 (TP23) | Y22 (TP21) | Y23 (TP22)
H3   | Y31 (TP33) | Y32 (TP32) | Y33 (TP31)

Each test piece is heat treated separately and the nine trials are conducted in random order. Analysis: two-way ANOVA with one observation per cell.

Occasionally the block factor imposes a restriction on the randomization of the trial order. For example, if day is chosen as a block factor, then randomization of the trial order is possible only within a day. The ANOVA should reflect such a restriction on randomization.

Blocking Or Local Control - 4


Gain and loss from blocking

Gain (compared to a completely randomized experiment)
- Lower experimental error
- Less contamination of the treatment effect by the block factor (noise)
- Usually, the higher the block-to-block variation, the larger the gain
- Simplicity of logistics

Loss
- Error df (the paired t-test is a special case of blocking)
- Protection from systematic sources of bias only: despite our best effort, outliers may creep in!

Strategy: block the systematic sources of variation and randomize the rest.

Types of Experiment

Observational
- Past QC records, or observation of the factor levels and the corresponding outcome, without making any intervention
- (1) Limited blocking/local control, if any; (2) no randomization of trial order, hence no valid estimate of experimental error; (3) only approximate replicates
- Correlation among the independent variables and autocorrelation of the dependent variables
- Even if a factor is found significant, one must be careful in inferring a cause-and-effect relationship

Manipulative
- Factor levels are changed as per plan and the responses noted
- Follows the three principles of experimentation
- Should be preferred, wherever possible

Types of Observational Studies

Prospective
- Sampling points are either predetermined randomly (Y yet to be observed) or determined randomly from all the available records
- Autocorrelation may be avoided by spacing out the sampling points; blocking the timeline and drawing samples randomly from each block is good practice
- Typical analysis: regression analysis

Retrospective
- Y values are always available. Observations are classified depending on the value of Y, say defectives and non-defectives. Random samples are drawn from each group and then the X values of the groups are compared to find the significant Xs, if any
- Used for rare events, or when the X values are generated after sampling
- Typical analysis: discriminant analysis

Manipulative Experiment

Henceforth, by an experiment we shall mean a manipulative experiment. Observational studies are usually conducted during the ANALYZE phase. Usually, multifactor experiments play the most important role during the IMPROVE phase. By a multifactor experiment we shall mean an experiment involving more than one control factor. An experiment involving one control factor and one or more block factors is not a multifactor experiment. The rest of the material is devoted solely to multifactor experiments.

Multifactor Experiments: An Example


An experiment to increase hardness of a die-cast engine component.

Factor Code | Factor        | Level 1 | Level 2
A           | % Cu          | 0.10    | 0.20
B           | % Mg          | 0.05    | 0.07
C           | % Zn          | 0.03    | 0.06
D           | Water Cooling | On      | Off
E           | Air Cooling   | On      | Off

Response: Rockwell hardness (B scale). D and E are qualitative factors.

- Which factors affect hardness?
- What is the rank of the factors with respect to their impact on hardness?
- What is the best factor-level combination?
- How much improvement can we expect at the best factor-level combination?

How to design the experiment?

Traditional Approach (One-factor-at-a-time)


Trial No. | A B C D E | Hardness
1         | 1 1 1 1 1 | 56
2         | 2 1 1 1 1 | 63
3         | 2 2 1 1 1 | 68
4         | 2 2 2 1 1 | 69
5         | 2 2 2 2 1 | 72
6         | 2 2 2 2 2 | 75

Conclusions drawn one comparison at a time: A2 is better, B2 is better, C1 is better, D2 is better, E1 is better. Optimum combination: A2B2C1D2E1.

Remark
- To what extent are the results of this experiment reproducible?
- A2 is found better while the other factors are at B1C1D1E1. Is it reasonable to assume that A2 will be better even at B2C1D2E1, the recommended optimum?

Main Effect and Interaction Effect


2 x 2 Experiment (responses):

        | A1 | A2 | Average
B1      | 37 | 47 | 42
B2      | 21 | 35 | 28
Average | 29 | 41 |

Main effect of A = (average response at A2) - (average response at A1) = 41 - 29 = 12 units
Main effect of B = B2 - B1 = 28 - 42 = -14 units

Interaction effect of A and B
= 1/2 [Effect of A at B2 - Effect of A at B1]
= 1/2 {[A2B2 - A1B2] - [A2B1 - A1B1]}
= 1/2 {[35 - 21] - [47 - 37]} = 7 - 5 = 2 units
= 1/2 [Effect of B at A2 - Effect of B at A1]

Effect of A at B1 differing from the effect of A at B2 implies the presence of an interaction effect. A graphical illustration follows.
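The main-effect and interaction arithmetic above is easy to reproduce. A short Python sketch using the same 2 x 2 responses:

```python
# Responses of the 2x2 example, indexed by (A level, B level)
y = {(1, 1): 37, (2, 1): 47, (1, 2): 21, (2, 2): 35}

# Main effect: average response at level 2 minus average at level 1
main_A = (y[(2, 1)] + y[(2, 2)]) / 2 - (y[(1, 1)] + y[(1, 2)]) / 2
main_B = (y[(1, 2)] + y[(2, 2)]) / 2 - (y[(1, 1)] + y[(2, 1)]) / 2

# Interaction: half the difference between the effect of A at B2 and at B1
effect_A_at_B1 = y[(2, 1)] - y[(1, 1)]   # 47 - 37 = 10
effect_A_at_B2 = y[(2, 2)] - y[(1, 2)]   # 35 - 21 = 14
inter_AB = (effect_A_at_B2 - effect_A_at_B1) / 2

print(main_A, main_B, inter_AB)   # 12.0 -14.0 2.0
```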

Response Graphs
[Figure: response graphs of average response against A1 and A2, with separate lines for B1 and B2. Parallel lines indicate main effects of A and B with no AB interaction; non-parallel or crossing lines indicate an AB interaction. With three-level factors, the variety of possible patterns is even larger.]

Full Factorial Experiment

Consists of all possible factor-level combinations.

- General format: p x q x r x s x ... (the product of the numbers of levels of the factors)
- k factors, each at two levels: 2^k design
- k factors, each at three levels: 3^k design

In our previous die casting experiment we have five two-level factors, so we can conduct a 2^5 experiment.

2^5 Design

Trial # | A B C D E
1       | 1 1 1 1 1
2       | 1 1 1 1 2
3       | 1 1 1 2 1
4       | 1 1 1 2 2
5       | 1 1 2 1 1
...     | ...
32      | 2 2 2 2 2

Analysis of a 2^k Experiment: An Example

A single replicate of a 2^4 experiment.

Response: filtration rate of a chemical produced in a pressure vessel (to be maximized)
Factors: Temperature (A), Pressure (B), Reactant concentration (C), Stirring rate (D). Pilot plant experiment.
Level codes: Low (1), High (2)

Standard order | A | B | C | D | Filtration rate (Y)
(1)    | 1 | 1 | 1 | 1 | 45
a      | 2 | 1 | 1 | 1 | 71
b      | 1 | 2 | 1 | 1 | 48
ab     | 2 | 2 | 1 | 1 | 65
c      | 1 | 1 | 2 | 1 | 68
ac     | 2 | 1 | 2 | 1 | 60
bc     | 1 | 2 | 2 | 1 | 80
abc    | 2 | 2 | 2 | 1 | 65
d      | 1 | 1 | 1 | 2 | 43
ad     | 2 | 1 | 1 | 2 | 100
bd     | 1 | 2 | 1 | 2 | 45
abd    | 2 | 2 | 1 | 2 | 104
cd     | 1 | 1 | 2 | 2 | 75
acd    | 2 | 1 | 2 | 2 | 86
bcd    | 1 | 2 | 2 | 2 | 70
abcd   | 2 | 2 | 2 | 2 | 96

The 2^4 Experiment: ANOVA Table

Source                 | df     | SS
A, B, C, D             | 1 each | ?
AB, AC, AD, BC, BD, CD | 1 each | ?
ABC, ABD, ACD, BCD     | 1 each | ?
ABCD                   | 1      | ?
Error                  | 0      |
TOTAL                  | 15     | ?

Form four one-way tables, one for each main effect. For A:
- A1: the eight Y values at level 1 of A (cell total A1)
- A2: the eight Y values at level 2 of A (cell total A2)
SSA = (A1)^2/8 + (A2)^2/8 - CF

Form six two-way tables, one for each two-factor interaction. Each cell of the A x B table contains four Y values:
SSAB = Sum of (cell total)^2/4 - CF - SSA - SSB

Form four three-way tables. Each cell contains two Y values:
SSABC = Sum of (cell total)^2/2 - CF - SSA - SSB - SSC - SSAB - SSBC - SSAC

TSS = Sum of Y^2 - CF. SSABCD is obtained by subtraction. With a single replicate, the error SS is 0.

The 2^4 Experiment: Computing SS


One-way table for A:
A1 runs: 45, 48, 68, 80, 43, 45, 75, 70; total A1 = 474
A2 runs: 71, 65, 60, 65, 100, 104, 86, 96; total A2 = 647

T = 474 + 647 = 1121
CF = T^2/16 = 1121^2/16 = 78540.0625
SSA = 474^2/8 + 647^2/8 - 78540.0625 = 1870.5625

Two-way A x B table (cell totals; four observations per cell):

      | A1                   | A2                    | Total
B1    | 231 (45, 68, 43, 75) | 317 (71, 60, 100, 86) | 548
B2    | 243 (48, 80, 45, 70) | 330 (65, 65, 104, 96) | 573
Total | 474                  | 647                   | 1121

SS(cell totals) = 231^2/4 + 243^2/4 + 317^2/4 + 330^2/4 - CF = 1909.6875
SSB = 548^2/8 + 573^2/8 - CF = 39.0625
SSAB = SS(cell totals) - SSA - SSB = 1909.6875 - 1870.5625 - 39.0625 = 0.0625
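These hand computations (CF, SSA, SSB, SSAB) can be verified in a few lines of Python; the level columns follow the standard-order table above:

```python
# Filtration-rate data in standard (Yates) order, as in the table above
y = [45, 71, 48, 65, 68, 60, 80, 65, 43, 100, 45, 104, 75, 86, 70, 96]

T = sum(y)          # grand total = 1121
CF = T**2 / 16      # correction factor

def level(i, f):
    # Level (-1 or +1) on run i of the factor whose sign changes every f runs
    return -1 if (i // f) % 2 == 0 else 1

A = [level(i, 1) for i in range(16)]   # A alternates every run
B = [level(i, 2) for i in range(16)]   # B alternates every 2 runs

def ss_main(col):
    lo = sum(yi for yi, c in zip(y, col) if c == -1)   # level-1 total
    hi = sum(yi for yi, c in zip(y, col) if c == +1)   # level-2 total
    return lo**2 / 8 + hi**2 / 8 - CF

SSA = ss_main(A)
SSB = ss_main(B)

# A x B cell totals (four observations per cell)
cells = {}
for yi, a, b in zip(y, A, B):
    cells[(a, b)] = cells.get((a, b), 0) + yi
SS_cells = sum(t**2 / 4 for t in cells.values()) - CF
SSAB = SS_cells - SSA - SSB

print(CF, SSA, SSB, SSAB)   # 78540.0625 1870.5625 39.0625 0.0625
```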

2^k Experiment: Computing SS

- Computing interaction SS from the 2-way, 3-way, ..., k-way tables is obviously very tedious
- Yates' method is very helpful for easy computation of the SS
- We won't discuss Yates' method, since MINITAB is available
- However, it will be instructive to compute the SS from the effect contrasts using Excel

The 2^4 Experiment: Computing Factorial Effects and SS

Recoded design matrix: in the standard-order table, write level 1 as -1 and level 2 as +1 in the columns A, B, C, D. The totals of Y at the two levels of each factor are:

Factor      | A   | B   | C   | D
Total at -1 | 474 | 548 | 521 | 502
Total at +1 | 647 | 573 | 600 | 619

Effect of A = E_A = mean at A2 - mean at A1 = (647/8) - (474/8) = 21.625
SS due to A = SSA = 474^2/8 + 647^2/8 - 1121^2/16 = 1870.563
Effects and SS of the other factors are obtained similarly.

The contrast of a factor is its recoded column multiplied into the Y column. For B:
Contrast_B = Column B x Column Y = -Y1 - Y2 + Y3 + Y4 - Y5 - Y6 + Y7 + Y8 - Y9 - Y10 + Y11 + Y12 - Y13 - Y14 + Y15 + Y16 = 573 - 548 = 25

In general, with n = total number of observations (here n = 16):
E_source = 2 x Contrast_source / n
SS_source = (Contrast_source)^2 / n

So E_B = (2 x 25)/16 = 3.125 and SSB = 25^2/16 = 39.0625. We can verify that SSB = (B1^2 + B2^2)/8 - CF = 39.0625, using the identity (with m observations per level):
(A1^2 + A2^2)/m - (A1 + A2)^2/(2m) = (A1 - A2)^2/(2m)

Contrast_AB = ? ... Contrast_ABCD = ?

Contrast and SS of Interactions


The interaction columns of the recoded design matrix are obtained by multiplying the parent columns element-wise:

Col AB = Col A x Col B
Col ABC = Col AB x Col C
Col ABCD = Col ABC x Col D

Columns for the other interactions are obtained similarly. The totals of Y at the -1/+1 levels of these columns are AB: 560/561, ABC: 553/568, ABCD: 555/566.

Contrast_AB = Col AB x Col Y = 561 - 560 = 1, so SSAB = 1^2/16 = 0.0625 (same as obtained before from the two-way table)
SSABC = (568 - 553)^2/16 = 14.0625
SSABCD = (566 - 555)^2/16 = 7.5625
SS for the other components can be obtained similarly.
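The contrast method generalizes to all fifteen effects. A Python sketch that builds every interaction column as a product of its parent columns:

```python
from itertools import combinations

# Filtration-rate data in standard order
y = [45, 71, 48, 65, 68, 60, 80, 65, 43, 100, 45, 104, 75, 86, 70, 96]
n = len(y)   # 16

# -1/+1 base columns of the 2^4 design: A changes every run,
# B every 2 runs, C every 4, D every 8
base = {name: [-1 if (i // f) % 2 == 0 else 1 for i in range(n)]
        for name, f in [("A", 1), ("B", 2), ("C", 4), ("D", 8)]}

effects, ss = {}, {}
for r in range(1, 5):
    for combo in combinations("ABCD", r):
        # interaction column = element-wise product of the parent columns
        col = [1] * n
        for f in combo:
            col = [c * b for c, b in zip(col, base[f])]
        contrast = sum(c * yi for c, yi in zip(col, y))
        name = "".join(combo)
        effects[name] = 2 * contrast / n        # E = 2 x contrast / n
        ss[name] = contrast**2 / n              # SS = contrast^2 / n

print(effects["A"], ss["AB"], ss["ABCD"])   # 21.625 0.0625 7.5625
```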

The 2^4 Experiment: ANOVA

Source | df | SS
A      | 1  | 1870.5625
B      | 1  | 39.0625
C      | 1  | 390.0625
D      | 1  | 855.5625
AB     | 1  | 0.0625
AC     | 1  | 1314.0625
AD     | 1  | 1105.5625
BC     | 1  | 22.5625
BD     | 1  | 0.5625
CD     | 1  | 5.0625
ABC    | 1  | 14.0625
ABD    | 1  | 68.0625
ACD    | 1  | 10.5625
BCD    | 1  | 27.5625
ABCD   | 1  | 7.5625
TOTAL  | 15 | 5730.9375

How to test or judge the significance of the effects?
- Recall that in two-way ANOVA with one observation per cell, SSAB was assumed to be error and the main effects of A and B were tested against MSAB
- We shall adopt the same approach here: the components having relatively smaller SS will be pooled together to get an estimate of experimental error

The 2^4 Experiment: Pooling

Mechanical approach to pooling. Arrange the MS in descending order (each source has 1 df, so MS = SS):

Source | df | SS = MS
A      | 1  | 1870.5625
AC     | 1  | 1314.0625
AD     | 1  | 1105.5625
D      | 1  | 855.5625
C      | 1  | 390.0625
ABD    | 1  | 68.0625
B      | 1  | 39.0625
BCD    | 1  | 27.5625
BC     | 1  | 22.5625
ABC    | 1  | 14.0625
ACD    | 1  | 10.5625
ABCD   | 1  | 7.5625
CD     | 1  | 5.0625
BD     | 1  | 0.5625
AB     | 1  | 0.0625
TOTAL  | 15 | 5730.9375

- Let MS_smallest = MS_error = MS_AB
- Compute F = MS_next/MS_error = MS_BD/MS_AB = 0.5625/0.0625 = 9 < F(.05; 1, 1) = 161.4
- Stop if H0 is rejected. If not, pool the component with error: pooled MS_error = (SSAB + SSBD)/(dfAB + dfBD) = (0.0625 + 0.5625)/2 = 0.3125
- Continue pooling till H0 is rejected or all the components have been pooled
- In our case, significance results while testing C, so the final estimate is MS_error = (0.0625 + ... + 68.0625)/10 = 19.5125 with 10 df
- Use judgment: it is usually safe to pool the six smallest components when df_total >= 15
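The mechanical pooling procedure can be sketched as code. The critical values are upper 5% points of F(1, k) taken from standard tables:

```python
# SS (each with 1 df) for the fifteen factorial effects, from the ANOVA table
ss = {"A": 1870.5625, "B": 39.0625, "C": 390.0625, "D": 855.5625,
      "AB": 0.0625, "AC": 1314.0625, "AD": 1105.5625, "BC": 22.5625,
      "BD": 0.5625, "CD": 5.0625, "ABC": 14.0625, "ABD": 68.0625,
      "ACD": 10.5625, "BCD": 27.5625, "ABCD": 7.5625}

# Upper 5% points of F(1, k) for k = 1..10, from standard F tables
F05 = {1: 161.45, 2: 18.51, 3: 10.13, 4: 7.71, 5: 6.61,
       6: 5.99, 7: 5.59, 8: 5.32, 9: 5.12, 10: 4.96}

ordered = sorted(ss, key=ss.get)           # ascending by SS
pooled_ss, pooled_df = ss[ordered[0]], 1   # start with the smallest MS as error
for name in ordered[1:]:
    f_ratio = ss[name] / (pooled_ss / pooled_df)
    if f_ratio > F05[pooled_df]:           # H0 rejected: stop pooling
        break
    pooled_ss += ss[name]                  # otherwise pool with error
    pooled_df += 1

ms_error = pooled_ss / pooled_df
print(name, pooled_df, ms_error)   # pooling stops at C with 10 df, 19.5125
```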

The 2^4 Experiment: Final ANOVA

Source       | df | SS        | F       | %
A            | 1  | 1870.5625 | 95.86** | 32.3
AC           | 1  | 1314.0625 | 67.34** | 22.6
AD           | 1  | 1105.5625 | 56.66** | 19.0
D            | 1  | 855.5625  | 43.85** | 14.6
C            | 1  | 390.0625  | 19.99** | 06.5
Pooled error | 10 | 195.1250  |         |
TOTAL        | 15 | 5730.9375 |         |

F(.05; 1, 10) = 4.96, F(.01; 1, 10) = 10.04; ** denotes significance at the 1% level.

% indicates the relative importance (contribution) of the component:
% = (SS_source - df_source x MS_error)/TSS x 100

The higher the F ratio, the higher the %; but the F ratio itself is difficult to interpret directly.

The 2^4 Experiment: NPP

[Figure: MINITAB normal probability plot of the effects (response is filtration rate, alpha = .05). Effects A, D, AD and C fall far to the right of the fitted line and AC far to the left; these are flagged significant. The remaining effects lie along the line. Lenth's PSE = 2.625.]

- This is MINITAB output; we can compute the effects from the contrasts as described before
- Insignificant effects are smaller in magnitude and tend to be centered around zero, along the fitted line
- Significant effects are larger in magnitude and fall far away from the fitted line
- The same five effects as found by pooling are found to be significant
- The NPP should not be viewed as an alternative to pooling, since we need an estimate of error for constructing a prediction band for the best combination. Lenth's PSE (pseudo standard error) in the MINITAB output appears to be too conservative
- The NPP can be used to form a better judgment on pooling
- In many other cases, finding significant effects from the NPP will not be as easy as above
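The Lenth's PSE value quoted in the MINITAB output can be reproduced directly from the estimated effects (for this 2^4 design, effect = contrast/8). A sketch:

```python
from statistics import median

# Estimated effects of the 2^4 filtration-rate experiment
effects = {"A": 21.625, "B": 3.125, "C": 9.875, "D": 14.625,
           "AB": 0.125, "AC": -18.125, "AD": 16.625, "BC": 2.375,
           "BD": -0.375, "CD": -1.125, "ABC": 1.875, "ABD": 4.125,
           "ACD": -1.625, "BCD": -2.625, "ABCD": 1.375}

abs_e = [abs(v) for v in effects.values()]
s0 = 1.5 * median(abs_e)                              # initial robust scale
pse = 1.5 * median([e for e in abs_e if e < 2.5 * s0])  # Lenth's PSE
print(pse)   # 2.625, matching the MINITAB output

# Margin of error at alpha = .05: ME = t(.975; d) x PSE with d = 15/3 = 5,
# and t(.975; 5) = 2.571, so effects beyond about +/-6.75 are flagged
me = 2.571 * pse
significant = sorted(k for k, v in effects.items() if abs(v) > me)
print(significant)   # ['A', 'AC', 'AD', 'C', 'D']
```

This recovers the same five significant effects as the pooled ANOVA.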

The 2^4 Experiment: Best Combination

Since the interactions AC and AD are significant, the best levels of A, C and D are chosen from the two-way tables of mean filtration rate:

A x C | C1    | C2
A1    | 45.25 | 73.25
A2    | 85.00 | 76.75

A x D | D1    | D2
A1    | 60.25 | 58.25
A2    | 65.25 | 96.50

At A2, level C1 gives the higher mean (85.00 against 76.75), and the best cell of the A x D table is A2D2 (96.50).

Overall optimum: A2C1D2

- The best level of factor B can be chosen based on cost and convenience
- Cost is always an important consideration for selecting the optimum
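The two-way tables of means can be generated directly from the raw data. A sketch using the standard-order level coding:

```python
# Filtration-rate data in standard order
y = [45, 71, 48, 65, 68, 60, 80, 65, 43, 100, 45, 104, 75, 86, 70, 96]

def lvl(i, f):
    # Level (1 or 2) on run i of the factor whose level changes every f runs
    return 1 if (i // f) % 2 == 0 else 2

def two_way_means(fa, fb):
    # Mean response in each cell of the two-factor table
    cells = {}
    for i, yi in enumerate(y):
        cells.setdefault((lvl(i, fa), lvl(i, fb)), []).append(yi)
    return {k: sum(v) / len(v) for k, v in cells.items()}

# Change frequencies in standard order: A=1, B=2, C=4, D=8
AC = two_way_means(1, 4)   # A x C table of means
AD = two_way_means(1, 8)   # A x D table of means
print(AC)
print(AD)
```

The printed dictionaries reproduce the A x C and A x D tables above (e.g. the (2, 2) cell of A x D is 96.5).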

Practical Limitations of Full Factorials

Let us consider a situation where we wish to investigate 13 factors, each at three levels. A full factorial experiment will require 3^13 = 1,594,323 trials. Even if it takes only 10 minutes to conduct a trial, running round the clock, the complete experimental results will be available only after about 30 years! Industrial experiments involving ten or more three-level factors are common.

The Alternative

Two practical alternatives:

- Fractional factorials (2^(k-p), 3^(k-p) designs)
- Orthogonal arrays (OA designs: L4, L8, L16, L18, ...)

The above classification is conventional but somewhat arbitrary: FFs are orthogonal, and OAs are saturated FFs. Here we shall discuss only the OA designs popularized by G. Taguchi, because of their ease of construction and flexibility.

Taguchi's OA designs are available in MINITAB; 3^(k-p) and mixed-level FFs are not available in MINITAB.

Orthogonal Array L8
Column:  1 2 3 4 5 6 7
Trial 1: 1 1 1 1 1 1 1
Trial 2: 1 1 1 2 2 2 2
Trial 3: 1 2 2 1 1 2 2
Trial 4: 1 2 2 2 2 1 1
Trial 5: 2 1 2 1 2 1 2
Trial 6: 2 1 2 2 1 2 1
Trial 7: 2 2 1 1 2 2 1
Trial 8: 2 2 1 2 1 1 2

- One factor is assigned to one column: a maximum of seven two-level factors
- The 1s and 2s in each column are the two levels of the factor assigned to that column
- Each row constitutes a trial
- Assume four two-level factors (A to D) are assigned to the first four columns. First trial: A1B1C1D1; eighth trial: A2B2C1D2
- Vacant columns are used for estimating either interactions or experimental error

Columns 1, 2 and 4 constitute the 2^3 design; with seven factors assigned, L8 is a 1/16th fraction of the 2^7 design, i.e. a 2^(7-4) design.

Orthogonal Array Series

- Two-level series: L4(2^3), L8(2^7), L16(2^15), ...
  L16(2^15): 16 rows, 15 columns, each column consisting of 1s and 2s only. Similarly for the other arrays.
- Three-level series: L9(3^4), L27(3^13), ...
  L9(3^4): 9 rows, 4 columns, each column consisting of 1s, 2s and 3s.
- Mixed-level series: L18(2^1 x 3^7), L36(2^11 x 3^12), ...
  L18(2^1 x 3^7): 18 rows, one 2-level column and seven 3-level columns.

In short, OAs are represented as L8, L16, L18, ...

Meaning of Orthogonality

Orthogonal means balanced, separable or unbiased.

The idea of balancing for clean separation of alternatives is used in many forms of experiments with which we are familiar In football games, the changing of field and each team getting a chance to kick-off is an act of balancing to avoid bias

Technically, orthogonality implies that in any pair of columns, all possible factor-level combinations appear with the same frequency.

To explain, consider the array L9

Meaning of Orthogonality
The array L9:

Expt. No. | Col 1 (A) | Col 2 (B) | Col 3 (C) | Col 4 (D) | Response
1         | 1 | 1 | 1 | 1 | Y1
2         | 1 | 2 | 2 | 2 | Y2
3         | 1 | 3 | 3 | 3 | Y3
4         | 2 | 1 | 2 | 3 | Y4
5         | 2 | 2 | 3 | 1 | Y5
6         | 2 | 3 | 1 | 2 | Y6
7         | 3 | 1 | 3 | 2 | Y7
8         | 3 | 2 | 1 | 3 | Y8
9         | 3 | 3 | 2 | 1 | Y9

IMPLICATIONS
Ā1 = (Y1 + Y2 + Y3)/3
Ā2 = (Y4 + Y5 + Y6)/3
Ā3 = (Y7 + Y8 + Y9)/3
In each of these quantities, the effects of B1, B2 and B3 appear exactly once; the same is the case with C and D. This makes the above three quantities unbiased.

The nine combinations (1,1), (1,2), (1,3), (2,1), (2,2), (2,3), (3,1), (3,2) and (3,3) appear exactly once in any pair of columns.
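The balance property just described can be checked mechanically. A Python sketch that verifies the orthogonality of L9:

```python
from itertools import combinations, product

# The L9(3^4) orthogonal array, rows as listed above
L9 = [(1, 1, 1, 1), (1, 2, 2, 2), (1, 3, 3, 3),
      (2, 1, 2, 3), (2, 2, 3, 1), (2, 3, 1, 2),
      (3, 1, 3, 2), (3, 2, 1, 3), (3, 3, 2, 1)]

# Orthogonality: every pair of columns contains each of the nine
# level combinations exactly once
for c1, c2 in combinations(range(4), 2):
    pairs = [(row[c1], row[c2]) for row in L9]
    assert sorted(pairs) == sorted(product((1, 2, 3), repeat=2))

# Balance: each level appears three times in every column
for c in range(4):
    assert sorted(row[c] for row in L9) == [1, 1, 1, 2, 2, 2, 3, 3, 3]

print("L9 is orthogonal")
```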

Linear Graphs

A linear graph consists of a set of nodes and edges A node denotes a column of the OA The edge joining two nodes denotes the column(s) of the OA representing the interaction of the pair of columns under consideration
[Figure: a standard linear graph of L8 is a triangle on nodes 1, 2 and 4, whose edges are columns 3 (interaction of columns 1 and 2), 5 (of 1 and 4) and 6 (of 2 and 4), with column 7 as an isolated node. There are more than one standard linear graphs associated with most of the arrays. The standard linear graph of L9 has nodes 1 and 2 joined by an edge labeled 3, 4.]

Designing a Simple OA Experiment

Step 1: Identify the control factors, noise factors and the response

Control factors are those factors whose optimal levels can be fixed and monitored during experimentation as well as in actual practice. In case of field experiments, select about 5 to 12 control factors: reproducibility of results will be poor with fewer than five factors, while experimental error and the cost of experimentation may be very large with more than twelve. The cause-and-effect diagram can be very useful in identifying important control and noise factors.

Identification of the noise factors is important for having proper randomization and blocking schemes.
Selection of the response is not trivial, although it may seem so. However, considerations for selecting a proper response are beyond the scope of this course.

Designing a Simple OA Experiment


Step 2: Select appropriate levels for the control factors
Not a trivial task; process knowledge is extremely important. The greater the process knowledge, the better the selection of factors and their levels. Use more than two levels for studying nonlinearity; nonlinearity should be expected when the range of experimentation is large, while too narrow a range may show the effect as insignificant. In case of factors having discrete levels, use all the levels that are thought to be important, provided the cost of experimentation with each of the levels is more or less the same.

Create levels using measurements that can be implemented in practice. For example, if in practice the amount of additive is measured as ½, 1, 1½, etc., then there is no point in selecting levels such as 5.4, 6.1 and 6.8 grams using a precision balance just for the sake of experimentation.

Designing a Simple OA Experiment


Example: Factors, levels and response
Die Casting Experiment

Factor Code | Factor        | Level 1 | Level 2
A           | % Cu          | 0.10    | 0.20
B           | % Mg          | 0.05    | 0.07
C           | % Zn          | 0.03    | 0.06
D           | Water Cooling | On      | Off
E           | Air Cooling   | On      | Off

Response: Rockwell hardness (B scale)

Levels can be created using any scale continuous, discrete, ordinal or nominal All the factors need not have the same number of levels. Such situations will be discussed later

Designing a Simple OA Experiment

Step 3: Decide on the interactions to be estimated


Keeping provisions for estimating too many interactions (say, more than three) is not a good practice. Interactions are best dealt with through proper selection of factors and their levels (Steps 1 and 2). Use supplementary measurements and the sliding-level technique to deal with interactions; these techniques will be explained later through examples and case studies. Keep provisions for estimating only the important interactions that cannot be tackled otherwise.

Designing a Simple OA Experiment

Step 4: Draw the required linear graph


Required linear graph of the die casting experiment: nodes A and B joined by the edge A x B, with C, D and E as isolated nodes.

Poor Construction of Linear Graphs


[Figure: two poorly drawn versions of the same required linear graph, in which the A x B edge and the isolated nodes C, D and E are tangled together. Both representations are poor.]

Designing a Simple OA Experiment

Step 5: Choose an appropriate OA

- Compute the total degrees of freedom (TDF) needed to estimate all the main effects and the desired interactions
- Compute the Minimum Run Size (MRS) and then the Desirable Run Size (DRS):

MRS = TDF + 1
DRS = MRS + 4 (error degrees of freedom)

- Choose an OA from the appropriate series of size >= DRS, or of size >= MRS and replicate. If ten or more factors are involved, use the MRS criterion
- For our die casting experiment, TDF = 5 x 1 (five main effects) + 1 x 1 (assuming A x B is also desired) = 6. MRS = 6 + 1 = 7. DRS = 7 + 4 = 11. Thus we have to choose either L16, or L8 replicated
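The TDF/MRS/DRS arithmetic can be written down generically (the factor names and the A x B interaction are those of the die casting example):

```python
# Run-size calculation for the die casting experiment
factors = {"A": 2, "B": 2, "C": 2, "D": 2, "E": 2}   # factor: number of levels
interactions = [("A", "B")]                          # interactions to estimate

# Main-effect df: (levels - 1) per factor;
# interaction df: product of the parent factors' df
tdf = sum(l - 1 for l in factors.values())
tdf += sum((factors[a] - 1) * (factors[b] - 1) for a, b in interactions)

mrs = tdf + 1   # Minimum Run Size
drs = mrs + 4   # Desirable Run Size (4 error degrees of freedom)
print(tdf, mrs, drs)   # 6 7 11 -> choose L16, or replicate L8
```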

Designing a Simple OA Experiment

Choosing the right OA: the MRS criterion may fail

Die casting example: MRS = 7, so one more interaction can be accommodated in L8. Can we accommodate one of AC, AD, BD, etc.? YES. Can we accommodate one of CD, DE, etc.? NO. Lesson: two or more independent edges cannot be estimated using L8; we have to use the array L16 in such cases. Similarly, independent interactions involving two 3-level factors cannot be estimated using L27 (see the standard LG of L27). Another extension (2-level factors): interactions forming two or more independent triangles cannot be estimated using L16; the array L32 is necessary for this purpose (see the standard LGs).

Designing a Simple OA Experiment

Choosing the right OA: when the DRS criterion fails

Modify the required LG by dropping a few interactions so that it becomes possible to use a smaller array. Using a larger array for estimating one or two specific interactions is likely to be an unnecessary waste of resources.

Designing a Simple OA Experiment

Step 6: Allocate the factors to the columns of the chosen OA and construct the OA layout

If no interactions are present and complete randomization of the trials is possible, then any factor can be allocated to any of the columns. When interactions are present, it is convenient to proceed as follows:

6.1: Choose a standard linear graph and modify it to match the required linear graph
6.2: Allocate the factors to the nodes of the modified LG by comparing the modified LG with the required LG
6.3: Construct the OA layout of the experiment

Alternatively, the interaction table of the chosen OA can also be used for the purpose of allocation

Designing a Simple OA Experiment


6.1: The required linear graph and its modification

[Figures: the required linear graph of the die casting experiment (nodes A and B joined by the edge A x B, the remaining factors as isolated nodes); the standard linear graph of L8 (nodes 1, 2 and 4, with edge columns 3, 5 and 6, and isolated node 7); and the modified linear graph, which retains only the edge between nodes 1 and 2 (column 3) and leaves nodes 4, 5, 6 and 7 isolated.]

Designing a Simple OA Experiment


6.2: Factors allocated to the nodes of the modified LG

Required linear graph: A and B joined by the edge A x B; C, D and E as isolated nodes.

Allocation to the modified LG: A -> column 1, B -> column 2, A x B -> column 3, C -> column 4, D -> column 5, E -> column 6, e (error) -> column 7.

Designing a Simple OA Experiment


6.3: OA layout of the experiment
Trial   A (1)   B (2)   AxB (3)   C (4)   D (5)   E (6)   e (7)
  1       1       1        1        1       1       1       1
  2       1       1        1        2       2       2       2
  3       1       2        2        1       1       2       2
  4       1       2        2        2       2       1       1
  5       2       1        2        1       2       1       2
  6       2       1        2        2       1       2       1
  7       2       2        1        1       2       2       1
  8       2       2        1        2       1       1       2
Preserve the layout. It will be useful during data analysis

Designing a Simple OA Experiment

Step 7: Decide the number of replications and repetitions


Strictly speaking, this should be decided based on the expected experimental error (process capability) and the least effect size that we want to detect. But this is rarely done in practice; it is mostly decided based on the cost of experimentation and the experimental budget.
A block factor may be associated with the replicates.
Recall that the experimental error is the variation between trials and the repetition error is the variation within a trial.
It is always advisable to have repetitions, since repetitions are usually easy: make more than one part/sample from each trial and take more than one measurement on each part.
Measurement error is common to both the primary and the secondary error. So it is extremely important to check for measurement adequacy.

Conducting the Experiment

Step 1: Construct the physical layout of the experiment


Delete the interaction and error columns from the OA layout. Substitute the actual levels of the control factors for the coded levels of the OA layout.
Physical layout of our die casting example:

Trial   (1) %Cu   (2) %Mg   (4) %Zn   (5) WC   (6) AC
  1       .1        .05       .03       ON       ON
  2       .1        .05       .06       OFF      OFF
  3       .1        .07       .03       ON       OFF
  4       .1        .07       .06       OFF      ON
  5       .2        .05       .03       OFF      ON
  6       .2        .05       .06       ON       OFF
  7       .2        .07       .03       OFF      OFF
  8       .2        .07       .06       ON       ON

Conducting the Experiment

Step 2: Randomize the order of the trials


[Table: the sixteen trials (trials 1-8 of Replicate I and of Replicate II) listed against their randomized run order 1-16.]

Here the sixteen trials have been randomized completely. The eight trials within each replicate could instead have been randomized separately; in that case we would have lost 1 df due to the restriction imposed on the randomization.

If a block factor is associated with the replicates, then in many cases we may have to impose such a restriction.
If the block factor is something like shift or operator, the period of experimentation may become too long under complete randomization.

Conducting the Experiment

Step 3: Design the experimental log sheet

The log sheet = the physical layout + a data column + provision for recording special experimental conditions
Exercise local control
Be careful to set the levels of the control factors correctly, but otherwise no special care should be exercised
Preserve the experimental units, input material etc., wherever feasible
Statistical treatment of missing values is difficult; misplacement and breakage of parts before measurement may lead to missing values

Step 4: Conduct the trials and record results


Step 5: Repeat erroneous and missing trials, if possible


Exercise: Construction of Physical layout

This experiment was performed by the National Railway Bureau, Japan, in 1959. The objective was to find the best operating condition for the mechanical strength, X-ray inspection, appearance and workability of an arc-welded joint between two steel plates. The following factors and levels were chosen.

Factor                       Code   Level 1          Level 2
1. Type of welding rod        A     J100             B17
2. Drying of welding rod      B     No drying        1-day drying
3. Welded material            C     SS41             SB35
4. Thickness of material      D     8 mm             12 mm
5. Angle of welded part       E     60°              70°
6. Opening of welded part     F     1.5 mm           3.0 mm
7. Current                    G     150 A            130 A
8. Welding method             H     Weaving          Single
9. Pre-heating                I     No preheating    Preheating at 150°C

Besides the main effects, four interactions A x G, A x H, G x H and A x C were also considered important. Draw the required linear graph, choose an appropriate OA and construct the physical layout of the experiment.

Analysis of Experimental Data and Confirmation of Results


Step 1: Identify significant effects (ANOVA)
Step 2: Find the best factor-level combination (effect curves, production cost, operating cost)
Step 3: Predict the expected response at the best combination (simple prediction formulae or regression analysis)
Step 4: Confirm the predictions (a small confirmation run and comparison with the predictions)

Exercise: Analysis of the Die Casting Experiment


Trial   A (1)   B (2)   AxB (3)   C (4)   D (5)   E (6)   e (7)   Hardness R1   R2
  1       1       1        1        1       1       1       1         71         72
  2       1       1        1        2       2       2       2         59         76
  3       1       2        2        1       1       2       2         73         72
  4       1       2        2        2       2       1       1         55         71
  5       2       1        2        1       2       1       2         78         75
  6       2       1        2        2       1       2       1         63         69
  7       2       2        1        1       2       2       1         74         70
  8       2       2        1        2       1       1       2         75         72

Perform ANOVA and find the best combination

Analysis of the Die Casting Experiment


Column #        1      2      3      4      5      6      7
Source          A      B      AB     C      D      E      e
Level 1 total   549    573    579    555    537    591    567
Level 2 total   576    552    546    570    588    534    558

Interaction AB:
        B1     B2     Effect
A1      288    261    -6.75
A2      285    291    +1.50

T = Total of the 16 observations = 1125
RSS = Raw sum of squares of the 16 observations = 71^2 + 73^2 + ... + 72^2 = 79685
CF = T^2/16 = 1125^2/16 = 79101.56
TSS = RSS - CF = 583.44 (df = 15)
A1 = Total of the 8 observations at A1 = 549; A2 = Total of the 8 observations at A2 = 576
SScol1 = SSA = (A1)^2/8 + (A2)^2/8 - CF = 45.56 (df = 1), and similarly for the other columns / sources
SSe1 = SScol7 = 567^2/8 + 558^2/8 - CF = 5.07 (df = 1)
SS(pure error) = SSe2 = TSS - sum_j SScolj = 583.44 - (45.56 + ... + 5.07) = 57.50 (df = 8)
SSe2 can also be computed as ?
SSreplicate need not be computed since the 16 trials have been randomized completely
The ANOVA table follows
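The column sums of squares can be reproduced directly from the level totals on this slide. A minimal sketch (names are mine):

```python
def ss_column(level1_total, level2_total, n_per_level, cf):
    """Sum of squares of a 2-level OA column: sum of Ti^2/ni minus CF."""
    return level1_total**2 / n_per_level + level2_total**2 / n_per_level - cf

T = 1125                  # total of the 16 observations
CF = T**2 / 16            # correction factor, 79101.56
RSS = 79685               # raw sum of squares
TSS = RSS - CF            # total SS, 583.44 on 15 df

totals = {  # (level-1 total, level-2 total) for each column
    "A": (549, 576), "B": (573, 552), "AB": (579, 546),
    "C": (555, 570), "D": (537, 588), "E": (591, 534), "e1": (567, 558),
}
ss = {k: ss_column(t1, t2, 8, CF) for k, (t1, t2) in totals.items()}
ss_pure_error = TSS - sum(ss.values())   # 57.50 on 8 df
```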

Analysis of the Die Casting Experiment


ANOVA Table

Source           df    SS      MS      F        %
A                1     45.6    45.6    6.55*    6.6
B                1     27.6    27.6    3.97     3.5
C                1     14.1    14.1    2.02     1.2
D                1     162.6   162.6   23.39**  26.7
E                1     203.1   203.1   29.21**  33.6
AB               1     68.1    68.1    9.79*    10.5
e1               1     5.1     5.1     -        -
e2               8     57.5    7.19    -        -
(Pooled error)   (9)   62.6    6.95    -        17.9
Total            15    583.4   -       -        100

% = (SSsource - dfsource x MSerror) / TSS x 100

Factors D and E are the most important. Factor A and the interaction AB also play a significant role.

* Significant at 95% level of confidence; ** Significant at 99% level of confidence

Analysis of the Die Casting Experiment


Average Response Graphs for the Significant Effects

[Plots of average hardness:
D: 67.125 at D1, 73.5 at D2. Choice is D2.
E: 73.875 at E1, 66.75 at E2. Choice is E1.
A: 68.625 at A1, 72 at A2. Choice is ?
A x B interaction: the B1 line runs from 72 at A1 to 71.25 at A2; the B2 line runs from 65.25 at A1 to 72.75 at A2. Choice is ?]

What about C?

Analysis of the Die Casting Experiment


OPTIMUM COMBINATION: The interaction graph shows that BHN is maximized at A2B2. However, we choose A2B1 instead of A2B2, since B1 is more robust than B2 and there is not much difference in the expected response between A2B1 and A2B2. Also, C1 (= 0.03% Zn) is less costly than C2 (= 0.06% Zn). Thus the overall optimum is A2B1C1D2E1.

Prediction of expected response

Main effect (significant): the expected responses at A1 and A2 are
E(Y) = T-bar + (A1-bar - T-bar) and E(Y) = T-bar + (A2-bar - T-bar) respectively.

If the factor A is insignificant then both (A1-bar - T-bar) and (A2-bar - T-bar) are taken as zero, and the expected response is just T-bar.

Interaction effect (significant): Effect of (AiBj) = AiBj-bar - Ai-bar - Bj-bar + T-bar

Assuming the effects of A, B and AB are additive, the expected response at AiBj is
E(Y) = T-bar + (Effect of Ai) + (Effect of Bj) + (Effect of AiBj)

Case 1: A, B and AB are significant:
E(Y) = T-bar + (Ai-bar - T-bar) + (Bj-bar - T-bar) + (AiBj-bar - Ai-bar - Bj-bar + T-bar) = AiBj-bar
Case 2: Only A and AB are significant: E(Y) = T-bar + (AiBj-bar - Bj-bar)
Case 3: Only B and AB are significant: E(Y) = T-bar + (AiBj-bar - Ai-bar)
Case 4: Only AB is significant: E(Y) = 2 T-bar + AiBj-bar - Ai-bar - Bj-bar
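The four cases all reduce to adding whichever centered effects are significant, which a short sketch makes explicit (the function and its argument names are mine, not from the course material):

```python
def predict(t_bar, a_i=None, b_j=None, ab_ij=None,
            a_sig=False, b_sig=False, ab_sig=False):
    """Expected response at AiBj: start from the grand mean T-bar and
    add only the centered effects that are significant."""
    ey = t_bar
    if a_sig:
        ey += a_i - t_bar                      # main effect of Ai
    if b_sig:
        ey += b_j - t_bar                      # main effect of Bj
    if ab_sig:
        ey += ab_ij - a_i - b_j + t_bar        # interaction effect of AiBj
    return ey

# Case 1 (A, B, AB all significant) collapses to the cell mean AiBj-bar:
predict(70, a_i=72, b_j=71, ab_ij=74, a_sig=True, b_sig=True, ab_sig=True)
```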

Die Casting Experiment: Prediction of Mean Response at the Optimum


Optimum combination: A2B1C1D2E1 (the insignificant factor C is not included in the prediction)

E(Y at A2B1C1D2E1) = T-bar + (A2-bar - T-bar) + (A2B1-bar - A2-bar - B1-bar + T-bar) + (D2-bar - T-bar) + (E1-bar - T-bar)
= A2B1-bar - B1-bar + D2-bar + E1-bar - T-bar
= 71.25 - 71.63 + 75.50 + 73.88 - 70.31 = 78.69

Existing combination: A1B1C1D1E1

E(Y at A1B1C1D1E1) = A1B1-bar - B1-bar + D1-bar + E1-bar - T-bar
= 72.00 - 71.63 + 67.13 + 73.88 - 70.31 = 71.07

Gain: (78.69 - 71.07) RC = 7.6 RC + GREATER ROBUSTNESS

Confirmation of prediction
It is always wise to confirm the results of any field experiment before full-scale implementation. In the case of OA experimentation, confirmation of the prediction for the optimum combination is a must, since most of the interactions are ignored. A small confirmation run of size 1-10 is usually sufficient. Confirmation can be facilitated by constructing Confidence Intervals (CI) for the true results. The 95% CI for the average of r observations from the confirmation run is given by

mu-hat +/- sqrt[ F(0.05; 1, f) x Ve x (1/ne + 1/r) ]

where Ve is the same as MSerror, f is the error df, and ne is the effective number of replications, given by

ne = (Total number of trials) / (Sum of the df of all the sources, including T, considered for obtaining mu-hat)

Putting r = 1 in the above, we have the CI for an individual observation.

Die Casting Experiment : Confidence Interval


From the F table, F(0.05; 1, 9) = 5.12. From the ANOVA table, Ve = 6.95. ne = 16 / (df of T + df of A + df of AB + df of D + df of E) = 16/5 = 3.2. Thus 95% of the individual observations of the confirmation run should lie within 78.69 +/- 6.82 RC. Confirmation failure indicates that either the factorial effects are not additive or important interactions have been ignored.
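The interval half-width is a one-line computation; the sketch below reproduces the slide's figure to within rounding (the function name is mine):

```python
import math

def ci_half_width(f_crit, ve, ne, r):
    """Half-width of the 95% CI: sqrt(F * Ve * (1/ne + 1/r))."""
    return math.sqrt(f_crit * ve * (1.0 / ne + 1.0 / r))

# Die casting confirmation run, individual observations (r = 1):
hw = ci_half_width(5.12, 6.95, 16 / 5, 1)   # about 6.8 RC
```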

Exercise: Prediction of Mean Response

ANOVA of an experiment indicates that A, C, D, E, F, AB and BD are significant. The optimum combination is found as A1B1C2D2E2F2. Give the formula for estimating mean response at the optimum combination.

Optimization: Complex Situations

Example 1: Consider an experiment involving two responses Y1 and Y2. Assume A2 is better than A1 with respect to Y1 but A1 is better than A2 with respect to Y2. How should we choose the best level of A? Example 2: Consider three factors A, B and C. The combinations A1B1 and B2C1 are found to be the best. How can we find the overall best combination from the average response table?

Optimization: Trade-Off Analysis


[Table: each factor level (A1, A2, B1, B2, C1, C2, D1, D2, E1, E2) is rated on operating cost and on the responses Y1 and Y2 with * (preferable) or ** (highly preferable); trading these off gives the overall preferences A2, B2, C1, D2 and E1, i.e. the combination A2B2C1D2E1.]

* Preferable, ** Highly preferable, no symbol: no preference

Optimization: Example 2

Example 2: Consider three factors A, B and C. The combinations A1B1 and B2C1 are found to be the best. How can we find the overall best combination from the average response table?
Case 1: Both A and C are insignificant. Simply compare A1B1 and B2C1 and select the better of the two.
Case 2: Both A and C are significant. Estimate the expected responses at (A1B1)C1, (A1B1)C2, A1(B2C1) and A2(B2C1), and select the best of the four.
Other cases?

Analysis of 3-Level OA Experiments


Each main effect has 2 df and each interaction has 2 x 2 = 4 df. SSA (for example) = (A1^2 + A2^2 + A3^2)/r - CF, where r = the number of observations in each total Ai. If the two columns representing the interaction between two 3-level factors are available in the OA layout, then the interaction SS may be obtained by adding the SS of the two columns. Alternatively, the interaction SS may be obtained from the two-way table. If needed, each SS can be partitioned further into components of 1 df each.

Analysis of 3-Level OA Experiments


For example, SSA (2 df) = SSA(L) (1 df) + SSA(Q) (1 df), where
SSA(L) = (A3 total - A1 total)^2 / (2r)
SSA(Q) = (A1 total + A3 total - 2 x A2 total)^2 / (6r), r = the number of observations in each Ai total.
Similarly, SSAB (4 df) = SSA(L)B(L) (1 df) + SSA(L)B(Q) (1 df) + SSA(Q)B(L) (1 df) + SSA(Q)B(Q) (1 df).
Partitioning as above is useful when the available error df is very limited. Note that partitioning the SS into 1-df components is meaningful only for quantitative factors.
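The linear and quadratic contrasts above partition the factor SS exactly; a small sketch (the function name is mine) makes the identity SSA = SSA(L) + SSA(Q) easy to check:

```python
def ss_3level(a1, a2, a3, r):
    """Partition the 2-df SS of a 3-level factor into its 1-df linear and
    quadratic components using orthogonal contrasts.
    a1, a2, a3 are level totals; r is the number of observations per total."""
    cf = (a1 + a2 + a3) ** 2 / (3 * r)          # correction factor
    ss_total = (a1**2 + a2**2 + a3**2) / r - cf  # 2-df factor SS
    ss_lin = (a3 - a1) ** 2 / (2 * r)            # linear contrast (-1, 0, +1)
    ss_quad = (a1 + a3 - 2 * a2) ** 2 / (6 * r)  # quadratic contrast (+1, -2, +1)
    return ss_total, ss_lin, ss_quad
```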

Modification of OA: Mixed-Level Factors


Dummy-Level Technique: to assign a k-level factor to a (k+1)-level column. For example, a 2-level factor A can be assigned to a 3-level column as follows: column level 1 -> A1, column level 2 -> A2, column level 3 -> A1 (the repeated level is written A1'). Which level should be repeated, A1 or A2? The more important one (say, a new level on which much information is not yet available). Also consider the cost of experimentation. Any one of the three levels of the column can be used as the dummy.

Dummy Technique

Suppose the factor A, having two levels, is assigned to a column of L9, with level A1 repeated as the dummy (A1'). Let r = the number of replicates of the L9 experiment. Then
Main effect of A = (A2 total)/(3r) - (A1 total + A1' total)/(6r)
SSA = (A1 total + A1' total)^2/(6r) + (A2 total)^2/(3r) - CF
Generalization: a k-level factor can be assigned to an n-level column, n > k. For example, a 6-level factor can be assigned to a 9-level column by dummying three of the nine levels. Orthogonality is not lost.

Modification of OA: Mixed-Level Factors

Collapsing of Columns: to create a higher-level column within a lower-level array

Creating a 4-level column in a 2-level array
Creating an 8-level column in a 2-level array
Creating a 9-level column in a 3-level array
Creating a 6-level column in L18

Creating a 4-level Column in a 2-Level Array

A 4-level factor has 3 df, so we need three columns of a 2-level array to create a 4-level column.

Select any two columns and their interaction column. Let (i, j), i = 1, 2 and j = 1, 2, be the level combinations of any two of these three columns. Transform (i, j) as follows: (1, 1) -> 1, (1, 2) -> 2, (2, 1) -> 3, (2, 2) -> 4. Erase the three columns and insert in their place the 4-level column created as above. An example follows.
Creating a 4-level Column in L8


Columns of L8        Columns used for transformation
1    2    3          (1 2) ->    (1 3) ->    (2 3) ->
1    1    1          1           1           1
1    1    1          1           1           1
1    2    2          2           2           4
1    2    2          2           2           4
2    1    2          3           4           2
2    1    2          3           4           2
2    2    1          4           3           3
2    2    1          4           3           3

Any one of the three 4-level columns can be used
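The (i, j) transformation is mechanical; a minimal sketch (names are mine) applied to the first two columns of L8:

```python
# First three columns of L8 (column 3 is the interaction of columns 1 and 2).
L8_cols = {
    1: [1, 1, 1, 1, 2, 2, 2, 2],
    2: [1, 1, 2, 2, 1, 1, 2, 2],
    3: [1, 1, 2, 2, 2, 2, 1, 1],
}

def four_level_column(col_i, col_j):
    """Collapse two 2-level columns into one 4-level column using
    (1,1)->1, (1,2)->2, (2,1)->3, (2,2)->4."""
    code = {(1, 1): 1, (1, 2): 2, (2, 1): 3, (2, 2): 4}
    return [code[(a, b)] for a, b in zip(col_i, col_j)]

c12 = four_level_column(L8_cols[1], L8_cols[2])  # the "(1 2)" column in the table above
```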

The method of collapsing columns followed by the dummy technique can be used to assign a 3-level factor to a 2-level array (say L8 or L16).

Modification of OA: Mixed-Level Factors

Compound factor method


Allocation of more factors than the number of columns in an OA.

Allocating k 2-level factors to a 3-level array using the dummy technique results in a loss of k df. Such a loss may, in some situations, increase the size of the experiment unnecessarily. Consider the following situation: two 2-level and three 3-level factors, with no interactions required. TDF = 2 x 1 + 3 x 2 = 8, so we should be happy to use L9. But this is not possible with the dummy technique, since L9 has only four columns; we would need L18. Alternatively, we can compound the two 2-level factors (say A and B) to create a 3-level compound factor (AB) as follows: A1B1 -> (AB)1, A2B1 -> (AB)2, A1B2 -> (AB)3. The 3-level compound factor (AB) and the other three 3-level factors can then be assigned to the four columns of L9.

Compound Factor Method


A1B1 -> (AB)1, A2B1 -> (AB)2, A1B2 -> (AB)3

Assuming the assignment is to L9 and r is the number of replicates:
Main effect of A = (AB)2-bar - (AB)1-bar; Main effect of B = (AB)3-bar - (AB)1-bar
SS(AB) = [{(AB)1^2 + (AB)2^2 + (AB)3^2}/(3r)] - CF
SSA = [(AB)1 - (AB)2]^2/(6r)
SSB = [(AB)1 - (AB)3]^2/(6r)

Note that SS(AB) is not equal to SSA + SSB, since A and B are not orthogonal. If either of the two factors (say B) is found insignificant, then we may consider (AB)1 = A1, (AB)2 = A2 and (AB)3 = A1 and recalculate SSA as applicable for the dummy technique.

Dealing with interactions

The original study was an L27 experiment involving 9 factors and four responses. Here we shall consider only two factors and one response.

Alloy: hypereutectic Al-Si alloy
Factors: A = %Si (17%, 22%, 27%), B = %P (0.1%, 0.2%)
Response: BHN. The objective is to find the best levels of A and B to maximize hardness.

          17%Si           22%Si           27%Si
0.1%P     90, 126, 119    145, 142, 101   145, 143, 109    (cell means 111.67, 129.33, 132.33)
0.2%P     90, 127, 113    158, 95, 114    96, 133, 126     (cell means 110, 122.33, 118.33)

Effect Curves: 3 x 2 Experiment

[Interaction plot of average BHN against %Si for the two levels of %P:
B1 (0.1%P): 111.67 at A1, 129.33 at A2, 132.33 at A3
B2 (0.2%P): 110 at A1, 122.33 at A2, 118.33 at A3]

How can we avoid the interaction?

Case Study
Reduction of variation of unaccounted gas in a bulk transmission and distribution network

Background

The bulk gas transmission and distribution network consists of thirty-two flow meters, of which two are turbine meters and the rest are orifice meters. The whole network can be partitioned into several overlapping segments. The gas balancing equation for each such segment is

Unaccounted Gas (UAG) = Gas Out - Gas In

where Gas In is the total flow measured by the input meters, and Gas Out is the total flow measured by the output meters plus the line pack component.
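The balancing equation is simple enough to script; the daily figures below are made up for illustration and are not the project's data:

```python
def uag_percent(gas_in, gas_out_metered, line_pack):
    """Daily unaccounted gas as a percentage of Gas In.
    Gas Out = metered output flow + line pack component."""
    gas_out = gas_out_metered + line_pack
    uag = gas_out - gas_in
    return 100.0 * uag / gas_in

# Illustrative (made-up) figures for one segment, in standard cubic metres:
pct = uag_percent(1_000_000, 992_000, 2_000)   # a -0.6% day
```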

Background

The company monitors daily UAG (as % of Gas In) in all the segments. This project was undertaken when the %UAG dropped to -0.6%, which amounted to a loss of about Rs. five million per month. It should however be mentioned that the real loss may not be as high as this, since not all the gas-out points are billing points.

Intermediate Studies

The study consisted of a systematic investigation of the process to identify the root causes of the high variation in UAG. The details of this investigation, consisting of many small studies, will not be discussed here. The main result obtained at the end of these studies was that the %UAG could be kept near zero if the gas temperatures at two particular stations were controlled in a particular fashion. Such a result was surprising, since the meters are supposed to record flow at standard conditions. This led us to the flow validation study, discussed next.

Flow Validation

There are two main sources of error: error in the on-line measurement of gas composition, pressure, temperature and specific gravity, and error in flow computation by the flow computers. Detailed scrutiny of the calibration records eliminated the first possibility. Thus, although all the flow computers connected to the SCADA (Supervisory Control And Data Acquisition) system are AGA-3 and AGA-7 compliant, it was decided to validate the flow computations performed by the flow computers.

Flow Validation Experiments


The array L25 was used to study the effect of six factors on the error in flow computation. The levels of the factors were chosen to cover the entire range of operating conditions. Five levels were selected for each factor, since the operating range was large and the theoretical flow computation equations involve fourth-order terms. For each of the 25 experimental conditions, flow was computed by each of the five flow computers and compared with the corresponding standard value provided by a standard laboratory.

Factors and Levels


Factor                        Unit   Level 1   Level 2   Level 3   Level 4   Level 5
Tube Diameter (TD)            mm     40        110       180       250       320
Diameter ratio (DR)*          -      0.3       0.4       0.5       0.6       0.7
Absolute pressure (PR)        KPa    500       2000      3500      5000      6500
Differential Pressure (DP)    KPa    0.2       25        50        75        100
Temperature (TE)              C      10        20        30        40        50
Specific Gravity (SG)         -      .50       .55       .60       .65       .70

* Orifice Diameter (OD) = TD x DR

Note that the factor OD has sliding levels. For example, the levels of OD are 12, 16, 20, 24 and 28 when TD = 40 mm, but the levels are 96, 128, 160, 192 and 224 when TD = 320 mm.
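Sliding levels are just a derived-factor computation; a minimal sketch (the function name is mine):

```python
def od_levels(td, dr_levels=(0.3, 0.4, 0.5, 0.6, 0.7)):
    """Sliding levels of Orifice Diameter: OD = TD * DR, so the actual
    OD settings depend on the Tube Diameter level chosen for that trial."""
    return [td * dr for dr in dr_levels]

od_levels(40)    # levels 12, 16, 20, 24, 28 mm
od_levels(320)   # levels 96, 128, 160, 192, 224 mm
```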

Experimental Results
Flow (SCMH)

Trial #   Bristol Babcock   Flow Boss 600   Flow Boss 503   ROC 809    Turbo 2500   Apex (Standard)
1         41.5              37.7            42.261          42.049     42.258       42.248
.         .                 .               .               .          .            .
18        24150.2           19323.07        21847.946       21867.01   21873.03     22054.52
.         .                 .               .               .          .            .
25        243562.6          245153.01       247495.359      247428.3   247917.4     247924.7

Analysis of Experimental Data

Let Fi be the flow measured by the ith computer and S the corresponding standard value. The linear model log(Fi) = alpha_i + beta_i x log(S) was fitted for each of the five computers. Ideally we should have alpha_i = 0 and beta_i = 1. It was found that in all five cases beta_i was nearly unity. Accordingly, further analysis was carried out using the difference Z = log(S) - log(F). The performance summary is given below.

Flow Computer     Average Bias   Mean Square Error   Performance Rank
Bristol Babcock   -0.01405       0.00072             3 (Worst)
Flow Boss 600     0.01735        0.00066             3
Flow Boss 503     0.00294        0.000063            2
ROC 809           0.00247        0.0000094           1
Turbo 2500        0.00217        0.0000089           1

Significant Factors
Significant Effect Components and their Contribution

Flow Computer     Significant Component                                         % Contribution
Bristol Babcock   Specific Gravity (Linear)                                     98.3
Flow Boss 600     Pressure (Linear, Quadratic, Cubic)                           97.2
Flow Boss 503     Pressure (Linear), Differential Pressure (Quadratic, Cubic)   68.0
ROC 809           Specific Gravity (Linear)                                     39.9
Turbo 2500        Specific Gravity (Linear)                                     24.6

Conclusion: The flow computers are not computing flow values correctly. In particular, variation in specific gravity and pressure is not accounted for properly.

Corrective Action and Benefits


The matter was taken up with the suppliers of the flow computers concerned (Bristol Babcock and Flow Boss). Meanwhile these two computers were taken out of the system and their flow meters were connected to the other computers. This temporary corrective action resulted in a marked reduction in the variation of UAG. Apart from reducing the variation in UAG, the company now had a methodology for identifying erratic meters (not discussed here). Further, it was realized that the present practice of flow validation, based on measurements made at a single operating condition, is inadequate.
