
Teaching Statistics to Engineers
Author: Søren Bisgaard
Source: The American Statistician, Vol. 45, No. 4 (Nov., 1991), pp. 274-283
Published by: American Statistical Association
Stable URL: http://www.jstor.org/stable/2684452
Accessed: 24/11/2011 14:32



Teaching Statistics to Engineers


SØREN BISGAARD*

Engineers perform experiments and analyze data as an integral part of their job, regardless of whether they have learned statistics. But those who have are likely to be more effective engineers. The fact that many engineers have only recently "discovered" statistics suggests that we need to reconsider our approach to teaching this important science. I report on our experience teaching engineers using an approach that integrates statistics into engineering practice. Examples of course structure and curricula for both university and professional industrial courses are discussed. Special teaching methods are presented that help engineers understand that statistics can help them solve their problems.

KEY WORDS: Design-of-experiment demonstration; Engineering education; Statistical consulting.

1. INTRODUCTION

The title of this article is in some sense misleading. A more accurate title would be "Teaching Engineers About Solving Engineering Problems That Incidentally Require Statistical Methods." But that title is also misleading. The problem I really want to address is: "How to Get Statisticians to Teach Engineers How to Solve Engineering Problems That, Incidentally, Require Statistical Thinking and Methods." As this revised title indicates, there is an implied criticism of the way the statistics profession has approached what has become known as Engineering Statistics. And that criticism concerns not only teaching style, but also content and organization. I dare say that in the past we have failed miserably. Some might be offended by my criticism and say, "Show me the data," but the answer is obvious. If we statisticians had been successful in teaching engineers problem-solving skills using statistical methods, statistics and experimental design would today be incorporated in all engineering education and widely practiced, and it would not have been possible for Taguchi to move in and dazzle engineers with what is essentially simple (but often messed up) experimental design. Although many of us can agree on criticizing Taguchi's use of statistics, his enormous success among engineers ought to make us rethink what and how we teach engineers about statistical methods. A fundamental reevaluation ought not leave anything sacred. (Incidentally, it does not mean that we should copy Taguchi's teaching method, either.)

Engineering statistics has traditionally been regarded as a separate topic, almost "orthogonal" to engineering. Instead, it should be an integral part of engineering. Statistical methods are not an adjunct to, but an inseparable part of, how engineers solve engineering problems. Rather than looking for engineering problems that fit into our perception of engineering statistics, we should turn the question around and ask the engineers what they do, and see how we can help them solve their (engineering) problems.

So what do engineers do? Engineers, of course, do a variety of things. In fact, they do so many different things that they themselves have divided engineering into at least four major categories: mechanical, electrical, civil, and chemical, and each of these is further subdivided. When we teach engineers we need to keep in mind that there are certainly commonalities, but also many differences, among the problems engineers face. From my experience, I know most about mechanical, electrical, and the related manufacturing and industrial engineering aspects of these areas. Therefore, my discussion in this article concerns only these fields. Broadly speaking, engineers in these areas develop new products; improve previous designs; build and test prototypes; design tools, machines, and processes to make and test products; maintain, control, troubleshoot, and improve ongoing manufacturing processes; and maintain, service, and repair products. In each of these functions engineers collect and analyze data. Whether or not engineers have learned statistics, they will do statistics. Therefore, the issue is not whether they use statistics, but how good they are at it.

*Søren Bisgaard is Assistant Professor, Department of Industrial Engineering and the Center for Quality and Productivity Improvement, University of Wisconsin, Madison, WI 53705. This work was sponsored by National Science Foundation Grant DDM-8808138 and was presented as an invited paper at the annual meetings of the American Statistical Association, August 7, 1989, in Washington, DC. The author thanks George Box and Conrad Fung, as well as Tom Lewis of Mercury Marine, for many constructive discussions in developing the views expounded in this article, and Albert Prat, Bill Hill, and Stephen Jones for many useful comments on previous drafts.
We statisticians have especially failed to convey to engineers that in fact statistics includes the art and science of collecting data, be it through designed experiments or by sampling. If an engineering student has had a statistics course, design of experiments has usually been tucked away in the back of the textbook and hence has not been covered in a one-semester course. We cannot expect a student to sit through more than a one-semester course if it does not appear useful and relevant. Engineering educators have also not been very conscious of how much experimentation is in fact a part of an engineer's daily work. During the engineer's education it is not talked about. Perhaps it was the post-World War II push for making engineering education more "scientific" that led to an overemphasis on pure, deductive reasoning. Like everything else, too little and too much is usually bad. Many engineering students leave today's universities thinking that everything they need to
© 1991 American Statistical Association


The American Statistician, November 1991, Vol. 45, No. 4

know can essentially be deduced from Newton's Three Laws, Maxwell's Equations, Kirchhoff's Laws, and the Three Laws of Thermodynamics. It is not brought to their attention that every one of these laws originated from an extensive series of experiments in an iterative process of induction and deduction. Once students graduate, of course, they will have to confront reality and will have to experiment, but they are ill equipped to do so. The only type of experiments they have been involved in, if any, are often demonstration experiments of already-known phenomena. Unfortunately, these demonstrations only vaguely resemble real experiments from which new knowledge is generated. With rapidly developing new technologies, materials, and processes, it is today even more important that engineers learn how to learn basic facts. Learning how to learn is the only invariant in today's rapidly changing world. Thus it is essential to gain a working knowledge of how to conduct valid and efficient experiments, collect data, analyze the results, and build models. In essence, we should teach scientific method and problem-solving skills related to engineering. As we do that, statistics will present itself at the forefront of the stage. Then it will be natural to shift the emphasis in teaching away from introducing statistics with excessive amounts of probability theory and, instead, reverse the order and front-load the course with the teaching of design of experiments.

2. PHILOSOPHICAL PRELIMINARIES

Before I make specific suggestions for teaching statistics to engineers, I would like to discuss some philosophical issues that I think ought to serve as guiding principles. Most important, we need to make it clear to ourselves what we mean by statistics and the role we envision statistics should play in engineering science and practice. The (informal) definition of statistics that I like best is "statistics is the art and science of collecting and analyzing data." Like physics, it is a science distinct from mathematics. It is true that statistics, like physics, draws heavily on mathematics for developing theory and methods; I would like to emphasize that we should not underestimate the importance of mathematics for statistical theory. But as physics is not just applied differential equations, so statistics is not just applied probability. As a corollary to this, what is important for the science of statistics should not be judged by how complicated it is mathematically. Graphics and exploratory data analysis are examples of methods that require only a minimum of mathematics but are of extreme importance for engineering statistics. I tend to think it was a violation of this corollary that in the past led to the damaging overemphasis on acceptance sampling in quality control. Ishikawa's Seven Tools, on the other hand, are from a mathematical point of view trivial (even pitiful), and no fancy theorems can be proved about them, but the philosophy behind their use is very important in helping engineers improve processes, and they are vital tools for solving problems as detectives. As a science, statistics is more appropriately considered the science of inductive reasoning and experimentation. What we emphasize and teach engineers should be to that end.

Box (1976) described a useful model for the role of statistics in a broader, scientific context. His model of the scientific learning process displayed an iterative process of induction and deduction, as illustrated in Figure 1. As previously mentioned, most engineering education is presented almost exclusively as a deductive development. The historical development of theories that had an empirical and inductive heritage is ignored. Even truly empirical relationships such as Taylor's tool-life equations are often disguised as if they were deduced from some natural law. For teaching large quantities of accumulated knowledge, that is perhaps the most efficient way to proceed. It gives students the false impression, however, that science and scientific thinking are synonymous with deductive reasoning only. They might even think that empirical work and inductive reasoning are somehow "dirty" and unscientific. Even worse, they might think that they can simulate everything on a computer and that if there is a difference between their results and nature, nature is wrong. It is important that we introduce statistics not as a separate topic, but as an integral component in the scientific learning process of induction and deduction. The impression of science as purely deductive is quite unproductive and can be the cause of the difficulty of introducing statistics as a valuable topic of study for engineers. The introductory statistics courses should be used to ensure that students learn how to generate fundamental new knowledge through induction and experimentation. In this context it is well worth rereading many of R. A. Fisher's more philosophical articles. Few will dispute his profound understanding of statistics and science.
Figure 1. The Advancement of Learning [reproduced from Box (1976)]. Panel 1, "An Iteration Between Theory and Practice," shows practice (data, facts) linked to hypotheses (model, conjecture, theory, idea) by alternating steps of induction and deduction. Panel 2, "A Feedback Loop," shows the error signal between the deduced consequences of hypothesis H_j and the facts leading, by induction, to a modified hypothesis H_{j+1} that replaces H_j.

In particular, Fisher's strong emphasis on the role of statistics in scientific investigations is important when we consider how to teach statistics to engineers. For example, Fisher wrote:


. . . [variable] phenomena come to our knowledge by observation of the real world, and it is no small part of our task to understand, design and execute the forms of observation, survey or experiments, which shall be competent to supply the knowledge needed. The observational material requires interpretation and analysis, and no progress is to be expected without constant experience in analysing and interpreting observational data of the most diverse types. Only so, as I have suggested, can a genuine and comprehensive formulation of the process of inductive reasoning come into existence. (Fisher 1948, p. 40)

It is our duty as educators to help students appreciate inductive reasoning through this "constant experience in analysing and interpreting observational data of the most diverse types." Let us challenge students with real-life and, often, messy problems. We should not start out with, "Let X1, X2, ... be iid normally distributed random variables . . . ," but let them cut the problems out themselves, so that reasonable, plausible models and assumptions can be tried. Even better, we should let students be involved in arranging the experimental situations so that standard assumptions are plausible. Real statistical work involves a lot of practical considerations that I think can only be learned by doing statistics. Teaching inductive reasoning is difficult and does not seem to lend itself to the traditional theorem, proof, and example mode. The best way to learn it is by participating in all phases and details of real experimentation and through data analysis of real data sets (not simulated). Fisher also had advice to statistics teachers (they were called mathematicians at the time) that is as relevant today as when it was first written. For example:
I want to insist on the important moral that the responsibility for the teaching of statistical methods in our universities must be entrusted, certainly to highly trained mathematicians, but only to such mathematicians as have had sufficient prolonged experience of practical research, and of responsibility for drawing conclusions from actual data, upon which practical action is to be taken. Mathematical acuteness alone is not enough. (Fisher 1938, p. 16)

I personally consult with many engineers both from the University of Wisconsin-Madison campus and from industry. I do not see how I could teach statistics without the experience that this consulting gives me. Moreover, I know my students appreciate the relevance and realism it brings to my teaching (see Bisgaard 1989). Consulting certainly has taught me a lot about what the real problems are. In fact, I think it ought to be as inconceivable to be a statistician who never consults as it is to be a "theoretical" physician who never sees a patient. When teaching introductory statistics courses to engineers I think it is important that we teach what I call (for lack of a better term) "statistical intuition." For example, what is the intuitive idea of the paired experiment? Why is it essential that the pairs be formed, and how does that setup differ from the unpaired experiment? Why, from an intuitive point of view, does the t distribution have fatter tails than the normal? And why does the t distribution tend to the normal as the sample size increases? After all, statistics is just common sense. As was pointed out in Davies (1954, p. 14), "By means of statistical methods the intuitive type of reasoning which an intelligent person might apply in drawing inferences from data is made objective and precise." Conversely, "objective and precise" statistical methods summarize a lot of common-sense reasoning that is worth explaining to students in detail and in nonmathematical terms so that we elevate their level of intuition. We should remind ourselves of David Cox's comment (Cox 1981, p. 289): "Theory, while often mathematical, is not necessarily so and theory is certainly not synonymous with mathematical theory." Blocking, randomization, and other precautions that ensure that an experiment is properly conducted are part of the theory of statistics but are not entirely mathematical. Experimental strategy and tactics are also mostly nonmathematical, but are of no small importance for the successful completion of an experiment. In teaching engineers statistics, we can learn a great deal from the physicists. They are not shy about teaching physics, starting with mostly qualitative insights, experiments, and applications. And they use numerous well-chosen physical experiments, demonstrated right in front of the students, to explain the theory. Only later, when a certain level of qualitative understanding and intuition has been built up, do they introduce differential equations, calculus-of-variations arguments, and so on. We should do the same! Moreover, we should teach fun things and methods the students can use, so that they become enthusiastic. Once they are "hooked" on statistics and can see just how much fun it can be, they will have the energy to study in more detail. The best we can do in a one-semester introductory engineering statistics course is to provide an "appetizer" to statistics, to get students excited.
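These intuitive questions lend themselves to small numerical demonstrations. The sketch below (my illustration, not part of the article) builds Student-t variates from their definition, a standard normal divided by the square root of an independent chi-squared over its degrees of freedom, and shows the tails thinning toward the normal as the degrees of freedom grow:

```python
import math
import random

def t_variate(df, rng):
    # One Student-t draw, built from its definition: a standard normal
    # divided by the square root of an independent chi-squared over df.
    z = rng.gauss(0.0, 1.0)
    chi2 = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

def tail_fraction(df, n=20_000, cutoff=3.0, seed=1):
    # Fraction of simulated t variates falling beyond +/- cutoff.
    rng = random.Random(seed)
    return sum(abs(t_variate(df, rng)) > cutoff for _ in range(n)) / n

# Few degrees of freedom: noticeably fatter tails than the normal,
# for which P(|Z| > 3) is about 0.0027. Many degrees of freedom:
# the tail frequency shrinks toward the normal value.
frac_small_df = tail_fraction(3)
frac_large_df = tail_fraction(100)
```

Seeing the tail frequency drop by an order of magnitude as the degrees of freedom grow is exactly the kind of qualitative insight the paragraph above argues for, obtained without a single density formula.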
Another point I want to make is that our job as teachers is not to screen out the less mathematically inclined students. It is to help them become better engineers! Some of the best experimenters I have taught were not necessarily the most mathematically and academically minded students and engineers. (Sometimes a practical and even moderately risk-taking nature is required to be a good experimenter. And that is not necessarily the trademark of a student inclined to book work.) Why should we deprive otherwise good engineers of the benefits of learning experimental design by setting up artificial barriers in terms of combinatorics and probability that they really do not need? That certainly is to nobody's benefit in the long run. We must get away from letting the introductory statistics course be used to (artificially) create academic prestige (or boost our own vanity) by unduly challenging the students' mathematical skills through rigor and exercises that have only little relevance to engineering practice. Even the most intelligent students are justified in having an aversion to something that seems unnecessarily complicated and not very useful. Universities tend to teach large quantities of facts. In statistics that tendency manifests itself in the teaching of a large number of techniques, in particular, hypothesis-testing techniques. However, more is often less. Good statistical work requires a certain craftsmanship. Cuthbert Daniel has taught us a lot about that. I think we should teach fewer things, but teach good statistical craftsmanship through careful and detailed analysis of a few examples in the style of Daniel (1976). Box's analysis and discussion of Quinlan's experiment (see Box 1988; Quinlan 1988) is a good example of this that we use in our own teaching at the University of Wisconsin (with a few more details and more graphics than are provided in the published version). For the engineer's general education, it is important to teach critical thinking. In most (deductive) courses, there is usually only one solution, obtained in a logical and linear (deductive) fashion. The introductory statistics course may be the first occasion where a student has the experience that there is more than one solution to a problem, that "all models are wrong but some are useful" (Box 1979, p. 202), and where criticism of the experimental setup is important. We also need much more emphasis on the experimental situation and scenario. The same data might be interpreted differently depending on the experimental setup. For example, if the experiment had repeated measurements or split plotting instead of genuine replications, we need to adopt a different analysis and interpretation. To understand the physical setup of the experiment is more important than to understand the details of the computations. The qualitative and nonmathematical notions are more important than the details of the derivation of a procedure. The example on abrasion resistance of rubber sheets from Davies (1954) is a beautiful illustration of this. Fisher's discussion of Darwin's experiment on self-fertilized versus cross-fertilized plants (see Fisher 1935) is another. We can, through statistics, teach students to exercise judgment and think about what is important and what is not.
If we do that, I think we can help a great deal to ease our students' transition into the real world. George Box, Conrad Fung, and I have done some rethinking about how we teach statistics to engineers. The foregoing considerations are a result of this (although I take full responsibility for whatever might be controversial, since it is my interpretation that is presented here). The way we teach our courses has been worked out in close collaboration and after many long discussions. It is impossible to untangle who had which idea at this point, but I am indebted to them for their many ideas and insights. Moreover, the late Bill Hunter has very much influenced our thinking on the issue of teaching engineering statistics. We owe many of the ideas to him. We have also benefited from the ideas presented by Snee (1980), Kempthorne (1980), Deming (1975), and Hogg (1985). I will explain in more detail how our courses are designed. In all of the courses we teach, we have reversed the traditional sequence of teaching probability first and experimental design last. We emphasize concepts, ideas, and the qualitative and philosophical aspects more than mathematical techniques and their derivation. Most things are explained inductively and with graphics and conceptual diagrams rather than equations. We focus on what engineers do. We always start with a problem, rather than focusing on techniques. Most important, we show a live demonstration of a product developed through experimentation using statistics. We have experimented with these ideas for some time, and we think we have managed to impress on our students that statistics is an integral part of engineering science and practice. In the following sections I will discuss three different types of courses that I have been involved in teaching. The first is an undergraduate-level one-semester university course. The second and third are courses for practicing engineers from industry, the difference between the two being that one is a one-week short course often taught at a site remote from the engineer's company and the other is spread out over an extended period of time and taught on site.

3. CURRICULUM FOR A UNIVERSITY COURSE

The one-semester university course that I teach, in collaboration with Conrad Fung and designed jointly with George Box, is not officially an introductory undergraduate statistics course, but could be and actually is, for some students. Most students are from industrial, mechanical, and manufacturing engineering. As a key point, we do not really teach statistics in the traditional sense. Instead, we teach engineering problem solving that incidentally requires statistics. Therefore, we start out with an engineering problem and bring in appropriate statistical techniques and thinking as needed. We start with a philosophical introduction and overview of the scientific context of modern quality improvement and the role of engineering statistics, with a discussion of informed observation (mostly Ishikawa's Seven Tools for problem solving) and directed experimentation following the general outline of Box and Bisgaard (1987). After this introduction we go directly to a problem related to the development of a new product. (No coin flipping or red and blue balls first.) To make it real, we have a small paper helicopter (our product) that we bring to the classroom. We tell the students that we are in the process of developing a better helicopter and, therefore, need to test this prototype. Specifically, we tell them that an important characteristic of the helicopter is flight time. We then climb up on a ladder and drop the helicopter four times from the ceiling, measure the time it takes to hit the floor, and, of course, get different numbers. Next we plot the data as a dot diagram, showing that the first step in any analysis is to plot the data. From the plot, we develop the ideas of location and dispersion and show how to quantify these notions in terms of the average and the standard deviation. We also show small diagrams that explain the central limit effect and the fact that an average has the same mean but a smaller variance than the original observations.
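The quantities introduced from the dot diagram, location, dispersion, and the smaller variability of an average, can be condensed into a few lines. The flight times below are hypothetical stand-ins for the classroom data:

```python
import math
import statistics

# Hypothetical flight times (seconds) from four drops of the prototype
times = [2.3, 2.6, 2.1, 2.4]

location = statistics.mean(times)       # the average
dispersion = statistics.stdev(times)    # the sample standard deviation

# The central limit effect in miniature: an average of n drops has the
# same mean as a single drop, but its standard deviation is smaller by
# a factor of sqrt(n).
n = len(times)
sd_of_average = dispersion / math.sqrt(n)
```

Even this toy calculation makes the classroom point concrete: averaging four drops halves the standard deviation of the reported flight time.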
Everything is introduced heuristically, inductively, and intuitively so that the students get a feel for what it means in the engineering context of the problem. The helicopter experiment immediately raises questions about how to measure flight time, operational definitions of when the flight time begins, etcetera. Conducting the experiment for the class brings realism to statistics and stimulates valuable discussions. It also drives home the point that statistics is a natural part of engineering. After all, would not a good engineer test the prototype helicopter before it is sent to production or to the customer? We are often asked if we could use a computer to simulate this experiment. My answer is no. The difficult part of experimental design is the physical conduct of the experiment and the problems, practical and theoretical, that arise. After this initial experiment we present a set of data from a second helicopter experiment conducted the previous day with a different helicopter. We now ask the class to evaluate whether this (red) paper helicopter is different from the first (blue) helicopter. Technically, we are introducing comparative experiments. But this setup raises questions about the process of inductive inference, blocking, and confounding, because the second experiment was conducted the day before, where, for example, the humidity and wind flow may have been different. We explain that one of the important issues in design of experiments is to guard against criticism from a "heavyweight authority" (Fisher 1935, p. 2) who might dispute our conclusions because the experiment was "ill designed." With this discussion as a preamble, we discuss the experimental situation that leads to paired comparisons and explain the advantage of blocking. We also show various ways of analyzing the data. Later we introduce the problem of comparing more than two products, experimentation in a noisy environment (blocking), and the role of randomization. We bypass analysis of variance by using the intuitively much simpler reference distribution approach as explained in Box, Hunter, and Hunter (1978). We also show various kinds of graphics for analyzing data and introduce the idea of residual checking.
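The reference-distribution idea can itself be sketched in a few lines: if the two helicopters really do not differ, the red/blue labels are arbitrary, so the observed difference in averages can be judged against the differences from every possible relabeling of the pooled times. The data are hypothetical, and this is my sketch of the general approach rather than the course's actual analysis:

```python
import itertools
import statistics

# Hypothetical flight times (seconds): blue prototype vs. red prototype
blue = [2.3, 2.6, 2.1, 2.4]
red = [2.7, 2.9, 2.5, 2.8]

observed = statistics.mean(red) - statistics.mean(blue)

# Reference distribution: the difference in averages for every way of
# splitting the eight pooled times into two groups of four.
pooled = blue + red
diffs = []
for idx in itertools.combinations(range(len(pooled)), len(red)):
    group = [pooled[i] for i in idx]
    rest = [pooled[i] for i in range(len(pooled)) if i not in idx]
    diffs.append(statistics.mean(group) - statistics.mean(rest))

# How often does a relabeling produce a difference at least as large
# as the one observed? (A small tolerance absorbs float rounding.)
p_value = sum(d >= observed - 1e-12 for d in diffs) / len(diffs)
```

With these numbers, 2 of the 70 relabelings are at least as extreme as the observed difference, a significance level of about 0.03 reached with no distributional assumptions and no analysis-of-variance machinery.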
Note that we compare more than two products, not more than two means. This is to reinforce the point that we are not doing statistics, but engineering that incidentally requires the use of statistical methods. Next we raise the question of how we would evaluate the effect of changing the dimensional characteristics of the helicopter, such as the body length, body width, wing length, and so on. Thus we need an experimental design technique that can accommodate many factors. This leads us to explain the idea of two-level factorials, how to compute and interpret main effects and interactions, and the advantages of factorials. Then we show the use of normal plots for analysis and for residual checking. We also briefly explain the use of simple factorials in evolutionary operation. We think that two-level fractional factorials are among the most useful things we can teach engineers. But we also acknowledge that the technical aspects are quite complicated if taught the usual way. We emphasize, therefore, the ideas of fractional factorials, aliases, and confounding, but we do not bring the students to the point where they actually can construct the more complicated designs and derive their confounding patterns themselves. Instead, by using numerous examples from engineering we explain the ideas and make sure that they can interpret the meaning of aliases, confounding, etcetera. For the actual designs, I have developed a set of tables that contain all possible eight- and sixteen-run two-level factorials as well as fractional factorials, including all possible ways of blocking these. All the designs are written out in detail with all the pluses and minuses as preprinted worksheets (see Bisgaard 1989a). Moreover, the corresponding alias structures are provided. We believe these are the most used designs and cover most engineering situations, at least for engineers who are just learning design of experiments. Larger and more complicated designs can be obtained from computer programs. Moreover, that material is best deferred until later, when the student has an appreciation for the power of these methods. The design tables also help make design of experiments more concrete and ready to use for the students. The development of the theory of two-level fractional factorials and their analysis is illustrated with many examples from engineering design and process improvement. Good engineering examples are important. They should be relevant to the engineer's field of application and, in addition, should show a diversity of applications within that field. We cannot expect beginning engineering students to think abstractly enough to see that an experiment on the effects of fertilizers on mangold roots is the same as an experiment on which factors cause friction in a throttle handle or on how to get more horsepower out of a combustion engine. We also infuse a healthy dose of philosophy of experimentation. For example, we have a lecture called the "Iterative Nature of Experimentation" and another called "The Quality Detective" (see Bisgaard 1989b).
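The construction behind such tables, choosing a generator and reading off aliases, can be illustrated on the smallest useful case. The sketch below (an illustration of the standard construction, not a reproduction of the course's worksheets) builds a 2^(4-1) fractional factorial from the generator D = ABC and derives aliases from the defining relation I = ABCD:

```python
from itertools import product

# Eight-run half fraction for four factors: generator D = ABC
runs = []
for a, b, c in product((-1, 1), repeat=3):
    runs.append({"A": a, "B": b, "C": c, "D": a * b * c})

def alias(effect, defining_word="ABCD"):
    # Multiplying an effect by the defining word gives its alias;
    # letters appearing twice cancel, i.e. a symmetric difference.
    return "".join(sorted(set(effect) ^ set(defining_word)))

# Each main effect is confounded with a three-factor interaction,
# and each two-factor interaction with another two-factor interaction.
```

So an engineer who sees a large "A" effect must remember that it could equally well be BCD, which is exactly the kind of interpretation the preprinted alias structures are meant to support.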
It really is unfortunate that many statisticians tend to think that theory is synonymous with mathematical equations. So many aspects of experimental design are philosophical and qualitative. Now that we have built up the necessary theory for conducting a fractional factorial experiment, we return to our paper helicopters. We consider eight design factors for the helicopters that all seem to have a potential effect on flight time. We have 16 paper helicopters made up before class according to a 2^(8-4) design. In about an hour, and with active and enthusiastic participation from the students, we conduct an experiment in the class. We carefully explain how we intend to conduct the experiment, discuss various management issues to prevent anything from going wrong, and randomize the run sequence with the students' help. We try as much as possible to make the experiment realistic. We assign various people to document and keep a laboratory notebook, take the flight times, write down the times, check that all 16 helicopters are made right, that the helicopters are taken in the right order, etcetera. With experienced engineers and students we often have an interesting discussion about all the practical issues that are just as important as the pluses and minuses.


After we complete the experiment, we enter the data into a computer and perform the analysis on-line. At this point the students are usually as eager to see what happened as if they were watching the conclusion of a mystery movie. The on-line analysis also provides us an opportunity to discuss the results, not only in terms of statistics and what is statistically significant, but also in terms of the engineering implications, empirical and scientific feedback, what the engineers should do next, and what recommendations they should make to management after the experiment. Incidentally, as it turns out, we also "discover" that flight time, which we decided to measure prior to the experiment, is not necessarily the only important response. In fact, some of the 16 helicopters are quite unstable, so perhaps we should have used stability as another response. When we have observant note-takers, they often notice this during the experiment, and sometimes we can pinpoint from their notes which factors most likely influence flight stability. Again, it is not uncommon that we discover something unexpected during an experiment that can lead to important discoveries useful in future experiments (see Box and Youle 1955, p. 320). As far as I know, however, this issue is not talked about enough in most statistics textbooks. In fact, some textbooks still promote the idea that to be "scientifically correct," all of the hypotheses must be stated prior to the experiment. That argument, of course, is out of step with how scientists and engineers work. It only serves the engineers or scientists who are looking for an excuse to say that statistics is useless in their work. The helicopter experiment is the highlight of our course. This exercise is undoubtedly invaluable for teaching design of experiments. The students have seen a real experiment performed for them.
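The effect calculation behind such an on-line analysis is simple enough to show. In a two-level design, each main effect is the average response at the factor's plus level minus the average at its minus level; sorting effects by absolute size mimics what a normal plot makes visible. The factor names, design, and flight times below are hypothetical (a full 2^4 stands in for the 2^(8-4) used in class):

```python
import itertools

factors = ["wing_length", "body_length", "body_width", "paper_weight"]
design = list(itertools.product((-1, 1), repeat=4))  # 16 runs
times = [2.1, 2.0, 2.3, 2.2, 2.4, 2.5, 2.6, 2.6,
         2.9, 2.8, 3.0, 3.1, 3.2, 3.1, 3.3, 3.4]

def effect(col):
    # Average flight time at the + level minus average at the - level
    plus = [t for run, t in zip(design, times) if run[col] == 1]
    minus = [t for run, t in zip(design, times) if run[col] == -1]
    return sum(plus) / len(plus) - sum(minus) / len(minus)

effects = {name: effect(i) for i, name in enumerate(factors)}

# Ranking by absolute size: the large effects stand apart from the
# noise-sized ones, which is what a normal plot displays graphically.
ranked = sorted(effects, key=lambda name: abs(effects[name]), reverse=True)
```

A glance at the ranked effects is the numerical counterpart of the classroom discussion: one or two factors dominate, and the rest are indistinguishable from drop-to-drop noise.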
It is no longer just an abstract idea, but something they have a mental image of when they perform the first experiment on their own. It is much like the way we learned basic physics by seeing interesting demonstrations. Statistics becomes alive and relevant to engineers. Again, I want to stress that a computer simulation experiment could never teach all the practical issues that a real, physical experiment does. After the helicopter experiment we discuss simple linear regression and model building. As usual, we emphasize the ideas rather than the derivation of the normal equations. Next follows a gentle introduction to the ideas of response surface methods, followed by discussion and illustrations of Taguchi's ideas of robust product design and reduction of variation. As in the earlier lectures, everything is introduced with conceptual pictures and graphics and only a few equations. We emphasize Taguchi's good engineering ideas of robust products and variance reduction, but show alternative, more conventional, and simpler methods for statistical design and analysis (see Box, Bisgaard, and Fung 1988).

Student Assignments. Since this is a university course, we assign weekly homework as well as a final project. Most assignments are developed by Conrad Fung and myself and are taken from our consulting. We often model

a consulting situation where students are supposed to pretend that they have received a set of data from a client or need to design an experiment. Students are then required to return the following week with a report on what they have found out, what further questions they might have for the client, and their recommendations. We often give students the data sheets that we originally got from the client (disguising only proprietary information). The problems are sometimes messy and without an obvious structure that would lead the students to look up a particular test procedure in a textbook. In fact, it is not always obvious what the objective is, so the students might have to figure it out themselves. Sometimes most of the work the students need to do is exploratory in nature and much like detective work. The computer work is often extensive. When necessary, we also require that they familiarize themselves with the technological aspects of the problem. We stress in these homework assignments the importance of being skeptical, not trusting the data and the measurements, and looking for outliers, curious patterns, clues, or anything unusual. Assumptions are only to be made tentatively, never trusted, and always checked. We emphasize that data analysis is iterative and adaptive. We also try to get the students to understand that there might be more than one analysis and one interpretation for a given set of data.

A key feature of our course is a final project, an idea we have adopted from Bill Hunter (see Hunter 1977). At the beginning of the semester the students are told that they are required to team up with a few other students and design, conduct, and analyze an experiment of their own choice. Each team is requested to present the results of this experiment to the rest of the class on the last day of the semester and prepare a written report.
Many of our students are not used to oral presentations, but they will probably need these skills when they graduate, so we think this adds to the learning experience. These projects help students learn the practical aspects of experimental design. Only seeing the helicopter experiment conducted in class is not the same as planning and conducting an experiment oneself. It also helps them overcome the initial hesitation many people have when doing something for the first time. Only occasionally do our students have access to laboratory facilities where they can conduct real, research-type experiments. Therefore, to inspire them to think about experiments they can perform at home or elsewhere, we give a list of 101 student experiments compiled by Bill Hunter (Hunter 1975) and an additional list of 30 experiments we have compiled. A representative sample from this list is given in Table 1. We do not want the students to repeat these experiments, but to use the list to provide inspiration for other experiments. As a rule we do not allow computer-generated and simulation experiments; that is too easy, seldom provides any "A-ha" experiences, and gives little practical experience in real (physical) experimentation. It is impressive to see the students' ingenuity.

Midway through the semester we ask the teams to hand in a proposal for the experiment they plan to do. We ask


Table 1. Titles of Some Experimental Design Projects Planned, Conducted, and Analyzed by Undergraduate Engineering Students at the University of Wisconsin-Madison, Spring of 1988

The Effect of Posture, Waiting Time, Venous Occlusion and Blood Collection Tube Type on Total Serum Cholesterol Measures: A 2^(5-1) Fractional Factorial Experiment
Rubber Band Strength Experiment: A 2^(5-1) Fractional Factorial Experiment
Rubber Band Shooting Experiment: A 2^(5-1) Fractional Factorial Experiment
Waiting Time in a Ski Lift: A Practical Application of Experimental Design
How To Get the Darkest Tan: An Experimental Approach
Identifying the Effect of Four Variables on Torque and Thrust Forces During Drilling
Shooting Darts: A 2^(5-1) Fractional Factorial Experiment
The ERA Challenge: A 2^(4-1) Fractional Experiment on Washing Socks
A Fractional Factorial Experiment to Study the Human Memory System: A Report
Designing a Better Ping Pong Catapult
Dehairing Hog Carcasses: An Experimental Process Analysis
What Affects the Foam Thickness on Pouring Beer
The Art of Bowling: An Analysis of the Various Techniques Which Make a Great Bowler
Factors Affecting the Distance of Catapulted Water Balloons

them to pretend that the proposal is to upper management, clearly stating the objective of the experiment, which factors they want to experiment with, the design, what the response(s) is (are), how they plan to measure the response(s), what resources they need, and how much time it will take. We want them to practice getting permission from management to conduct an experiment. The benefits for us are that we can check their design, perhaps give practical advice, and avoid the natural procrastination from which we all suffer. The proposal mechanism seems to improve the quality of the projects significantly.

We have taught this course for a while and have gotten an enthusiastic response from our students. Statistics usually has a bad reputation among engineering students. I think, however, that we have managed to make this an enjoyable class where students discover that statistics is very useful for engineering practice and is something about which they want to learn more. In fact, we have gotten quite a following, enrollment is up (without any advertisement from us), and we now also offer a graduate course in more advanced topics.

4. CURRICULA FOR COURSES FOR INDUSTRY

Teaching statistics to manufacturing and design engineers in industry is a challenge that requires a different approach. But it also provides different opportunities for making the teaching effort more relevant and useful. In this section I will discuss two different types of industrial courses. The first is a one-week (4½ day) short course, developed by George Box, Conrad Fung, and myself, taught primarily to engineers who come to the University of Wisconsin-Madison campus, although we have occasionally taught the course in-house for large corporations, also in a 4½ day format. The second type of course I will discuss is a type of short course that I have experimented with during the past few years. It is taught to engineers in small segments over several months internally at their respective companies. The main difference between this and the other courses is that it is taught on site and that we work on projects directly related to the company's own manufacturing and design problems.

4.1 Industrial Short Course: Concentrated Version

Teaching an industrial short course in 4½ days does put some obvious constraints on what can be covered in such a short time. We have, however, carefully considered this as a "knapsack problem" and left out everything that does not have top priority. This course follows the general outline for our one-semester university class, except we obviously do not have homework assignments and major projects. As a substitute we have, however, several workshops where the students try out various computations, designs, and analyses. Again, developing a better paper helicopter is our leading example. We have a special lecture on case studies, mostly from mechanical manufacturing, that illustrates the use of simple statistical methods (mostly Ishikawa's Seven Tools) and design of experiments for product and process development. A computer with an overhead projection pad helps illustrate computer implementation of the methods. Sometimes we also spend an evening session in a computer laboratory so students can practice designing and analyzing experiments. This course has been very well received by engineers. At the end of the course we always hand out a questionnaire, and the feedback has been overwhelmingly positive. They seem to like the hands-on and problem-driven approach, the emphasis on engineering and product and process improvement rather than on statistics, our less mathematical approach, and the emphasis on philosophy.

4.2 Industrial Short Course: Extended Period

The format for the in-house, extended industrial statistics and experimental design course that I teach has evolved during the past several years and continues to change as I discover new ways to make it more successful both in terms of content and administration. I believe the best way to learn statistics is to do statistics. Preferably, students should work on problems that are their own problems, rather than textbook examples where there is no emotional involvement. This idea has several administrative implications that I will now explain. Usually, I start the negotiation about a teaching contract with the company's quality manager, who will also later act as the program administrator. In our initial conversation I insist that someone from upper management, in most cases the vice-president of manufacturing or engineering design, and sometimes both, be involved in planning the course. If these managers are not familiar with statistics I make a short presentation for them outlining what can be done with statistics and how I plan to structure the course.


Next I ask upper management to pick 8 to 10 problems that they consider most important for their business to be used as projects. It is essential that the projects be important. Working on unimportant problems because of timidity, unfamiliarity with statistics, or a risk-averse attitude almost guarantees later failure. Nobody has time to work on unimportant problems for long, and even if they do, the problems remain unimportant and do not have impact even if they are successfully solved. Based on the project proposals, the managers and program administrator select engineers already associated with the problems to work on the projects. If possible, the projects should be a part of the engineer's ordinary job, and not an extra, irrelevant effort for which their interest will diminish or for which they will not be rewarded. This is important, because the course may go on for as long as six months (because of breaks in between) and the management's and the engineers' priorities could change. Usually a class includes 18-20 engineers, each assigned to projects in small teams. Often I teach a morning and an afternoon session, each for about 20 engineers, to avoid pulling too many employees away from the job at the same time. The class initially meets for 3½ to 4 hours every week. The class time is divided equally between teaching prepared material and discussing projects. All groups must be present when we discuss the projects. This generates interesting discussions within and between the teams. In addition, the more enthusiastic participants have a tendency to inspire the less enthusiastic ones by showing that these methods are in fact applicable in their jobs, and by someone just like themselves. Sometimes I improvise a short lecture on a relevant topic brought up by the projects.
This combined teaching and consulting approach is a serious attempt to implement the idea of solving the engineer's problems (or, as some like to express it, "listening to the voice of the customer") and not teach what we think might be "good" for them to know.

Manufacturing Engineers. If the engineers are mostly from manufacturing, I spend the first six to seven sessions presenting my own modified version of Ishikawa's (1976) Guide to Quality Control. I exclude the chapters on sampling inspection and add material on general quality improvement philosophy (see Box and Bisgaard 1987), exploratory data analysis, graphics, measurements, operational definitions, flow diagrams, CUSUM plots, and Deming's philosophy (Deming 1986). I stress the importance of working as detectives, finding and removing causes of quality problems. Depending on the engineers' backgrounds, I might show the videotapes "Road Map for Change: The Deming Approach" (Encyclopaedia Britannica Educational Corporation 1984) and "Right First Time" (British Productivity Council 1954) for quality management philosophy. In the remaining 1½ to 2 hours of each session I discuss projects with the groups.

In the initial lecture, I ask the teams to prepare for the next meeting a Pareto analysis and cause-and-effect diagrams for their project. Since that occurs at the same

time that I teach these techniques, it helps them understand the power of these simple tools. It also helps us focus on what the important issues are and on how to get started. I also ask the engineers to bring to class their products, blueprints of the products, or other relevant material so that I can get a better understanding of what the problems are. At each subsequent meeting I ask the teams to tell me and the class what they have accomplished during the intervening week. Often I have them make short impromptu presentations. As the course progresses some teams, of course, report only little progress. If that is the case we become aware of this through the presentations and discussions and can take action. Here, I often benefit from the active participation of the program administrator. During lunch he or she can explain to me some of the internal company politics, as well as organizational or personal obstacles confronting the engineers. The program administrator can also, if necessary, call on the vice-president if progress on the projects is obstructed by something that requires his or her help to remove. Sometimes it is as simple as the vice-president reiterating that working on the projects has a high priority with upper management. I have in the past been fortunate to work with very competent program administrators and supportive, enlightened upper management. I think that has been a key to my success with this type of teaching. It is not unusual that the engineers find it difficult to make sufficient progress on their projects between the weekly meetings. After all, a week is a short time and it often takes time to get prototypes built, acquire testing materials, equipment, or whatever they need prepared. Therefore, as we get through the basic material on the Seven Tools and an introduction to two-level factorials, we sometimes take a break for a few weeks. 
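The Pareto analysis the teams prepare in that first week amounts to nothing more than sorting problem categories by frequency and accumulating percentages. A minimal sketch follows; the defect categories and counts are invented purely for illustration.

```python
# Hypothetical defect counts for one week of production; the categories
# and numbers are made up for this example only.
defects = {"porosity": 104, "burrs": 31, "misalignment": 12,
           "scratches": 52, "wrong torque": 7}

total = sum(defects.values())
ranked = sorted(defects.items(), key=lambda kv: kv[1], reverse=True)

# Print the ranked categories with cumulative percentage of all defects.
cumulative = 0
for category, count in ranked:
    cumulative += count
    print(f"{category:15s} {count:4d} {100 * cumulative / total:6.1f}%")
```

Plotted as a bar chart with the cumulative line, this is the classic Pareto diagram; the point for the teams is simply that the vital few categories usually account for most of the trouble, so they know where to start.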
Another useful strategy is to cancel the formal class and, instead, visit the teams on their turf. This helps me better appreciate their problems, and I often find the engineers more talkative and open for discussion. On the site visits, I almost always discover something about the experimental setup or the measurement process that the engineers did not think was important enough to bring to my attention but is in fact important. I have sometimes visited up to 12 different groups at different locations in one day. The success of this approach has, therefore, depended on the competent planning and scheduling by the program administrator.

Design Engineers. If the course is primarily for design engineers, the projects are usually product development work and prototype testing. Therefore, I spend only one initial lecture on Ishikawa's Seven Tools and go immediately to design of experiments, since this is more relevant. Again, the engineers are assigned to projects as described previously. With design engineers it is usually easier to keep their attention, since they are not bothered to the same extent by the daily fire-fighting in the manufacturing environment. They often work, however, against predetermined product-release deadlines and are, therefore, also under time pressure. It helps that it


is their job to experiment, even if they have not previously used factorial designs. At the start of the course I suggest that the responsible upper management person speak to the engineers, acknowledging management's commitment to this teaching program. This is extremely important. Employees most often work according to their perception of management's priorities. Another key feature is that on the first day of class the engineers are told that at the end of the course, usually 12 sessions later, they are to make a short presentation to upper management of their accomplishments during the course. This again serves to show the engineers that this course has upper management's attention. It also serves the important additional function that management is forced to see the often very spectacular improvements that have been accomplished by the end of the course. That helps sustain their continued support for quality improvement after the course. Some politics like this is necessary. As a consultant I also feel good about this combined teaching and consulting approach, because when the course is over, I know that the company has not only received education but also can see some tangible results. It is clear that this kind of teaching is demanding on the instructor. It is difficult to work on so many consulting projects at the same time. In addition, a good understanding of engineering is necessary. This is, of course, not unique to engineering statistics, but common to all applications of statistics. Subject matter knowledge is necessary. With respect to working with mechanical and production engineers, several good books are available that can give the statistician a short introduction to materials and manufacturing processes. My own collection includes Doyle, Keyser, Leach, Schrader, and Singer (1985) and Niebel, Draper, and Wysk (1989). Of course, I personally benefit from having a manufacturing engineering education.
I want to emphasize, however, that consultants do not need to know everything about engineering. They mostly need (as a minimum) to know what the client does not know, but what is essential as a catalyst for solving the problems. In martial arts the combatants do not rely on their own strength only, but also use the muscles of their opponent for leverage. Similarly, in consulting we should combine our knowledge and skills with those of the client. The consultant should ask catalyzing questions like "why," "why not," "show me the data," "how do you know," and so on and, in addition, provide "statistical thinking" and problem-solving skills. As a consultant, I never single-handedly solve a problem. I act as a team player who provides leadership, provokes discussion and thinking, and helps with statistics. There is a small kid in most of us. Engineers particularly love to play with machinery. By using design-of-experiment techniques, we can legitimately play as adults. Moreover, many people enjoy solving puzzles, and most of our projects are of that nature. I am convinced that this combined teaching and consulting approach is the most efficient and satisfactory way for engineers to learn statistics. When the participants present their projects,

they undoubtedly have enjoyed themselves. I do not hear complaints about statistics being boring or the worst class they ever had. On the contrary, we have our best times when the final projects are presented. On a longer-term basis, if I continue to work with the company, the consulting afterwards is much better. The engineers can solve many of the simpler problems themselves, and they know me better and what kind of service I can provide for more complicated problems.

5. CONCLUSION

The problems of production are not any one of the individual steps or processes that are to be performed. Taken individually they are often trivial. The real problem is to assure that all of the individual processes collectively perform well, like the wheels in a clockwork, under production and volume conditions. One of the key obstacles to this is excess variation. Therefore, statistics is an essential part of manufacturing, and it is to a large degree our (the statisticians') fault that we have not managed to secure statistics as an integral part of modern engineering design and manufacturing technology. We need to change this, and that calls for a change in the way we teach statistics to engineers. Over the past few years several statisticians, most notably W. Edwards Deming, have labored very hard to raise consciousness about the United States' failing international competitiveness. Once the problem is recognized and understood, the next question is what to do about it. The education of engineers in statistics and quality improvement must, in my opinion, constitute a fundamental component of any strategy for regaining competitive strength. Our educational efforts, however, must not merely constitute an increased frequency of the offering of standard engineering statistics courses as we have known them for decades. We must fundamentally rethink the way we teach statistics to engineers, keeping in mind that past efforts did not succeed in making statistics an integral part of how engineers solve problems. This is, fortunately, beginning to be recognized, as evidenced by a recent effort to incorporate statistics into engineering accreditation programs (see Penzias 1989). Past education in engineering statistics, as much evidenced by standard textbooks, mostly focused on what can be presented as a simple, deductive science, an approach with which purely mathematically educated teachers are often most comfortable.
But we need to teach statistics as part of the scientific method. We should teach engineers inductive reasoning, experimentation, and problem solving using statistics. In particular, they need to work on real engineering problems, perform detective work, draw conclusions, and take action. It is important that we use relevant engineering examples. (Please, no more baseball examples! If we want to be respected as a useful profession, we need to show that we work on important problems.) If we teach statistics in the spirit outlined here, I think we can bring something unique to engineering. Statistics will then, it is hoped, be considered an indispensable tool for solving engineering problems.


Engineers can get excited about statistics. I believe that if they see that statistics is about inductive reasoning, learning from the real world, detective work, and experimental design, they will consider their statistics course one of the most interesting courses they ever had. I make this statement based on data: I have seen it and they have told me so. I believe that statistics in the hands of a large community of engineers can have a dramatic influence on the future of modern manufacturing and engineering.
[Received November 1989. Revised December 1989.]

REFERENCES
Bisgaard, S. (1989a), A Practical Aid for Experimenters, Madison: Starlight Press.
Bisgaard, S. (1989b), "The Quality Detective," Philosophical Transactions of the Royal Society of London, Ser. A, 327, 499-511.
Box, G. E. P. (1976), "Science and Statistics," Journal of the American Statistical Association, 71, 791-799.
Box, G. E. P. (1979), "Robustness in the Strategy of Scientific Model Building," in Robustness in Statistics, eds. R. L. Launer and G. N. Wilkinson, New York: Academic Press.
Box, G. E. P. (1988), "Signal-to-Noise Ratios, Performance Criteria, and Transformations," Technometrics, 30, 1-17.
Box, G., and Bisgaard, S. (1987), "The Scientific Context of Quality Improvement," Quality Progress, 20, 54-61.
Box, G., Bisgaard, S., and Fung, C. (1988), "An Explanation and Critique of Taguchi's Contributions to Quality Engineering," Quality and Reliability Engineering International, 4, 123-131.
Box, G. E. P., Hunter, W. G., and Hunter, J. S. (1978), Statistics for Experimenters, New York: John Wiley.
Box, G. E. P., and Youle, P. V. (1955), "The Exploration and Exploitation of Response Surfaces: An Example of the Link Between the Fitted Surface and the Basic Mechanism of the System," Biometrics, 11, 287-323.
British Productivity Council (1954), "Right First Time" (videotape), distributed in the United States by Productivity-Quality Systems, Inc., Dayton, OH.
Cox, D. R. (1981), "Theory and General Principle in Statistics," Journal of the Royal Statistical Society, Ser. A, 144, 189-197.
Daniel, C. (1976), Applications of Statistics to Industrial Experimentation, New York: John Wiley.
Davies, O. L. (ed.) (1954), The Design and Analysis of Industrial Experiments, London: Oliver & Boyd.
Deming, W. E. (1975), "On Probability As a Basis for Action," The American Statistician, 29, 146-152.
Deming, W. E. (1986), Out of the Crisis, Cambridge, MA: MIT Press.
Doyle, L. E., Keyser, C. A., Leach, J. L., Schrader, G. F., and Singer, M. B. (1985), Manufacturing Processes and Materials for Engineers (3rd ed.), Englewood Cliffs, NJ: Prentice-Hall.
Encyclopaedia Britannica Educational Corporation (1984), "Road Map for Change: The Deming Approach" (videotape), Chicago: Author.
Fisher, R. A. (1935), The Design of Experiments, London: Oliver & Boyd.
Fisher, R. A. (1938), "Presidential Address, First Indian Statistical Conference," Sankhya, 4, 14-17.
Fisher, R. A. (1948), "Biometry," Biometrics, 4, 1-9.
Hogg, R. V., et al. (1985), "Statistical Education for Engineers: An Initial Task Force Report," The American Statistician, 39, 168-175.
Hunter, W. G. (1975), "101 Ways to Design an Experiment, or Some Ideas About Teaching Design of Experiments," Technical Report 413, Department of Statistics, University of Wisconsin-Madison.
Hunter, W. G. (1977), "Some Ideas About Teaching Design of Experiments, With 25 Examples of Experiments Conducted by Students," The American Statistician, 31, 1.
Ishikawa, K. (1976), Guide to Quality Control, Tokyo: Asian Productivity Organization, UNIPUB.
Kempthorne, O. (1980), "The Teaching of Statistics: Content Versus Form," The American Statistician, 34, 17-21.
Niebel, B. W., Draper, A. B., and Wysk, R. A. (1989), Modern Manufacturing Process Engineering, New York: McGraw-Hill.
Penzias, A. (1989), "Teaching Statistics to Engineers," Science, 244, 1025.
Quinlan, J. (1988), "1985 Winner, American Supplier Institute Taguchi Application Award: Product Improvement by Application of Taguchi Methods," Target, 4, 22-29.
Snee, R. D. (Chairman, ASA Committee on Training Statisticians for Industry) (1980), "Preparing Statisticians for Careers in Industry," The American Statistician, 34, 65-80.


