
15: Quality, Robust Design, and Optimization


new set of control parameters that are closer to an optimum than an informed guess, and which are robust to the noise factors.

15.7.2 Tolerance Design


Often, as in Example 15.9, parameter design yields a design that is optimized for robustness and has low variability. However, there are situations where the variability is too large and it becomes necessary to tighten tolerances to reduce it. Typically, analysis of variance (ANOVA) is used to determine the relative contribution of each control parameter, so as to identify those factors that should be considered for tolerance tightening, substitution of an improved material, or some other means of improving quality. Since these measures often incur additional cost, the Taguchi method of tolerance design provides careful procedures for balancing increased quality (lower quality loss) against cost. This tolerance design methodology is beyond the scope of this text, but an excellent, readable source is available.29 Taguchi's methods of quality engineering have generated great interest in the United States as many major manufacturing companies have embraced the approach. While the idea of the loss function and robust design is new and important, many of the statistical techniques have been in existence for over 50 years. Statisticians point out30 that less complicated and more efficient methods exist to do what the Taguchi methods accomplish. However, it is important to understand that before Taguchi systematized and extended these ideas into an engineering context, they were largely unused by much of industry. The growing acceptance of the Taguchi method comes from its applicability to a wide variety of industrial problems with a methodology that does not require a high level of mathematical skill to achieve useful results.
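As a rough sketch (not from the text) of how ANOVA-style sums of squares rank control factors, consider the invented two-factor, two-level data below; each factor's percent contribution is its between-level sum of squares as a share of the total variation:

```python
# Invented two-level, two-factor experiment (hypothetical data):
# each run is (factor A level, factor B level, measured response)
runs = [
    (0, 0, 20.1), (0, 1, 24.0),
    (1, 0, 29.8), (1, 1, 34.2),
]

grand = sum(r[2] for r in runs) / len(runs)   # grand mean of the responses

def sum_sq(factor_index):
    """Between-level sum of squares for one factor."""
    ss = 0.0
    for level in (0, 1):
        ys = [r[2] for r in runs if r[factor_index] == level]
        ss += len(ys) * (sum(ys) / len(ys) - grand) ** 2
    return ss

ss_a, ss_b = sum_sq(0), sum_sq(1)
total = sum((r[2] - grand) ** 2 for r in runs)
print(f"A: {100 * ss_a / total:.0f}%  B: {100 * ss_b / total:.0f}%")
```

With these invented numbers, factor A dominates the variation, so it would be the first candidate for tolerance tightening.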

15.8 OPTIMIZATION METHODS

The example described in the previous section is a search for the best combination of design parameters using a statistically designed set of experiments when the desired outcome is clear. There is more than one solution to a design problem, and the first solution is not necessarily the best. Thus, the need for optimization is inherent in the design process. A mathematical theory of optimization has become highly developed and is being applied to design where design functions can be expressed mathematically. The applicability of the mathematical methods usually depends on the existence of a continuously differentiable objective function. Where differentiable equations cannot be developed, numerical methods, aided by computer-based computation, are used to carry out optimization. These optimization methods require considerable depth of knowledge and mathematical skill to select the appropriate optimization technique and work it through to a solution.
29. C. M. Creveling, Tolerance Design: A Handbook for Developing Optimal Specifications, Addison-Wesley Longman, Reading, MA, 1997.
30. R. N. Kackar, Jnl. of Quality Tech., Vol. 17, no. 4, pp. 176–209, 1985.



By the term optimal design we mean the best of all feasible designs. Optimization is the process of maximizing a desired quantity or minimizing an undesired one. Optimization theory is the body of mathematics that deals with the properties of maxima and minima and how to find maxima and minima numerically. In the typical design optimization situation, the designer has defined a general configuration for which the numerical values of the independent variables have not been fixed. An objective function31 that defines the overall value of the design in terms of the n design variables, expressed as a vector x, is established.

f(x) = f(x1, x2, . . . , xn)    (15.19)

Typical objective functions can be expressed in terms of cost, weight, reliability, and material performance index, or a combination of these. By convention, objective functions are usually written to minimize their value. However, maximizing a function f(x) is the same as minimizing −f(x). Generally when we are selecting values for a design we do not have the freedom to select arbitrary points within the design space. Most likely the objective function is subject to certain constraints that arise from physical laws and limitations or from compatibility conditions on the individual variables. Equality constraints specify relations that must exist between the variables.

hj(x) = hj(x1, x2, . . . , xn) = 0;  j = 1 to p    (15.20)

For example, if we were optimizing the volume of a rectangular storage tank, where x1 = l1, x2 = l2, and x3 = l3, then the equality constraint would be the volume V = l1 l2 l3. The number of equality constraints must be no more than the number of design variables, p ≤ n. Inequality constraints, also called regional constraints, are imposed by specific details of the problem.

gi(x) = gi(x1, x2, . . . , xn) ≤ 0;  i = 1 to m    (15.21)


There is no restriction on the number of inequality constraints.32 A type of inequality constraint that arises naturally in design situations is based on specifications. Specifications define points of interaction with other parts of the system. Often a specification results from an arbitrary decision to carry out a suboptimization of the system by establishing a fixed value for one of the design variables. A common problem in design optimization is that there often is more than one design characteristic that is of value to the user. One way to handle this case in formulating the optimization problem is to choose one predominant characteristic as the objective function and to reduce the other characteristics to the status of constraints. Frequently they show up as rather hard or severely defined specifications. In reality,
31. Also called the criterion function, the payoff function, or the cost function.
32. It is conventional to write Eq. (15.21) as ≤ 0. If the constraint is of the type ≥ 0, convert to this form by multiplying through by −1.



such specifications are usually subject to negotiation (soft specifications) and should be considered to be target values until the design progresses to such a point that it is possible to determine the penalty that is being paid in trade-offs to achieve the specifications. Siddall33 has shown how this may be accomplished in design optimization through the use of an interaction curve.
EXAMPLE 15.10

The example helps to clarify the definitions just presented. We wish to design a cylindrical tank to store a fixed volume of liquid V. The tank will be constructed by forming and welding thin steel plate. Therefore, the cost will depend directly on the area of plate that is used. The design variables are the tank diameter D and its height h. Since the tank has a cover, the surface area of the tank is given by

A = 2πD²/4 + πDh
We choose the objective function f(x) to be the cost of the material for constructing the tank: f(x) = Cm A = Cm(πD²/2 + πDh), where Cm is the cost per unit area of steel plate.

An equality constraint is introduced by the requirement that the tank must hold a specified volume:

V = πD²h/4
Inequality constraints are introduced by the requirement for the tank to fit in a specified location or to not have unusual dimensions:

Dmin ≤ D ≤ Dmax
hmin ≤ h ≤ hmax
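As a numerical sketch (not from the text), the formulation of Example 15.10 can be checked by eliminating h through the volume constraint and scanning the cost over the feasible diameter range; the values of Cm, V, and the bounds below are assumed for illustration:

```python
import math

Cm = 1.0    # assumed cost per unit area of plate (illustrative)
V = 10.0    # assumed required volume (illustrative)

def cost(D):
    # eliminate h via the equality constraint V = pi*D^2*h/4
    h = 4 * V / (math.pi * D**2)
    return Cm * (math.pi * D**2 / 2 + math.pi * D * h)

# inequality constraints expressed as simple bounds on D (assumed range)
Dmin, Dmax = 0.5, 5.0
Ds = [Dmin + i * (Dmax - Dmin) / 100000 for i in range(100001)]
D_opt = min(Ds, key=cost)
h_opt = 4 * V / (math.pi * D_opt**2)
```

The scan lands near D = (4V/π)^(1/3), with h = D at the optimum, which agrees with the closed-form minimum this section later derives by calculus.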

Optimization methods in engineering design can be described by the following broad categories.34

Optimization by evolution: There is a close parallel between technological evolution and biological evolution. Most designs in the past have been optimized by an attempt to improve upon existing similar designs. Survival of the resulting variations depends on the natural selection of user acceptance.

Optimization by intuition: The art of engineering is the ability to make good decisions without having exact mathematical justification. Intuition is knowing what to do without knowing exactly why one does it. The gift of intuition seems to be closely related to the unconscious mind. The history of technology is full of examples of engineers who used intuition to make major advances. Although the knowledge and tools available today are so much more powerful, there is no question that intuition continues to play an important role in the development of good designs. This intuition is often in the form of remembering what worked in the past.

Optimization by trial-and-error modeling: This refers to the usual situation in engineering design where it is recognized that the first feasible design is not necessarily the best. Therefore, the design model is exercised for a few iterations in the


33. J. N. Siddall and W. K. Michael, Trans. ASME, J. Mech. Design, Vol. 102, pp. 510–16, 1980.
34. J. N. Siddall, Trans. ASME, J. Mech. Design, Vol. 101, pp. 674–81, 1979.


hope of finding an improved design. This works best when the designer has sufficient experience to make an informed choice of initial design values. The parametric design of a spring in Sec. 8.5.2 is an example of this approach. However, this mode of operation is not true optimization. Some refer to this approach as satisficing, as opposed to optimizing, to mean a technically acceptable job done rapidly and presumably economically. Such a design should not be called an optimal design.

Optimization by numerical algorithm: This approach to optimization, in which mathematically based strategies are used to search for an optimum, has been enabled by the ready availability of fast, powerful digital computation. It is currently an area of active engineering research.

There are no universal optimization methods for engineering design. If the problem can be formulated by analytical mathematical expressions, then using the approach of calculus is the most direct path. However, most design problems are too complex to use this method, and a variety of optimization methods have been developed. Table 15.5 lists most of these methods. The task of the designer is to understand whether the problem is linear or nonlinear, unconstrained or constrained, and to select the method most applicable to the problem. Brief descriptions of various approaches to design optimization are given in the rest of this section. For more depth of understanding about optimization theory, consult the various references given in Table 15.5. Linear programming is the most widely applied optimization technique when constraints are known, especially in business and manufacturing production situations. However, most design problems in mechanical design are nonlinear; see Example 15.10.

15.8.1 Optimization by Differential Calculus


We are all familiar with the use of the calculus to determine the maximum or minimum values of a mathematical function. Figure 15.10 illustrates various types of extrema that can occur. A characteristic property of an extremum is that f(x) is momentarily stationary at the point. For example, as point E is approached, f(x) increases, but right at E it stops increasing and the slope soon decreases. The familiar condition for a stationary point is

df(x)/dx = 0    (15.22)

If the curvature is negative, then the stationary point is a maximum. The point is a minimum if the curvature is positive.

d²f(x)/dx² < 0 indicates a local maximum    (15.23)

d²f(x)/dx² > 0 indicates a local minimum    (15.24)
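A small sketch (not from the text) of Eqs. (15.22)–(15.24) using finite-difference derivatives; the test function is an arbitrary choice with a known maximum at x = −1 and minimum at x = +1:

```python
def d1(f, x, h=1e-5):
    # central difference approximation to df/dx
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    # central difference approximation to d2f/dx2
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

f = lambda x: x**3 - 3 * x    # stationary points where 3x^2 - 3 = 0

for x0 in (-1.0, 1.0):
    kind = "local maximum" if d2(f, x0) < 0 else "local minimum"
    print(f"x = {x0}: slope ~ {d1(f, x0):.2e}, {kind}")
```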



TABLE 15.5


Listing of Numerical Methods Used in Optimization Problems


Type of Algorithm               Example                      Reference (see footnotes)
Linear programming              Simplex method               1
Nonlinear programming           Davidon-Fletcher-Powell      2
Geometric programming                                        3
Dynamic programming                                          4
Variational methods             Ritz                         5
Differential calculus           Newton-Raphson               6
Simultaneous mode design        Structural optimization      7
Analytical-graphical methods    Johnson's MOD                8
Monotonicity analysis                                        9
Genetic algorithms                                           10
Simulated annealing                                          11

1. W. W. Garvin, Introduction to Linear Programming, McGraw-Hill, New York, 1960.
2. M. Avriel, Nonlinear Programming: Analysis and Methods, Prentice Hall, Englewood Cliffs, NJ, 1976.
3. C. S. Beightler and D. T. Philips, Applied Geometric Programming, John Wiley & Sons, New York, 1976.
4. S. E. Dreyfus and A. M. Law, The Art and Theory of Dynamic Programming, Academic Press, New York, 1977.
5. M. H. Denn, Optimization by Variational Methods, McGraw-Hill, New York, 1969.
6. F. B. Hildebrand, Introduction to Numerical Analysis, McGraw-Hill, 1956.
7. L. A. Schmit (ed.), Structural Optimization Symposium, ASME, New York, 1974.
8. R. C. Johnson, Optimum Design of Mechanical Elements, 2d ed., John Wiley & Sons, New York, 1980.
9. P. Y. Papalambros and D. J. Wilde, Principles of Optimal Design, 2d ed., Cambridge University Press, New York, 2000.
10. D. E. Goldberg, Genetic Algorithms, Addison-Wesley, Reading, MA, 1989.
11. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by Simulated Annealing," Science, Vol. 220, pp. 671–79, 1983.

FIGURE 15.10 Different types of extrema in the objective function curve.


Both point B and point E are mathematical maxima. Point B, which is the smaller of the two maxima, is called a local maximum. Point E is the global maximum. Point D is a point of inflection. The slope is zero and the curve is horizontal, but the second derivative is zero. When d²f(x)/dx² = 0, higher-order derivatives must be used to find a derivative that becomes nonzero. If that derivative is of odd order, the point is an inflection point, but if it is of even order the point is a local optimum. Point F is not a minimum


point because the objective function is not continuous at it; the point F is only a cusp in the objective function. Using the derivative of the function to infer maxima or minima only works with a continuous function.

We can apply this simple optimization technique to the tank problem described in Example 15.10. The objective function, expressed in terms of the equality constraint V = πD²h/4, is

f(x) = Cm πD²/2 + Cm πDh = Cm πD²/2 + 4CmV/D    (15.25)

df(x)/dD = 0 = Cm πD − 4CmV/D²    (15.26)

D = (4V/π)^(1/3) = 1.084 V^(1/3)    (15.27)

The value of diameter established by Eq. (15.27) results in minimum cost because the second derivative of Eq. (15.26) is positive. Note that while some problems yield to analytical expressions in which the objective function is a single variable, most engineering problems involve objective functions with more than one design variable.

Lagrange Multiplier Method
The Lagrange multipliers provide a powerful method for finding optima in multivariable problems involving equality constraints. We have the objective function f(x) = f(x, y, z) subject to the equality constraints h1(x, y, z) = 0 and h2(x, y, z) = 0. We establish a new function, the Lagrange expression (LE):

LE = f(x, y, z) + λ1 h1(x, y, z) + λ2 h2(x, y, z)    (15.28)
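For a quadratic objective with linear constraints, the stationarity conditions of the Lagrange expression form a linear system. The sketch below (not from the text) applies the method to an invented problem, minimizing f = x² + y² subject to the single constraint h = x + y − 1 = 0:

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    A = [row[:] for row in A]
    b = b[:]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, 3):
            m = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= m * A[i][c]
            b[r] -= m * b[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (b[i] - sum(A[i][c] * x[c] for c in range(i + 1, 3))) / A[i][i]
    return x

# LE = x^2 + y^2 + lam*(x + y - 1); setting the partials to zero gives:
#   dLE/dx:   2x + lam = 0
#   dLE/dy:   2y + lam = 0
#   dLE/dlam: x + y    = 1
A = [[2.0, 0.0, 1.0],
     [0.0, 2.0, 1.0],
     [1.0, 1.0, 0.0]]
b = [0.0, 0.0, 1.0]
x, y, lam = solve3(A, b)
```

By symmetry the optimum is x = y = 1/2, with λ = −1; solving the system recovers exactly that.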

where λ1 and λ2 are the Lagrange multipliers. The following conditions must be satisfied at the optimum point:

∂LE/∂x = 0   ∂LE/∂y = 0   ∂LE/∂z = 0   ∂LE/∂λ1 = 0   ∂LE/∂λ2 = 0    (15.29)

EXAMPLE 15.11

This example illustrates the determination of the Lagrange multipliers for use in optimization.35 A total of 300 linear feet of tubes must be installed in a heat exchanger in order to provide the necessary heat-transfer surface area. The total dollar cost of the installation includes: (1) the cost of the tubes, $700; (2) the cost of the shell, 25D^2.5 L; (3) the cost of the floor space occupied by the heat exchanger, 20DL. The spacing of the tubes is such that 20 tubes must fit in a cross-sectional area of 1 ft² inside the heat exchanger tube shell.
35. W. F. Stoecker, Design of Thermal Systems, 2d ed., McGraw-Hill, New York, 1980.



The purchase cost C is taken as the objective function. The optimization should determine the diameter D and the length of the heat exchanger L to minimize the purchase cost. The objective function is the sum of three costs.

C = 700 + 25D^2.5 L + 20DL    (15.30)

The optimization of C is subject to the equality constraint based on the total length of tubes and the cross-sectional area of the tube shell: (cross-sectional area, ft²) × (20 tubes/ft²) × (length, ft) = total tube length = 300 ft.

(πD²/4)(20)L = 300

5πD²L = 300

L = 300/(5πD²) = 60/(πD²)

The Lagrange equation is:

LE = 700 + 25D^2.5 L + 20DL + λ[L − 60/(πD²)]

∂LE/∂D = 0 = 2.5(25)D^1.5 L + 20L + 120λ/(πD³)    (15.31)

∂LE/∂L = 25D^2.5 + 20D + λ = 0    (15.32)

∂LE/∂λ = L − 60/(πD²) = 0    (15.33)

From Eq. (15.33), L = 60/(πD²); from Eq. (15.32), λ = −25D^2.5 − 20D.

Substituting into Eq. (15.31):

62.5D^1.5 (60/(πD²)) + 20(60/(πD²)) − (25D^2.5 + 20D)(120/(πD³)) = 0

12.5D^1.5 = 20

D = 1.6^0.666 = 1.37 ft


Substituting into the functional constraint between D and L gives L = 10.2 ft. Substituting the optimum values for D and L into the objective function, Eq. (15.30), gives the optimum cost as $1538. This is an example of a closed-form optimization for a single objective function with two design variables, D and L, and a single equality constraint.
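The closed-form result of Example 15.11 can be cross-checked numerically. This sketch (not from the text) eliminates L through the constraint L = 60/(πD²) and scans the cost of Eq. (15.30) over a range of diameters:

```python
import math

def cost(D):
    # Eq. (15.30) with L eliminated via the tube-count constraint
    L = 60 / (math.pi * D**2)
    return 700 + 25 * D**2.5 * L + 20 * D * L

Ds = [0.5 + i * 0.0001 for i in range(40000)]   # scan D from 0.5 ft to 4.5 ft
D_opt = min(Ds, key=cost)
L_opt = 60 / (math.pi * D_opt**2)
```

The scan agrees with the Lagrange solution: D ≈ 1.37 ft, L ≈ 10.2 ft, and a minimum cost of about $1538.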


By their nature, design problems tend to have many variables, many constraints limiting the acceptable values of some variables, and many objective functions to describe the desired outcomes of a design. A feasible design is any set of variables that simultaneously satisfies all the design constraints and fulfills the minimum requirements for functionality. An engineering design problem is usually underconstrained, meaning that there are not enough relevant constraints to set the value of each variable. Instead, there are many feasible values for each variable. That means there are many feasible design solutions. As pointed out in the discussion of morphological methods (see Sec. 6.6), the number of feasible solutions grows exponentially as the number of variables with multiple possible values increases.

15.8.2 Search Methods


When it becomes clear that there are many feasible solutions to a design problem, it is necessary to use some method of searching through the space to find the best one. Finding the globally optimal solution (the absolute best solution) to a design problem can be difficult. There is always the option of using brute calculation power to identify all design solutions and evaluate them. Unfortunately, design options reach into the thousands, and design performance evaluation can require multiple, complicated objective functions. Together, these logistical factors make an exhaustive search of the problem space impossible.

There are also design problems that do not have one single best solution. Instead they may have a number of sets of design variable values that produce the same overall performance by combining different levels of the performance of one embedded objective function. In this case, we seek a set of best solutions. This set is called a Pareto set.

We can identify several classes of search problems. A deterministic search is one in which there is little variability of results and all problem parameters are known. In a stochastic search, there is a degree of randomness in the optimization process. We can have a search involving only a single variable or the more complicated and more realistic situation involving a search over multiple variables. We can have a simultaneous search, in which the conditions for every experiment are specified and all the observations are completed before any judgment regarding the location of the optima is made, or a sequential search, in which future experiments are based on past outcomes. Many search problems involve constrained optimization, in which certain combinations of variables are forbidden. Linear programming and dynamic programming are techniques that deal well with situations of this nature.

Golden Section Search
The golden section search is an efficient search method for a single variable, with the advantage that it does not require an advance decision on the number of trials. The search method is based on the fact that the ratio of two successive Fibonacci numbers is Fn−1/Fn ≈ 0.618 for all values of n > 8. A Fibonacci series, named after a 13th-century mathematician, is given by Fn = Fn−2 + Fn−1, where F0 = 1 and F1 = 1.

n:   0  1  2  3  4  5  6  7  8  9  ...
Fn:  1  1  2  3  5  8  13 21 34 55
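A minimal golden-section search sketch (not from the text); it shrinks the bracketing interval by the factor 0.618 each iteration and is applied here to the single-variable tank cost of Eq. (15.25) with assumed values Cm = 1 and V = 10:

```python
import math

def golden_section_min(f, a, b, tol=1e-6):
    """Minimize a unimodal f on [a, b], shrinking the interval by the
    golden ratio factor 0.618 (the limit of F(n-1)/F(n)) each step."""
    r = (math.sqrt(5) - 1) / 2          # 0.6180...
    x1 = b - r * (b - a)
    x2 = a + r * (b - a)
    f1, f2 = f(x1), f(x2)
    while b - a > tol:
        if f1 < f2:                      # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - r * (b - a)
            f1 = f(x1)
        else:                            # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + r * (b - a)
            f2 = f(x2)
    return (a + b) / 2

# tank cost of Eq. (15.25) with assumed Cm = 1, V = 10
cost = lambda D: math.pi * D**2 / 2 + 4 * 10.0 / D
D_star = golden_section_min(cost, 0.5, 5.0)
```

The search recovers the closed-form optimum D = (4V/π)^(1/3) of Eq. (15.27) in roughly 30 interval reductions.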
