AIAA-94-4325-CP

MULTIDISCIPLINARY OPTIMIZATION METHODS FOR AIRCRAFT PRELIMINARY DESIGN
…disciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

…programs make modification or extension of existing analyses difficult. The problems are evidenced by the use of very simplified aerodynamics in synthesis codes that deal with hundreds of different analyses. Discussions in the literature of multidisciplinary design problems involving more complex analyses are almost invariably restricted to two or (rarely) three disciplines. The difficulties involved in dealing with large multidisciplinary problems lead to such results as limited design "cycling", cruise-designed wings with off-design "fixes", and an inability to efficiently consider innovative configurations. A need exists, not simply to increase the speed of the analyses, but rather to change the structure of such design problems to simultaneously improve performance and reduce complexity.
When one applies the same concept to the feed-forward connections as well, the structure on the right side of the figure is produced. This strikingly simple system represents a decomposition of the original problem that is not only free of iteration, but also parallel. The optimization process enforces the requirement that the results of one computation match the inputs to another, permitting serial tasks to be parallelized in a transparent manner. As the optimization proceeds, the discrepancy between assumed inputs (auxiliary design variables) and computed values is reduced to a specified feasibility tolerance.

The analysis was decomposed into three parts as shown in figure 4. Each part contains several analysis routines, with the resulting groups representing (roughly) structures, high-speed aerodynamics, and low-speed performance. The decomposition introduces five auxiliary design variables and their five associated compatibility constraints, represented by the dashed lines in the figure.
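The mechanics of this decomposition can be sketched in a few lines of Python. This is a hypothetical two-analysis chain, not the PASS routines: an auxiliary variable y_aux stands in for the output of analysis A as an input to analysis B, a quadratic penalty stands in for the compatibility constraint y_aux = A(x), and plain gradient descent stands in for the optimizer, so A and B never call each other and could run in parallel.

```python
# A minimal sketch (hypothetical analyses, not the PASS routines) of
# decomposition with an auxiliary variable and a compatibility constraint.
# Normally B needs A's output, forcing serial execution; the auxiliary
# variable y_aux lets A and B be evaluated independently while the
# optimizer drives the discrepancy y_aux - A(x) to zero.

def analysis_A(x):                 # upstream analysis
    return x * x + 1.0

def analysis_B(x, y):              # downstream analysis; y would be A(x)
    return (y - 2.0) ** 2 + (x - 1.0) ** 2

def penalized(v, rho=100.0):
    # Quadratic penalty stands in for the compatibility constraint
    # y_aux == analysis_A(x); A and B never call each other here.
    x, y_aux = v
    return analysis_B(x, y_aux) + rho * (y_aux - analysis_A(x)) ** 2

def grad(f, v, h=1e-6):            # central finite-difference gradient
    g = []
    for i in range(len(v)):
        vp, vm = list(v), list(v)
        vp[i] += h
        vm[i] -= h
        g.append((f(vp) - f(vm)) / (2.0 * h))
    return g

v = [0.5, 0.0]                     # starting x and auxiliary guess
for _ in range(20000):             # plain gradient descent
    v = [vi - 1e-3 * gi for vi, gi in zip(v, grad(penalized, v))]

x, y_aux = v
discrepancy = abs(y_aux - analysis_A(x))   # driven toward the tolerance
```

At convergence x is near 1 and y_aux matches analysis_A(x), so the optimum of the original serial chain is recovered while the two analyses remain independent.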
…variable values. This file-sharing technique is simple but computationally expensive, and is intended not to show gains in computational efficiency, but rather to demonstrate the feasibility of parallel execution of the decomposed system.
The compatibility constraint, y = y', may be posed in several forms. Since numerical optimizers can be sensitive to such details of problem formulation, a systematic study of the effect of changing the form of compatibility constraints was undertaken. The following forms were considered:

    1. |y - y'| <= ε1
    2. (y - y')^2 <= ε2
    3. y/y' = 1 ± ε3
    4. ln(y/y') = 0 ± ε4

Figure 5. Results of 3-part decomposition of aircraft design problem, solved using PASS and NPSOL on an IBM RS/6000 (baseline PASS vs. sequential and decomposed parallel runs).

The limits ε1 - ε4 were adjusted to demand a consistent level of accuracy, but the compatibility constraint scaling is not easily established and so the optimization was run with a variety of scalings. The data shown in Table 1 is thus a measure not only of the relative computation times associated with the various forms of the compatibility constraints, but also of the sensitivity of each to scaling. The convergence rates are high enough for each form that it may be assumed that a "good" scaling can be found for any of the forms. The results show that the difference-of-squares form is faster than the other forms, possibly because it avoids switching of active constraint sets.

The aircraft design problem has been solved on a heterogeneous distributed network as shown in figure 6. The three analysis groups wait for the communication subroutine to write the latest values of the design variables to a file. They then compute the constraints (or objective) for which they are responsible, and write the results to another file. The communications subroutine reads the results and sends them to NPSOL, which returns a new set of design
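The four forms can be written directly as feasibility tests, and the tolerance matching the text describes is easy to illustrate. The numbers below are hypothetical, not the paper's test cases: for small relative errors, ε2 = ε1^2, ε3 = ε1/|y'|, and ε4 = ε1/|y'| demand roughly the same absolute accuracy as |y - y'| <= ε1, since ln(1 + r) ≈ r.

```python
import math

# The four compatibility-constraint forms as feasibility tests.
def form1(y, yt, e1): return abs(y - yt) <= e1            # |y - y'| <= eps1
def form2(y, yt, e2): return (y - yt) ** 2 <= e2          # (y - y')^2 <= eps2
def form3(y, yt, e3): return abs(y / yt - 1.0) <= e3      # y/y' = 1 +/- eps3
def form4(y, yt, e4): return abs(math.log(y / yt)) <= e4  # ln(y/y') = 0 +/- eps4

# Matching the tolerances for a consistent accuracy level
# (hypothetical target y' = 100 with absolute tolerance 0.5):
yt, e1 = 100.0, 0.5
e2, e3, e4 = e1 ** 2, e1 / yt, e1 / yt

# A value within the absolute tolerance satisfies all four matched forms.
consistent = all(f(100.4, yt, e) for f, e in
                 [(form1, e1), (form2, e2), (form3, e3), (form4, e4)])
```

With these matched limits, a value such as 100.4 passes every form and a value such as 100.6 fails every form, so the four constraints enforce a consistent accuracy even though their scalings (and hence their behavior inside a numerical optimizer) differ.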
Table 1.

    Form         Total Runs   %Converged   Avg CPU   Min CPU
    ---------------------------------------------------------
    y - y'           86           87         22.0      17.5
    (y - y')^2       27           89.3       18.1      13.6
    y / y'           20          100         21.0      18.3
    ln(y/y')         36           88.9       20.7      17.4
As a first example of collaborative optimization, consider minimization of Rosenbrock's valley function:

    min: J(x1, x2) = 100 (x2 - x1^2)^2 + (1 - x1)^2

The solution of this two-variable, unconstrained problem with various optimization techniques is presented in Ref. 1. In the present analysis, all optimization is performed with the sequential quadratic programming algorithm, NPSOL. From a starting point of [0.0, 0.0], using analytic gradients, NPSOL reached the solution in 13 iterations. For demonstration purposes, assume that we are dealing with a more complex, multidisciplinary optimization problem than the Rosenbrock valley, in which the design components are computed by different groups. Although this computationally-distributed problem may be reassembled and solved by a single optimizer, one of the significant advantages of collaborative optimization is that this integration is not necessary. For example, suppose computation of the Rosenbrock objective function is decomposed among two groups as,

    min:   J = J1 + J2
    where: J1(x1, x2) = 100 (x2 - x1^2)^2
    and:   J2(x1) = (1 - x1)^2

Figure 8. Two collaborative optimization approaches for Rosenbrock's valley function.
Let one analysis group be responsible for computing J1 and another J2. Since the computation is distributed, collaboration is required to ensure that the groups have a consistent description of the design space. Among numerous strategies which achieve the necessary coordination, two example approaches are illustrated in figure 8. In both cases, the system-level optimizer is used to orchestrate the overall optimization process through selection of system-level target variables. A system-level target is needed for each variable which is used by more than one analysis group (e.g., y1, which is computed in analysis group 1 and input to analysis group 2). The subspace optimizers have as design variables all inputs required by their analysis group. Note that for analysis group 1, this includes the local variable x2, which is not used in analysis group 2. The goal of each subspace optimizer is to minimize the discrepancy between the local versions of each system-level target variable and the target variable itself. Treatment of this discrepancy minimization is the source of the differences between the two collaborative strategies. In the first approach (left side of figure 8), a summed square discrepancy is minimized and the collaborative framework does not introduce any additional constraints to the analysis group. Note that this subspace objective function is at least quadratic but is generally more nonlinear. In the approach depicted on the right, a constraint is added for each system-level target needed by the analysis group (Ref. 10). If the system-level target is represented locally as an input, this added constraint is linear; otherwise, the added constraint is nonlinear. In this formulation, an extra design variable (x3) is required and the objective is linear.
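The decomposed Rosenbrock computation above can be sketched in Python. This is a minimal illustration of the "reassembled by a single optimizer" case mentioned in the text, not the paper's NPSOL setup: plain gradient descent stands in for the SQP algorithm, and the iteration count is illustrative rather than the 13 SQP iterations reported. Each group returns only its objective piece and its analytic partial derivatives; the single optimizer just combines them.

```python
# Group 1 computes J1 = 100 (x2 - x1^2)^2 and its analytic partials.
def group1(x1, x2):
    J1 = 100.0 * (x2 - x1 ** 2) ** 2
    dJ1_dx1 = -400.0 * x1 * (x2 - x1 ** 2)
    dJ1_dx2 = 200.0 * (x2 - x1 ** 2)
    return J1, dJ1_dx1, dJ1_dx2

# Group 2 computes J2 = (1 - x1)^2 and its analytic partial.
def group2(x1):
    J2 = (1.0 - x1) ** 2
    dJ2_dx1 = -2.0 * (1.0 - x1)
    return J2, dJ2_dx1

# Reassembled single-optimizer solve: the groups only exchange values
# and gradients (plain gradient descent stands in for NPSOL here).
x1, x2 = 0.0, 0.0                  # same starting point as the text
lr = 1e-3
for _ in range(200000):
    _, g11, g12 = group1(x1, x2)
    _, g21 = group2(x1)
    x1, x2 = x1 - lr * (g11 + g21), x2 - lr * g12
```

The iterates converge to the valley minimum at (1, 1), with the two groups never needing each other's internals, only the shared variable x1.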
Obtaining the system-level objective gradient and constraint Jacobian by finite-differencing optimized subproblems requires numerous subproblem optimizations (each with numerous subspace iterations) for every system-level iteration. Furthermore, these subspace optimizations must be tightly converged to yield accurate derivatives. To further minimize numerical error, additional calculations were performed to select appropriate finite-difference intervals. These extra iterations are shown in parentheses in Table 1. Without the proper choice of finite-difference interval, the convergence of the collaborative strategies was not robust to changes in the starting point.

To reduce both computational expense and numerical error, optimal sensitivity information from the converged subproblem may be used to provide system-level derivatives (Ref. 11). This is possible since the system-level targets are treated as parameters in the subproblems and as design variables in the system optimization. Hence, the system-level problem Jacobian is equivalent to the subproblem dJ*/dp. The required information is generally available at the solution of each subspace optimization problem (with little to no added cost) through the following equation (Refs. 11-13):

    dJ*/dp = ∂J/∂p + (λ*)ᵀ ∂c/∂p

Here λ* represents the Lagrange multiplier vector at the solution. Because of the special structure of the collaborative optimization solution strategies, this equation becomes

    dJ*/dp = ∂J/∂p = -2 (x - x0)      for solution strategy 1
    dJ*/dp = (λ*)ᵀ ∂c/∂p              for solution strategy 2

Note that in either of the proposed collaborative formulations, calculation of the required partial derivatives is trivial. However, although many optimizers provide an estimate of λ* as part of the termination process, accurate estimates are only ensured when the problem has converged tightly (Refs. 12-13). For this reason, solution strategy 1 may be preferred. As shown in Table 1, using this post-optimality information results in significant computational savings by reducing both the finite-difference interval computations and the number of calls to each subspace optimizer. Note that in this case, finite-differencing is still used to estimate the system-level objective gradient.

…is effective in both solution strategies, use of the previous Hessian information is of greater benefit for solution strategy 2. This performance gain results because knowledge of the previous solution's active constraint set is of significance for strategy 2.

In another approach, an extra system-level design variable is added (Ref. 10) which also serves as the system-level objective function (see Fig. 3). This concept, which may be adapted to either solution strategy, results in a linear system-level objective and completely eliminates the finite-difference requirements between the system and subspace levels. The analysis group which computes the actual objective function (group 2 in this case) is now also responsible for matching the target system objective. This added responsibility causes more difficulty for the subspace analysis (as evident in the total number of function evaluations). Furthermore, an increased number of system-level iterations is required. An alternate means of eliminating these finite-difference requirements is to rely on additional post-optimality information in the subspace analysis to estimate the change in actual objective function with respect to a change in the parameters.

Figure 9 shows the arrangement of analyses in an example aircraft design problem. Here the goal is to select values of the design variables that maximize range with a specified gross weight. The figure shows the analysis grouped into three disciplinary units: aerodynamics, structures, and performance. The problem stated in this way is not directly executable in parallel. Dependent disciplines must wait for their input variables to be updated before they may execute. The issue is further complicated when there are feedbacks, as between structures and aerodynamics due to aeroelastic effects. In such a case there is an iterative loop that may take several iterations to converge before subsequent disciplines (in this case, performance) may be executed.
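The strategy-1 post-optimality result can be checked numerically on a toy subproblem (hypothetical, not one of the paper's aircraft subspaces). The subspace minimizes the squared discrepancy between its local variables and the targets p, subject to a local analysis relation, and the envelope result dJ*/dp = ∂J/∂p = -2(x* - p) is compared against finite differences of the re-optimized subproblem.

```python
# Toy subspace (strategy 1): min over x1 of
#   J = (x1 - p1)^2 + (x2 - p2)^2   with local analysis relation x2 = x1^2,
# where the targets p are treated as parameters.
def subspace_solve(p1, p2, x=1.0):
    for _ in range(100):           # Newton's method on dJ/dx1 = 0
        g = 2.0 * (x - p1) + 4.0 * x * (x * x - p2)
        h = 2.0 + 12.0 * x * x - 4.0 * p2
        x -= g / h
    x2 = x * x
    return (x - p1) ** 2 + (x2 - p2) ** 2, x, x2

p1, p2 = 0.9, 1.2
J_opt, x1, x2 = subspace_solve(p1, p2)

# Post-optimality (envelope) result: dJ*/dp = dJ/dp = -2 (x* - p)
analytic = (-2.0 * (x1 - p1), -2.0 * (x2 - p2))

# The same derivatives by finite-differencing re-optimized subproblems,
# which is exactly the expensive procedure the text describes.
h = 1e-5
fd = ((subspace_solve(p1 + h, p2)[0] - subspace_solve(p1 - h, p2)[0]) / (2 * h),
      (subspace_solve(p1, p2 + h)[0] - subspace_solve(p1, p2 - h)[0]) / (2 * h))
```

The analytic derivatives agree with the finite-difference values, but each finite-difference component required two additional tightly converged subproblem optimizations, which is the cost the post-optimality approach avoids.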
Figure 11: History of System-Level Variable Changes

The figure shows that the variables nearly reach their converged values within 10 iterations of the starting point. Within each of these iterations there are many sublevel iterations as the sublevels attempt to match the system-level target variable values. To illustrate this process the variation in the parameter aspect ratio is shown in Figures 12a and 12b.

[Figures 12a and 12b: aspect ratio histories; abscissae are system-level iteration number and subspace iteration number.]

Figure 12a shows the variation of aspect ratio and range with system-level iterations. Figure 12b shows subspace results during the first five system-level iterations. Included on the plot are the target values for aspect ratio designated by the system-level optimizer, along with the value of aspect ratio used in the local sub-problems of structures and aerodynamics. Note that initially these groups start out with their own guesses for aspect ratio (the structures group guesses a low 8.0, the aerodynamicists hope it will be 14.0). They receive the target value from the system-level optimizer (12.0) and immediately try to match it. For the next two system-level iterations the target value of aspect ratio is not changed by the system optimizer. At the third iteration, though, the target value for aspect ratio is reduced to about 11.5. One can see that the sub-problems immediately try to match this value.

Note that the subproblems occasionally do not exactly match the target value, as may be seen after system-level iteration #2. This is because, as was described earlier in this section, the subproblems are trying to achieve the best match for the entire set of design variables, not simply the one shown here. So in this instance, the sub-problem found that allowing a slight deviation in the local value of aspect ratio allowed a greater degree of compatibility between all the variable values used in that sub-problem and their corresponding target values.

As described in the previous section, the gradient of the objective function and the gradients of the constraints at the system level may be computed analytically based on optimal sensitivity results. In fact, in this problem it was found that these gradients need to be specified analytically for computational stability. Obtaining gradients via finite-differencing can lead to spurious gradient values and difficulties for the system-level optimizer. These same problems result when the tolerances of the sub-problems are not set properly and the sub-problems do not reach a
…present scheduling algorithm using this objective. It is nearly identical to the one obtained by DeMaid, which does not employ an explicit objective function. Although the subproblems are in a different order, each consists of the same analyses as in the DeMaid problem, except the second, which is a union of two subproblems from the DeMaid solution.

Whether the analysis is decomposed using compatibility constraints, or the design problem is decomposed using collaborative optimization, the structure of the decomposition determines the efficiency of the decomposed system. For example, consider two subroutines with computation times A and B, with n inputs to A, and m intermediate values computed by A which are inputs to B. To compute gradients of quantities computed by B with respect to the inputs of A, the computational time using finite differences is n(A+B). If we use decomposition, the computation time for all the gradients becomes nA + mB. When n is much larger than m (contraction), decomposition will generally reduce computation time. If, however, m is larger than n, decomposition may increase computation time.
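The n(A+B) versus nA + mB comparison is easy to state in code, with arbitrary illustrative numbers standing in for the subroutine times:

```python
def fd_cost_monolithic(n, A, B):
    # Finite differences straight through the A -> B chain:
    # n perturbations, each re-running both subroutines.
    return n * (A + B)

def fd_cost_decomposed(n, m, A, B):
    # With decomposition: n perturbations of A alone, plus m
    # perturbations of B with respect to the intermediate values.
    return n * A + m * B
```

With n = 10, m = 2, and unit subroutine times, the decomposed cost is 12 against a monolithic 20 (contraction pays off); with n = 2, m = 10 it is 12 against 4, so decomposition costs more (expansion).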
Such ideas were considered in the decomposition of the aircraft design problem in figure 4, but only in a qualitative fashion. Moreover, the ad hoc procedure was quite time-consuming. To exploit contraction, avoid expansion, and assign analyses to subproblems efficiently, an automatic tool is desirable. Such a program is described here. It uses a genetic algorithm to find a decomposition that minimizes the estimated computational time of a gradient-based optimization of the resulting decomposed system.

Figure 13. Solution of the DeMaid example problem using extent of feedback as objective function.

Given a list of analyses and the global variables which are inputs and outputs to each, the program creates a dependence matrix of integers. The element Dep(i,j) corresponds to the number of outputs from routine i which are inputs to routine j. If the routines are executed sequentially, entries in Dep which are below the main diagonal are feedbacks, and entries above the main diagonal are feedforwards. As the order of the routines is changed, the structure of the dependence matrix changes. By including information about where in the ordering there are "breaks" between subproblems, various objective functions can be evaluated from the dependence matrix.

Minimizing feedback extent is not the goal for a system that is decomposed using compatibility constraints and optimization. The difference between feedback and feedforward disappears, since the subproblems run in parallel. Therefore the length of an individual feedback is unimportant, and some feedback between the subproblems is not disadvantageous. Conversely, the number of feedforwards between subproblems is important. Thus, a rather different objective is used for the optimal decomposition formulation. The objective used here is an explicit estimate of the computation time for optimization of the decomposed design problem.
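As one example of evaluating an objective from the dependence matrix, the total-feedback-length bookkeeping can be sketched as follows. The 3-routine dependence matrix is hypothetical, not the DeMaid data; dep[i][j] follows the Dep(i,j) definition above, and for a given execution order the below-diagonal entries are feedbacks weighted by how far back they reach.

```python
def feedback_length(dep, order):
    # dep[i][j]: number of outputs of routine i that are inputs to j.
    # For the given execution order, sum Dep(i,j) * (i - j) over entries
    # below the main diagonal of the reordered matrix (the feedbacks).
    total = 0
    for i in range(len(order)):
        for j in range(i):                     # positions j < i: feedback
            total += dep[order[i]][order[j]] * (i - j)
    return total

# Hypothetical system: routine 0 feeds 1, routine 1 feeds 2, and
# routine 2 sends two outputs back to routine 0.
dep = [[0, 1, 0],
       [0, 0, 1],
       [2, 0, 0]]
```

Reordering shrinks the objective: feedback_length(dep, [0, 1, 2]) is 4, while feedback_length(dep, [2, 0, 1]) is 2, which is the kind of improvement a sequencing tool searches for.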
The objective function used here is:

    J = Σ_{i=1}^{n} Σ_{j=1}^{i-1} Dep(i,j) (i - j)

and reflects the "total length of feedback" in the system. The solution shown in Figure 13 was obtained by the…

In addition, it is assumed that the subproblems can be executed in parallel. Finally, we assume that gradients for a subproblem depend on all routines within that subproblem. The objective function is then:

    T_optimization = (N_line searches) × (T_each line search)

N_line searches is proportional to the total number of variables. Therefore the number of line searches varies as:

    N_line searches = N_design vars + N_auxiliary design vars

For a typical gradient-based optimization, each line search requires that each subroutine be executed once for each independent variable (for gradients) and then an average of twice more for the line search itself. The total number of calls to the routines in each group is then:

    N_calls(sub i) = 2 + N_design vars(sub i) + N_auxiliary vars(sub i)

The full objective function is therefore the number of line searches multiplied by the resulting cost of each line search over the decomposed system's subproblems.

Figure 14. Decoding of genetic string into subroutine order and subproblems. (The figure lists subroutines in order, with "break" markers separating subproblems.)
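A sketch of such a computation-time estimate under the stated assumptions. The per-line-search cost of parallel subproblems is taken here as the slowest subproblem's cost; this reading, the subproblem timing numbers, and the function name are illustrative assumptions, not the paper's exact formula.

```python
def estimated_opt_time(subs):
    # subs: one (n_design_vars, n_aux_vars, analysis_time) tuple per
    # subproblem. Timing numbers are hypothetical.
    n_design = sum(s[0] for s in subs)
    n_aux = sum(s[1] for s in subs)
    n_line_searches = n_design + n_aux        # proportionality constant dropped

    # Calls per subproblem per line search: one run per independent
    # variable for gradients, plus an average of two for the search itself.
    def calls(s):
        return 2 + s[0] + s[1]

    # Assumption: subproblems run in parallel, so each line search costs
    # as much as the slowest subproblem.
    t_line_search = max(calls(s) * s[2] for s in subs)
    return n_line_searches * t_line_search
```

For two subproblems (3, 1, 1.0) and (2, 1, 2.0), this gives 7 line searches at a cost of 10.0 each, i.e., an estimate of 70.0; a genetic algorithm would search over orderings and break placements to minimize this value.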
Results

The DeMaid problem, described previously, was solved with this objective, assuming that the overall design problem consisted of as many as 9 subproblems. The optimal decomposition is shown in figure 16. The estimated optimization time for this system is 3.147, compared with DeMaid's ordering, which yields a time of 3.325. As is clear from the figure, the extent of feedback is substantially greater for this solution despite its more efficient parallel decomposition.

Figure 16. The DeMaid sample problem as ordered for optimization with decomposition using compatibility constraints.

Using this objective function, the optimal decomposition tool has been applied to PASS, yielding a structure very similar to that produced manually and shown in figure 4.

Other objective functions which will soon be formulated include one for decomposition with compatibility constraints assuming gradient information from automatic differentiation rather than finite-differencing, and one for collaborative optimization. The fact that a new objective for a different type of problem can be easily inserted makes this program a flexible and useful tool which can be adapted to a variety of different formulations.

References

1. Kroo, I., "An Interactive System for Aircraft Design and Optimization," AIAA Paper 92-1190, Feb. 1992.
2. Rogers, J.L., "DeMaid -- A Design Manager's Aid for Intelligent Decomposition, User's Guide," NASA TM 101575, March 1989.
3. Gage, P., Kroo, I., "Quasi-Procedural Method and Optimization," AIAA/NASA/Air Force Multidisciplinary Optimization Conference, Sept. 1992.
4. Cousin, J., Metcalfe, M., "The BAe Ltd Transport Aircraft Synthesis and Optimization Program," AIAA 90-3295, Sept. 1990.
5. Kroo, I., Takai, M., "A Quasi-Procedural, Knowledge-Based System for Aircraft Synthesis," AIAA-88-6502.
6. Gill, P.E., Murray, W., and Wright, M.H., Practical Optimization, Academic Press, Inc., 1981.
7. Haftka, R.T., "On Options for Interdisciplinary Analysis and Design Optimization," Structural Optimization, Vol. 4, No. 2, June 1992.
8. Padula, S., Polignone, D., "New Evidence Favoring Multilevel Decomposition and Optimization," Third Symposium on MDO, Sep. 1990.
9. Sobieszczanski-Sobieski, J., "Optimization by Decomposition: A Step from Hierarchic to Non-Hierarchic Systems," NASA CP-3031, Sept. 1989.
10. Sobieszczanski-Sobieski, J., "Two Alternate Ways for Solving the Coordination Problem in Multilevel Optimization," Structural Optimization, Vol. 6, pp. 205-215, 1993.
11. Braun, R.D., Kroo, I.M., and Gage, P.J., "Post-Optimality Analysis in Aerospace Vehicle Design," AIAA 93-3932, Aug. 1993.
12. Hallman, W., "Sensitivity Analysis for Trajectory Optimization Problems," AIAA 90-0471, January 1990.
13. Beltracchi, T.J., and Nguyen, H.N., "Experience with Parameter Sensitivity Analysis in FONSIZE," AIAA 92-4749, Sep. 1992.
14. Starkweather, T., McDaniel, S., Mathias, K., Whitley, D., Whitley, C., "A Comparison of Genetic Sequencing Operators," Proceedings of the 4th International Conference on Genetic Algorithms, Belew, R. & Booker, L., eds., Morgan Kaufmann, San Mateo, 1991.
15. Syswerda, G., "Schedule Optimization Using Genetic Algorithms," in Handbook of Genetic Algorithms, Davis, L., ed., Van Nostrand Reinhold, New York, 1990.