
Design Exploration

ANSYS, Inc.
Southpointe
275 Technology Drive
Canonsburg, PA 15317
ansysinfo@ansys.com
http://www.ansys.com
(T) 724-746-3304
(F) 724-514-9494

Release 12.1
November 2009
ANSYS, Inc. is certified to ISO 9001:2008.

Copyright and Trademark Information


© 2009 SAS IP, Inc. All rights reserved. Unauthorized use, distribution or duplication is prohibited.
ANSYS, ANSYS Workbench, Ansoft, AUTODYN, EKM, Engineering Knowledge Manager, CFX, FLUENT, HFSS and any and
all ANSYS, Inc. brand, product, service and feature names, logos and slogans are registered trademarks or trademarks
of ANSYS, Inc. or its subsidiaries in the United States or other countries. ICEM CFD is a trademark used by ANSYS, Inc.
under license. CFX is a trademark of Sony Corporation in Japan. All other brand, product, service and feature names
or trademarks are the property of their respective owners.

Disclaimer Notice
THIS ANSYS SOFTWARE PRODUCT AND PROGRAM DOCUMENTATION INCLUDE TRADE SECRETS AND ARE CONFIDENTIAL
AND PROPRIETARY PRODUCTS OF ANSYS, INC., ITS SUBSIDIARIES, OR LICENSORS. The software products and documentation are furnished by ANSYS, Inc., its subsidiaries, or affiliates under a software license agreement that contains provisions concerning non-disclosure, copying, length and nature of use, compliance with exporting laws, warranties,
disclaimers, limitations of liability, and remedies, and other provisions. The software products and documentation may
be used, disclosed, transferred, or copied only in accordance with the terms and conditions of that software license
agreement.
ANSYS, Inc. is certified to ISO 9001:2008.

U.S. Government Rights


For U.S. Government users, except as specifically granted by the ANSYS, Inc. software license agreement, the use, duplication, or disclosure by the United States Government is subject to restrictions stated in the ANSYS, Inc. software
license agreement and FAR 12.212 (for non-DOD licenses).

Third-Party Software
See the legal information in the product help files for the complete Legal Notice for ANSYS proprietary software and
third-party software. If you are unable to access the Legal Notice, please contact ANSYS, Inc.
Published in the U.S.A.

Table of Contents
Overview .................................................................................................................................................... 1
Introduction to Design Exploration ......................................................................................................... 1
Available Tools .................................................................................................................................. 1
ANSYS Design Exploration: a Typical Session, What to Look For? ......................................................... 2
Initial Simulation: Analysis of the Product Under all Operating Conditions .......................................... 2
Identify Design Candidates ............................................................................................................... 2
Assess the Robustness of the Design Candidates ............................................................................... 3
Determine the Number of Parameters ............................................................................................... 4
Increase the Response Surface Accuracy ........................................................................................... 4
Limitations ............................................................................................................................................. 4
Design Exploration Licensing Requirements ............................................................................................ 5
The User Interface .................................................................................................................................. 5
Parameters ............................................................................................................................................. 7
Design Points ......................................................................................................................................... 9
Response Points ..................................................................................................................................... 9
Workflow ................................................................................................................................................ 9
Adding Design Exploration Templates to the Schematic .................................................................... 9
Duplicating Existing Design Exploration Systems ............................................................................. 10
Running Design Exploration Analyses ............................................................................................. 10
Monitoring Design Exploration Analyses ......................................................................................... 11
Design Exploration Options .................................................................................................................. 11
Using Design Exploration ......................................................................................................................... 17
What is Design Exploration? .................................................................................................................. 17
Design of Experiments .......................................................................................................................... 18
Parameters Parallel Chart ................................................................................................................ 19
Design Points vs Parameter Chart .................................................................................................... 20
Parameters Correlation ......................................................................................................................... 20
Correlation Scatter Chart ................................................................................................................. 21
Correlation Matrix ........................................................................................................................... 21
Determination Matrix ..................................................................................................................... 22
Sensitivities Chart ........................................................................................................................... 22
Response Surface ................................................................................................................................. 22
Min-Max Search .............................................................................................................................. 24
Response Chart .............................................................................................................................. 25
Local Sensitivity Chart ..................................................................................................................... 25
Spider Chart ................................................................................................................................... 26
Goal Driven Optimization ..................................................................................................................... 26
Tradeoff Chart ................................................................................................................................ 27
Samples Chart ................................................................................................................................ 28
Sensitivities Chart (GDO) ................................................................................................................. 28
Six Sigma Analysis ................................................................................................................................ 28
Design of Experiments (SSA) ........................................................................................................... 29
Six Sigma Analysis .......................................................................................................................... 30
Sensitivities Chart (SSA) .................................................................................................................. 31
Working with Parameters ...................................................................................................................... 31
Input Parameters ............................................................................................................................ 32
Setting Up Design Variables ............................................................................................................ 33
Setting Up Uncertainty Variables ..................................................................................................... 33
Output Parameters ......................................................................................................................... 34
Design Points ....................................................................................................................................... 35
The Design of Experiments Solution ...................................................................................................... 36
Design Criteria Considered in DOE .................................................................................................. 36
DOE Using a Central Composite Design ........................................................................................... 39
Upper and Lower Locations of DOE Points ....................................................................................... 41
DOE Matrix Generation ................................................................................................................... 42
Using Response Surfaces ...................................................................................................................... 42
Meta Model Types .......................................................................................................................... 42
Standard Response Surface - Full 2nd-Order Polynomials ........................................................... 43
Kriging ..................................................................................................................................... 45
Non-Parametric Regression ....................................................................................................... 46
Suggested Best Practices .................................................................................................... 49
Neural Network ........................................................................................................................ 49
Meta Model Refinement ................................................................................................................. 50
Change Meta Model Types ........................................................................................................ 51
Perform a Manual Refinement ................................................................................................... 51
Refine the Model Using the Kriging Meta Model ........................................................................ 51
Goodness of Fit ............................................................................................................................... 53
Min-Max Search .............................................................................................................................. 53
Response Surface Charts ................................................................................................................. 54
Using the Response Chart ......................................................................................................... 55
Using the Spider Chart .............................................................................................................. 56
Using the Local Sensitivity Chart ............................................................................................... 57
Global Sensitivities ............................................................................................................................... 59
Adding Design Points and Response Points ........................................................................................... 60
Importing and Exporting Design Points ........................................................................................... 60
Inserting Design Points and Response Points ................................................................................... 61
Using Parameters Correlation ................................................................................................................ 61
Running a Parameters Correlation ................................................................................................... 62
Purpose of Parameters Correlation .................................................................................................. 62
Sample Generation ......................................................................................................................... 62
Viewing the Quadratic Correlation Information ............................................................................... 63
Determining Significance ................................................................................................................ 63
Viewing Significance and Correlation Values .................................................................................... 63
Using Goal Driven Optimization ............................................................................................................ 63
Optimization Options ..................................................................................................................... 63
Screening ................................................................................................................................. 64
MOGA ...................................................................................................................................... 65
NLPQL ...................................................................................................................................... 66
Defining Design Objectives ............................................................................................................. 67
Creating New Design Points and Response Points from Candidates ................................................. 69
Goal Driven Optimization Charts ..................................................................................................... 69
Using the Sensitivities Chart (GDO) ........................................................................................... 69
Using the Tradeoff Chart ........................................................................................................... 69
Using the Samples Chart ........................................................................................................... 70
Using Six Sigma Analysis ....................................................................................................................... 70
Performing a Six Sigma Analysis ...................................................................................................... 70
Using Statistical Postprocessing ...................................................................................................... 70
Tables (SSA) .............................................................................................................................. 71
Using Parameter Charts (SSA) .................................................................................................... 71
Using the Sensitivities Chart (SSA) ............................................................................................. 71
Statistical Measures ........................................................................................................................ 71
Understanding Goal Driven Optimization ............................................................................................... 73

Principles (GDO) ................................................................................................................................... 73
Guidelines and Best Practices (GDO) ...................................................................................................... 74
Postprocessing GDO Results ................................................................................................................. 76
Postprocessing Overview ................................................................................................................ 77
Understanding Sensitivities Charts .................................................................................................. 77
Understanding Tradeoff Charts ....................................................................................................... 77
Understanding Samples Charts ....................................................................................................... 78
Goal Driven Optimization Theory .......................................................................................................... 79
Shifted Hammersley Sampling Method ........................................................................................... 79
Pareto Dominance in Multi-Objective Optimization ......................................................................... 80
MOGA (Multi-Objective Genetic Algorithm) ..................................................................................... 81
Decision Support Process ................................................................................................................ 81
Non-linear Programming by Quadratic Lagrangian (NLPQL) ............................................................. 85
Understanding Six Sigma Analysis ........................................................................................................... 95
Principles (SSA) ..................................................................................................................................... 96
Guidelines for Selecting SSA Variables ................................................................................................... 97
Choosing and Defining Uncertainty Variables .................................................................................. 97
Uncertainty Variables for Response Surface Analyses ................................................................. 97
Choosing a Distribution for a Random Variable .......................................................................... 98
Measured Data ................................................................................................................... 98
Mean Values, Standard Deviation, Exceedance Values ........................................................... 98
No Data .............................................................................................................................. 99
Distribution Functions ............................................................................................................ 100
Sample Generation ............................................................................................................................. 104
Weighted Latin Hypercube Sampling .................................................................................................. 104
Postprocessing SSA Results ................................................................................................................. 105
Histogram .................................................................................................................................... 105
Cumulative Distribution Function .................................................................................................. 105
Probability Table ........................................................................................................................... 106
Statistical Sensitivities in a Six Sigma Analysis ................................................................................ 107
Six Sigma Analysis Theory ................................................................................................................... 109
Troubleshooting ..................................................................................................................................... 117
Appendices ............................................................................................................................................. 119
Glossary of General Terms ................................................................................................................... 119
Index ........................................................................................................................................................ 121


Overview
The following links provide quick access to information concerning Design Exploration and its use:

Using Design Exploration Help (p. 1)

Limitations (p. 4)

What is Design Exploration? (p. 17)

The User Interface (p. 5)

Parameters (p. 7)

Design Points (p. 9)

Workflow (p. 9)

Design Exploration Options (p. 11)

"Using Design Exploration" (p. 17)

Using Design Exploration Help


Design Exploration Help is included as part of the ANSYS Workbench Help, where you can also find an explanation of general Workbench features. To view the ANSYS Workbench Help, see Getting Started in ANSYS
Workbench.

Introduction to Design Exploration


Because a good design point is often the result of a trade-off between various objectives, the exploration
of a given design cannot be performed by using only direct optimization algorithms leading to a single
design point. It is important to gather enough information about the current design so as to be able to answer
the so-called "what-if" questions, quantifying the influence of design variables on the performance of the
product in an exhaustive manner. By doing so, the right decisions can be made based on accurate information,
even in the event of an unexpected change in the design constraints.
ANSYS Design Exploration describes the relationship between the design variables and the performance of
the product by using Design of Experiments (DOE), combined with Response Surfaces. DOE and Response
Surfaces provide all of the information required to achieve Simulation Driven Product Development. Once
the variation of the performance with respect to the design variables is known, it becomes easy to understand
and identify all changes required to meet the requirements for the product. Once the Response Surfaces
are created, the information can be shared in easily understandable terms: curves, surfaces, sensitivities, etc.
They can be used at any time during the development of the product without requiring additional simulations
to test a new configuration.
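To make the idea concrete, the following minimal Python sketch (outside of ANSYS Design Exploration itself) mimics the workflow: a small DOE is sampled, a full second-order polynomial is fit to the sampled outputs, and new configurations are then evaluated from the fitted surface instead of re-running the simulation. The simulate() and basis() functions are hypothetical stand-ins, not product code.

    import numpy as np

    # Stand-in for a real simulation: one output as a function of two design variables.
    def simulate(x1, x2):
        return 3.0 + 2.0 * x1 - 1.5 * x2 + 0.8 * x1 * x2 + 0.5 * x2**2

    # A small "DOE": sampled combinations of the two input parameters.
    rng = np.random.default_rng(0)
    X = rng.uniform(low=[10.0, 1.0], high=[20.0, 3.0], size=(15, 2))
    y = np.array([simulate(x1, x2) for x1, x2 in X])

    # Full second-order polynomial basis: 1, x1, x2, x1*x2, x1^2, x2^2.
    def basis(x1, x2):
        return [1.0, x1, x2, x1 * x2, x1**2, x2**2]

    A = np.array([basis(x1, x2) for x1, x2 in X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)

    # New configurations can now be estimated from the response surface
    # without additional simulations.
    print(np.dot(basis(17.5, 2.2), coef))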

Available Tools
ANSYS Design Exploration software has a powerful suite of DOE schemes: Central Composite Design (CCD)
and Optimal Space-Filling. CCD provides a traditional DOE sampling set, while Optimal Space-Filling's objective
is to gain the maximum insight with the fewest number of points. This feature is very useful when the
computation time available to the user is limited.
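To see why a space-filling design can save computation time, consider how quickly the point count of a textbook, non-fractional central composite design grows with the number of inputs: 2^k corner points, 2k axis points, and one center point. The small Python sketch below only illustrates this growth; the actual CCD templates used by Design Exploration typically reduce the count (for example, with fractional factorials), so treat these numbers as illustrative upper bounds.

    # Illustrative point count of a full (non-fractional) central composite design:
    # 2**k corner points + 2*k axis points + 1 center point.
    def full_ccd_points(k):
        return 2**k + 2 * k + 1

    for k in range(1, 16):
        print(f"{k:2d} inputs -> {full_ccd_points(k):6d} full-CCD points")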

After sampling, ANSYS Design Exploration software provides four different meta models to represent the
simulation responses: Full Second-Order Polynomial, Kriging, Non-Parametric Regression and Neural Network.
These meta-models can accurately represent highly nonlinear responses such as those found in high frequency
electromagnetics.
Once the simulation responses are characterized, ANSYS Design Exploration supplies three different types
of optimization algorithms: Screening (Shifted Hammersley), Multi-Objective Genetic Algorithm (MOGA) and
Non-linear Programming by Quadratic Lagrangian (NLPQL).
Several graphical tools are available to investigate a design: sensitivity plots, correlation matrices, curves,
surfaces, trade-off plots and parallel charts with Pareto Front display, and spider charts.
Correlation matrix techniques are also provided to help the user identify the key parameters of a design
before creating response surfaces.

ANSYS Design Exploration: a Typical Session, What to Look For?


The main purpose of ANSYS Design Exploration is to identify the relationship between the performance of
the product (maximum stress, mass, fluid flow, velocities, etc.) and the design variables (dimensions, loads,
material properties, etc.). Based on these results, the analyst will be able to influence the design so as to
meet the product's requirements, and to identify the key parameters of the design and how they
influence the performance.
ANSYS Design Exploration provides tools to analyze a parametric design with a reasonable number of
parameters. The response surface methods described here are suitable for problems using about 10 to 15
input parameters.

Initial Simulation: Analysis of the Product Under all Operating Conditions


The first step of any design simulation is to create the simulation model. The simulation model can use
anything from a single physics up to a complex multiphysics simulation involving multiple conditions and
physics coupling.
In addition to performing the standard simulation, this step is also used to define the parameters to be investigated. The input parameters (also called design variables) are identified, and may include CAD parameters,
loading conditions, material properties, etc.
The output parameters (also called performance indicators) are chosen from the simulation results and may
include maximum stresses, fluid pressure, velocities, temperatures, or masses, and can also be custom defined.
Product cost could be a custom defined parameter based on masses, manufacturing constraints, etc.
CAD parameters are defined from a CAD package or from ANSYS DesignModeler. Material properties are
found in the Engineering Data section of the project, while other parameters will have their origin in the
simulation model itself. Output parameters will be defined from the various simulation environments
(mechanics, CFD, etc.). Custom parameters are defined directly in the Parameter Set (the bus bar that collects
all parameters) available from the project schematic.

Identify Design Candidates


Once the initial model has been created and parameters defined, the next step in the session is to create a
response surface. After inserting a Response Surface system in the project, you need to define the design
space by giving the minimum and maximum values to be considered for each of the input variables. Based
on this information, the Design of Experiments (DOE) part of the Response Surface system will create the
design space sampling. Note that this sampling depends upon the choice made for the DOE scheme;
usually, the default CCD scheme will provide good accuracy for the final approximation. Then, the DOE needs
to be computed.
Once the DOE has been updated, a response surface is created for each output parameter. A response surface
is an approximation of the response of the system. Its accuracy depends on several factors: complexity of
the variations of the output parameters, number of points in the original DOE, and choice of the response
surface type. Four main types of response surfaces are available in ANSYS Design Exploration. As a starting
point, the Standard Response Surface (based on a modified quadratic formulation) will provide satisfactory
results when the variations of the output parameters are mild, while the Kriging scheme should be used for
stronger variations.
After the response surfaces have been computed, the design can be thoroughly investigated using a variety
of graphical and numerical tools, and valid design points identified by optimization techniques.
Usually, the investigation will start with the sensitivity graphs. These bar or pie charts graphically show
how much the output parameters are locally influenced by the input parameters around a given response
point. Note that varying the location of the response point may provide totally different graphs. Thinking
of the hill/valley analogy, if the response point is in a flat valley, the influence of the input parameters will
be small. If the point is at the top of a steep hill, the influence of the parameters will be strong. The sensitivity graphs provide a first indication of the relative influence of the input parameters.
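Conceptually, a local sensitivity compares how much the predicted output moves when each input is perturbed around the current response point, with the other inputs held fixed. The sketch below illustrates this with central differences on an arbitrary surrogate function; it is not the formula used internally by Design Exploration, only a way to see why the chart changes when the response point moves from a flat valley to a steep hill.

    # Hypothetical fitted response surface of two inputs.
    def surrogate(x):
        return 100.0 + 4.0 * x[0]**2 + 0.2 * x[1]

    def local_sensitivities(f, point, rel_step=0.01):
        """Central-difference slopes of f at 'point', one input at a time."""
        sens = []
        for i, xi in enumerate(point):
            h = abs(xi) * rel_step or rel_step
            up, dn = list(point), list(point)
            up[i] += h
            dn[i] -= h
            sens.append((f(up) - f(dn)) / (2.0 * h))
        return sens

    print(local_sensitivities(surrogate, [0.1, 5.0]))   # near a "flat valley"
    print(local_sensitivities(surrogate, [10.0, 5.0]))  # on a "steep hill"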
The response surfaces will provide curves or surfaces that show the variation of one output parameter with
respect to one or two input parameters at a time. These curves/surfaces also are dependent on the response
point.
Both sensitivity charts and response surfaces are key tools for the analyst to be able to answer the "what-if" questions ("What parameter should we change if we want to reduce the cost?").
ANSYS Design Exploration provides additional tools to identify design candidates. While they could be determined by a thorough investigation of the curves, it might be convenient to be guided automatically to
some interesting candidates. Access to optimization techniques that will find design candidates from the
response surfaces is provided by the Goal Driven Optimization (GDO) systems. These systems can be dragged
and dropped over an existing response surface system so as to share this portion of the data. Several GDO
systems can be inserted in the project, which is useful if several hypotheses are to be analyzed.
Once a GDO system has been introduced, the optimization cell needs to be defined, which includes choosing
the optimization method and setting the objectives. Then the optimization problem can be solved. In many
cases, there will not be a unique solution to the optimization problem, and several candidates will be identified. The results of the optimization process are also very likely to provide candidates that cannot be manufactured (a radius of 3.14523 mm is probably hard to achieve!). But since all information about the variability
of the output parameters is provided by the response surface, it becomes easy to find a design candidate
close to the one indicated by the optimization process that will be acceptable.
As a good practice, it is also recommended to check the accuracy of the response surface for the design
candidates. To do so, the candidate should be converted to a design point and a full simulation performed
for that point, so as to check the validity of the output parameters.

Assess the Robustness of the Design Candidates


Once one or several design points have been identified, the probabilistic analysis will help quantify the reliability or quality of the product by means of a statistical analysis. Probabilistic analysis typically involves four
areas of statistical variability: geometric shape, material properties, loading, and boundary conditions. For
example, the statistical variability of the geometry of a product would try to capture product-to-product
differences due to manufacturing imperfections quantified by manufacturing tolerances. Probabilistic characterization provides a probability of success or failure and not just a simple yes/no evaluation. For instance,
a probabilistic analysis could determine that one part in 1 million would fail, or the probability of a product
surviving its expected useful life.
Inserting a Six Sigma Analysis system in the schematic will provide the robustness analysis. The process here is very
similar to the response surface creation. The main difference is in the definition of the parameters: instead
of giving minimum and maximum values defining the parameters' ranges, the parameters will be described
in terms of statistical curves and their associated parameters. Once the parameters are defined, a new DOE
will be computed, as well as the corresponding response surfaces. Additional results in the form of probabilities and statistical distributions of the output parameters are provided, along with the sensitivity charts
and the response surfaces.
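The kind of result this produces can be sketched with a simple Monte Carlo loop over a fitted surrogate: draw each uncertainty variable from its assumed distribution, evaluate the response, and count how often it exceeds an allowable limit. The distributions, limit, and surrogate below are purely hypothetical; Six Sigma Analysis performs the statistical work on its own response surfaces.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Hypothetical uncertainty variables: a thickness and a load.
    thickness = rng.normal(loc=2.0, scale=0.05, size=n)          # mm
    load = rng.lognormal(mean=np.log(500.0), sigma=0.1, size=n)  # N

    # Hypothetical response surface for a maximum stress (MPa).
    stress = 0.8 * load / thickness**2

    limit = 120.0  # allowable stress, MPa
    prob_failure = np.mean(stress > limit)
    print(f"Estimated probability of exceeding the limit: {prob_failure:.4%}")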

Determine the Number of Parameters


To get accurate response surfaces within a reasonable amount of time, the number of input parameters
should be limited to 10 to 15. If more parameters need to be investigated, the correlation matrix will provide
a way to identify some key parameters before creating the response surface. The Parameters Correlation
systems should be used prior to the Response Surface systems, to reduce the number of parameters to the
above-mentioned limit. The parameters correlation method will perform simulations based on a random
sampling of the design space, so as to identify the correlation between all parameters. The number of simulations will depend upon the number of parameters, as well as the convergence criteria for the means and
standard deviations of the parameters. The user can provide a hard limit for the number of points to be
computed; it should be noted that the accuracy of the correlation matrix might be affected if not enough
points are computed.
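The idea behind the convergence criterion can be sketched as follows: keep adding random design points and stop once the running mean and standard deviation of each parameter change by less than a chosen tolerance between checks. The tolerance, batch size, and data source below are arbitrary placeholders, not the actual algorithm used by the Parameters Correlation system.

    import numpy as np

    rng = np.random.default_rng(2)

    def sample_batch(size):
        # Placeholder for running 'size' additional simulations and
        # collecting one parameter value per simulation.
        return rng.normal(loc=10.0, scale=2.0, size=size)

    tol = 0.01          # 1% relative change allowed between checks
    values = sample_batch(10)
    prev_mean, prev_std = values.mean(), values.std()

    while True:
        values = np.concatenate([values, sample_batch(10)])
        mean, std = values.mean(), values.std()
        if (abs(mean - prev_mean) <= tol * abs(prev_mean)
                and abs(std - prev_std) <= tol * prev_std):
            break
        prev_mean, prev_std = mean, std

    print(f"Converged after {values.size} samples: mean={mean:.3f}, std={std:.3f}")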

Increase the Response Surface Accuracy


Usually, a richer set of points for the DOE will provide a more accurate response surface. However, it may
not be easy to know before the response surface is created how many additional points are required. The
Kriging response surface will allow ANSYS Design Exploration to determine the accuracy of the response
surface as well as the points that would be required to increase this accuracy. To use this feature, the response
surface type should be set to Kriging. The refinement options will then be available to the user. The refinement
can be automated or user-defined. This last option should be used to control the number of points to be
computed.
There is also a manual refinement option for all of the response surface types, which allows the user to enter
specific points into the set of existing DOE points used to calculate the response surface.
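The principle behind Kriging-based refinement can be sketched outside of Design Exploration with a generic Gaussian-process library: the model reports a predicted standard error along with each prediction, and the candidate location with the largest error is a natural next point to simulate. The sketch below uses scikit-learn as a stand-in and is not the DesignXplorer implementation; the simulate() function is hypothetical.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def simulate(x):
        # Stand-in for an expensive simulation with one input parameter.
        return np.sin(3.0 * x) + 0.5 * x

    X = np.array([[0.0], [0.5], [1.0], [2.0]])     # existing DOE points
    y = simulate(X).ravel()

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
    gp.fit(X, y)

    # Evaluate the predicted standard error over candidate locations and
    # pick the most uncertain one as the next refinement point.
    candidates = np.linspace(0.0, 2.0, 101).reshape(-1, 1)
    _, std = gp.predict(candidates, return_std=True)
    next_point = candidates[np.argmax(std)]
    print(f"Next refinement point: {next_point[0]:.3f}")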

Limitations

Suppressed properties are not available for the Design of Experiments (DOE) method.

Databases from previous releases cannot be resumed in Release 12.1.

Note
You cannot resume a DesignXplorer database that was created in a previous release of Workbench.
You can resume a project that was created in an earlier version of Workbench, but any associated
DesignXplorer database will be ignored. However, all parameters originally defined by the project
will still be available to Design Exploration, and the Design Exploration components can be used
to rebuild and re-solve the DesignXplorer analyses.


Design Exploration Licensing Requirements


In order to run any of the Design Exploration systems, you must have an ANSYS DesignXplorer license. You
must also have licenses available for the analysis systems that will be solving the Design Points generated
by the Design Exploration systems. The licenses for the solvers must be available:

when the user Updates the cells that generate Design Points (Design of Experiments, Parameters Correlation, Response Surface Refinement)

as soon as results need to be generated from the Response Surface

These solver licenses will be checked in when:

all Design Exploration systems are deleted from the schematic

the user does a New or Open command to switch to a project without any Design Exploration
systems

the user exits Workbench

Note

If you do not have an ANSYS DesignXplorer license, you can successfully resume an existing
Design Exploration project and review the already generated results, with some interaction
possible on charts. But as soon as a Design Exploration Update or a Response Surface evaluation is needed, a license is required and you get an error dialog.

In order to use the ANSYS Simultaneous Design Point Update capability you must have one
solver license for each simultaneous solve. Be aware that the number of Design Points that
can be solved simultaneously may be limited by hardware, by your RSM configuration, or by
available solver licenses.

The User Interface


The Workbench User Interface allows you to easily build your project by adding systems from a Toolbox
area on the left of the window to the Project Schematic area on the right. Design Exploration systems are
available in the Toolbox to allow you to perform the different types of parametric analyses on your project
as described in the previous section. Once you add the Design Exploration systems to your Project Schematic, you edit the cells in the systems to open the workspaces for those cells.

Project Schematic
You will initially create your project containing analysis systems, models, and other systems in the Project
Schematic. Then you can choose which of the parameters in those systems will be exposed to the Design
Exploration systems through the Parameter Set bar. Once the analysis systems and Parameter Set are defined,
you can add the Design Exploration systems that you need. A project schematic containing Design Exploration
systems would look something like the following:


Toolbox
When you are viewing the Project Schematic, the following system templates are available from the Design
Exploration Toolbox.

To perform a particular type of analysis, drag the template from the Toolbox onto your Project Schematic
below the Parameter Set bar.
When you are viewing a workspace, the Toolbox will display items that can be added to the Outline based
on the currently selected cell. For example, if you are in a Response Surface workspace and you select a
Response Point cell in the outline, the Toolbox will contain chart templates that you can add to that Response
Point. Double-click on a template or drag it onto the selected cell in the Outline view to add it to the Outline.

Workspace
Once a Design Exploration system is added to the schematic, you can edit the cells to open the Workspace,
which allows you to set up the analysis options, run the analysis, and view the results. For example, if you
right click on the Design of Experiments cell and select Edit ..., your workspace will show the following
configuration:


You will see this window configuration for any Design Exploration system cell that you edit. The four panels
visible are:

Outline: provides a summary of all of the items that make up the cell you are editing.

Table: provides a list of either the design points or response points associated with that cell.

Properties: provides access to the properties of the various cells in the currently displayed Outline. For
example, you can set parameter limits, analysis types, and chart options from the properties panel.

Chart: displays the various charts available for the different Design Exploration system cells.

In general, you select a cell in the Outline view and either set up its Properties or review the Chart and/or
Table associated with that cell.

Toolbar
When you are editing a Design Exploration cell, additional buttons are added to the Workbench toolbar:

a button that allows you to update the particular object you are editing (for example, Update Design
of Experiments)

for some objects, a button that allows you to preview the results of the update (for example, Preview
Design of Experiments)

Parameters
The following types of parameters are used in Design Exploration:

Input Parameters (p. 8)

Output Parameters (p. 8)


See Working with Parameters and Design Points for more information about parameters.

Input Parameters
Input parameters are those parameters that define the inputs to the analysis for the model under investigation.
Input parameters have predefined ranges which can be changed. These include (and are not limited to) CAD
parameters, Analysis parameters, and DesignModeler parameters. CAD and DesignModeler input parameters
may include length, radius, etc.; Analysis input parameters may include pressure, material properties, materials, sheet thickness, etc.
When you start a Design of Experiments analysis in Design Exploration, a default of +/- 10% of the current
value of each input parameter is used to initialize the parameter range. If any parameter has a current value
of 0.0, then the initial parameter range will be computed as 0.0 to 10.0. Since Design Exploration is not aware
of the physical limits of parameters, you will need to check that the assigned range is compatible with the
physical limits of the parameter. Initially, the current value of the parameter within Design Exploration will
be the midpoint between the upper bound and lower bound.
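As a small illustration of these defaults (the helper function and its handling of negative values are assumptions, not product code):

    def default_range(current_value):
        # Default DOE range: +/- 10% of the current value; 0.0 to 10.0 when
        # the current value is exactly 0.0. (Sign handling here is a guess.)
        if current_value == 0.0:
            return 0.0, 10.0, 5.0
        lower = min(0.9 * current_value, 1.1 * current_value)
        upper = max(0.9 * current_value, 1.1 * current_value)
        return lower, upper, 0.5 * (lower + upper)

    print(default_range(25.0))  # (22.5, 27.5, 25.0) -> midpoint equals the current value
    print(default_range(0.0))   # (0.0, 10.0, 5.0)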
Input parameters can be discrete or continuous, and each of these has specific forms.

Parameter Type                    Description                                      Example
Discrete Parameters               Valid only at integer values                     Number of holes, number of
                                                                                   weld points
Continuous Parameters             Continuous, non-interrupted                      Thickness, force, temperature
                                  range of values
Continuous Parameters with        The theoretical values are continuous,           Drill bit sizes, readily available
Usability Constraints             but manufacturing or real-world                  bolts and screws
                                  constraints limit the actual values

Continuous parameters physically vary in a continuous manner between a lower and an upper bound defined
by the user. Examples are a CAD dimension or a load magnitude.
Discrete parameters physically represent different configurations or states of the model. An example is the
number of holes in a geometry. You can set a parameter to be discrete by selecting Discrete as the Classification for the parameter in the Properties view. You would then enter the values for the Levels that you
want the parameter to have in the Table view for the parameter. Levels are limited to integer values and
will not be sorted as you enter them.
Usability parameters are continuous parameters for which a post-processing filter is defined. You can enter
a restricted list of levels that are the only values authorized for post-processing. You can change a continuous
parameter to a usability parameter by selecting Usability as the Classification for the parameter in the
Properties view. You would then enter the values for the Levels that you want the parameter to have in the
Table view for the parameter. Levels are sorted numerically as you enter them. You are not restricted to only
entering integers for Usability Levels. Once you have the levels defined, all of the analysis done on the
sample set, from charts to optimization, will only use the usability levels. For instance, optimization won't
suggest a design where the usability parameter has a value not contained in the list of levels.
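One way to picture the effect of usability levels: any value produced during post-processing or optimization is restricted to an authorized level. The snapping rule below is only an illustration of the concept, not the exact rule used by Design Exploration; the parameter and its levels are hypothetical.

    def snap_to_level(value, levels):
        """Return the authorized usability level closest to 'value'."""
        return min(levels, key=lambda level: abs(level - value))

    # Hypothetical usability parameter: readily available drill bit sizes (mm).
    drill_sizes = [3.0, 3.5, 4.0, 5.0, 6.0, 8.0, 10.0]
    print(snap_to_level(4.7, drill_sizes))   # 5.0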

Output Parameters
Output parameters are those parameters that result from the geometry or are the response outputs from
the analysis. These include (and are not limited to) volume, mass, frequency, stress, heat flux, and so forth.


Derived Parameters
Derived parameters are specific output parameters defined as an analytical combination of output or input
parameters. As the definition suggests, derived parameters are calculated from other parameters by using
equations that you provide. They are created in the analysis system and passed into Design Exploration as
output parameters. See Working with Parameters and Design Points for more information.
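For instance, a cost parameter might be defined in the analysis system as an expression over other parameters. The sketch below only illustrates the idea of such an expression; the names and cost factors are made up.

    # Hypothetical derived output parameter: part cost from mass and a
    # machining measure, both of which are themselves output parameters.
    def part_cost(mass_kg, machined_area_cm2,
                  material_cost_per_kg=4.5, machining_cost_per_cm2=0.02):
        return (mass_kg * material_cost_per_kg
                + machined_area_cm2 * machining_cost_per_cm2)

    print(part_cost(mass_kg=1.8, machined_area_cm2=240.0))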

Design Points
A Design Point is defined by a snapshot of parameter values where output parameter values were calculated
directly by a project update. Design Points are created by Design Exploration; for instance, when processing
a Design of Experiments or a Correlation Matrix, or refining a Response Surface.
It is also possible to insert a Design Point at the project level from an optimization Candidate design, in order
to perform a validation update. Note that the output parameter values are not copied to the created Design
Point since they were calculated by Design Exploration and are, by definition, approximated. Actual output
parameters will be calculated from the Design Point input parameters when a project update is done. See
Adding Design Points and Response Points (p. 60) for more information on adding Design Points to the project.

Response Points
A Response Point is defined by a snapshot of parameter values where output parameter values were calculated
in Design Exploration from a Response Surface. As such, the parameter values are approximate and calculated
from response surfaces, and the most promising designs should be verified by a solve in the analysis system
using the same parameter values.

Using Response Points


New Response Points can be created when editing a Response Surface cell from the Outline, or from the
Table view. Response Points as well as Design Points can also be inserted from the Table view or the charts,
using the appropriate right mouse button menu option when selecting a table row or a point on a chart.
For instance, it is possible to insert a new Response Point from a point selected with the mouse on a Response
Chart. See Adding Design Points and Response Points (p. 60) for more information on creating custom Response
Points.

Workflow
To run a parametric analysis in Design Exploration, you will need to:

create your model (either in DesignModeler or other CAD software) and load it into one of the Analysis
systems available in Workbench

select the parameters you are interested in

add to your project the Design Exploration features that you want to use

set the parameter limits and update the analysis

view the results of the Design Exploration analysis

Adding Design Exploration Templates to the Schematic


You can use as many of the Design Exploration features as you like and rerun them with different parameters
and limits as many times as you need to refine your design. The following steps outline the process for
putting Design Exploration elements into your Project Schematic:
1.  Create parts and assemblies in either the DesignModeler solid modeler or any of the supported CAD
    systems. Features in the geometry that will be important to the analysis should be exposed as parameters. Such parameters can be passed to Design Exploration.

2.  Drag an analysis system into the Project Schematic, connecting it to the DesignModeler or CAD file.

3.  Click on the Parameter Set bar.

4.  In the Outline view, click on the various input parameters that you need to modify. Set the limits of
    the selected parameter in the Properties view.

5.  Drag the Design Exploration template that you would like to use into the Project Schematic below the
    Parameter Set bar.

Duplicating Existing Design Exploration Systems


You can duplicate any Design Exploration systems that you have created on your schematic. The manner
in which you duplicate them dictates the data that is shared between the duplicated systems.

For any Design Exploration system, right click on the B1 cell of the system and select Duplicate. A new
system of that type will be added to the schematic under the Parameter Set bar. No data is shared with
the original system.

For a Design Exploration system that has a DOE cell, click on the DOE cell and select Duplicate. A new
system of that type will be added to the schematic under the Parameter Set bar. No data is shared with
the original system.

For a Design Exploration system that has a Response Surface cell, click on the Response Surface cell
and select Duplicate. A new system of that type will be added to the schematic under the Parameter
Set bar. The DOE data will be shared from the original system.

For a GDO or SSA Design Exploration system, if you click on those cells and select Duplicate, a new
system of that type will be added to the schematic under the Parameter Set bar. The DOE and Response
Surface data will be shared from the original system.

Any cell in the duplicated Design Exploration system that contains data that is not shared from the original
system will be marked as Update Required.

Running Design Exploration Analyses


After you have placed all of the Design Exploration templates on your schematic you should set up the individual Design Exploration systems and then run the analysis for each cell.
1.  Right click on each cell in each of the Design Exploration systems to open the workspace for that cell.

2.  In the workspace, set up any analysis options that are needed (parameter limits, optimization objectives,
    optimization type, six sigma parameter distributions, etc.).

3.  To run the analysis for a cell from its workspace, click on the Update button in the toolbar for that cell
    type (Update Optimization, Update Design of Experiments, etc.). You can also click on the Update
    Project button to update the entire project.

4.  Make sure that you set up and solve each cell in a Design Exploration system to complete the analysis
    for that system.

5.  View the results of each analysis from the workspaces of the various cells in the Design Exploration
    systems (charts, tables, statistics, etc.).



When you are in the Project Schematic view, cells in the various systems contain icons to indicate their
state. If they are out of date and need an update, you can right click on the cell and select Update from
the menu.

Monitoring Design Exploration Analyses


After starting a Design Exploration analysis Update, there are two different views available to help you
monitor your analysis.

Progress View
There is a Progress view that you can open from the View menu or from the Show Progress button at the
lower right corner of the window. During execution, a progress execution status bar appears in the Progress
cell. The name of the task that is executing appears in the Status cell and information about the execution
appears in the Details cell. This view continuously reports the status of the execution, until it is complete.
You can stop an execution by clicking the Stop button to the right of the progress bar and you can restart
it at a later time by using any of the Update methods that you would use to start a new Update for the cell
or workspace.

Messages View
If solution errors exist and the Messages view is not open, the Show Messages button in the lower right
of the window will flash orange and indicate the number of messages. Click this button to examine solution
error information. You can also open the Messages view from the View menu in the toolbar.

Design Exploration Options


You can set the default behavior of Design Exploration features through the Tools>Options dialog box.
These options will be used as the default values for the analyses that you set up in your project workspace.
In the workspace, each system has Properties panels that allow you to change the default values for any
specific cell in an analysis. Changing the options in a cell's Properties view does not affect the global default
options described here in the Options dialog box. To access Design Exploration default options:
1.  From the main menu, choose Tools>Options. An Options dialog box appears. Expand the Design
    Exploration item in the tree.

2.  Click on Design Exploration or one of its sub-options.

3.  Change any of the option settings by clicking directly in the option field on the right. You may need
    to scroll down to see all of the fields. You will see a visual indication for the kind of interaction required
    in the field (examples are drop-down menus and direct text entries). Some sections of the screens
    may be grayed out unless a particular option above them is selected.

4.  Click OK when you are finished changing the default settings.

The following options appear in the Options dialog box:


Design Exploration
Design of Experiments
Response Surface
Sampling and Optimization
There are many general Workbench options in the Options dialog box. You can find more information about
them in ANSYS Workbench Options.

Design Exploration
The Design Points category includes:


Preserve Design Points After DX Run: If checked, after the solution of a Design Exploration cell completes, saves the design points created for the solution to the project Table of Design Points. If this
option is selected, the following option is available:


Retain Files for Preserved Design Points: If checked, in addition to saving the design points to
the project Table of Design Points, exports each of the Design Exploration design points as a separate project.

The Graph category includes:

Number of Gridlines: Changes the number of points used by the individual continuous input parameter
axes in the 2D/3D response surface charts. This enhances the viewing resolution of these charts. The
range is from 2 to 100. The default is 10.

Number of Colors: Controls the number of colors used by 2D and 3D Tradeoff charts. Choosing more
colors provides a better color quality display at the expense of slower chart rendering. Choosing fewer
colors provides faster chart rendering at the expense of lower color quality. The following choices are
available:

[8]

[16]

[32] (default)

[64]

Max (may be slow)

The Sensitivity category includes:

Significance Level: Controls the relative importance or significance of input variables. The allowable
range is from 0.0, meaning all input variables are "assumed" to be insignificant, to 1.0, meaning all input
variables are "assumed" to be significant. The default is 0.025.

Correlation Coefficient Calculation Type: Specifies the calculation method for determining sensitivity
correlation coefficients (a brief comparison of the two methods follows this list). The following choices are available:

Rank Order (Spearman) (default): Correlation coefficients are evaluated based on the rank of
samples.

Linear (Pearson): Correlation coefficients are evaluated based on the sample values.
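The difference between the two settings can be seen on a monotonic but nonlinear relationship, where the rank-order (Spearman) coefficient stays at 1 while the linear (Pearson) coefficient drops below it. The sketch below uses scipy as a stand-in for the computation; it is not the Design Exploration code path.

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    x = np.linspace(1.0, 10.0, 50)
    y = x**3                      # monotonic but strongly nonlinear in x

    print(f"Pearson  (Linear):     {pearsonr(x, y)[0]:.3f}")
    print(f"Spearman (Rank Order): {spearmanr(x, y)[0]:.3f}")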

The Parameter Options category includes:

Display Parameter Full Name: If checked (default), displays the full parameter name everywhere in the Design Exploration user interface. If unchecked, displays the short parameter name everywhere in the Design Exploration user interface, except for the Outline view, which always displays the full parameter name.

Parameter Naming Convention: Sets the naming style of input parameters within Design Exploration.
The following choices are available:

Taguchi Style: Names the parameters as Continuous Variables and Noise Variables.

Uncertainty Management Style (default): Names the parameters as Design Variables and Uncertainty
Variables.

Reliability Based Optimization Style: Names the parameters as Design Variables and Random
Variables.

Design of Experiments
The Design of Experiments Type category includes the following settings. Descriptions of each setting are
included in the Design Points (p. 35) section under "Using Design Exploration" (p. 17).


Central Composite Design (default): Note that changing the CCD type in the Options dialog box will
generate new design points provided the study has not yet been solved.

Optimal Space Filling Design

The Central Composite Design Options category includes:

Design Type: The following choices are available:

Auto Defined (default)

G-Optimality

VIF-Optimality

Rotatable

Face Centered

Enhanced Template: Select this option to enable the enhanced template for the Rotatable and Face
Centered design types. Selecting it with other design types will have no effect.

The Optimal Space Filling Options category includes:

Design Type: The following choices are available:

Max-Min Distance (default)

Centered L2

Maximum Entropy

Max Number of Cycles: Indicates the maximum number of iterations that the base DOE undergoes so that the final sample locations conform to the chosen DOE type.

Sample Type: Indicates the specific method used to determine the number of samples. The following choices are available:

CCD Samples (default): Number of samples is the same as that of a corresponding CCD design.

Linear Model Samples: Number of samples is the same as that of a design of linear resolution.

Pure Quadratic Model Samples: Number of samples is the same as that of a design of pure quadratic (constant and quadratic terms) resolution.

Full Quadratic Model Samples: Number of samples is the same as that of a design of full quadratic
(all constant, quadratic and linear terms) resolution.

User Defined Samples: User determines the number of DOE samples to be generated.

Number of Samples: For a Sample Type of User Defined Samples, specifies the default number of
samples.

Response Surface
The Type category includes:

Standard Response Surface - Full 2nd Order Polynomials (default):

Kriging: Accurate interpolation method. Kriging Options include:

Kernel Variation Type, which can be set to:


Variable Kernel Variation (default): Pure Kriging mode.
Constant Kernel Variation: Radial Basis Function mode.


Non-Parametric Regression: Provides improved response quality. Initialized with one of the available
DOE types.

Neural Network: Non-linear statistical approximation method inspired by biological neural processing. Neural Network Options include:

Number of Cells: The number of cells controls the quality of the meta model. The higher it is, the better the network can capture parameter interactions. The recommended range is from 1 to 10. The default is 3.

Once a meta model is solved, it is possible to switch to another meta model type or change the options for
the current meta model in the Properties view for that Response Surface workspace in the analysis system.
After changing the options in the Properties view, you must Update the Response Surface to obtain the new
fitting.

Sampling and Optimization


The Random Number Generation category includes:

Repeatability: When checked, seeds the random number generator with the same value each time you generate uncertainty analysis samples. With Repeatability unchecked, the random number generator is seeded with the system time every time you generate samples. This applies to all methods in Design Exploration where random numbers are needed, such as Six Sigma Analysis or Goal Driven Optimization. The default is checked.

The Weighted Latin Hypercube category includes:

Sampling Magnification: Indicates the factor by which the number of regular Latin Hypercube samples is reduced while still achieving a given probability of failure (Pf). For example, the lowest probability of failure for 1000 Latin Hypercube samples is approximately 1/1000; a magnification of 5 means that 200 weighted/biased Latin Hypercube samples are used to approach that lowest probability of 1/1000. It is recommended not to use a magnification greater than 5, since a greater magnification may result in significant Pf error due to highly biased samples. The default is 5.
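
A rough sketch of the magnification arithmetic described above; the variable names are illustrative, not DesignXplorer settings:

    # Weighted Latin Hypercube magnification: fewer (biased) samples target the same
    # lowest probability of failure as a larger regular Latin Hypercube sample set.
    regular_samples = 1000
    magnification = 5

    lowest_pf = 1.0 / regular_samples                     # ~1/1000 for plain LHS
    weighted_samples = regular_samples // magnification   # 200 weighted/biased samples
    print(f"{weighted_samples} weighted samples aim at Pf ~ {lowest_pf:.0e}")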

The Optimization category includes:

Constraint Handling (GDS)


This option can be used for any optimization application and is best thought of as a "constraint satisfaction" filter on samples generated from the Screening, MOGA, or NLPQL runs. This is especially useful
for Screening samples to detect the edges of solution feasibility for highly constrained nonlinear optimization problems. The following choices are available:

As Goals (default): Implies that the upper, lower and equality constrained objectives of the candidate
designs shown in the Table of Optimization are treated as goals; thus any violation of the objectives
is still considered feasible.

As Hard Constraints: When chosen, the upper, lower and equality constrained objectives are treated
as hard constraints; that is, if any of them are violated then the candidate is no longer displayed.
So, in some cases no candidate designs may be displayed depending on the extent of constraint
violation.


Using Design Exploration


The topics in this section cover the basics of using the Design Exploration templates.
What is Design Exploration?
Design of Experiments
Parameters Correlation
Response Surface
Goal Driven Optimization
Six Sigma Analysis
Working with Parameters
Design Points
The Design of Experiments Solution
Using Response Surfaces
Global Sensitivities
Adding Design Points and Response Points
Using Parameters Correlation
Using Goal Driven Optimization
Using Six Sigma Analysis

What is Design Exploration?


Design Exploration is a powerful tool for designing and understanding the analysis response of parts and
assemblies. It uses a deterministic method based on Design of Experiments (DOE) and various optimization
methods, with parameters as its fundamental components. These parameters can come from any supported
analysis system, DesignModeler, and various CAD systems. Responses can be studied, quantified, and graphed.
Using a Goal Driven Optimization method, the deterministic approach can produce multiple candidate design points. You can also explore the calculated Response Surface and generate design points directly from the surface.
After setting up your analysis, you can pick one of the system templates from the Design Exploration toolbox
to:

parameterize your solution and view an interpolated response surface for the parameter ranges

view the parameters associated with the minimum and maximum values of your outputs

create a correlation matrix that shows you the sensitivity of outputs to changes in your input parameters

set output objectives and see what input parameters will meet those objectives

perform a Six Sigma analysis on your model

Design of Experiments
Design of Experiments is a technique used to determine the location of sampling points and is included as
part of the Response Surface, Goal Driven Optimization, and Six Sigma systems. There are several versions
of design of experiments available in engineering literature. These techniques all have one common characteristic: they try to locate the sampling points such that the space of random input parameters is explored
in the most efficient way, or obtain the required information with a minimum of sampling points. Sample points in efficient locations will not only reduce the required number of sampling points, but also increase
the accuracy of the response surface that is derived from the results of the sampling points. By default the
deterministic method uses a central composite design, which combines one center point, points along the
axis of the input parameters, and the points determined by a fractional factorial design.
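
As a rough illustration of how the number of central composite design points grows with the number of inputs, the sketch below uses the textbook count of one center point, two axis points per input, and a 2**(k - f) factorial part; the fraction f actually used by the software may differ:

    # Textbook CCD point count: 1 center + 2*k axis points + 2**(k - f) factorial points.
    def ccd_point_count(k: int, fraction: int = 0) -> int:
        """Number of CCD samples for k inputs with a 2**(k - fraction) factorial part."""
        return 1 + 2 * k + 2 ** (k - fraction)

    for k in (2, 3, 5, 10):
        print(k, "inputs ->", ccd_point_count(k), "design points (full factorial part)")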

Parameters Correlation
Probabilistic sensitivities are based on a statistical correlation analysis between the individual probabilistic
design variables. You can review the Parameters Correlation data that has been used to derive sensitivities
and decide if individual sensitivity values are significant or not. This information is collected in the correlation
matrix of the random output parameters versus the random input variables. You can also review the correlations that have been sampled between random input variables. The correlations between random output
parameters are important if you want to use the probabilistic results of your probabilistic analysis as input
for another probabilistic analysis.

Goal Driven Optimization


You can use Goal Driven Optimization (see "Understanding Goal Driven Optimization" (p. 73)) to state a series of design goals, which will be used to generate optimized designs.
The optimization approach used in Design Exploration departs in many ways from traditional optimization
techniques. Many goals are allowed and can be assigned to each output parameter. Goals may be weighted
in terms of importance.

Response Surface
There is one Response Surface or Curve for every output parameter. Output parameters are represented in
terms of the input parameters, which are treated as independent variables.
For the deterministic method, response surfaces for all output parameters are generated in two steps:

Solving the output parameters for all design points as defined by a design of experiments

Fitting the output parameters as a function of the input parameters using regression analysis techniques
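
A minimal sketch of the second (fitting) step, assuming a full second-order polynomial in two inputs fitted by ordinary least squares to already-solved design points; the data are made up and this is not the product's own regression code:

    import numpy as np

    def quadratic_basis(x1, x2):
        # columns: 1, x1, x2, x1^2, x2^2, x1*x2 (full second-order polynomial terms)
        return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

    # hypothetical DOE results: two inputs and one output per design point
    x1 = np.array([0.0, 0.0, 1.0, 1.0, 0.5, 0.5, 0.0, 1.0, 0.5])
    x2 = np.array([0.0, 1.0, 0.0, 1.0, 0.5, 0.0, 0.5, 0.5, 1.0])
    y = 3.0 + 2.0 * x1 - x2 + 0.5 * x1**2 + 0.1 * x1 * x2

    coeffs, *_ = np.linalg.lstsq(quadratic_basis(x1, x2), y, rcond=None)
    print("fitted coefficients:", np.round(coeffs, 3))

    # evaluating the fitted response surface at a new input point
    prediction = (quadratic_basis(np.array([0.25]), np.array([0.75])) @ coeffs).item()
    print("prediction at (0.25, 0.75):", prediction)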

Six Sigma Analysis


Six Sigma Analysis (SSA) is an analysis technique for assessing the effect of uncertain input parameters and
assumptions on your model.
A Six Sigma Analysis allows you to determine the extent to which uncertainties in the model affect the results
of an analysis. An "uncertainty" (or random quantity) is a parameter whose value is impossible to determine
at a given point in time (if it is time-dependent) or at a given location (if it is location-dependent). An example
is ambient temperature: you cannot know precisely what the temperature will be one week from now in a
given city.

Design of Experiments
A Design of Experiments cell is part of the Response Surface, Goal Driven Optimization, and Six Sigma systems
(although the DOE for a six sigma system treats the inputs and outputs differently and cannot share data
with the other types of DOEs). The DOE workspace allows you to preview (generates the points but does
not solve them) or generate and solve a DOE matrix. Within the DOE workspace, you can set the input
parameter limits, set the properties of the DOE solve, view the design points table, and view several parameter charts. The following views are available in the DOE workspace to allow you to customize your DOE
and view the updated results:
Outline: Allows you to:

Select the DOE cell and change its properties.

Select the input parameters and change their limits.

Select the output parameters and view their minimum and maximum values.

Select one of the available charts for display. You can insert directly into the Outline as many new charts
from the chart toolbox as you want. When a chart is selected, you can change the data properties of
the chart (chart type and parameters displayed).

Properties: (See Design Criteria Considered in DOE (p. 36) for complete descriptions of DOE options.)

Preserve Design Points After DX Run: Check this box if you want to retain design points at the project level from each update of this DOE that you perform.

DOE Type

Central Composite Design

Optimal Space-Filling Design

Custom

Settings for selected DOE Type (not for Custom)

Table: Displays a data grid of the design points that is populated automatically as the points are solved. Displays input and output parameters for each design point. You can add points manually if the DOE Type is set to Custom.
Chart: Displays the available charts described below.

Related Topics:

The Design of Experiments Solution (p. 36)

Design Points (p. 35)

What is Design Exploration? (p. 17)

Design Exploration Options (p. 11)

Parameters Parallel Chart


Generates a graphical display of the DOE matrix using parallel Y axes to represent all of the inputs and
outputs. Select the chart from the Outline to display it in the Chart view. Use the Properties view as follows:
Properties:

Click on a line on the chart to display the values of the input and output parameters for that line in the
Parameters section. Use the Enabled checkbox to enable or disable the display of parameter axes on
the chart.

Various generic chart properties can be changed for this chart.

You can also look at this chart data a different way. Right click on the chart background and select Edit
Properties, then change Chart Type to Spider Chart, which will show all input and output parameters arranged in a set of radial axes spaced equally. Each DOE point is represented by a corresponding envelope
defined in these radial axes.

Design Points vs Parameter Chart


Generates a graphical display for plotting design points vs. any input or output parameter. Select the chart
cell in the Outline to display it in the Chart view. Use the Properties view as follows:
Properties:

X-Axis (top, bottom), Y-Axis (right, left): Design points (X top and bottom axes) can be plotted against any of the input and output parameters (any of the axes).

Various generic chart properties can be changed for this chart.

Parameters Correlation
A linear association between parameters is evaluated using Spearman's or Pearson's product-moment coefficient. A correlation/sensitivity matrix is generated to demonstrate the correlation between input parameters, and the sensitivity of output parameters to input parameters.
A nonlinear (quadratic) association between parameters is evaluated using the coefficient of determination of a quadratic regression between parameters. A determination matrix is generated between the parameters to convey information about any quadratic correlation that is not detectable with the linear Spearman's or Pearson's correlation coefficient.
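
A minimal sketch of one "determination" entry for a single parameter pair: fit a quadratic y = a*x^2 + b*x + c and report its coefficient of determination R^2. The data and names are illustrative, and this is not the exact matrix computation performed by the software:

    import numpy as np

    rng = np.random.default_rng(2)
    x = rng.uniform(-3.0, 3.0, 200)
    y = x**2 + rng.normal(0.0, 0.5, 200)       # strong quadratic, weak linear association

    coeffs = np.polyfit(x, y, deg=2)           # quadratic regression
    residuals = y - np.polyval(coeffs, x)
    r2_quadratic = 1.0 - residuals.var() / y.var()
    r_linear = np.corrcoef(x, y)[0, 1]         # linear (Pearson) coefficient for comparison

    print(f"quadratic R^2 = {r2_quadratic:.3f}, linear r = {r_linear:.3f}")
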
The following views are available in the Parameters Correlation workspace to allow you to customize your
search and view the results:
Outline: Allows you to:

Select the Parameters Correlation cell and change its properties and view the number of samples generated for this correlation.

Select the input parameters and change their limits.

Select the output parameters and view their minimum and maximum values.

Select one of the available charts for display. When a chart is selected, you can change the data properties of the chart (chart type and parameters displayed).

Properties:

Preserve Design Points After DX Run: Check this box if you want to retain design points at the project level from this parameters correlation.

Note
There is no way to view the design points created for a parameters correlation other than
preserving them at the project level.

Reuse the samples already generated: Check this box if you want to reuse the samples generated in a previous correlation.

Choose Correlation Type algorithm:

Spearman

Pearson

Number of Samples: The maximum number of samples to generate for this correlation.

Choose Auto Stop Type:

Enable Auto Stop

Execute All Simulations

Mean Value Accuracy (if Auto Stop is enabled): Set the desired accuracy for the mean value of the sample set.

Standard Deviation Accuracy (if Auto Stop is enabled): Set the desired accuracy for the standard deviation of the sample set.

Convergence Check Frequency (if Auto Stop is enabled): The number of simulations to execute before checking for convergence.

Size of Generated Sample Set: The number of samples generated for the correlation solution.

Table: Displays both a correlation matrix and a determination matrix for the input and output parameters.
Chart: Displays the available charts described below.

Related Topics:

Using Parameters Correlation (p. 61)

Design Points (p. 35)

Correlation Coefficient Theory

Correlation Scatter Chart


Shows a sample scatter plot of a parameter pair. Two trend lines, linear and quadratic curves, are added to
the sample points of the parameter pair. The chart conveys a graphical presentation of the degree of correlation between the parameter pair in linear and quadratic trends. Select the Correlation Scatter chart cell
in the Outline to display the Correlation Scatter chart in the Chart view. Use the Properties view as follows:
Properties:

Choose the parameters to display on the X Axis and Y Axis.

Enable or disable the display of the Quadratic and Linear trend lines.

View the Coefficient of Determination for the Quadratic and Linear trend lines.

Various generic chart properties can be changed for this chart.

Related Topics:

Viewing the Quadratic Correlation Information (p. 63)

Correlation Matrix
Provides information about the linear correlation between a parameter pair, if any. The degree of correlation
is indicated by color in the matrix. Placing your cursor over a particular square will show you the correlation
value for the two parameters associated with that square. Select the Correlation Matrix chart cell in the
Outline to display the chart in the Chart view. Use the Properties view as follows:



Properties:

Enable or disable the parameters that are displayed on the chart.

Various generic chart properties can be changed for this chart.

Related Topics:

Viewing Significance and Correlation Values (p. 63)

Determination Matrix
Provides information about the nonlinear (quadratic) correlation between a parameter pair, if any. The degree
of correlation is indicated by color in the matrix. Placing your cursor over a particular square will show you
the correlation value for the two parameters associated with that square. Select the Determination Matrix
chart cell in the Outline to display the chart in the Chart view. Use the Properties view as follows:
Properties:

Enable or disable the parameters that are displayed on the chart.

Various generic chart properties can be changed for this chart.

Related Topics:

Viewing the Quadratic Correlation Information (p. 63)

Sensitivities Chart
Allows you to graphically view the global sensitivities of each output parameter with respect to the input
parameters. Select the Sensitivities chart cell under Charts in the Outline to display the chart in the Chart
view. Use the Properties view as follows:
Properties:

Chart Mode set to Bar or Pie.

Enable or disable the parameters that are displayed on the chart.

Various generic chart properties can be changed for this chart.

Related Topics:

Global Sensitivities (p. 59)

Viewing Significance and Correlation Values (p. 63)

Response Surface
Builds a response surface from the DOE Design Points input and output values based on the chosen Response
Surface Type. Within the Response Surface workspace, you can view the input parameter limits and initial
values, set the properties of the response surface algorithm, view the response points table, and view several
types of response charts. The following views are available in the Response Surface workspace to allow you
to customize your Response Surface and view the results:
Outline: Allows you to:

Select the Response Surface cell and change its properties, including Response Surface Type.

Select input parameters to view their properties. You can also enable and disable the use of input
parameters by selecting or deselecting the Enable box next to the parameter.

Select the output parameters and view their minimum and maximum values, set their transformation
type, and view their goodness of fit.

Enable or Disable the Min-Max Search and select the Min-Max Search cell to view the table of results.

Select a Response Point and view its input and output parameter properties.

You can change the input values in the Properties view and see the corresponding output values.

From the right click menu, you can reset the input parameter values of the selected Response Point
to the initial values that were used to solve for the Response Surface.

In the Properties view, you can enter Notes about the selected Response Point.

Select added Response Points and view their properties. Drag charts from the toolbox to the Response
Points to view data for the added Response Points.

Select one of the available charts for display. When a chart is selected, you can change the data properties of the chart (chart type and parameters displayed).

Properties:

Preserve Design Points After DX Run: Check this box if you want to retain at the project level the design points that are created when refinements are run for this Response Surface.

Note
Checking this box will not preserve any design points unless you run either a manual refinement or one of the Kriging refinements, since the Response Surface uses the design points
generated by the DOE. If the DOE of the Response Surface does not preserve design points,
when you perform a refinement, only the refinement design points will be preserved at the
project level. If the DOE is set to preserve design points, and the Response Surface is set to
preserve design points, when you perform a refinement the project will contain the DOE
design points and the refinement design points.

Choice of Response Surface Type algorithm:

Standard Response Surface - Full 2nd Order Polynomials

Kriging

Non-Parametric Regression

Neural Network

Refinement Type: Manual is available for all Response Surface Types, allowing you to enter Refinement
Design Points manually; Auto and Interactive are also available options for the Kriging Response Surface
Type.

Settings for selected algorithm, as applicable

Table:

Displays a grid of the Refinement Design Points created for this Response Surface. Both manual Refinement Design Points and Refinement Design Points created by one of the Kriging refinement algorithms
can be found in this table. Add a manual Refinement Design Point by entering a value for one of the
inputs on the New Refinement Design Point row of the table.


Displays a data grid of the Response Points for that response surface. Displays input and output parameters for each response point. You can change the points in the table and add new ones in several
different ways:

You can change the input values of a Response Point and see the corresponding output value.

You can reset the input parameter values of a Response Point to the initial values that were used
to solve for the Response Surface by right-clicking on the Response Point in the table and selecting
Reset Response Point.

You can add Response Points to the table by right-clicking on the Response Surface or a row in
several different tables, including the Min-Max Search and optimization tables, and selecting Insert
as Response Points.

You can create new Response Points by directly entering values into the input parameter cells of
the row designated with an asterisk (*).

Chart: Displays the available charts described below for each response point.

Related Topics:

Design Points (p. 35)

Using Response Surfaces (p. 42)

Goodness of Fit (p. 53)

Standard Response Surface - Full 2nd-Order Polynomials (p. 43)

Kriging (p. 45)

Non-Parametric Regression (p. 46)

Suggested Best Practices (p. 49)

Neural Network (p. 49)

Perform a Manual Refinement (p. 51)

Refine the Model Using the Kriging Meta Model (p. 51)

Min-Max Search
The Min-Max Search examines the entire output parameter space from a Response Surface to approximate
the minimum and maximum values of each output parameter. Select the Min-Max Search cell in the Outline
to display the sample points that contain the calculated minimum and maximum values for each output
parameter (they will be shown in the Response Surface Table). The maximum and minimum values in the
output parameter properties view will also be updated based on the results of the search.
The following views are available in the Min-Max Search workspace to allow you to customize your search
and view the results:
Outline: Check the Enable box to perform a Min-Max Search each time the Response Surface is updated.
Uncheck the box to disable the Min-Max Search feature. You may want to disable this feature if you have a
large number of input parameters and the search would be time consuming.
Properties:

Enter the Number of Initial Samples to generate for the optimization.

Enter the Number of Start Points to use for the Min-Max search algorithm. The more start points
entered, the longer the search time.



Table: Displays the sample points (input and output parameter values) associated with the calculated maximum and minimum values for each output parameter. These sample points can be saved as Design Points
or Response Points from the right-click menu.

Related Topics:

Min-Max Search (p. 53)

Response Chart
Allows you to graphically view the impact that changing each input parameter has on the displayed output
parameter. Select the Response chart cell in the Outline to display the response chart in the Chart view.
You can add response points to the Table of Response Surface by right-clicking on the response chart and
selecting Insert as Response Points. You can add design points to the Project by right-clicking on the response chart and selecting Insert as Design Points. Use the Properties view as follows:
Properties:

Chart Mode set to 2D or 3D.

Number of Points on X (and Y, in 3D) set the number of points that you want to appear on the response curve.

Choose the input parameter(s) to display on the X Axis and Y Axis (3D).

Choose the output parameter to display on the Z Axis (3D) or Y Axis (2D).

Use the sliders to change the values of the input parameters that are not displayed on the curve to see
how they affect the values for the parameters that are displayed on the curve. You may also enter
specific values in the boxes above the sliders.

View the interpolated output parameter values for the selected set of input parameter values.

Various generic chart properties can be changed for this chart.

You can view Goodness of Fit information for the output parameters on the generated response curve. Select
one of the output parameter cells in the Outline to display its properties. For each output, you can also
change the transformation function to improve the quality of the approximation.

Related Topics:

Response Surface Charts (p. 54)

Adding Design Points and Response Points (p. 60)

Local Sensitivity Chart


Allows you to graphically view the impact that changing each input parameter has on the output parameters.
Select the Local Sensitivity chart cell under Response Point in the Outline to display the chart in the Chart
view. Use the Properties view as follows:
Properties:

Chart Mode set to Bar or Pie.

Use the sliders to change the values of the input parameters to see how the sensitivity is changed for
each output.

View the interpolated output parameter values for the selected set of input parameter values.

Various generic chart properties can be changed for this chart.

Related Topics:

Response Surface Charts (p. 54)

Using the Local Sensitivity Chart (p. 57)

Spider Chart
Allows you to visualize the impact that changing the input parameters has on all of the output parameters simultaneously. Select the Spider chart cell in the Outline to display the chart in the Chart view. Use the Properties view as follows:
Properties:

Use the sliders to change the values of the input parameters to see how they affect the output parameters.

View the interpolated output parameter values for the selected set of input parameter values.

Various generic chart properties can be changed for this chart.

You can also look at this chart data a different way. Right click on the chart background and select Edit
Properties, and change Chart Type to Parallel Coordinate. Then click on the Spider cell under Response
Point in the Outline view and use the input parameter sliders in Properties view to see how specific settings
of various input parameters change the output parameters, which are arranged in parallel 'Y' axes in the
chart.

Related Topics:

Response Surface Charts (p. 54)

Using the Spider Chart (p. 56)

Goal Driven Optimization


Goal Driven Optimization (GDO) is a constrained, multi-objective optimization technique in which the best
possible designs are obtained from a sample set given the objectives you set for parameters. The DOE and
Response Surface Cells are used in the same way as described previously in this section. The following views
are available in the Optimization cell workspace to allow you to customize your GDO and view the results:
Outline: Allows you to:

Select the Optimization cell and change its properties, and view the size of the generated sample set.

Select the inputs and outputs to view the input values used in the analysis and the calculated maximum
and minimum values of each of the outputs.

Select one of the available charts for display. When a chart is selected, you can change the data properties of the chart (x- and y-axis parameters, parameters displayed on bar chart, etc.).

Properties: (See Optimization Options (p. 63) for complete descriptions of GDO options.)


Optimization Method

MOGA

NLPQL


Screening

Settings for selected Optimization Method

Constraint Handling (GDS)

As Goal (default)

As Hard Constraints

Table: Before the update, provides fields to allow you to set your objective for each input and output, or
set a target value, and indicate the relative importance of that objective. (See Defining Design Objectives (p. 67)
for complete descriptions of GDO options.) After the update, displays up to three best candidate points for
the stated objectives. A number of gold stars or red crosses are displayed next to each objective-driven
parameter to indicate how well it meets the stated objective, from three red crosses (the worst) to three
gold stars (the best).
Chart: Displays the available charts described below.

Related Topics:

Using Goal Driven Optimization (p. 63)

"Understanding Goal Driven Optimization" (p. 73)

Principles (GDO) (p. 73)

Guidelines and Best Practices (GDO) (p. 74)

Postprocessing GDO Results (p. 76)

Goal Driven Optimization Theory (p. 79)

Tradeoff Chart
Allows you to view the Pareto fronts created from the samples generated in the GDO. Select the Tradeoff
chart cell under Charts in the Outline to display the chart in the Chart view. Use the Properties view as follows:
Properties:

Chart Mode set to 2D or 3D.

Number of Pareto Fronts to Show: Set the number of Pareto fronts that are displayed on the chart.

Show infeasible points Enable or disable the display of infeasible points. (Available when some of
the objectives are constraints.)

Click on a point on the chart to display a Parameters section that shows the values of the input and
output parameters for that point.

Set the parameters to be displayed on the X- and Y-axis.

Various generic chart properties can be changed for this chart.

Related Topics:

Using the Tradeoff Chart (p. 69)

Understanding Tradeoff Charts (p. 77)


Samples Chart
Allows you to visually explore a sample set given defined objectives. Select the Samples chart cell under
Charts in the Outline to display the chart in the Chart view. Use the Properties view as follows:
Properties:

Chart Mode set to Candidates or Pareto Fronts. For Pareto Fronts, the following options can be set:

Number of Pareto Fronts to Show: Either enter the value or use the slider to select the number of Pareto Fronts displayed.

Coloring Method can be set to per Samples or per Pareto Front.

Show infeasible points Enable or disable the display of infeasible points. (Available when some of
the objectives are constraints.)

Click on a line on the chart to display the values of the input and output parameters for that line in the
Parameters section. Use the Enabled checkbox to enable or disable the display of parameter axes on
the chart.

Various generic chart properties can be changed for this chart.

Related Topics:

Understanding Samples Charts (p. 78)

Sensitivities Chart (GDO)


Allows you to graphically view the global sensitivities of each output parameter with respect to the input
parameters. Select the Sensitivities chart cell under Charts in the Outline to display the chart in the Chart
view. Use the Properties view as follows:
Properties:

Chart Mode set to Bar or Pie.

Enable or disable the parameters that are displayed on the chart.

Various generic chart properties can be changed for this chart.

Related Topics:

Global Sensitivities (p. 59)

Using the Sensitivities Chart (GDO) (p. 69)

Six Sigma Analysis


The Six Sigma Analysis (SSA) system consists of a DOE cell, a Response Surface cell, and a Six Sigma Analysis
cell. The DOE cell allows you to set up the input parameters and generate the samples for the analysis. The
Response Surface that results will be the same as the Response Surface from a standard Response Surface
analysis. The SSA cell allows you to set up the analysis type and view the results of the analysis.
By default, the Six Sigma Analysis DOE includes all of the input parameters and sets them to be uncertainty
parameters. When you run a Six Sigma Analysis, you will need to set up the uncertainty parameters in the
SSA DOE. If you want any input parameters to be treated as deterministic parameters, uncheck the box next
to those parameters in the SSA DOE Outline view.

28

Release 12.1 - 2009 SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information
of ANSYS, Inc. and its subsidiaries and affiliates.

Design of Experiments (SSA)

Design of Experiments (SSA)


The Design of Experiments part of the Six Sigma templates is the same as the DOE in other templates, with
the exception of the input parameters being designated as uncertainty parameters. You will need to set up
the input parameter options before solving the DOE. The following views in the DOE workspace contain
components that are unique to the Six Sigma DOE:
Outline: Allows you to:

Select the input parameters and change their distribution properties.

View the Skewness and Kurtosis properties for each input parameter distribution.

View the calculated Mean and Standard Deviation for all Distribution Types except Normal and Truncated Normal, for which you can set those values.

View the Initial Value for each input parameter.

Properties: (The properties for the uncertainty input parameters are described here.)

Distribution Type: The type of distribution associated with the input parameter:

Uniform

Triangular

Normal

Truncated Normal

Lognormal

Exponential

Beta

Weibull

Maximum Likely Value (Triangular Only)

Log Mean (Lognormal only)

Log Standard Deviation (Lognormal only)

Exponential Decay Parameter (Exponential only)

Beta Shape R (Beta only)

Beta Shape T (Beta only)

Weibull Exponent (Weibull only)

Weibull Characteristic Value (Weibull only)

Non-Truncated Normal Mean (Truncated Normal only)

Non-Truncated Normal Standard Deviation (Truncated Normal only)

Mean (Normal only)

Standard Deviation (Normal only)

Lower Bound (Settable for all but Normal and Lognormal)

Upper Bound (Uniform, Triangular, Truncated Normal, and Beta only)



Table: When output parameters or charts are selected in the Outline, displays the normal DOE data grid of the design points, which is populated automatically as the points are solved. When an input parameter is selected in the Outline, displays a table that shows, for each of the samples in the set:

Quantile: The input parameter value for the given PDF and CDF values.

PDF: (Probability Density Function) The density of the input parameter along the X-axis.

CDF: (Cumulative Distribution Function) The integration of the PDF along the X-axis.
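
The sketch below shows how these three quantities relate for one assumed distribution (a normal distribution with illustrative parameters); it is only an illustration, not the table the software generates:

    from scipy.stats import norm

    dist = norm(loc=100.0, scale=15.0)      # hypothetical uncertainty input parameter
    for cdf_value in (0.05, 0.50, 0.95):
        quantile = dist.ppf(cdf_value)      # parameter value at the given CDF value
        pdf_value = dist.pdf(quantile)      # density at that value
        print(f"quantile = {quantile:7.2f}  PDF = {pdf_value:.4f}  CDF = {cdf_value:.2f}")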

Chart: When an input parameter is selected in the Outline view, displays the Probability Density Function
and the Cumulative Distribution Function for the Distribution Type chosen for the input parameter. There
are also Parameters Parallel and Design Points vs Parameter charts as you would have in a standard DOE.

Related Topics:

Design of Experiments (p. 18)

Six Sigma Analysis


Once you have assigned the distribution functions to the input parameters in the DOE, you can update the
project to perform the Six Sigma Analysis. Once the analysis is finished, edit the Six Sigma Analysis cell in
the project to see the results. The following views are available in the Six Sigma Analysis workspace to allow
you to customize your analysis and view the results:
Outline: Allows you to:

Select each input parameter and view its distribution properties, statistics, upper and lower bounds,
initial value, and distribution chart.

Select each output parameter and view its calculated maximum and minimum values, statistics and
distribution chart.

Set the table display format for each parameter to Quantile-Percentile or Percentile-Quantile.

Properties:
For the Six Sigma Analysis, the following properties can be set:

Sampling Type: The type of sampling used for the Six Sigma Analysis:

LHS

WLHS

Number of Samples

For the parameters, the following properties can be set:

Probability Table: The manner in which the analysis information is displayed in the Table view:

Quantile-Percentile

Percentile-Quantile

Table: Displays a table that shows for each of the samples in the set:

<Parameter Name>: a valid value in the parameter's range.

Probability: Probability that the parameter will be less than or equal to the parameter value shown.


Sigma Level: The approximate number of standard deviations away from the sample mean for the
given sample value.

If Probability Table is set to Quantile-Percentile in the Statistics section of the Properties view, you can edit a parameter value and see the corresponding Probability and Sigma Level values. If it is set to Percentile-Quantile, the columns are reversed and you can enter a Probability or Sigma Level value and see the corresponding changes in the other columns.
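
The sketch below is a simple illustration, assuming the usual normal-theory relationship between a probability (CDF value) and a sigma level; the sample statistics are made up and the exact mapping used by the analysis may differ:

    from scipy.stats import norm

    mean, std = 250.0, 12.0          # hypothetical sample statistics for an output parameter
    value = 274.0                    # a hypothetical parameter value from the table

    probability = norm.cdf(value, loc=mean, scale=std)   # P(parameter <= value)
    sigma_level = (value - mean) / std                   # distance from the mean in standard deviations
    print(f"P(<= {value}) = {probability:.4f}, sigma level = {sigma_level:+.2f}")

    # the reverse (Percentile-Quantile) direction: probability -> parameter value
    print("value at 99.865% probability:", norm.ppf(0.99865, loc=mean, scale=std))
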
Chart: When a parameter is selected in the Outline view, displays its Probability Density Function and the
Cumulative Distribution Function. A global Sensitivities chart is available in the outline.

Related Topics:

Using Six Sigma Analysis (p. 70)

Statistical Measures (p. 71)

Sensitivities Chart (SSA)


Allows you to graphically view the global sensitivities of each output parameter with respect to the input
parameters in the Six Sigma Analysis. Select the Sensitivities chart cell under Charts in the Outline to display
the chart in the Chart view. Use the Properties view as follows:
Properties:

Chart Mode set to Bar or Pie.

Enable or disable the parameters that are displayed on the chart.

Various generic chart properties can be changed for this chart.

Related Topics:

Global Sensitivities (p. 59)

Statistical Sensitivities in a Six Sigma Analysis (p. 107)

Working with Parameters


General Information
Parameters are exposed from the individual analysis applications. Edit the cells (DOE, Response Surface,
Parameters Correlation, etc.) of a Design Exploration template to see the available input and output parameters in the Outline view. You can set options for those parameters (such as upper and lower limits) by
clicking on the parameter in the Outline and then setting the options for that parameter in the Properties
view. The Properties view will also give you additional information about the parameters.

Note
If you modify your analysis database after a Design Exploration solution (causing the parameters
to change), your Design Exploration studies may not reflect your analysis anymore. If you modify
your analysis, the Update Required symbol will appear on the Design Exploration cells to indicate
that they are out of date.


Parameters as Design Variables for Optimization


When you create a Design Exploration design study, each input parameter's range will default to +/- 10%
of the parameter's initial value. These default values may not be valid in certain situations, so be sure to
verify that the ranges are valid with respect to the input geometry and Analysis System scenario.
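
A minimal sketch of the +/- 10% default range rule; how the software treats zero or negative initial values is not stated here, so the sign handling below is an assumption:

    def default_range(initial_value: float, fraction: float = 0.10):
        """Return (lower, upper) bounds at +/- 10% of a parameter's initial value."""
        delta = abs(initial_value) * fraction
        return initial_value - delta, initial_value + delta

    print(default_range(50.0))   # (45.0, 55.0)
    print(default_range(-8.0))   # (-8.8, -7.2)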

Effects of a Parameter Change on Design Exploration Systems


When a Parameter is added, deleted or modified in the project, either the existing Design Exploration systems take the change into account immediately or they are set to a Refresh Required state. Depending on the nature of the change, a Refresh or an Update operation is required to take the changes into account.
When non-parametric changes occur in the model (e.g. the direction of a non-parameterized force, the shape of the geometry, a non-parameterized dimension of the model, etc.), the Design Exploration systems are set to a Refresh Required state and an Update operation is required to rebuild the results. In some situations, non-parametric changes may not be caught and it is necessary to perform a Clear Generated Data operation followed by an Update operation in order to synchronize the Design Exploration systems with the new state of the project. For example, editing the input file of a Mechanical APDL system outside of Workbench is a non-parametric change that is not caught. Inserting, Deleting, Duplicating or Replacing systems in a project may also not be caught, in very rare situations.

Note
You can change the way that units are displayed in your Design Exploration systems from the Units menu. Changing the units display in this manner simply causes the existing data in each system to be displayed in the new units system, and does not require an update of the Design Exploration systems. If the user changes a parameter's unit from the Parameters cell on the schematic, all Design Exploration systems become out of date, and will be marked as Update Required.

Input Parameters
You can make changes to an input parameter in the first cell of the system (DOE, DOE (SSA), or Parameters
Correlation cell). In the Outline view of your cell workspace, select the input parameter that you want to
edit. Most changes will be made using the Properties view for that parameter. Changes made for parameters
will only affect the parameter information for the current system and any systems that share a project cell
(DOE, Response Surface) from that system.

An input parameter may be selected or unselected for use in that system by checking the box to the
right of it in the DOE Outline view.

In the Properties view, specify further attributes for the selected input parameter, such as a range (upper
and lower bounds) of values for a continuous input parameter or values for a discrete input parameter.

When you make any of the following changes to an input parameter in the first cell of the system (DOE or
Parameters Correlation cell):

enable or disable an input parameter

change the nature of an input parameter from continuous to usability or discrete, etc.

add or remove a level of a discrete or usability parameter

change the range definition of an input parameter



All generated data associated with the system that contains the modified parameter will be cleared. This means, for instance, that for a GDO system, the generated data in all cells of the GDO system will be cleared if you make one of these changes in the DOE cell. A dialog box is displayed to confirm the parameter change before clearing all generated data. The change is not made if you select No in the dialog box.

Caution
When using the DOE method, the number of generated design points in the DOE Matrix is directly related to the number of selected input parameters. The design and analysis workflow shows that specifying many input parameters will make heavy demands on computer time and resources, including system analysis, DesignModeler geometry generation, and/or CAD system generation. Also, large input parameter ranges may lead to inaccurate results.

Setting Up Design Variables


A range (upper and lower bounds) of values is required for all continuous input parameters, and values are
required for discrete and usability input parameters.

Setting Up Uncertainty Variables


For uncertainty variables (used in a Six Sigma analysis), you must specify the type of statistical distribution
function used to describe its randomness as well as the parameters of the distribution function. For the
distribution type, you can select one of the following:

Uniform

Triangular

Normal

Truncated Normal

Lognormal

Exponential

Beta

Weibull

For more information on the distribution types, see Distribution Functions (p. 100).
In the example below of a beam supporting a roof with a snow load, you could measure the snow height
on both ends of the beam 30 different times. Suppose the histograms from these measurements look like
the figures given below.


Figure: Histograms for the Snow Height H1 and H2


[Two histogram panels showing Relative Frequency versus Snow Height H1 and Snow Height H2.]

From these histograms, you can conclude that an exponential distribution is suitable to describe the scatter of the snow height data for H1 and H2. Suppose from the measured data we can evaluate that the average snow height of H1 is 100 mm and the average snow height of H2 is 200 mm. The exponential decay parameter can be derived directly as 1.0 divided by the mean value, which leads to λ1 = 1/100 = 0.01 for H1, and λ2 = 1/200 = 0.005 for H2.
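
This arithmetic can be checked with SciPy's exponential distribution, where the mean is supplied as the scale parameter (scale = 1/lambda); this is only a verification sketch, not part of the product:

    from scipy.stats import expon

    for name, mean_height in (("H1", 100.0), ("H2", 200.0)):
        lam = 1.0 / mean_height                 # exponential decay parameter
        dist = expon(scale=mean_height)         # SciPy convention: scale = 1/lambda
        print(f"{name}: lambda = {lam:.3f}, distribution mean = {dist.mean():.1f} mm")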

Output Parameters
Each output parameter corresponds to a response surface which is expressed as a function of the input
parameters. Some typical output parameters are: Equivalent Stress, Displacement, Maximum Shear Stress,
etc.



When viewing output parameters in the Outline of the various Design Exploration cell workspaces (DOE,
Response Surface, etc.), you will see maximum and minimum values for each output parameter in the
Properties view when you click on the parameter cell in the Outline view. Please note that the maximum
and minimum values that you see in the Properties view for a parameter depend on the state of the Design
Exploration system.
The minimum and maximum values displayed as Properties of the outputs are the best min/max values
available in the context of the current cell; the best between what the current cell eventually produced and
what the parent cell provided. A DOE cell produces Design Points and a min/max can be extracted from
those. A Response Surface produces Min-Max Search results if this option is enabled. If a refinement is run,
new points are generated and a best min/max can be extracted from those. An Optimization produces a
sample set and again, a better min/max than what is provided by the parent Response Surface can be found
in these samples. So please keep in mind the state of the Design Exploration system when viewing the
minimum and maximum values in the parameter properties.

Design Points
When you solve a Design Exploration system, Design Points are generated based on the number of input
parameters in the system. Before solving a Design of Experiments or Response Surface (if performing a refinement) cell for a Six Sigma or non-Six Sigma system, you can preview the generated Design Points by
clicking the corresponding Preview button in the toolbar. You are not able to preview the generated Design
Points for a Parameters Correlation solution.
You can solve for the output parameter values for the generated design points in several ways:

From the workspace, click the corresponding Update button in the toolbar to solve for all of the generated design points.

From the Project Schematic, right click the cell and select Update from the menu.

From the Project Schematic, click the Update Project button in the toolbar to update all of the systems
in the project.

The Design Points are sent to the analysis system for solution, and will be solved simultaneously if the analysis system is configured to perform simultaneous solutions; otherwise they will be solved sequentially.
For the deterministic method, the locations of the generated Design Points are determined according to a
design of experiments method that is by default a central composite design with a fractional factorial design.
The number of generated Design Points is calculated using the number of input parameters selected in the
DOE. Response Surfaces will be generated from the associated DOE solution.

Saving Generated Design Points to the Project


The generated design points are temporarily written to the Table of Design Points in the project's Parameter
Set Bar during the solution of a Design Exploration cell. These generated design points will be removed
from the Table of Design Points after the solution unless:

The user has selected Preserve Design Points After DX Run in the Design Exploration cell's Properties
view

The user has selected Preserve Design Points After DX Run in the Tools>Options>Design Exploration
dialog

You may want to preserve generated design points for the following reasons:


Preserving design points has the advantage of letting the user see the exact design points used by a
Design Exploration cell. In a Parameters Correlation analysis for instance, it is the only way to see the
actual design points used to calculate the correlation since there is no preview for that cell.

If you preserve the design points, a subsequent update of that Design Exploration system can reuse
the existing Design Points, which will reduce the CPU cost.

Note
When a Design Exploration design point fails to update, it is retained automatically in the project,
even if the preserve option is not checked.
In addition to preserving the Design Points in the project, you can also export the preserved Design Points
from the project. If you check the Retain Files for Preserved Design Points option in the Properties under
the Preserve Design Points After DX Run property, in addition to saving the design points to the project
Table of Design Points, each of the Design Exploration Design Points is exported as a separate project.

Note
If you run a Design Exploration analysis and export Design Points, and then you uncheck the
option to export them, any Design Points that were marked for export and are reused in ensuing
analyses will still be exported, although newly created Design Points will not be exported.

The Design of Experiments Solution


Once you have set up your input parameters, you can update the DOE, which submits the generated Design
Points to the analysis system for solution. Design Points are solved simultaneously if the analysis system is
set up to do so; sequentially, if not. After the solution is complete, you can update the Response Surface
cell, which generates response surfaces for each output parameter based on the data in the generated
Design Points.
You should set up your DOE Properties before generating your DOE design point matrix.
Design Criteria Considered in DOE
DOE Using a Central Composite Design
Upper and Lower Locations of DOE Points
DOE Matrix Generation

Design Criteria Considered in DOE


When you are editing a DOE cell, the Properties view allows you to set up the analysis options as described
below. You select the algorithm from the Design of Experiments Type drop down menu located in the
Properties view of the Design of Experiments cell. You can also set defaults for most of these settings
from the Design Exploration > Design of Experiments section of the Options dialog box, which is accessible
from the Tools menu.

Note
If you change the Design of Experiments Type after doing an initial analysis and preview the
Design of Experiments Table, any Design Points generated for the new algorithm that are the
same as Design Points solved for a previous algorithm will appear as up-to-date. Only the Design
Points that are different from any previously submitted Design Points need to be solved.



The DOE types available are:

Central Composite Design (default): Provides a screening set to determine the overall trends of the
Meta Model to better guide the choice of options in the Optimal Space-Filling Design.
Design Type: Specifies a Central Composite Design (CCD) type to help improve the response surface fit for DOE studies. For each CCD type, the alpha value is defined as the location of the sampling point that accounts for all quadratic main effects. The following choices are available:

Auto Defined (default): Design Exploration automatically selects the Design Type based on the number of input variables. Use of this option is recommended for most cases, as it automatically switches to G-Optimality if the number of input variables is 5 and to VIF-Optimality otherwise. However, the Rotatable design may be used if the default option does not provide good goodness of fit values from the response surface plots. Also, the Enhanced template may be used if the default Standard template does not fit the response surfaces well.

Face Centered: A three-level design with no rotatability. The alpha value equals 1.0. A Template
Type setting automatically appears, with Standard and Enhanced options. Choose Enhanced for
a possible better fit for the response surfaces.

Rotatable: A five-level design that includes rotatability. The alpha value is calculated based on the
number of input variables and a fraction of the factorial part. A design with rotatability has the same
variance of the fitted value regardless of the direction from the center point. A Template Type
setting automatically appears, with Standard and Enhanced options. Choose Enhanced for a possible
better fit for the response surfaces.

VIF-Optimality: A five-level design in which the alpha value is calculated by minimizing a measure
of non-orthogonality known as the Variance Inflation Factor (VIF). The more highly correlated the
input variable with one or more terms in a regression model, the higher the Variance Inflation Factor.

G-Optimality: Minimizes a measure of the expected error in a prediction and minimizes the largest
expected variance of prediction over the region of interest.

See DOE Using a Central Composite Design (p. 39) for more information.

Optimal Space-Filling Design: Creates optimal space-filling design of experiments (DOE) plans according to specified criteria. The objective is to fill the design space efficiently with as small a number of points as possible while keeping the discrepancy as low as possible. This is a more efficient DOE scheme when a more complex Meta Modeling technique such as Kriging, Non-Parametric Regression, or Neural Networks is desired. In these situations, traditional CCD-based DOEs are often not suitable because they have poor space-filling capabilities.

Note
If you have several parameters and one is discrete (having 4 values, for example) and you ask for an Optimal Space-Filling analysis with User-Defined samples and 20 points, you will actually get 20 x 4 = 80 points. This is by design; a different DOE must be created for each combination of the discrete parameter values, which may quickly result in a large number of design points being generated.
The following options are available with Optimal Space-Filling Design:

Design Type: The following choices are available:


Max-Min Distance (default): Computationally faster than Maximum Entropy.
Centered L2: Computationally faster than Maximum Entropy.



Maximum Entropy: This option often provides better results for highly correlated design spaces.

Maximum Number of Cycles: Determines the number of optimization loops the algorithm needs, which in turn determines the discrepancy of the DOE. The optimization is essentially combinatorial, so a large number of cycles will slow down the process; however, it will also make the discrepancy of the DOE smaller. For practical purposes, 10 cycles is usually sufficient for up to 20 variables. Must be greater than 0. The default is 10.

Samples Type: Determines the number of DOE points the algorithm should generate. This option is suggested if you have some advance knowledge about the nature of the Meta Model. The following choices are available:

CCD Samples (default): Generates the same number of samples as a CCD DOE would generate for the same number of inputs. You may want to use this to generate a space-filling design that has the same cost as a corresponding CCD design.

Linear Model Samples: Generates the number of samples needed for a linear Meta Model.

Pure Quadratic Model Samples: Generates the number of samples needed for a pure quadratic Meta Model (no cross terms).

Full Quadratic Samples: Generates the number of samples needed for a full quadratic model.

User-Defined Samples: Expects the user to enter the desired number of samples. When selected with any design type, a Number of Samples option is available below the Samples Type entry. You must enter a positive number. The default is 10.

Custom: Allows definition of a custom DOE Table. You can manually add new Design Points, entering
the input and (optionally) output parameter values directly into the table. If you previously solved the
DOE using one of the other algorithms, those Design Points will be retained and you can add new
Design Points to the table. You can also import and export design points into the custom DOE Table
from the Parameter Set.
The table can be in one of two modes: All Outputs Calculated, or All Outputs Editable. The default
mode for the table is All Outputs Calculated. To change table modes, right click on the table and select
the desired mode. The mode behavior is as follows:

All Outputs Calculated allows you to enter values for any or all of the inputs in a row, and the outputs
for that row will be calculated when the DOE is updated. In this mode, individual rows in the table
can be overridden so that the outputs are editable. To override a row, right click on it and select
Set Output Values as Editable.

All Outputs Editable allows you to enter values for any or all of the inputs and outputs in a row. In
this mode, individual rows in the table can be overridden so that the outputs are calculated. To
override a row, right click on it and select Set Output Values as Calculated. If a row is changed
from editable to calculated, it invalidates the DOE, requiring an Update. Rows with editable outputs
are not computed during an Update of the DOE.

You can add new rows by entering values in the * row of the table.


If the table is in All Outputs Calculated mode, you may enter values in the input parameter columns.
Once you have entered a value in one column in the * row, the row is added to the table and the
values for the remaining input parameters will be set to the initial values of the parameters. You
can then edit that row in the table and change any of the other input parameter values if needed.
Output parameter values will be calculated when the DOE is updated.

If the table is in All Outputs Editable mode, you can enter values in the input or output parameter
columns. If you enter a value in an input column in the * row, the row is added to the table and the
values for the remaining input parameters will be set to the initial values of the parameters. The
output columns will be blank and you must enter values in all columns before updating the DOE.
If you enter a value in an output column in the * row, the row is added to the table and the values
for all input parameters will be set to the initial values of the parameters. The remaining output
columns will be blank and you must enter values in all columns before updating the DOE.
Once the matrix is defined, click the Update button as usual to solve all of the unsolved design points. Note that no specific checks are performed on the validity of the customized matrix.

Note

You can generate a DOE matrix using one of the other design options provided, then
switch to Custom to modify the matrix. The opposite however is not possible; that is, if
you define a Custom matrix first, then change to one of the other design options provided,
the matrix is cleared.

The table can contain derived parameters. Derived parameters are always calculated by
the system, even if the table mode is All Output Values Editable.

Editing output values for a row changes the DOE cell's state to UpdateRequired. The DOE
will need to be updated, even though no calculations are done.

If several rows are selected in the table and all of their outputs are calculated, the right
click menu will only offer the Set Output Values as Editable option. If all selected rows
are editable, the right click menu will only offer the Set Output Values as Calculated option.
If the selected rows are both Calculated and Editable, both options will be available on
the right click menu.

If the table is in All Outputs Editable mode, once all of the outputs are entered, the DOE
will need to be Updated even though no calculations are done.

If the Design Points are solved and you change the table to All Output Values Editable
and then change it back to All Output Values Calculated without making any changes,
the outputs will be marked Out of Date and you must Update the DOE. The Design Points
will be recalculated.

You can copy/paste in the table (to/from a spreadsheet, text file, or Design Point table,
for example). The copy/paste adapts to the current state of the table, pasting only inputs
or inputs and outputs if outputs are editable.

The DOE charts do not reflect the points added manually using the Custom option until
the DOE is Updated.

It is expected that the Custom option will be used to enter DOE plans that were built
externally. If you use this feature to enter all of the points in the matrix manually, you
must make sure to enter enough points so that a good fitting can be created for the response surface. This is an advanced feature that should be used with caution; always
verify your results with a direct solve.

DOE Using a Central Composite Design


In a Central Composite Design (CCD), a Rotatable (spherical) design is preferred, since the prediction variance is the same for any two locations that are the same distance from the design center. However, there are other criteria to consider for an optimal design setup. Among these criteria, two are commonly considered in setting up an optimal design using the design matrix:
1. The degree of non-orthogonality of regression terms can inflate the variance of the model coefficients.

2. A sample point can have an abnormal influence on the fit depending on its position with respect to the other sample points in the input variable space.

An optimal CCD design should minimize both the degree of non-orthogonality of term coefficients and the opportunity for sample points to have an abnormal influence. In minimizing the degree of non-orthogonality, the Variance Inflation Factor (VIF) of the regression terms is used. For a VIF-Optimality design, the maximum VIF of the regression terms is minimized; its minimum possible value is 1.0. In minimizing the opportunity for influential sample points, the leverage value of each sample point is used. Leverages are the diagonal elements of the Hat matrix, which is a function of the design matrix. For a G-Optimality design, the maximum leverage value of the sample points is minimized.
For a VIF-Optimality design, the alpha value/level is selected such that the maximum VIF is minimal. Likewise, for a G-Optimality design, the alpha value/level is selected such that the maximum leverage is minimal. The rotatable design is found to be a poor design in terms of VIF- and G-Efficiencies.
For an optimal CCD, the alpha value/level is selected such that both the maximum VIF and the maximum leverage are as small as possible. For the Auto Defined design, the alpha value is selected from either the VIF- or G-Optimality design that meets the criteria. Since this is a multi-objective optimization problem, in many cases there is no unique alpha value for which both criteria reach their minimum. Instead, the alpha value is evaluated such that one criterion reaches its minimum while the other approaches its minimum.
In the current Auto Defined setup, a problem with five input variables uses the G-Optimality design; all other multi-variable problems use VIF-Optimality. In some cases, even though Auto Defined provides an optimal alpha meeting the criteria, an Auto Defined design might not give as good a response surface as anticipated due to the nature of the physical data used for fitting in the regression process. In that case, you should try other design types that might give a better response surface approximation.

Note
Default values for CCD Types can be set in the Design Exploration > Design of Experiments
section of the Options dialog box, which is accessible from the Tools menu.
It is good practice to always verify some selected points on the response surface with an actual simulation evaluation to determine their validity for use in further analyses such as Design for Six Sigma. In some cases, a good response surface does not mean a good representation of the underlying physics, since the response surface is generated according to the predetermined sampling points in the design space, which sometimes misses an unexpected change in some regions of the design space. In that case, you should try the Extended DOE. In an Extended DOE, a mini CCD is appended to a standard CCD design: a second alpha value is added and is set to half the alpha value of the Standard CCD. The mini CCD is set up in a way that the essence of the CCD design (rotatability and symmetry) is still maintained. The appended mini CCD serves two purposes:
1. To capture a drastic change within the design space, if any.

2. To provide a better response surface fit.

These two purposes can conflict in some cases, where the resulting response surface might not be as good as that of the Standard DOE due to the limitation of a quadratic response surface in capturing a drastic change within the design space.
The location of the generated design points for the deterministic method is based on a central composite design. If N is the number of input parameters, then a central composite design consists of:

One center point.

2*N axis points located at the -alpha and +alpha positions on each axis of the selected input parameters.

2^(N-f) factorial points located at the -1 and +1 positions along the diagonals of the input parameter space.

The fraction f of the factorial design and the resulting number of design points are given in the following table:

Table 1 Number of Generated Design Points as a Function of the Number of Input Parameters

Number of input parameters   Factorial number f   Number of design points
 1                            0                     5
 2                            0                     9
 3                            0                    15
 4                            0                    25
 5                            1                    27
 6                            1                    45
 7                            1                    79
 8                            2                    81
 9                            2                   147
10                            3                   149
11                            4                   151
12                            4                   281
13                            5                   283
14                            6                   285
15                            7                   287
16                            8                   289
17                            9                   291
18                            9                   549
19                           10                   551
20                           11                   553
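The counts in Table 1 follow directly from the three CCD components listed above (one center point, 2*N axis points, and 2^(N-f) factorial points). The short Python sketch below is an illustration only, not part of the product; it simply reproduces the table from that counting rule.

    # Illustrative sketch: number of CCD design points for N inputs and
    # factorial fraction f, using 1 center + 2*N axis + 2**(N - f) factorial points.
    def ccd_point_count(n_inputs, f):
        return 1 + 2 * n_inputs + 2 ** (n_inputs - f)

    # Factorial numbers f as listed in Table 1, for N = 1 .. 20.
    fractions = [0, 0, 0, 0, 1, 1, 1, 2, 2, 3, 4, 4, 5, 6, 7, 8, 9, 9, 10, 11]
    for n, f in enumerate(fractions, start=1):
        print(n, f, ccd_point_count(n, f))   # e.g. N=5 -> 27, N=10 -> 149, N=20 -> 553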

Upper and Lower Locations of DOE Points


The upper and lower levels of the DOE points depend on whether the input variable is a design variable
(optimization) or an uncertainty variable (Six Sigma Analysis). Generally, a response surface will be more
accurate when closer to the DOE points. Therefore, the points should be close to the areas of the input
space that are critical to the effects being examined.
For example, for Goal Driven Optimization, the DOE points should be located close to where the optimum
design is determined to be. For a Six Sigma Analysis, the DOE points should be close to the area where
failure is most likely to occur. In both cases, the location of the DOE points depends upon the outcome of
the analysis. Not having that knowledge at the start of the analysis, you can determine the location of the
points as follows:


For a design variable, the upper and lower levels of the DOE range coincide with the bounds specified
for the input parameter.
It often happens in optimization that the optimum point is at one end of the range specified for one
or more input parameters.

For an uncertainty variable, the upper and lower levels of the DOE range are the quantile values corresponding to a probability of 0.1% and 99.9%, respectively.
This is the standard procedure whether the input parameter follows a bounded (e.g., uniform) or unbounded (e.g., Normal) distribution, because the probability that the input variable value will exactly
coincide with the upper or lower bound (for a bounded distribution) is exactly zero. That is, failure can
never occur when the value of the input variable is equal to the upper or lower bound. Failure typically
occurs in the tails of a distribution, so the DOE points should be located there, but not at the very end
of the distribution.
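As an illustration of the second rule, the hedged sketch below (not part of the product; the Normal distribution parameters are hypothetical) computes the upper and lower DOE levels of an uncertainty variable from its 0.1% and 99.9% quantiles:

    # Illustrative sketch: DOE bounds for an uncertainty variable taken at the
    # 0.1% and 99.9% quantiles of its distribution (hypothetical Normal example).
    from scipy.stats import norm

    mean, std_dev = 100.0, 5.0                          # assumed distribution parameters
    lower = norm.ppf(0.001, loc=mean, scale=std_dev)    # 0.1% quantile
    upper = norm.ppf(0.999, loc=mean, scale=std_dev)    # 99.9% quantile
    print(lower, upper)                                 # roughly 84.55 and 115.45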

DOE Matrix Generation


The DOE matrix is generated when you Preview or Update your DOE. This matrix consists of generated design points that are submitted to the parent analysis systems for solution. Output parameter values are not available until after the Update. When the DOE matrix generation is prolonged (for example, for Optimal Space-Filling designs or certain VIF- or G-Optimal designs), you can monitor its progress by clicking the Show Progress button in the lower right corner of the window.

Note
The Design Points will be solved simultaneously if the analysis system is configured to perform
simultaneous solutions; otherwise they will be solved sequentially.
To clear the design points generated for the DOE matrix, return to the Project Schematic, right click on the
DOE cell, and select Clear Generated Data. You can clear data from any Design Exploration cell in the Project
Schematic in this way, and regenerate your solution for that cell with changes to the parameters if you so
desire.

Using Response Surfaces


There is one Response Surface or Curve for every output parameter. Output parameters are represented in
terms of the input parameters, which are treated as independent variables.
For the deterministic method, response surfaces for all output parameters are generated in two steps:

Solving the output parameters for all design points as defined by a design of experiments.

Fitting the output parameters as a function of the input parameters using regression analysis techniques.

As many charts as needed can be generated and inserted in the Charts section of the Outline view for the
Response Surface. You can also save individual charts as graphic files.

Meta Model Types


Several different modeling algorithms are available to create the response surface:

Standard Response Surface-Full 2nd-Order Polynomials (Default)

Kriging


Non-Parametric Regression

Neural Network

Standard Response Surface - Full 2nd-Order Polynomials


This is the default.
Regression analysis is a statistical methodology that utilizes the relationship between two or more quantitative variables so that one dependent variable can be estimated from the other or others.
A regression analysis assumes that there are a total of n sampling points and for each sampling point the
corresponding values of the output parameters are known. Then the regression analysis determines the relationship between the input parameters and the output parameter based on these sample points. This relationship also depends on the chosen regression model. Typically for the regression model, a second-order
polynomial is preferred. In general, this regression model is an approximation of the true input-to-output
relationship and only in special cases does it yield a true and exact relationship. Once this relationship is
determined, the resulting approximation of the output parameter as a function of the input variables is
called the response surface.

General Definitions
The error sum of squares SSE is:
SSE = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = (\{y\} - \{\hat{y}\})^T (\{y\} - \{\hat{y}\})    (1)

where:

y_i = value of the output parameter at the i-th sampling point
\hat{y}_i = value of the regression model at the i-th sampling point

The regression sum of squares SSR is:


SSR = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2    (2)

where:

\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i

The total sum of squares SST is:

SST = \sum_{i=1}^{n} (y_i - \bar{y})^2    (3)

For linear regression analysis the relationship between these sums of squares is:

SST = SSR + SSE    (4)
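As a minimal numerical illustration (not part of the product; the sample data are arbitrary), the sketch below evaluates SSE, SSR, and SST from Equations 1-3 for a least-squares linear fit and checks the identity of Equation 4:

    # Illustrative sketch: error, regression, and total sums of squares.
    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0])        # input samples (arbitrary)
    y = np.array([1.0, 2.1, 2.9, 4.2])        # output samples (arbitrary)

    # least-squares linear fit y_hat = c0 + c1*x
    c1, c0 = np.polyfit(x, y, 1)
    y_hat = c0 + c1 * x
    y_bar = y.mean()

    sse = np.sum((y - y_hat) ** 2)            # Equation 1
    ssr = np.sum((y_hat - y_bar) ** 2)        # Equation 2
    sst = np.sum((y - y_bar) ** 2)            # Equation 3
    print(np.isclose(sst, ssr + sse))         # Equation 4 holds for a linear least-squares fit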

Linear Regression Analysis


For a linear regression analysis, the regression model at any sampled location \{x\}_i, with i = 1, ..., n, in the m-dimensional space of the input variables can be written as:

y_i = \{t\}_i^T \{c\} + \epsilon    (5)

where:

\{t\}_i^T = row vector of regression terms of the response surface model at the i-th sampled location
\{c\} = \{c_1\; c_2\; \ldots\; c_p\}^T = vector of the regression parameters of the regression model
p = total number of regression parameters. For linear regression analysis, the number of regression parameters is identical to the number of regression terms.

The regression coefficients \{c\} can be calculated from:

\{c\} = ([d]^T [d])^{-1} [d]^T \{y\}    (6)
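The following hedged sketch illustrates Equation 6 only; it is not the Design Exploration implementation, and the design matrix [d] used here is a hypothetical second-order polynomial basis in a single input variable with arbitrary sample data:

    # Illustrative sketch: regression coefficients {c} = ([d]^T [d])^-1 [d]^T {y}
    import numpy as np

    x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])       # sampled input values (arbitrary)
    y = np.array([1.0, 1.6, 2.9, 5.1, 8.2])       # sampled output values (arbitrary)

    # design matrix [d]: one row of regression terms (1, x, x^2) per sample point
    d = np.column_stack([np.ones_like(x), x, x ** 2])

    # normal equations as in Equation 6 (np.linalg.lstsq is preferred numerically)
    c = np.linalg.inv(d.T @ d) @ d.T @ y
    print(c)                                      # regression parameters c_1, c_2, c_3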

Forward-Stepwise-Regression
In the forward-stepwise-regression, the individual regression terms are iteratively added to the regression
model if they are found to cause a significant improvement of the regression results. For DOE, a partial F
test is used to determine the significance of the individual regression terms.

Improving the regression model with transformation functions


Only in special cases can output parameters of a finite element analysis, such as displacements or stresses, be exactly described by a second-order polynomial as a function of the input parameters. Usually a second-order polynomial provides only an approximation. The quality of the approximation can be significantly improved by applying a transformation function to the input parameters as well as to the output parameters. For DOE, the Yeo-Johnson transformation is used. The parameter of the Yeo-Johnson transformation for the individual input parameters and output parameters is determined using a nonlinear optimization algorithm.

Transformation Families
You can select between Box-Cox, Yeo-Johnson, and None for the transformation used in standard response surface regression. None means that the standard response surface regression is computed without any transformations. The transformation is an option, set on a per-output basis, used to transform the output parameter samples. The default is the Yeo-Johnson transformation, since it is more numerically stable in its back-transformation. The Box-Cox transformation is less numerically stable in its back-transformation, but it gives a better fit in some cases. It is recommended to try the Box-Cox transformation if the goodness of fit of the response surface is not satisfactory. All applications use the response surface to generate samples; for example, Goal Driven Optimization (GDO) and Design for Six Sigma (DFSS) samples. To access this option, click on an output parameter in the Outline view of a response surface. In the Properties view under Output Settings, select the Transformation Type you want to apply and click Update to re-solve for a new response surface.
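For readers who want to experiment with these transformations outside of the product, the sketch below uses the SciPy implementations of the Yeo-Johnson and Box-Cox transforms on an arbitrary set of output samples. It is an illustration only and does not reproduce the Design Exploration implementation; SciPy determines the transformation parameter by maximum likelihood.

    # Illustrative sketch: transforming output samples with Yeo-Johnson and Box-Cox.
    import numpy as np
    from scipy import stats

    samples = np.array([0.8, 1.2, 2.5, 4.1, 9.7])            # arbitrary output samples

    yj_transformed, yj_lambda = stats.yeojohnson(samples)    # handles zero/negative values
    bc_transformed, bc_lambda = stats.boxcox(samples)        # requires strictly positive values

    print(yj_lambda, bc_lambda)    # fitted transformation parameters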

Kriging
The Kriging option provides a comprehensive goodness of fit report for the response surface so that you can determine its adequacy for use in Goal Driven Optimization (GDO) and Design for Six Sigma (DFSS) sampling. To view the goodness of fit information, click on one of the output parameters in the Outline view of your Response Surface. A Goodness-of-Fit Information section will be shown in the Properties view for the parameter. Kriging is an accurate interpolation method.
Kriging postulates a combination of a polynomial model plus departures of the form given by Equation 7 (p. 45):

y(x) = f(x) + Z(x)    (7)

where y(x) is the unknown function of interest, f(x) is a polynomial function of x, and Z(x) is the realization of a normally distributed Gaussian random process with mean zero, variance \sigma^2, and non-zero covariance. The f(x) term in Equation 7 (p. 45) is similar to the polynomial model in a response surface and provides a global model of the design space.
While f(x) globally approximates the design space, Z(x) creates localized deviations so that the Kriging model interpolates the N sample data points. The covariance matrix of Z(x) is given by Equation 8 (p. 45):

\mathrm{Cov}[Z(x_i), Z(x_j)] = \sigma^2 R([r(x_i, x_j)])    (8)

In Equation 8 (p. 45), R is the correlation matrix, and r(x_i, x_j) is the spatial correlation of the function between any two of the N sample points x_i and x_j. R is an N*N symmetric, positive definite matrix with ones along the diagonal. The correlation function r(x_i, x_j) is the Gaussian correlation function:

r(x_i, x_j) = \exp\left[ -\sum_{k=1}^{M} \theta_k \left| x_k^i - x_k^j \right|^2 \right]    (9)

The \theta_k in Equation 9 (p. 45) are the unknown parameters used to fit the model, M is the number of design variables, and x_k^i and x_k^j are the k-th components of the sample points x_i and x_j. In some cases, using a single correlation parameter gives sufficiently good results; the user can specify the use of a single correlation parameter, or one correlation parameter for each design variable (Tools>Options>Design Exploration>Response Surface>Kriging Options>Kernel Variation Type: Variable or Constant).

Z(x) can be written as:

Z(x) = \sum_{i=1}^{N} \lambda_i \, r(x_i, x)    (10)
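The sketch below is an illustration only, not the Design Exploration implementation: it builds the Gaussian correlation matrix R of Equation 9 for a small set of sample points, using an assumed vector of theta parameters (one per design variable).

    # Illustrative sketch: Gaussian correlation matrix R for Kriging (Equation 9).
    import numpy as np

    X = np.array([[0.0, 0.0],
                  [0.5, 1.0],
                  [1.0, 0.2]])            # N = 3 sample points, M = 2 design variables
    theta = np.array([1.5, 0.7])          # assumed correlation parameters (one per variable)

    N = X.shape[0]
    R = np.ones((N, N))
    for i in range(N):
        for j in range(N):
            R[i, j] = np.exp(-np.sum(theta * (X[i] - X[j]) ** 2))   # r(x_i, x_j)

    print(R)   # symmetric, positive definite, ones on the diagonal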

Non-Parametric Regression
Non-parametric regression provides improved response quality and is initialized with one of the available
DOE types.
The non-parametric regression (NPR) algorithm is implemented in ANSYS Workbench Design Exploration as a metamodeling technique intended for outputs that exhibit highly nonlinear behavior with respect to the inputs. The NPR feature is accessible via Tools>Options>Design Exploration>Response Surface>Type: Non-Parametric Regression.
NPR belongs to the general class of Support Vector Method (SVM) techniques. These are data classification methods that use hyperplanes to separate data groups. The regression method works similarly, the main difference being that the hyperplane is used to categorize a subset of the input sample vectors that is deemed sufficient to represent the output in question. This subset is called the support vector set. The objective is to find a minimal, sparse support vector set from the input samples provided. Support vector methods are preferred over other non-parametric regression methods because of two inherent advantages. First, SVM-type approaches map the input space to a feature space by a kernel map before doing the fitting; this ensures that the curse of dimensionality suffered by high-dimensional input spaces no longer applies. Second, the training objective function is quadratic in the feature space, so QP-based methods may be used to determine the support vectors without the risk of the solution being trapped in a local minimum.
Let the input sample (as generated from a DOE method) be X = {x_1, x_2, x_3, ..., x_M}, where each x_i is an N-dimensional vector and represents an input variable. The objective is to determine an equation of the form:

Y = \langle W, X \rangle + b    (11)

where W is a weighting vector. For the generic non-parametric case, Equation 11 (p. 46) is rewritten as:

Y = \sum_{i=1}^{N} (A_i - A_i^*) \, K(X_i, X) + b    (12)

where K(X_i, X) is the kernel map and the quantities A_i and A_i^* are Lagrange multipliers whose derivation is shown below.
In order to determine the Lagrange multipliers, we start with the requirement that the weight vector W be minimized such that all (or most) of the sample points lie within an error zone around the fitted surface. For a simple demonstration of this concept, see the figure below.

Figure: Fitting a regression line for a group of sample points with a tolerance of \epsilon, which is characterized by slack variables \xi^* and \xi

Thus, we write the primal optimization formulation as:

\text{Minimize } L = 0.5 \, \|W\|^2 + \frac{C}{N} \sum_{i=1}^{N} (\xi_i + \xi_i^*)    (13)

subject to:

y_i - (\langle W, X_i \rangle + b) \le \epsilon + \xi_i
(\langle W, X_i \rangle + b) - y_i \le \epsilon + \xi_i^*

where C is an arbitrary (> 0) constant. In order to characterize the tolerance properly, we define a loss function in the interval (-\epsilon, \epsilon), which de facto becomes the residual of the solution. In the present implementation, we use the \epsilon-insensitive loss function, which is given by:

L = 0 \quad \text{if } |f(x) - y| < \epsilon, \qquad L = |f(x) - y| - \epsilon \quad \text{otherwise}    (14)

The primal problem in Equation 13 (p. 47) can thus be rewritten using generalized loss functions as:

L = 0.5 \, \|W\|^2 + C \sum_{i=1}^{N} \left( l(\xi_i) + l(\xi_i^*) \right)    (15)

subject to:

\xi_i, \xi_i^* \ge 0
\langle W, X_i \rangle + b - y_i \le \epsilon + \xi_i
y_i - \langle W, X_i \rangle - b \le \epsilon + \xi_i^*

In order to solve this efficiently, a Lagrangian dual formulation is applied, which yields the following expression:

L = 0.5 \, \|W\|^2 + C \sum_{i=1}^{N} \left( l(\xi_i) + l(\xi_i^*) \right)
    + \sum_{i=1}^{N} A_i \left[ \langle W, X_i \rangle + b - y_i - (\epsilon + \xi_i) \right]
    + \sum_{i=1}^{N} A_i^* \left[ y_i - \langle W, X_i \rangle - b - (\epsilon + \xi_i^*) \right]
    + \sum_{i=1}^{N} \left( \eta_i \xi_i + \eta_i^* \xi_i^* \right)    (16)

After some simplification, the dual Lagrangian can be written as Equation 17 (p. 48):

\min_{A, A^*} L = 0.5 \sum_{i=1}^{N} \sum_{j=1}^{N} (A_i - A_i^*)(A_j - A_j^*) \langle X_i, X_j \rangle
    + \sum_{i=1}^{N} \left[ \epsilon (A_i + A_i^*) - y_i (A_i - A_i^*) \right]
    + C \sum_{i=1}^{N} \left( l(\xi_i) - \xi_i \frac{\partial l(\xi_i)}{\partial \xi_i} \right)
    + C \sum_{i=1}^{N} \left( l(\xi_i^*) - \xi_i^* \frac{\partial l(\xi_i^*)}{\partial \xi_i^*} \right)    (17)

subject to:

\sum_{i=1}^{N} (A_i - A_i^*) = 0
A_i^{(*)} \le C \, \frac{\partial l(\xi_i^{(*)})}{\partial \xi_i^{(*)}}
\xi_i^{(*)} = \inf \left\{ \xi \;:\; C \, \frac{\partial l(\xi)}{\partial \xi} \ge A_i^{(*)} \right\}

Equation 17 (p. 48) is a constrained quadratic optimization problem, and the design variables are the vector A. Once this is computed, the constant b in Equation 11 (p. 46) can be obtained by applying the Karush-Kuhn-Tucker (KKT) conditions. When the \epsilon-insensitive loss function is used instead of the generic loss functions l(\xi) in Equation 17 (p. 48), it can be rewritten in a much simpler form as:


\min_{A, A^*} L = 0.5 \sum_{i=1}^{N} \sum_{j=1}^{N} (A_i - A_i^*)(A_j - A_j^*) \, K(X_i, X_j)
    + \sum_{i=1}^{N} \left[ \epsilon (A_i + A_i^*) - y_i (A_i - A_i^*) \right]    (18)

subject to:

0 \le A_i^{(*)} \le C
\sum_{i=1}^{N} (A_i - A_i^*) = 0

This is solved by a QP optimizer to yield the Lagrange multipliers A_i^{(*)}, and the constant b is obtained by applying the KKT conditions.
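The quadratic program above is what a support vector regression solver computes internally. As a hedged illustration only, not the ANSYS NPR implementation, the sketch below fits an epsilon-insensitive support vector regression with a Gaussian (RBF) kernel using scikit-learn, whose SVR class solves an equivalent dual problem; the data and hyperparameter values are arbitrary.

    # Illustrative sketch: epsilon-insensitive SVR with a Gaussian (RBF) kernel.
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(30, 2))          # 30 DOE-like samples, 2 inputs
    y = X[:, 0] ** 2 + np.sin(3.0 * X[:, 1])          # arbitrary test response

    model = SVR(kernel="rbf", C=10.0, epsilon=0.05, gamma=2.0)   # arbitrary hyperparameters
    model.fit(X, y)

    print(model.support_.size)            # number of support vectors retained
    print(model.predict([[0.2, -0.4]]))   # prediction at a new point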

Suggested Best Practices


The choice of the optimal Non-Parametric Regression meta model comes down to the selection of the optimal hyperparameters, i.e., the tube diameter and the Gaussian kernel density "k" of the chosen kernel function. The sensitivity of the meta model to the hyperparameter "C" is not very pronounced. However, in the current version, the hyperparameters are fixed to constant values and are not optimized. The values are determined from a series of benchmark tests and strike a compromise between meta model accuracy and computational speed. For a large family of problems, the current settings of the hyperparameters will provide good results; however, for some problem types (such as those dominated by flat surfaces or lower-order polynomials), some oscillations may be noticed between the DOE points. The choice of Gaussian radial basis functions as kernel functions also contributes to these oscillations.
To circumvent this, you can use a larger number of DOE points or, depending on the fitness landscape of the problem, use one of the several optimal space-filling DOEs provided. In general, it is suggested that problems first be fitted with a quadratic response surface, and the NPR fitting adopted only when the goodness of fit metrics from the quadratic response surface model are unsatisfactory. This ensures that NPR is only used for problems where low-order polynomials do not dominate.

Neural Network
This mathematical technique is based on the natural neural network in the human brain.
In order to interpolate a function, we build a network with three levels (Inp, Hidden, Out) where the connections between them are weighted, as illustrated in the figure below:


Each arrow is associated with a weight (w) and each ring is called a cell (like a neuron).
If the inputs are x_i, the hidden level contains functions g_j(x_i) and the output solution is:

f_k(x_i) = K\left( \sum_j w_{jk} \, g_j(x_i) \right)

where K is a predefined function, such as the hyperbolic tangent or an exponential-based function, chosen in order to obtain something similar to the binary behavior of the electrical brain signal (like a step function). The function is continuous and differentiable.
The weights (w_jk) are obtained from an algorithm which minimizes (as in the least squares method) the distance between the interpolation and the known values (the design points). This is called the learning phase. The error is checked at each iteration of the algorithm against the design points that are not used for the learning, so the design points must be separated into learning points and error-checking points.
The error decreases and then increases when the interpolation order becomes too high; the minimization algorithm is stopped when the error is at its lowest.
This method uses a limited number of design points to build the approximation. It works better when the number of design points and the number of intermediate cells are high, and it can give interesting results with several parameters.
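As a minimal illustration of the three-level network described above (not the Design Exploration implementation; the weights are arbitrary rather than learned), the sketch below evaluates f_k(x) = K(sum_j w_jk g_j(x)) with hyperbolic tangent cells:

    # Illustrative sketch: evaluating a small input / hidden / output network.
    import numpy as np

    def network_output(x, w_in, w_out):
        # hidden cells g_j(x): tanh of a weighted sum of the inputs
        g = np.tanh(w_in @ x)
        # output cells f_k(x) = K(sum_j w_jk * g_j(x)), with K = tanh
        return np.tanh(w_out @ g)

    x = np.array([0.3, -0.8])                 # input parameter values (arbitrary)
    w_in = np.array([[0.5, -1.2],             # arbitrary weights: 3 hidden cells x 2 inputs
                     [1.1,  0.4],
                     [-0.7, 0.9]])
    w_out = np.array([[0.8, -0.5, 1.3]])      # arbitrary weights: 1 output x 3 hidden cells

    print(network_output(x, w_in, w_out))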

Meta Model Refinement


Once a meta model is solved, you can attempt to improve it by using one of the following refinement procedures. Review the goodness of fit information in the Properties view for each output parameter to evaluate
whether the refinement was successful.

Note
Any new Refinement Design Points that are generated by any of the refinement methods below
will be retained by the Response Surface and used for any further analysis of this Response Surface
(using a different Response Surface Type, etc.).
Change Meta Model Types
Perform a Manual Refinement
Refine the Model Using the Kriging Meta Model

Change Meta Model Types


Changing meta model types or changing the available options for the current meta model can improve the
goodness of fit. To change the meta model type, select the Response Surface cell in the Outline view.
Change the Response Surface Type in the Properties view. Click Update Response Surface in the toolbar
to run an analysis using the new meta model. Look at the goodness of fit information for the output parameters to see how the refinement improved the fitting.

Perform a Manual Refinement


By default, all response surfaces provide a manual refinement option. After solving the response surface, a
section entitled Refinement Design Points will appear in the Response Surface Table. You can directly enter
a value for any input parameter to create a new refinement design point. The other input and output values
will be calculated from the response surface, and that point will be added to your list of refinement design
points. To run a manual refinement:
1. First do an initial solve of the response surface using any of the response surface types.

2. Switch to the response surface type that you want to use for the refinement.

3. In the Refinement Design Points section of the table, enter a value in one of the input parameter fields in the New Refinement Design Point row of the table.

4. Update the response surface.

5. View the results to see how they were affected by the new point.

Refine the Model Using the Kriging Meta Model


You can use the Kriging Meta Model to perform several different types of refinement for the Response Surface.
Interactive and Auto Refinement are not supported for discrete parameters. In order to run a refinement,
you must first run an analysis of the response surface using the Kriging Meta Model with Manual selected
for Refinement Type (you may enter manual points before you do this initial analysis). After the initial
analysis, use the different refinement types as described in the following sections.

Interactive Refinement
The following Properties can be set for Kriging Interactive refinement.

Maximum Number of Points: Maximum number of design points that will be generated during the
refinement.

Maximum Relative Error: Can be set between 1% and 99%. For example, setting 5% as the maximum relative error means that a Response Surface with an error of less than 5% will be accepted.

Output Parameter: Select an output parameter. Interactive refinement can be associated with only one output parameter. If you need to refine the response surface for several output parameters, perform several interactive refinements.

The interactive refinement can be done in one step (Update Response Surface) or in several steps:
1. Click the Preview Response Surface button in the toolbar to see the new design points that are going to be inserted (only input parameter values).

2. If you want to delete or modify certain points before computing them, you can edit the table. To do so, change the refinement type from Interactive to Manual, which makes the table editable.

3. Click Update Response Surface in the toolbar to complete the refinement.

The design points generated for the refinement will appear in the Response Surface Table view under Design
Set of Response Surface. If the response surface is accurate enough, interactive refinement does not provide
new points to improve the response surface.

Auto Refinement
Auto refinement enables the user to refine the Kriging response surface automatically by choosing certain
refinement criteria and allowing an automated optimization-type procedure to add more DOE points to the
design domain where they are most needed. The number and the spread of the DOE points are determined
by the following Response Surface Properties.

Maximum Number of Points: Maximum number of Design Points that will be generated during the
refinement.

Error Minimization Criteria:

Maximum Output - the maximum mean square error of all output parameters is minimized.

All Outputs - the maximum product of the output and the mean square error is minimized.

Sum of Outputs - the maximum sum of the output and the mean square error is minimized.

Output Variable Combination:

Mean Square Error - only the output with the largest mean square error is considered.

Sum Of Outputs - mean square error of all outputs are considered, thus multiple DOE points are
generated in each iteration depending on the number of outputs.

Product Of Outputs - combined mean square error of all outputs is considered.

Crowding Distance Separation Percentage: Minimum distance between 2 new DOE points in the input
space, implemented as a constraint in the search for the DOE points.

When the refinement type is Auto, the output parameters will have an additional option, Stop Criterion,
added to their properties. If the Stop Criterion box is checked for an output, a Relative Error property can
be set that corresponds to the maximum relative error that the user will accept for that parameter (0.01 to
0.99).
The Relative Error works in the following manner. At each step of the automatic refinement:
1. The refinement finds new Design Points to improve the response surface; the output parameter values are evaluated with the previous response surface.

2. The real values of the output parameters are computed.

3. The Response Surface can be updated with these new values.

4. For direct output parameters with Stop Criterion enabled, the values from step 1 are compared with the real values from step 2. If the relative error is greater than the Relative Error specified, then the auto refinement continues (a new Design Point needs to be added -> go to step 1).

Once you have entered all of the properties, click Update Response Surface in the toolbar to complete the
refinement. The design points generated for the refinement will appear in the Response Surface Table view
under Design Set of Response Surface.

The automatic refinement stops when the maximum number of Design Points is reached or if the response
surface is accurate enough for direct output parameters with the Stop Criterion enabled. If there is no direct
output parameter with Stop Criterion enabled, automatic refinement executes all Design Points.

Manual Refinement
If you set the refinement type to Manual, you can add, delete, or modify the additional refinement design points used to generate the response surface. If you choose Manual after doing an Auto or Interactive refinement, you can edit the points that were generated from those refinements. If you do a manual refinement as your initial refinement, you can enter as many new refinement design points as you would like into the Table view.

Goodness of Fit
You can view goodness of fit information for any of the output parameters in a response surface. To do so,
edit the Response Surface cell from the Project Schematic. Then click on one of the output parameter cells
in the Outline view. The Goodness-of-Fit Information section will be displayed in the Properties view.
The following Goodness of Fit information is displayed:

Coefficient of Determination: The percent of the variation of the output parameter that can be explained
by the response surface regression equation. That is, coefficient of determination is the ratio of the explained variation to the total variation.
The design points used to create the response surface are likely to contain variation for each output
parameter (unless all the output values are the same, which will result in a flat response surface). This
variation is explained by the response surface that the DOE generates. If the response surface were to
pass directly through each design point, the coefficient of determination would be 1 or 100%, meaning
that all variation is explained.

Adjusted Coefficient of Determination: The Adjusted Coefficient of Determination takes the sample
size into consideration when computing the Coefficient of Determination. Usually this is more reliable
than the usual coefficient of determination when the number of samples is small ( < 30).

Maximum Relative Residual: The maximum distance (relatively speaking) out of all of the generated
design points from the calculated response surface to each generated design point.

Root mean square error: This is the square root of the average square of the residuals at the DOE
points for regression methods. For Kriging based methods, this is done on a set of testing points and
the prediction error is used instead.

Relative Root Mean Square Error: This is the square root of the average square of the residuals scaled
by the actual output values at the DOE points for regression methods. For Kriging based methods, this
is done on a set of testing points and the relative prediction error is used instead.

Relative Maximum Absolute Error: This is the absolute maximum residual value relative to the
standard deviation of the actual outputs.

Relative Average Absolute Error: This is the average of the absolute residual values relative to the standard deviation of the actual output data, modified by the number of samples. This is useful when the number of samples is low ( < 30).
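The hedged sketch below is an illustration only; the exact definitions used by Design Exploration may differ in detail. It shows how several of these metrics can be computed from observed and predicted values at the DOE points, with arbitrary sample data and an assumed number of regression parameters:

    # Illustrative sketch: common goodness-of-fit metrics at the DOE points.
    import numpy as np

    y = np.array([10.0, 12.5, 15.2, 18.1, 21.0])        # observed outputs (arbitrary)
    y_hat = np.array([10.2, 12.1, 15.6, 18.0, 20.7])    # response surface predictions
    n, p = y.size, 2                                     # samples and regression parameters (assumed)

    residuals = y - y_hat
    sst = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - np.sum(residuals ** 2) / sst                    # coefficient of determination
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)          # adjusted for sample size
    rmse = np.sqrt(np.mean(residuals ** 2))                    # root mean square error
    rel_rmse = np.sqrt(np.mean((residuals / y) ** 2))          # relative root mean square error
    rel_max_abs = np.max(np.abs(residuals)) / np.std(y)        # relative maximum absolute error

    print(r2, adj_r2, rmse, rel_rmse, rel_max_abs)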

Min-Max Search
When a Response Surface is solved, if the Min-Max Search cell is enabled, it will examine the entire output
parameter space to approximate the minimum and maximum of each output parameter. If the response
surface is updated in any way, including changing the fitting for an output parameter or performing a refinement, a new Min-Max Search is performed.


The Min-Max Search uses the NLPQL algorithm to search the parameter space for the maximum and minimum values. Assuming that all input parameters are continuous, one search is done for the minimum value and one for the maximum, and if Number of Starting Points is more than one, two searches (minimum and maximum) are done for each starting point. To find the minimum or maximum of an output, the generated sample set is sorted and the first "n" points of the sort are used as the starting points. The NLPQL algorithm is run twice for each starting point, once for the minimum and once for the maximum.
If there are discrete (or usability) input parameters, the number of searches increases by the number of discrete/usability parameter combinations (there will be two searches per discrete/usability parameter combination, one for minimum and one for maximum). So if there are two discrete parameters, one with 4 levels
and one with three, the number of searches done is 4 * 3 * 2 = 24. This example shows a search with only
one starting point defined. If you add multiple starting points, you will multiply the number of searches accordingly.
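This counting rule can be summarized in a short illustrative sketch (not part of the product):

    # Illustrative sketch: number of Min-Max NLPQL searches with discrete parameters.
    def min_max_search_count(discrete_levels, n_starting_points=1):
        combos = 1
        for levels in discrete_levels:             # one search set per combination of discrete levels
            combos *= levels
        return combos * 2 * n_starting_points      # two searches (min and max) per combination

    print(min_max_search_count([4, 3]))            # -> 24, matching the example above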
When you select the Min-Max Search cell in the outline, the sample points that contain the minimum and
maximum calculated values for each output parameter are shown in the Response Surface table. These
sample points can be saved to the project by selecting Insert as Design Points from the right-click menu.
The sample points can also be saved as Response Points by selecting Insert as Response Points from the
right-click menu.
You can disable the Min-Max Search cell by unchecking the box in the Enabled column in the Outline. If
you disable the Min-Max Search, no search is done when the Response Surface is updated. If you disable
the search after performing an initial search, the results from the initial search will remain in the output
parameter properties and will be shown in the Response Surface Table when you click on the Min-Max
Search cell.

Note

If no solutions have occurred yet, no Minimum or Maximum values are shown.

If the Design of Experiments object is solved, but the Response Surface is not solved, the
minimum and maximum values for each output parameter are extracted from the DOE
solutions design points and displayed in the Properties for the output parameters.

For discrete/usability parameters, there is only one Minimum and Maximum value per output
parameter even if a discrete parameter has many levels (there is not one Min-Max value set
per combination of discrete/usability parameter values).

Response Surface Charts


The various Response Surface charts allow you to graphically view the impact that parameters have on one another. There are currently three types of charts available: Response charts, Spider charts, and Local Sensitivity charts. When you initially edit a Response Surface, you will see one of each chart type. For each chart type, the parameter values for the current response point are initially shown in the Properties view of the chart.
When you click on a response point cell in the Outline view, the toolbox will display all available chart types
for the cell. You can drag a chart template out of the toolbox and drop it on the response point to add another chart of that type. You can have as many charts as you like under a response point. You can change
the name of a chart cell by clicking on it twice and entering the new name. (This will not affect the title of
the chart, which is set as part of the chart properties.) Charts can also be saved as graphics by right clicking
on them and selecting Save Image As.... See Saving a Chart for more information.



Each chart provides the ability to visually explore the parameter space by using the input parameter sliders
available in the chart's Properties view. These sliders allow you to modify an input parameter value and
view its effect on the displayed output parameter. Spider charts for Response Points are available from the
Outline view. Spider charts are useful for visualizing all output parameter values simultaneously.

You can change the parameter values to explore other designs by moving the slider bars, or entering specific values in the accessible value boxes.
The triad control at the bottom left in the 3D Response chart window allows you to rotate the chart in
freehand mode or quickly view the chart from a particular plane. To zoom any part of the chart, use the
normal zooming controls (Shift-middle mouse button, or scroll wheel). See Using the Triad for more information on chart manipulation.

Using the Response Chart


The Response chart can be viewed as a 2-D or 3-D contour graph. In the Properties view for the chart, set
the Chart Mode to be 2D or 3D, and choose the parameters to be shown on the X, Y, and Z (for 3D) axes.
Use the sliders to explore how changing the values of the parameters that are not displayed on the chart
affect the shape and position of the curve.
Also indicated in the Properties view are the values of the remaining output parameters. Thus, instantaneous
design and analysis is possible, leading to the generation of additional design points.

Viewing a 2-D Contour Graph


A 2-D contour graph is a flat 3-D response chart. To access this feature, set the Chart Mode to be 2D, and
choose the parameters to be shown on the X and Y axes.
An example of a 2-D response chart is shown below.


Viewing a 3-D Contour Graph


Set the Chart > Mode to be 3D, and choose the parameters to be shown on the X, Y, and Z axes. A smooth
3-D contour of Z versus X and Y is presented. In the Chart view, the 3-D contour can be rotated by clicking
and dragging the mouse. Moving the cursor over the response surface shows the values of Z, X, and Y. The
values of other input parameters can also be adjusted in the Properties view using the sliders, showing different contours of Z (and the corresponding values of other output parameters can be seen in the Properties
view). Thus, instantaneous design and analysis is possible, leading to the generation of additional design
points.
The actual appearance of the graph will vary depending on the input parameter pair you select: discrete vs. discrete, continuous vs. continuous, or discrete vs. continuous. Two examples of this are shown below.

Using the Spider Chart


Spider charts allow you to visualize the impact that changing the input parameter(s) has on all of the output
parameters simultaneously. When you solve a Response Surface, a Spider chart will appear in the Outline
view for the default Response Point.


You can use the slider bars in the chart's Properties view to adjust the value for the input parameter(s) to
visualize different designs. You can also enter specific values in the value boxes. The parameter legend box
at the top left in the Chart view allows you to select the parameter that is in the primary (top) position. Only
the axis of the primary parameter will be labeled with values.

Using the Local Sensitivity Chart


Local Sensitivity charts allow you to see the impact of the input parameters on the output parameters. When
you solve a Response Surface, a Local Sensitivity chart will appear in the Outline view for the default Response
Point. The Local Sensitivity chart can be displayed as a bar chart or a pie chart. At this level, the sensitivity
charts are Single Parameter Sensitivities. This means that Design Exploration calculates the change of the
output based on the change of each input independently at the current value of each input parameter. The
larger the change of the output, the more significant is the input parameter that was varied. As such, single
parameter sensitivities are local sensitivities. Changing the input parameter values using the sliders in the
chart's Properties view will update the sensitivities.


For discrete parameters, you can change the parameter values to explore other designs by moving the slider
bars, or entering specific values in the value boxes.
The Local Sensitivity chart can be a powerful exploration tool. On the Local Sensitivity chart, you see for
each output the weight of the different inputs. By default, all outputs are shown on the chart. If you consider
only one output, the resulting chart provides an independent sensitivity analysis for each single input
parameter.

The following is an explanation of how the bar chart is built. To determine the sensitivity of P1 vs. P4, we plot the response curve P4 = f(P1) and compute (Max(P4) - Min(P4)) / Avg(P4) ~= 0.72. If P4 increases while P1 increases, the sign is positive; otherwise it is negative. In our example we get -0.72, which is the red bar in the chart above.

Now we plot P4 = f(P2) and compute (Max(P4) - Min(P4)) / Avg(P4) ~= 1.35, which is the yellow bar in the chart above.
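A hedged sketch of this calculation is given below; it is an illustration only, and P1, P4, and the response function used here are hypothetical:

    # Illustrative sketch: local single-parameter sensitivity of P4 with respect to P1,
    # computed as (Max(P4) - Min(P4)) / Avg(P4) over the range of P1, signed by the trend.
    import numpy as np

    def local_sensitivity(response, p_min, p_max, n=50):
        p = np.linspace(p_min, p_max, n)            # sweep one input, others held constant
        out = response(p)
        magnitude = (out.max() - out.min()) / out.mean()
        sign = 1.0 if out[-1] >= out[0] else -1.0   # positive if the output rises with the input
        return sign * magnitude

    # hypothetical response curve P4 = f(P1) taken from the response surface
    p4_of_p1 = lambda p1: 5.0 - 1.8 * p1 + 0.1 * p1 ** 2
    print(local_sensitivity(p4_of_p1, 0.0, 2.0))    # negative, since P4 decreases as P1 increases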


On each of these curves, only one input is varying while the other input parameters are held constant, but you can change the value of any input and get an updated curve. The same applies to the Local Sensitivity chart: all the sensitivity values are recalculated when you change the value of an input parameter. If parameters are correlated, you'll see the sensitivities vary; in other words, the relative weights of the inputs may vary depending on the design.

Local Sensitivity Chart Limitations

Sensitivity charts are not available if ALL input parameters are discrete.

Sensitivities are calculated for continuous parameters only.

Changes in sensitivities due to different discrete parameter combinations are supported.

Global Sensitivities
The sensitivity plots available in Design Exploration allow you to efficiently improve your design toward a
more reliable and better quality design, or to save money in the manufacturing process while maintaining
the reliability and quality of your product. You can request a sensitivity plot for any output parameter in
your model.
The sensitivities available under the Six Sigma Analysis and the Goal Driven Optimization views are statistical
sensitivities. Statistical sensitivities are global sensitivities, whereas the single parameter sensitivities available
under the Responses view are local sensitivities. The global, statistical sensitivities are based on a correlation
analysis using the generated sample points, which are located throughout the entire space of input parameters. The local single parameter sensitivities are based on the difference between the minimum and
maximum value obtained by varying one input parameter while holding all other input parameters constant.
As such, the values obtained for single parameter sensitivities depend on the values of the input parameters
that are held constant. Global, statistical sensitivities do not depend on the settings for the input parameters,
because all possible values for the input parameters are already taken into account when determining the
sensitivities.

Global Sensitivity Chart Limitations

Sensitivity charts are not available if ALL input parameters are discrete.

Sensitivities are calculated for continuous parameters only.

Related Topics:
Using the Local Sensitivity Chart (p. 57)
Using the Sensitivities Chart (GDO) (p. 69)
Statistical Sensitivities in a Six Sigma Analysis (p. 107)

Adding Design Points and Response Points


There are several ways to add Design Points and Response Points to the tables in your project.

Design Points can be exported from your Parameter Set to the DOE Table or imported to the DOE Table
from the Parameter Set.

Design Points can be added to the Parameter Set from various tables and charts in the DOE, DOE (SSA),
Response Surface, Response Surface (SSA), or Goal Driven Optimization workspaces.

Response Points can be added to the Response Surface Table from the tables and charts in the Response
Surface or Goal Driven Optimization workspaces.

In addition, Design Points can be added by hand or inserted by copy/paste into a Custom DOE Table.

Importing and Exporting Design Points


Design Points can be imported to a custom DOE Table from the Parameter Set, or exported from the Parameter Set to a specific custom DOE cell.

To Import all Design Points from the Parameter Set into a DOE Table:

1. From the Project Schematic, double click on the Design of Experiments cell where you want to import the Design Points.

2. Click on the Design of Experiments object in the Outline.

3. Set the Design of Experiments Type to Custom.

4. Right click on the Design of Experiments object in the Outline and select Copy all Design Points from the Parameter Set. This option is only available if the DOE Type is Custom.

All Design Points from the Parameter Set will be copied to the DOE Table if the values of the parameters in the Design Points fall within the defined ranges of the parameters in the DOE. If some of the parameter values in the Design Points are out of range, you will be given the option to automatically expand the parameter ranges in the DOE to accommodate the out of range values. If you choose not to expand the parameter ranges, only the Design Points where all parameters fall within the existing ranges will be copied into the DOE Table. You may also cancel the import action.

To Export Design Points from the Parameter Set into a DOE Table:

1. From the Project Schematic, double click on the Parameter Set.

2. Select one or more Design Points from the Table of Design Points.

3. Right click and select Copy Design Points to: and then select one of the DOE cells listed in the submenu. Only DOE cells with DOE Type of Custom will be listed.

All selected Design Points will be copied from the Parameter Set to the selected DOE Table if the values of the parameters in the Design Points fall within the defined ranges of the parameters in the DOE. If some of the parameter values in the Design Points are out of range, you will be given the option to automatically expand the parameter ranges in the DOE to accommodate the out of range values. If you choose not to expand the parameter ranges, only the Design Points containing parameters that all fall within the existing ranges will be copied into the DOE Table. You may also cancel the export action.

Note
If an unsolved Design Point was previously copied to a custom DOE, and subsequently this Design
Point is solved in the Parameter Set, you can copy it to the custom DOE again to push its output
values to the Custom DOE Table.

Inserting Design Points and Response Points


Design Points can be added to the project and Response Points can be added to a response surface table
from a number of results views. You can either Insert as Response Points, Insert as Design Points or even
copy values to the current Response Point. These options are available from the right mouse button menu
on the rows of the Table view, or from various charts. The same mechanism is used, except that availability
of the three operations depends on the context. Some of the different charts and tables that allow this operation are: Optimization candidate table, Response Surface Min-Max Search table, Samples chart, Tradeoff
chart, and Response chart.
Selected points can be stored as Response Points or Design Points. Right click on a table row or chart point
and select either Insert as Response Points, Copy Values to Response Point, or Insert as Design Points
from the menu.

Insert as Response Points inserts new Response Points into the Table of Response Surface by copying
the input and output parameter values of the selected Candidate Design(s).

Copy Values to Response Point copies the input and output parameter values of the selected Candidate
Design(s) to the current Response Point.

Insert as Design Points creates new Design Points at the project level (edit the Parameter Set in the
Schematic view to list the existing Design Points) by copying the input parameter values of the selected
Candidate Design(s). The output parameter values are not copied because they are approximated values
provided by the Response Surface.

The above insertion operations are available from Optimization charts, depending on the context. For instance,
it is possible to right click on a point in a Tradeoff chart in order to insert the corresponding sample as a
Response Point or a Design Point. The same operation is available from a Samples chart, unless it is being
displayed as a spider chart.

Note
If your cell is out of date, the information (charts and tables) you see is out of date, but you can still manipulate those charts and tables with the old data (change the navigator to explore different Response Points, insert Response Points, etc.).

Using Parameters Correlation


This section contains information about using the Parameters Correlation analysis.
Running a Parameters Correlation
Purpose of Parameters Correlation
Sample Generation
Viewing the Quadratic Correlation Information
Determining Significance
Viewing Significance and Correlation Values

Running a Parameters Correlation


To create a Parameters Correlation system on your project schematic, do the following:

1. With the Project Schematic displayed, drag a Parameters Correlation template from the Design Exploration area of the toolbox directly under the Parameter Set bar of the schematic.

2. After placing the Parameters Correlation system on the Schematic, double click on the Parameters Correlation cell to open the workspace and set up the correlation options that are described in Parameters Correlation (p. 20).

3. Click the Update Parameters Correlation button on the toolbar. When the update is finished, examine the results using the various charts shown in the Outline view. See Parameters Correlation (p. 20) for more information about the available charts.

Purpose of Parameters Correlation


The application of Goal Driven Optimization (GDO) and Six Sigma Analysis (SSA) in a finite element based analysis framework is always a challenge in terms of solving time, especially when the finite element model is large. For example, hundreds or thousands of finite element simulation runs are not uncommon in SSA. If one simulation run takes hours to complete, it is almost impractical to perform SSA at all with thousands or even hundreds of simulations.

To perform GDO or SSA in a finite element based analysis framework, it is always recommended to perform a Design of Experiments (DOE) study. From the DOE study, a response surface is built within the design space of interest. After a response surface is created, all simulation runs of GDO or SSA will then become function evaluations.

In the DOE study, however, the number of sampling points increases dramatically as the number of input parameters increases. For example, a total of 149 sampling points (finite element evaluations) is needed for 10 input variables using Central Composite Design with a fractional factorial scheme. As the number of input variables increases, the analysis becomes more and more intractable. In this case, one would like to exclude unimportant input parameters from the DOE sampling in order to reduce unnecessary sampling points. A correlation matrix is a tool to help users identify input parameters deemed to be unimportant, which can thus be treated as deterministic parameters in SSA.

Sample Generation
The importance of an input to an output is determined from their correlation. The samples used for the correlation calculation are generated with the Latin Hypercube Sampling method. The Latin Hypercube samples are generated in such a way that the correlations among the input parameters are less than or equal to 5%. Also, the Latin Hypercube samples are generated in such a way that each sample has an equal probability of being picked (uniformly distributed). Two correlation calculation methods are provided for the user to select from: Spearman's rank correlation and Pearson's linear correlation. Pearson's uses the actual data for the correlation evaluation, while Spearman's uses the ranks of the data, which is deemed to be more accurate and is therefore recommended.

If Auto Stop Type is set to Enable Auto Stop for the correlation, the number of samples required to calculate the correlation is determined according to the convergence of the means and standard deviations of the output parameters. At each iteration of simulation runs (finite element runs in this case), the convergence of the means and standard deviations is checked against the accuracy provided by the user. The iteration stops when the accuracy is met, or when the number of samples exceeds the maximum value set by the user, whichever comes first.
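The following is a minimal Python sketch of the same idea, assuming a simple hand-rolled Latin Hypercube generator (without the 5% cross-correlation control described above) and SciPy's Spearman and Pearson correlation functions; the model function is a hypothetical stand-in for the actual finite element solve.

    import numpy as np
    from scipy.stats import pearsonr, spearmanr

    rng = np.random.default_rng(0)

    def latin_hypercube(n, dims):
        # One uniformly distributed sample per stratum of [0, 1) in each dimension.
        cols = []
        for _ in range(dims):
            strata = (rng.random(n) + np.arange(n)) / n
            cols.append(rng.permutation(strata))   # decouple the columns
        return np.column_stack(cols)

    x = latin_hypercube(200, 2)               # two inputs scaled to [0, 1)
    y = 3.0 * x[:, 0] ** 2 + 0.1 * x[:, 1]    # hypothetical output of the solver

    for j in range(2):
        rho, _ = spearmanr(x[:, j], y)        # rank correlation (recommended)
        r, _ = pearsonr(x[:, j], y)           # linear correlation
        print(f"P{j + 1}: Spearman = {rho:.2f}, Pearson = {r:.2f}")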

Viewing the Quadratic Correlation Information


Quadratic correlation between a parameter pair is quantified by the coefficient of determination (R²) of a quadratic regression. The R² values between all parameter pairs are displayed in the Determination Matrix. The closer R² is to 1, the better the quadratic regression is. Unlike the Correlation Matrix, the Determination Matrix is not symmetric. The Determination Matrix is displayed in the Table view below the Correlation Matrix. Like the Correlation Matrix, there is a chart associated with the Determination Matrix in which any parameter can be disabled.
In addition, the Correlation Scatter chart displays a quadratic trend line for the selected parameter pair.
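A minimal sketch of this R² calculation for one hypothetical parameter pair, using an ordinary quadratic least-squares fit:

    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(-1.0, 1.0, 25)
    y = 2.0 - 1.5 * x + 0.8 * x ** 2 + rng.normal(0.0, 0.05, x.size)   # hypothetical pair

    coeffs = np.polyfit(x, y, deg=2)          # quadratic regression
    y_hat = np.polyval(coeffs, x)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                # closer to 1 means a better quadratic fit
    print(round(r2, 3))
    # Regressing x on y instead would generally give a different R^2, which is
    # why the Determination Matrix is not symmetric.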

Determining Significance
Significance of an input parameter to an output parameter is determined using a statistical hypothesis test. In the hypothesis test, a NULL hypothesis of insignificance is assumed and tested to see whether it holds statistically, according to the significance level (or acceptable risk) set by the user. From the hypothesis test, a p-Value (probability value) is calculated and compared with the significance level. If the p-Value is greater than the significance level, the NULL hypothesis is accepted and the input parameter is concluded to be insignificant to the output parameter, and vice versa. See the Six Sigma Theory section for more information.
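As a rough illustration, the sketch below uses the p-value reported by SciPy's Spearman correlation test and a hypothetical significance level of 0.025; it is not the Design Exploration hypothesis test itself.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(2)
    significance_level = 0.025                  # acceptable risk chosen by the user

    x_relevant = rng.random(100)
    x_noise = rng.random(100)
    y = x_relevant ** 2 + rng.normal(0.0, 0.05, 100)   # hypothetical output

    for name, x in [("relevant input", x_relevant), ("noise input", x_noise)]:
        rho, p_value = spearmanr(x, y)
        # The NULL hypothesis of insignificance is rejected when p <= significance level.
        significant = p_value <= significance_level
        print(f"{name}: rho = {rho:+.2f}, p = {p_value:.3g}, significant = {significant}")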

Viewing Significance and Correlation Values


If you select Sensitivities from the Outline view in the Six Sigma workspace, you can review the sensitivities
derived from the samples generated for the Parameters Correlation. The Parameters Correlation sensitivities
are global sensitivities. In the Properties view for the Sensitivities chart, you can choose the output parameters for which you want to review sensitivities, and the input parameters that you would like to evaluate
for the output parameters.
The default setting for the Significance Level (found in the Design Exploration section of the Tools > Options dialog) is 0.025. Parameters whose p-Value is above this Significance Level will be shown with a flat line on the Sensitivities chart, and the value displayed for those parameters when you mouse over them on the chart will be 0. In order to view the actual correlation value of an insignificant parameter pair, you can either select the Correlation Matrix from the Outline and mouse over the square for that pair in the matrix, or you can set the Significance Level to 1, which bypasses the significance test and displays all input parameters on the Sensitivities chart with their actual correlation values.

Using Goal Driven Optimization


This section contains information about running a Goal Driven Optimization analysis. For more information,
see "Understanding Goal Driven Optimization" (p. 73).
Optimization Options
Defining Design Objectives
Creating New Design Points and Response Points from Candidates
Goal Driven Optimization Charts

Optimization Options
To create a GDO system on your project schematic, do the following:

1. With the Project Schematic displayed, drag a Goal Driven Optimization template from the Design Exploration area of the toolbox to the schematic. You may drag it:

   directly under the Parameter Set bar, in which case it will not share any data with any other systems on the schematic.

   onto the Design of Experiments cell of a Response Surface, in which case it will share all of the data generated by the DOE.

   onto the Response Surface cell of a Response Surface, in which case it will share all of the data generated for the DOE and Response Surface cells.

2. After placing the GDO system on the Schematic, if you are not sharing the DOE and Response Surface cells, edit the DOE and set it up as described previously in Design of Experiments (p. 18), and solve the DOE and Response Surface cells.

3. Edit the Optimization cell. Make sure that cell A2 is selected in the Outline view, and set up the options for your optimization in the Properties view.

The following optimization options are available.


Screening
MOGA
NLPQL

Screening
The Screening option allows you to generate a new sample set and sort its samples based on objectives. It
is available for all types of input parameters. Usually the Screening approach is used for preliminary design,
which may lead you to apply the MOGA or NLPQL options for more refined optimization results. See the
Principles section under Goal Driven Design for more information.
To perform a Screening optimization, in the Properties view of the Optimization:

1. Enter the following settings in the Properties view:

   Select Screening for the Optimization Method.

   Enter the Number of Initial Samples to generate for the optimization.

   Specify the Constraint Handling (GDO). This option can be used for any optimization application and is best thought of as a "constraint satisfaction" filter on samples generated from the Screening, MOGA, or NLPQL runs. This is especially useful for Screening samples to detect the edges of solution feasibility for highly constrained nonlinear optimization problems. The following choices are available:

      As Goals (default): Implies that the upper, lower, and equality constrained objectives of the candidate designs shown in the Table of Optimization are treated as goals; thus any violation of the objectives is still considered feasible.

      As Hard Constraints: When chosen, the upper, lower, and equality constrained objectives are treated as hard constraints; that is, if any of them are violated then the candidate is no longer displayed. So, in some cases no candidate designs may be displayed depending on the extent of constraint violation.

2. Set input and output parameter Objectives in the Table of Optimization view. See Defining Design Objectives (p. 67) for more information.

3. Click the Update Optimization toolbar button.

The result is a group of points, or sample set. The points that are most in alignment with the objectives are displayed in the table as the Candidate Designs for the optimization. The result also displays the read-only field Size of Generated Sample Set in the Properties view.

Note
The Tradeoff chart for a screening optimization where some objectives are constraints will
show all of the feasible points (that meet the constraints) as the first Pareto front. Points
from the sample set that do not meet the constraints will be shown as infeasible points.
The Samples chart will also indicate the infeasible points by color.

MOGA
The MOGA (Multi-Objective Genetic Algorithm) option allows you to generate a new sample set or use an
existing set for providing a more refined approach than the Screening method. It is available for continuous
input parameters only and can handle multiple goals. See the MOGA (Multi-objective Genetic Algorithm)
section for more information on this algorithm.
To perform a MOGA optimization:

1. Enter the following settings in the Properties view:

   Optimization Method: Set to MOGA.

   Number of Initial Samples: Complete this field if you wish to start from a new set. This number should be at least 10 times the number of continuous input parameters, but not more than 300. If you want to use an existing Screening sample set, you cannot edit the Number of Initial Samples field.

   Number of Samples Per Iteration: The number of samples that are iterated and updated at each iteration. This setting must be less than or equal to the number of initial samples.

   Maximum Allowable Pareto Percentage: The approximate maximum percentage of first Pareto front points (computed with respect to the Number of Samples Per Iteration) that you wish to compute. For example, if the Maximum Allowable Pareto Percentage is set to 70 and the Number of Samples Per Iteration is set to 200, then you can expect to have, at most, approximately 140 first Pareto front points. If the Maximum Allowable Pareto Percentage is too low (below 30), then there is a chance that the iterations will converge slowly. If the value is too high (above 80), then the problem may converge prematurely. The value of the Maximum Allowable Pareto Percentage depends on the number of output and input parameters of the problem, in addition to the nature of the design space itself. Using a value between 55 and 75 works best for most problems.

   Maximum Number of Iterations: The maximum possible number of iterations the algorithm executes. If convergence happens before this number is reached, the iterations will cease. This also provides an idea of the maximum possible number of function evaluations that are needed for the full cycle, as well as the maximum possible time it may take to run the optimization. For example, the absolute maximum number of evaluations is given by: Number of Initial Samples + Number of Samples Per Iteration * (Maximum Number of Iterations - 1). A short worked example of this budget follows this procedure.

   Constraint Handling (GDO): This option can be used for any optimization application and is best thought of as a "constraint satisfaction" filter on samples generated from the Screening, MOGA, or NLPQL runs. This is especially useful for Screening samples to detect the edges of solution feasibility for highly constrained nonlinear optimization problems. The following choices are available:

      As Goals (default): Implies that the upper, lower, and equality constrained objectives of the candidate designs shown in the Table of Optimization are treated as goals; thus any violation of the objectives is still considered feasible.

      As Hard Constraints: When chosen, the upper, lower, and equality constrained objectives are treated as hard constraints; that is, if any of them are violated then the candidate is no longer displayed. So, in some cases no candidate designs may be displayed depending on the extent of constraint violation.

2. Set input and output parameter objectives in the Table of Optimization view. At least one of the output parameter objectives must be a goal. Multiple output parameter goals can be set. See Defining Design Objectives (p. 67) for more information.

3. Click the Update Optimization toolbar button.

The result is a group of points, or sample set. The points that are most in alignment with the objectives are displayed in the table as the Candidate Designs for the optimization. The result also displays the read-only field Size of Generated Sample Set in the Properties view.
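A short worked example of the evaluation budget formula above, with hypothetical settings:

    # Hypothetical MOGA settings.
    number_of_initial_samples = 200
    number_of_samples_per_iteration = 100
    maximum_number_of_iterations = 20

    max_evaluations = (number_of_initial_samples
                       + number_of_samples_per_iteration * (maximum_number_of_iterations - 1))
    print(max_evaluations)   # 200 + 100 * 19 = 2100 evaluations at most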

NLPQL
The NLPQL (Non-Linear Programming by Quadratic Lagrangian) option allows you to generate a new sample
set or use an existing set to provide a more refined approach than the Screening method. It is available for
continuous input parameters only and can only handle one output parameter goal (other output parameters
can be defined as constraints). See the Non-Linear Programming by Quadratic Lagrangian section for more
information on this algorithm.
To generate samples and perform a NLPQL optimization:

1. Enter the following settings in the Properties view:

   Optimization Method: Set to NLPQL.

   Allowable Convergence Percentage: The tolerance to which the Karush-Kuhn-Tucker (KKT) optimality criterion is generated during the NLPQL process. (See the NLPQL theory section for more information.) A smaller value indicates more convergence iterations and a more accurate (but slower) solution. A larger value indicates fewer convergence iterations and a less accurate (but faster) solution. The typical default is 1.0e-06, which is consistent across all problem types since the inputs, outputs, and the gradients are scaled during the NLPQL solution.

   Maximum Number of Iterations: The maximum possible number of iterations the algorithm executes. If convergence happens before this number is reached, the iterations will cease. This also provides an idea of the maximum possible number of function evaluations that are needed for the full cycle, as well as the maximum possible time it may take to run the optimization. For example, the absolute maximum number of evaluations is given by: Number of Initial Samples + Number of Samples Per Iteration * (Maximum Number of Iterations - 1).

   Constraint Handling (GDO): This option can be used for any optimization application and is best thought of as a "constraint satisfaction" filter on samples generated from the Screening, MOGA, or NLPQL runs. This is especially useful for Screening samples to detect the edges of solution feasibility for highly constrained nonlinear optimization problems. The following choices are available:

      As Goals (default): Implies that the upper, lower, and equality constrained objectives of the candidate designs shown in the Table of Optimization are treated as goals; thus any violation of the objectives is still considered feasible.

      As Hard Constraints: When chosen, the upper, lower, and equality constrained objectives are treated as hard constraints; that is, if any of them are violated then the candidate is no longer displayed. So, in some cases no candidate designs may be displayed depending on the extent of constraint violation.

2. Set input and output parameter objectives in the Table of Optimization view. See Defining Design Objectives (p. 67) for more information. Remember, you can only have one output parameter goal for NLPQL.

3. Because NLPQL is a gradient based method, the candidates found depend on the starting point in the parameter space. NLPQL could find a local optimum which is not the global optimum. To give you control over this, the Optimization table has an additional row titled Initial Value when using NLPQL. This allows you to determine the starting point of the optimization by entering the value of each input parameter in the Initial Value row. A sketch of this behavior follows this procedure.

4. Click the Update Optimization toolbar button.

The result is a group of points, or sample set. The points that are most in alignment with the objectives are displayed in the table as the Candidate Designs for the optimization. The result also displays the read-only field Size of Generated Sample Set in the Properties view. This value is always equal to 1 for an NLPQL result.
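To illustrate why the starting point matters for a gradient-based method of this class, the following sketch uses SciPy's SLSQP optimizer (a sequential quadratic programming routine, not the ANSYS NLPQL implementation) on a hypothetical single-objective constrained problem; different initial values converge to different local optima.

    import numpy as np
    from scipy.optimize import minimize

    objective = lambda x: np.sin(3.0 * x[0]) + 0.1 * x[0] ** 2             # several local minima
    bound_constraint = {"type": "ineq", "fun": lambda x: 4.0 - abs(x[0])}  # keep |x| <= 4

    for start in (-3.0, 0.5, 3.0):
        result = minimize(objective, x0=[start], method="SLSQP",
                          constraints=[bound_constraint], options={"ftol": 1e-6})
        print(f"initial value {start:+.1f} -> x* = {result.x[0]:+.3f}, f* = {result.fun:+.3f}")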

Defining Design Objectives


When you edit the Optimization cell of a Goal Driven Optimization that you've placed on your schematic,
you can state a series of design objectives, which will be used to generate optimized designs. You will see
a Table of Optimization view which will allow you to set objectives for all input and output parameters as
described below.
The optimization approach used in the Design Exploration environment departs in many ways from traditional optimization techniques. Many objectives are allowed and can be assigned to each output parameter.
Objectives may be weighted in terms of importance.
This departure from traditional optimization also allows objectives to be set for input parameters, which are
independent variables, allowing flexibility in obtaining the desired design configuration.
For both the input and output parameters, you can also specify an importance for each parameter, with
possible values of Default, Higher, or Lower.
The following tables show the objectives that may be applied to input and output parameters.

Table 2 Objectives and Their Meanings for Continuous Input Parameters

Goal | Meaning | Mathematical Meaning
No Objective | Keep the input parameter within its upper and lower bounds | XLower <= X <= XUpper
Minimize | Minimize the input parameter within its range | X → XLower
Seek Midpoint | Attain the average value of the input parameter within its range | X → (XLower + XUpper) / 2
Maximize | Maximize the input parameter within its range | X → XUpper

(X = an input parameter, XLower = lower limit, XUpper = upper limit)

Table 3 Objectives and Their Meanings for Discrete or Usability Input Parameters

Goal | Meaning | Mathematical Meaning
No Objective | Uses the full discrete range allowed for the parameter | XLower <= X <= XUpper
Values = Target | Set the objective to in-range values close to the target value | X → XTarget
Values <= Target | Set the objective to in-range values below the target value | X <= XTarget
Values >= Target | Set the objective to in-range values above the target value | X >= XTarget

(X = an input parameter, XLower = lower limit, XUpper = upper limit, XTarget = Target Value)

Table 4 Objectives and Their Meanings for Output Parameters

Goal | Meaning | Mathematical Meaning | GDO Treatment
No Objective | No objective is specified | N/A | N/A
Minimize | Achieve the lowest possible value for the output parameter | Minimize Y | Goal
Maximize | Achieve the highest possible value for the output parameter | Maximize Y | Goal
Seek Target | If a Target Value is specified, then achieve an output parameter value that is close to the Target Value | Y → YTarget | Goal
Values <= Target | If a Target Value is specified, then achieve an output parameter value that is less than the Target Value | Y <= YTarget | Inequality constraint
Values >= Target | If a Target Value is specified, then achieve an output parameter value that is greater than the Target Value | Y >= YTarget | Inequality constraint
Values = Target | If a Target Value is specified, then achieve an output parameter value that is close to the Target Value | Y → YTarget | Constraint

(Y = output parameter, YTarget = Target Value)


When the MOGA or NLPQL options are used for Goal Driven Optimization, the search algorithm uses the Objective and Target Value properties of the output parameters and ignores the similar properties of the input parameters. However, when the Candidate Designs are reported, the first Pareto fronts (in the case of MOGA) or the best solution set (in the case of NLPQL) are filtered through the Decision Support Process, which applies the input and output parameter properties like Importance, Target, and Objective, and reports the 3 best candidate designs. This implies that the candidate designs may not necessarily correspond to the best input targets, but they definitely represent the optimal designs with respect to the output properties; this is consistent with the mechanism of optimization.
The Screening option is not strictly an optimization approach; it uses the properties of the input as well as the output parameters, and applies the Decision Support Process to a sample set generated by the Shifted Hammersley technique to rank and report the candidate designs. Thus, the candidate designs in this case correspond to both the input and output objectives specified by the user, although the Candidate Designs will not correspond to the best optimal solutions.
Creating New Design Points and Response Points from Candidates


Once the objectives are stated and Candidate Designs are generated, Candidate Designs are listed in the
Optimization table view. They can be stored as Response Points or Design Points. Right click on a Candidate
Design row and select either Insert as Response Points or Insert as Design Points from the menu.

Insert as Response Points inserts new Response Points into the Table of Response Surface by copying
the input and output parameter values of the selected Candidate Design(s).

Insert as Design Points creates new Design Points at the project level (edit the Parameter Set in the
Schematic view to list the existing Design Points) by copying the input parameter values of the selected
Candidate Design(s). The output parameter values are not copied because they are approximated values
provided by the Response Surface.

Both of the above insertion operations are also available from Optimization charts, depending on the context.
For instance, it is possible to right click on a point in a Tradeoff chart in order to insert the corresponding
sample as a Response Point or a Design Point. The same operation is available from a Samples chart.

Goal Driven Optimization Charts


After solving an Optimization cell, several charts are created by default in the Charts section of the Outline
tree.

Using the Sensitivities Chart (GDO)


The Sensitivities chart shows the global sensitivities of the output parameters with respect to the input parameters. The chart can be displayed as a bar chart or a pie chart by changing the Mode in the chart's Properties view. You can also select which parameters you want to display on the chart by selecting or deselecting the checkbox next to each parameter in the Properties view.
Various generic chart properties can be changed for this chart.

Note
If the p-Value calculated for a particular input parameter is above the Significance Level specified
in the Design Exploration section of the Tools > Options dialog, the bar for that parameter will
be shown as a flat line on the chart. See Determining Significance (p. 63) for more information.

Using the Tradeoff Chart


After solving an Optimization cell, several charts are created by default in the Charts folder of the Outline tree. The Tradeoff chart is a 2D or 3D scatter chart representing the generated samples. The colors applied to the points represent the Pareto front they belong to, from red (the worst points) to blue (the best points).
The chart is controlled by changing its properties in the Properties view.

The chart can be displayed in 2D or 3D by changing the Mode in the chart's Properties view.

You can also select which parameter to display on each axis of the chart by selecting the parameter from the drop down menu next to the axis name in the Properties view.

Limit the Pareto fronts shown by moving the slider or entering a value in the field above the slider in the chart's Properties view.

Various generic chart properties can be changed for this chart.


If there are several equivalent solutions for the defined Optimization Study, the Tradeoff chart can be used
to visualize them. They belong to the first Pareto front which is represented in blue. By exploring the samples
for different parameters, you can determine the best compromise by varying secondary parameters.
For further information on results, see Postprocessing GDO Results (p. 76).

Using the Samples Chart


The Samples chart has two modes: it can display the candidates together with the full sample set, or display the samples Pareto front by Pareto front (the same as the Tradeoff chart). Compared to the Tradeoff chart, the Samples chart has the advantage of showing all the parameters at once, whereas the Tradeoff chart can only show 3 parameters at a time. The Samples chart is better for exploring the parameter space. With the Samples chart axis sliders, you can filter on each parameter very easily, providing an intuitive way to explore the alternative designs.
Various generic chart properties can be changed for this chart.

Using Six Sigma Analysis


This section contains information about running a Six Sigma Analysis. In this section you will find information
on the following:

Performing a Six Sigma Analysis (p. 70)

Using Statistical Postprocessing (p. 70)

For more information, see Six Sigma Analysis (p. 28) and "Understanding Six Sigma Analysis" (p. 95).

Performing a Six Sigma Analysis


To perform a six sigma analysis, you must first add a Six Sigma Analysis template to your schematic from the Design Exploration section of the toolbox. You then edit the Design of Experiments (SSA) cell of the Six Sigma Analysis and specify which input parameters to evaluate, assign distribution functions to them as described in Setting Up Uncertainty Variables (p. 33), and specify the distribution attributes (this will generate your samples for each input variable). Once this is completed, solve the Six Sigma Analysis by:

updating each individual cell in the analysis, either from the right-click menu in the Project Schematic or with the Update button in the workspace toolbar while editing the cell.

right clicking on the Six Sigma Analysis cell in the project schematic and selecting Update to update the entire analysis at once.

clicking the Update Project button in the toolbar to update the entire project.

Using Statistical Postprocessing


From the Six Sigma Analysis workspace you can see the statistics in the Properties view, the probability
tables in the Table view (Tables (SSA) (p. 71)), and statistics charts in the Chart view (Using Parameter Charts
(SSA) (p. 71)) associated with each parameter by selecting a parameter in the Outline. You can open the
Sensitivities chart by selecting it from the Charts section of the Outline (see Using the Sensitivities Chart
(SSA) (p. 71)).

Tables (SSA)
In the Six Sigma Analysis workspace, you are able to view the probability tables of any input or output variable. To select which table view is displayed, select the desired value (Quantile-Percentile or Percentile-Quantile) for the Probability Table field in the parameter's Properties view.
You can modify both types of tables by adding or deleting values. To add a value to the Quantile-Percentile table, type the desired value into the New Parameter Value cell at the end of the table. A row with the value you entered will be added to the table in the appropriate location. To add a new value to the Percentile-Quantile table, type the desired value into the appropriate cell (New Probability Value or New Sigma Level) at the end of the table. To delete a row from either table, right click on the row and select Remove Level from the menu. The row will be deleted from the table. You can also overwrite any value in an editable column and the corresponding values will be displayed in the other columns of that row.
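The sketch below illustrates the quantile/probability/sigma-level relationships behind these tables on a hypothetical sample; the sigma-level convention shown is the standard normal one and may differ in detail from the Design Exploration definition.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    samples = rng.normal(loc=250.0, scale=12.0, size=10_000)   # hypothetical output samples

    # Quantile -> probability: chance that the output stays at or below a value.
    value = 270.0
    probability = np.mean(samples <= value)
    sigma_level = norm.ppf(probability)          # same probability expressed as a sigma level
    print(f"P(Y <= {value}) = {probability:.4f}  (about {sigma_level:.2f} sigma)")

    # Probability -> quantile: output value reached with a requested probability.
    print("Value at the 3-sigma probability:", round(np.quantile(samples, norm.cdf(3.0)), 1))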

Using Parameter Charts (SSA)


You can review the statistical results of the analysis by selecting an input or output parameter to view its
Chart. The results of a Six Sigma Analysis are visualized using histogram plots and cumulative distribution
function plots.
Various generic chart properties can be changed for this chart.

Using the Sensitivities Chart (SSA)


If you select Sensitivities from the Outline view in the Six Sigma workspace, you can review the sensitivities derived from the samples generated for the Six Sigma Analysis. The Six Sigma Analysis sensitivities are global sensitivities (see Global Sensitivities (p. 59)), not local sensitivities (see Using the Local Sensitivity Chart (p. 57)). In the Properties view for the Sensitivities chart, you can choose the output parameters for which you want to review sensitivities, and the input parameters that you would like to evaluate for the output parameters.
Various generic chart properties can be changed for this chart.

Note
If the p-Value calculated for a particular input parameter is above the Significance Level specified
in the Design Exploration section of the Tools > Options dialog, the bar for that parameter will
be shown as a flat line on the chart. See Determining Significance (p. 63) for more information.

Statistical Measures
When the Six Sigma Analysis is updated, the following statistical measures are displayed in the Properties view for each parameter:

Mean
Standard Deviation
Skewness
Kurtosis
Shannon Entropy (Complexity)
Signal-Noise Ratio (Smaller is Better)
Signal-Noise Ratio (Nominal is Best)
Signal-Noise Ratio (Larger is Better)
Sigma Minimum
Sigma Maximum
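The following sketch computes most of these measures from a hypothetical output sample using NumPy and SciPy; the sigma minimum/maximum interpretation and the Taguchi-style signal-to-noise formulas are assumptions and may not match the Design Exploration definitions exactly (Shannon entropy is omitted).

    import numpy as np
    from scipy.stats import kurtosis, skew

    rng = np.random.default_rng(4)
    y = rng.normal(100.0, 5.0, 5_000)            # hypothetical output samples

    mean, std = y.mean(), y.std(ddof=1)
    print("mean", round(mean, 2), "standard deviation", round(std, 2))
    print("skewness", round(skew(y), 3), "kurtosis", round(kurtosis(y), 3))
    # Sigma minimum/maximum interpreted here as the distance of the sample
    # extremes from the mean, expressed in standard deviations.
    print("sigma minimum", round((y.min() - mean) / std, 2),
          "sigma maximum", round((y.max() - mean) / std, 2))
    # Taguchi-style signal-to-noise ratios (definitions may differ in detail).
    print("S/N smaller-is-better", round(-10.0 * np.log10(np.mean(y ** 2)), 2))
    print("S/N larger-is-better", round(-10.0 * np.log10(np.mean(1.0 / y ** 2)), 2))
    print("S/N nominal-is-best", round(10.0 * np.log10(mean ** 2 / std ** 2), 2))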

Understanding Goal Driven Optimization


Goal Driven Optimization (GDO) is a set of constrained, multi-objective optimization (MOO) techniques in
which the "best" possible designs are obtained from a sample set given the goals you set for parameters.
There are three available optimization methods: Screening, MOGA, and NLPQL. MOGA and NLPQL can only
be used when all input parameters are continuous.
The GDO process allows you to determine the effect on the input parameters when certain objectives are applied to the output parameters. For example, in a structural engineering design problem, you may want to determine which set of designs (in terms of geometric problem dimensions and material types) best satisfies minimum mass, maximum natural frequency, maximum buckling and shear strengths, and minimum cost, with maximum value constraints on the von Mises stress and maximum displacement.
This section describes the GDO system and its use in performing MOO.

Principles (GDO)
GDO can be used for design optimization in three ways: the Screening approach, the MOGA approach, or
the NLPQL approach. The Screening approach is a non-iterative direct sampling method by a quasi-random
number generator based on the Hammersley algorithm. The MOGA approach is an iterative Multi-Objective
Genetic Algorithm, which can optimize problems with continuous input parameters. NLPQL is a gradient
based single objective optimizer which is based on quasi-Newton methods.
MOGA is better for calculating the global optima while NLPQL is a gradient-based algorithm ideally suited
for local optimization. So you can start with Screening or MOGA to locate the multiple tentative optima and
then refine with NLPQL to zoom in on the individual local maximum or minimum value. Problems with
mixed parameter types (i.e., usability, discrete, or scenario parameters with continuous parameters) or discrete
problems cannot currently be handled by the MOGA or NLPQL techniques, and in these cases you will only
be able to use the Screening technique.
The GDO framework uses a Decision Support Process (DSP) based on satisficing criteria as applied to the
parameter attributes using a weighted aggregate method. In effect, the DSP can be viewed as a postprocessing
action on the Pareto fronts as generated from the results of the MOGA, NLPQL, or Screening process.
Usually the Screening approach is used for preliminary design, which may lead you to apply the MOGA or
NLPQL approaches for more refined optimization results.
In either approach, the TradeOff chart, as applied to the resulting sample set, shows the Pareto-dominant
solutions. However, in the MOGA approach, the Pareto fronts are better articulated and most of the feasible
solutions lie on the first front, as opposed to the usual results of the Screening approach where the solutions
are distributed across all the Pareto fronts. This is illustrated in the following two figures. Figure: 6,000 Sample Points Generated by Screening Method (p. 74) shows the sample as generated by the Screening method, and Figure: Final Sample Set After 5,100 Evaluations by MOGA Method (p. 74) shows a sample set generated by the MOGA method for the same problem.

Figure: 6,000 Sample Points Generated by Screening Method

Figure: Final Sample Set After 5,100 Evaluations by MOGA Method

Figure: Pareto Optimal Front Showing Two Non-dominated Solutions (p. 74) shows the necessity of generating
Pareto fronts. The two axes represent two output parameters that are conflicting by nature. Two optimal
solutions, marked "1" and "2," are shown. Both of these solutions are "non-dominated." That is, they are
equally good in terms of Pareto optimality. Point 1 corresponds to a better solution in terms of the output
marked "v." Point 2 corresponds to a better solution in terms of the output marked "buck."

Figure: Pareto Optimal Front Showing Two Non-dominated Solutions
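A minimal sketch of the Pareto-dominance test on hypothetical samples (both outputs treated as minimization objectives): a point is non-dominated if no other point is at least as good in every output and strictly better in one.

    import numpy as np

    def first_pareto_front(points):
        # Indices of non-dominated points when every column is to be minimized.
        front = []
        for i, p in enumerate(points):
            dominated = any(np.all(q <= p) and np.any(q < p)
                            for j, q in enumerate(points) if j != i)
            if not dominated:
                front.append(i)
        return front

    rng = np.random.default_rng(5)
    samples = rng.random((200, 2))               # hypothetical (output 1, output 2) pairs
    front = first_pareto_front(samples)
    print(len(front), "non-dominated samples out of", len(samples))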

Guidelines and Best Practices (GDO)


Typically, the use of NLPQL is suggested for continuous problems when there is only one objective function.
The problem may or may not be constrained and must be analytic (i.e. must be defined only by continuous
input parameters, and the objective functions and constraints should not exhibit sudden 'jumps' in their
domain.) The main difference from MOGA lies in the fact that MOGA is designed to work with multiple objectives and does not require full continuity of the output parameters. However, for continuous single objective problems, the use of NLPQL gives greater accuracy of the solution as gradient information and line
search methods are used in the optimization iterations. MOGA is a global optimizer designed to avoid local
optima traps and NLPQL is a local optimizer designed for accuracy.
The default convergence rate for the NLPQL algorithm is set to 1.0E-06. This is computed based on the
(normalized) Karush-Kuhn-Tucker (KKT) condition. This implies that the fastest convergence rate of the derivatives or the functions (objective function and constraint) determine the termination of the algorithm.
The advantage of this approach is that for large problems, it is possible to get a near-optimal feasible solution
quickly without being trapped into a series of iterations involving small solution steps near the optima.
However, if the user prefers a more numerically accurate solution which may involve these iterations, then
the convergence rate can be set to as low as 1.0E-12. The range of (1.0E-06 - 1.0E-12) covers a vast majority
of all optimization problems and the choice of the convergence rate is based on the user's preference.
At least one of the output parameters should have an Objective of Maximize, Minimize, or Seek Target
in order to do optimization with the MOGA or NLPQL methods (only one output can have an objective for
the NLPQL method).
If this is not done, then the optimization problem is either undefined (Objective for each parameter set to No Objective) or is merely a constraint satisfaction problem (Objective set to >= Target, <= Target, or = Target). When the problem is not defined, as in Figure: Case Where the Optimization Cannot be Run (p. 75), the MOGA or NLPQL analysis cannot be run. If the problem defined is only one of constraint satisfaction, as in Figure: Case Where GDO Solves a Constraint Satisfaction Problem (p. 76), the single Pareto front displayed (from the Tradeoff chart) represents the feasible points that meet the constraints. The infeasible points that do not satisfy the constraints can also be displayed on the chart. This is a good way to demonstrate the feasibility boundaries of the problem in the output space (i.e., to determine the boundaries of the design envelope that reflects your preferences). Infeasible points can also be displayed on any chart where the optimization contains at least one constraint objective.
As shown in Figure: Case Where the Optimization Cannot be Run (p. 75), if the entire Objective column is set to "No Objective," the optimization process cannot be run, as no outputs have goals specified which could drive the algorithm. The fact that user input is required before the algorithm can be run is reflected both by the state icon and the Quick Help of the Optimization cell on the Schematic view. If you try to update the optimization with no goals set, you will also get an error message in the Message view.

Figure: Case Where the Optimization Cannot be Run

As shown in Figure: Case Where GDO Solves a Constraint Satisfaction Problem (p. 76), some combinations of Objective and Target Value settings specify that the problem is one of constraint satisfaction. The results obtained from these settings indicate only the boundaries of the feasible region in the design space. Currently the MOGA and NLPQL options cannot be used to solve a pure constraint satisfaction problem.

Figure: Case Where GDO Solves a Constraint Satisfaction Problem

The Target and Objective settings of the input parameters do not affect the sample generation part of the
MOGA and NLPQL methods. However, these settings will affect the ranking of the samples which generate
the three candidate designs, if the Optimization is updated.
Sample generation using the Screening method is not affected by any parameter setting. However, the
parameter settings do still affect the generation of candidate designs.
Setting the Objective to Maximize, Minimize, or Seek Target will always treat the objective as a goal.
Setting the Objective to = Target, >= Target or <= Target treats the objectives as constraints if Constraint
Handling (GDO) in the Optimization Properties view is set to As Hard Constraints. If Constraint Handling
(GDO) is set to As Goals, those objectives are treated as goals and not constraints. If all objectives are constraints, Screening is your only optimization option. If at least one of the objectives is a goal, MOGA and
NLPQL are optimization options.
Typically, the Screening method is best suited for conducting a preliminary design study, because it is a
low-resolution, fast, and exhaustive study that can be useful in locating approximate solutions quickly. The
solutions may then be used as a starting sample for the MOGA and NLPQL methods. Since the Screening
method does not depend on any parameter objectives, you can change the objectives after performing the
Screening analysis to view the candidates that meet different objective sets, allowing you to quickly perform
preliminary design studies. It's easy to keep changing the goals and constraints to view the different corresponding candidates, which are drawn from the original sample set.
For example, you may run the Screening process for 3,000 samples, then use the Tradeoff or Samples
charts to view the Pareto fronts. The solutions slider can be used in the chart Properties to display only the
prominent points you may be interested in (usually the first few fronts). When you do a MOGA optimization,
you can limit the number of Pareto fronts that are computed in the analysis.
Once all of your objectives are defined, click Update Optimization to generate up to three candidate designs. You may save any of the candidates by right-clicking and selecting Insert as Response Points or Insert as Design Points. See Adding Design Points and Response Points (p. 60) for more information.
Because the GDO algorithm works off of response surfaces, you should validate the best obtained candidate
results by saving the corresponding Design Points, solving them at the Project level, and comparing the
results.

Postprocessing GDO Results


The following topics are discussed in this section:
Postprocessing Overview
Understanding Sensitivities Charts
Understanding Tradeoff Charts
Understanding Samples Charts

Postprocessing Overview
Postprocessing of GDO optimization results is explained below. The postprocessing phase follows the generation of samples. Once you have updated your optimization, candidates are displayed in the table and one instance of each available chart is created for you in the Outline (you can create more charts by double-clicking in the toolbox). Select a chart and view the results, changing the chart's options to see different views of the parameter space.

Understanding Sensitivities Charts


The sensitivities charts available for Goal Driven Optimization display statistical, global sensitivities. For more
information about the Sensitivities charts, see:

Global Sensitivities (p. 59)

Using the Sensitivities Chart (GDO) (p. 69)

Sensitivities Chart (GDO) (p. 28)

Understanding Tradeoff Charts


In the Tradeoff chart produced after the optimization analysis, the samples are ranked by non-dominated
Pareto fronts and the tradeoffs which concern you may be viewed. To make sense of the true nature of the
tradeoffs, the plots must be viewed with the output parameters as the axes. This shows which output goals
can be achieved and whether this entails sacrificing the goal attainment of other outputs. Typically, a tradeoff
chart shows you a choice of possible, non-dominated solutions from which to choose.
When an optimization is updated, you can view the best candidates (up to three) from the sample set based
on the stated objectives. However, these results are not truly representative of the solution set, as this approach obtains results by ranking the solution by an aggregated weighted method. Schematically, this represents only a section of the available Pareto fronts. Changing the weights (using the Importance option
in the Optimization Table) will display different such sections. This postprocessing step helps in selecting
solutions if you are sure of your preferences for each parameter.
Figure: Typical TradeOff Study of Sample Showing First Pareto Front (p. 78) shows the results of the tradeoff
study performed on a MOGA sample set. The first Pareto front (non-dominated solutions) is shown as blue
points on the output-axis plot. The slider in the Properties view can be moved to the right to add more
fronts, effectively adding more points to the tradeoff chart. The additional points added this way are inferior
to the points in the first Pareto front in terms of the goals you specified, but in some cases, where there are
not enough first Pareto front points, these may be necessary to obtain the final design. You can right-click
on individual points and save them as Design Points or Response Points.


Figure: Typical TradeOff Study of Sample Showing First Pareto Front

It is also important to discuss the representation of feasible and infeasible points (infeasible points are available if any of the objectives are defined as constraints). The MOGA algorithm always ensures that feasible points are shown as being of better quality than the infeasible points, and different markers are used to indicate them in the chart. In the 2-D and 3-D tradeoff charts, rectangles denote feasible points and circles denote infeasible ones. You can enable or disable the display of infeasible points in the chart's Properties view.
Also, in both types of charts, the best Pareto front is blue and the fronts gradually transition to red (worst Pareto front). Figure: Two-dimensional TradeOff Plot Showing Feasible and Infeasible Points (p. 78) is a typical 2-D tradeoff chart with feasible and infeasible points.

Figure: Two-dimensional TradeOff Plot Showing Feasible and Infeasible Points

Understanding Samples Charts


The Samples chart is a postprocessing feature that allows you to explore a sample set given defined goals.
After solving an Optimization cell, a Samples chart will appear in the Outline under Charts.
The aim of this chart is to provide a multidimensional graphical representation of the parameter space you
are studying. The chart uses the parallel Y axes to represent all of the inputs and outputs. Each sample is
displayed as a group of line curves where each point is the value of one input or output parameter. The
color of the curve identifies the Pareto front that the sample belongs to, or the chart can be set so that the
curves display the best candidates and all other samples.
The Samples chart is a powerful exploration tool because of its interactivity. It allows you to first pick a Y axis with the mouse and then slide the yellow arrows that appear at the bottom and the top of each axis up or down in order to increase or decrease the axis bounds. Samples are dynamically hidden if they fall outside
of the bounds. Repeating the same operation with each axis allows you to manually explore and find tradeoffs.

Goal Driven Optimization Theory


In this section, theoretical aspects of goal driven optimization are briefly discussed.

Shifted Hammersley Sampling Method (p. 79)

Pareto Dominance in Multi-Objective Optimization (p. 80)

MOGA (Multi-Objective Genetic Algorithm) (p. 81)

Decision Support Process (p. 81)

Non-linear Programming by Quadratic Lagrangian (NLPQL) (p. 85)

Shifted Hammersley Sampling Method


The Shifted Hammersley method is the sampling strategy used for all sample generation, except for the
NLPQL method. The conventional Hammersley sampling algorithm is a quasi-random number generator
which has very low discrepancy and is used for quasi-Monte Carlo simulations. A low-discrepancy sequence
is defined as a sequence of points that approximate the equidistribution in a multi-dimensional cube in an
optimal way. In other words, the design space is populated almost uniformly by these sequences and, due
to the inherent properties of Monte Carlo sampling, dimensionality is not a problem (i.e., the number of
points does not increase exponentially with an increase in the number of input parameters). The conventional
Hammersley algorithm is constructed by using the radical inverse function. Any integer n can be represented
as a sequence of digits n0, n1, n2, ..., nm by the following equation:
$n = n_0\, n_1\, n_2\, n_3 \ldots n_m$        (1)

For example, consider the integer 687459, which can be represented this way as n0 = 6, n1 = 8, and so on.
Because this integer is represented with radix 10, we can write it as 687459 = 9 + 5 * 10 + 4 * 100 and so
on. In general, for a radix R representation, the equation is:
$n = n_m + n_{m-1} R + n_{m-2} R^2 + \cdots + n_0 R^m$        (2)

The inverse radical function is defined as the function which generates a fraction in (0, 1) by reversing the
order of the digits in Equation 2 (p. 79) about the decimal point, as shown below.
$\Phi_R(n) = 0.n_m n_{m-1} n_{m-2} \ldots n_0 = n_m R^{-1} + n_{m-1} R^{-2} + \cdots + n_0 R^{-(m+1)}$        (3)

Thus, for a k-dimensional search space, the Hammersley points are given by the following expression:
$H_k(i) = \left[\, i/N,\; \Phi_{R_1}(i),\; \Phi_{R_2}(i), \ldots, \Phi_{R_{k-1}}(i) \,\right]$        (4)

where i = 0, ..., N indicates the sample points. Now, from the plot of these points, it is seen that the first row
(corresponding to the first sample point) of the Hammersley matrix is zero and the last row is not 1. This
implies that, for the k-dimensional hypercube, the Hammersley sampler generates a block of points that are
skewed more toward the origin of the cube and away from the far edges and faces. To compensate for this
bias, a point-shifting process is proposed that shifts all Hammersley points by the amount below:
$\Delta = \dfrac{1}{2N}$        (5)

This moves the point set more toward the center of the search space and avoids unnecessary bias. Thus,
the initial population always provides unbiased, low-discrepancy coverage of the search space.
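
As an illustration of the sampling strategy described above, the radical inverse of Equation 3 and the shift of Equation 5 can be coded as in the following minimal Python sketch. This is not the DesignXplorer implementation; the function names and the choice of prime radices are assumptions made here for clarity.

# Illustrative sketch: shifted Hammersley sampling built from the radical
# inverse function (Equation 3) and the 1/(2N) shift (Equation 5).
def radical_inverse(n, base):
    """Reverse the base-R digits of n about the decimal point."""
    value, inv_base = 0.0, 1.0 / base
    while n > 0:
        n, digit = divmod(n, base)
        value += digit * inv_base
        inv_base /= base
    return value

def shifted_hammersley(num_points, num_dims):
    """Generate num_points samples in the num_dims-dimensional unit cube."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]    # radices for the k-1 inverse axes (assumed here)
    shift = 1.0 / (2.0 * num_points)                 # Equation 5
    samples = []
    for i in range(num_points):
        point = [i / num_points + shift]             # first coordinate i/N
        point += [radical_inverse(i, primes[d]) + shift for d in range(num_dims - 1)]
        samples.append(point)
    return samples

if __name__ == "__main__":
    for p in shifted_hammersley(8, 3):
        print(["%.4f" % c for c in p])

Running the sketch for 8 points in 3 dimensions prints a small, nearly uniform point set with no point at the origin, consistent with the bias compensation described above.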

Pareto Dominance in Multi-Objective Optimization


The concept of Pareto dominance is of extreme importance in multi-objective optimization, especially where
some or all of the objectives are mutually conflicting. In such a case, there is no single point that yields the
"best" value for all objectives (i.e., the Utopia Point). Instead, the best solutions, often called a Pareto or nondominated set, are a group of solutions such that selecting any one of them in place of another will always
sacrifice quality for at least one objective, while improving at least one other. Formally, the description of
such Pareto optimality for generic optimization problems can be formulated as in the following equations.
Taking a closer, more formal look at the multi-objective optimization problem, let the following denote the
set of all feasible (i.e., do not violate constraints) solutions:
$X := \{\, x : g(x) \le 0,\; h(x) = 0,\; x_l \le x \le x_u \,\}$        (6)

The problem can then be simplified to:


$\min_{x \in X} \vec{f}(x)$        (7)

If there exists $x^* \in X$ such that $x^*$ is optimal for all objective functions, this is expressed, for i = 1, ..., k, as:

$f_i(x^*) \le f_i(x) \quad \forall x \in X$        (8)

This indicates that x* is certainly a desirable solution. Unfortunately, this is a utopian situation that rarely exists, as it is unlikely that all f_i(x) will have minimum values for X at a common point x*. The question is then: What solution should be used? That is, how should an "optimal" solution be defined? First, consider the so-called ideal (utopian) solution. In order to define this solution, separately attainable minima must be found for all objective functions. Assuming there is one, let x^(i) be the solution of the scalar optimization problem:

$\min_{x \in X} f_i(x) = f_i^*$        (9)

Here $f_i^*$ is called the individual minimum for the scalar problem i; the vector $f^* = (f_1^*, \ldots, f_k^*)$ is called the ideal for a multi-objective optimization problem; and the point in X which determined this vector is the ideal solution.


It is usually not true that Equation 10 (p. 81) holds, although it would be useful, as the multi-objective problem would then have been solved by considering a sequence of scalar problems. It is necessary to define a new form of optimality, which leads to the concept of Pareto Optimality. Introduced by V. Pareto in 1896, it is still the most important concept in multi-objective optimization.
$f(x^*) = f^* = (f_1^*, \ldots, f_k^*)$        (10)

A point $x^* \in X$ is said to be Pareto Optimal for the problem if there is no other vector $x \in X$ such that, for all i = 1, ..., k:

$f_i(x) \le f_i(x^*)$        (11)

and, for at least one $i \in \{1, \ldots, k\}$:

$f_i(x) < f_i(x^*)$        (12)

This definition is based on the intuitive conviction that the point $x^* \in X$ is chosen as the optimal if no criterion
can be improved without worsening at least one other criterion. Unfortunately, the Pareto optimum almost
always gives not a single solution, but a set of solutions. Usually Pareto optimality is spoken of as being
global or local depending on the neighborhood of the solutions X, and in this case, almost all traditional
algorithms can at best guarantee a local Pareto optimality. However, this MOGA-based system, which incorporates global Pareto filters, yields the global Pareto front.
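
The dominance relation defined by Equation 11 and Equation 12 is easy to express in code. The following minimal Python sketch (illustrative only; it is not the MOGA Pareto filter itself) returns the first Pareto front of a small set of objective vectors for a minimization problem.

# Sketch of Pareto dominance for minimization: a dominates b if it is no
# worse in every objective and strictly better in at least one.
def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

if __name__ == "__main__":
    samples = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0), (2.5, 2.5), (4.0, 4.0)]
    print(first_pareto_front(samples))   # -> [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]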

MOGA (Multi-Objective Genetic Algorithm)


The MOGA used in GDO is a hybrid variant of the popular NSGA-II (Non-dominated Sorted Genetic AlgorithmII) based on controlled elitism concepts. Currently, only continuous problems can be solved. The Pareto
ranking scheme is done by a fast, non-dominated sorting method that is an order of magnitude faster than
traditional Pareto ranking methods. The constraint handling uses the same non-dominance principle as the
objectives, thus penalty functions and Lagrange multipliers are not needed. This also ensures that the feasible
solutions are always ranked higher than the infeasible solutions.
The first Pareto front solutions are archived in a separate sample set internally and are distinct from the
evolving sample set. This ensures minimal disruption of Pareto front patterns already available from earlier
iterations. You can control the selection pressure (and, consequently, the elitism of the process) to avoid
premature convergence by altering the parameter Percent Pareto.

Decision Support Process


The decision support process is a goal-based, weighted, aggregation-based design ranking technique.
Given n input parameters, m output parameters, and their individual targets, the collection of objectives is
combined into a single, weighted objective function, Φ, which is sampled by means of a direct Monte Carlo method using a uniform distribution. The candidate designs are subsequently ranked by ascending magnitude of the values of Φ. The proposed function for Φ (where all continuous input parameters have usable values of type "continuous") is given by the following:


$\Phi = \sum_{i=1}^{n} w_i N_i + \sum_{j=1}^{m} w_j M_j$        (13)

where:
wi and wj = weights defined in Equation 16 (p. 82)
Ni and Mj = normalized objectives for input and output parameters, respectively
The normalized objectives (metrics) are:
$N_i = \dfrac{x_t - x}{x_u - x_l}$        (14)

$M_j = \dfrac{y_t - y}{y_{\max} - y_{\min}}$        (15)

where:
x = current value for input parameter i
xt, yt = corresponding "target value"
y = current value for output parameter j
xl and xu = lower and upper values, respectively, for input parameter i
ymin and ymax = corresponding lower and upper bounds, respectively, for output parameter j
The fuzziness of the combined objective function derives from the weights w, which are simply defined as
follows:
$w_i = w_j = \begin{cases} 1.000, & \text{if the Importance is "Higher"} \\ 0.666, & \text{if the Importance is "Default"} \\ 0.333, & \text{if the Importance is "Lower"} \end{cases}$        (16)

The labels used are defined in Defining Design Objectives (p. 67).
The targets represent the desired values of the parameters, and are defined for the continuous input parameters as follows:
$x_t = \begin{cases} x, & \text{if Objective is "No Objective"} \\ x_l, & \text{if Objective is "Minimize"} \\ \frac{1}{2}(x_l + x_u), & \text{if Objective is "Seek Midpoint"} \\ x_u, & \text{if Objective is "Maximize"} \end{cases}$        (17)

and, for the output parameters, we have the following desired values:
$y_t = \begin{cases} y, & \text{if Objective is "No Objective"} \\ y_{\min}, & \text{if Objective is "Minimize" and a "Target" value is not defined} \\ y_t^*, & \text{if Objective is "Values <= Target", "Target" is defined, and } y \ge y_t^* \\ y, & \text{if Objective is "Values <= Target", "Target" is defined, and } y \le y_t^* \\ y_t^*, & \text{if Objective is "Seek Target" or "Values = Target"} \\ y, & \text{if Objective is "Values >= Target", "Target" is defined, and } y \ge y_t^* \\ y_t^*, & \text{if Objective is "Values >= Target", "Target" is defined, and } y \le y_t^* \\ y_{\max}, & \text{if Objective is "Maximize" and a "Target" value is not defined} \end{cases}$        (18)

where:
$y_t^*$ = user-specified target value

Thus, Equation 16 (p. 82) and Equation 17 (p. 82) constitute the input parameter goals for the continuous
input parameters and Equation 16 (p. 82) and Equation 18 (p. 83) constitute the output parameter goals.
The following section considers the case where the continuous input parameters have discrete Usability
values ("Levels" in the parameters' Table view) and there may be discrete input parameters of type "Discrete."
Let us consider the case where the Usable Values of the continuous input parameter are defined as the following:
$\{X\} = \{x_0, x_1, \ldots, x_{l-1}\}$        (19)

with l usable values, the following metric is defined:


$P_k = \dfrac{x_t - x}{x_{MAX} - x_{MIN}}$        (20)

where, as before,
xMAX = upper bound of the usable values
xMIN = lower bound of the usable values
The target value xt is given by the following:

$x_t = \begin{cases} x, & \text{if Objective is "No Objective"} \\ x_t^*, & \text{if Objective is "Values <= Target", "Target" is defined, and } x \ge x_t^* \\ x, & \text{if Objective is "Values <= Target", "Target" is defined, and } x \le x_t^* \\ x_t^*, & \text{if Objective is "Seek Target"} \\ x, & \text{if Objective is "Values >= Target", "Target" is defined, and } x \ge x_t^* \\ x_t^*, & \text{if Objective is "Values >= Target", "Target" is defined, and } x \le x_t^* \end{cases}$        (21)

Thus, the GDO objective equation becomes the following (for parameters with discrete Usable Values).
$\Phi = \sum_{i=1}^{n} w_i N_i + \sum_{j=1}^{m} w_j M_j + \sum_{l} w_l P_l$        (22)

Therefore, Equation 15 (p. 82), Equation 16 (p. 82), and Equation 20 (p. 83) constitute the input parameter goals for parameters which may be continuous or possess discrete usable values.
The norms and goals of Equation 20 (p. 83) and Equation 21 (p. 84) are also adopted to define the input goals for the discrete input parameters of the type "Discrete"; i.e., those discrete parameters whose Usable Alternatives indicate a whole number of some particular design feature (number of holes in a plate, number of stiffeners, etc.).
Thus, Equation 15 (p. 82), Equation 16 (p. 82), and Equation 20 (p. 83) constitute the input parameter goals for "Discrete" parameters.
Therefore, the GDO objective function equation for the most general case (where there are continuous and
discrete parameters) can be written as the following:
$\Phi = \underbrace{\sum_{i=1}^{n} w_i N_i}_{\text{Continuous Input}} + \underbrace{\sum_{j=1}^{m} w_j M_j}_{\text{Continuous Output}} + \underbrace{\sum_{f=1}^{l} w_f P_f}_{\text{Usability + Discrete}}$        (23)

where:
n = number of Continuous Input parameters
m = number of Continuous Output parameters
l = number of Continuous Input parameters with Usability Levels and "Discrete" parameters
From the normed values it is obvious that the lower the value of Φ, the better the design with respect to the desired values and importances. Thus, a quasi-random uniform sampling of design points is done by a Hammersley algorithm and the samples are sorted in ascending order of Φ. The desired number of designs is then drawn from the top of the sorted list. A crowding technique is employed to ensure that no two sampled design points are very close to each other in the space of the input parameters.
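
The ranking step can be illustrated with a small Python sketch. The weights, targets, and parameter names below are hypothetical examples, not values taken from the product, and the normalized distance is taken here as an absolute value so that a lower aggregate score is always better.

# Illustrative sketch of goal-based, weighted ranking of candidate designs.
def normalized_objective(value, target, lower, upper):
    """Distance from the target scaled by the parameter range (N_i or M_j)."""
    return abs(target - value) / (upper - lower)

def rank_candidates(candidates, goals):
    """goals: {name: (weight, target, lower, upper)}; returns candidates
    sorted by ascending aggregate objective (best design first)."""
    def phi(candidate):
        return sum(w * normalized_objective(candidate[name], t, lo, hi)
                   for name, (w, t, lo, hi) in goals.items())
    return sorted(candidates, key=phi)

if __name__ == "__main__":
    goals = {"thickness": (0.666, 1.0, 0.9, 1.1),   # seek 1.0, default importance (hypothetical)
             "mass":      (1.000, 0.0, 0.0, 5.0)}   # minimize, higher importance (hypothetical)
    candidates = [{"thickness": 1.05, "mass": 3.2},
                  {"thickness": 0.98, "mass": 4.0},
                  {"thickness": 1.00, "mass": 2.5}]
    for c in rank_candidates(candidates, goals):
        print(c)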

Rating Candidate Design Points


Each parameter range is divided into 6 zones, or rating scales. The location of a design candidate value in the range is measured according to the rating scales. For example, for parameter X with a range of 0.9 to 1.1 and a target of 1.1, the rating scale for a design candidate value of 1.0333 is calculated as follows:

(Absolute(1.0333 - 1.1) / (1.1 - 0.9)) * 6 - (6/2) = 2.001 - 3 ≈ -1 [one star]

Here 0 indicates neutral, negative values indicate closer to the target (down to -3), and positive values indicate farther away from the target (up to +3). Following the same procedure, the rating scale for a design candidate value of 0.9333 is 5.001 - 3 ≈ +2 [two crosses] (away from the target). Therefore, the extreme cases are as follows:

1. Design Candidate value of 0.9 (the worst): the rating scale is 6 - 3 = +3 [three crosses]

2. Design Candidate value of 1.1 (the best): the rating scale is 0 - 3 = -3 [three stars]

3. Design Candidate value of 1.0 (neutral): the rating scale is 3 - 3 = 0 [dash]
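
A minimal Python sketch of this rating calculation, mirroring the worked example above (target at the upper bound of the range; the function name is illustrative), is given below.

# Sketch of the six-zone rating scale: -3 (best, at the target) .. +3 (worst).
def rating_scale(value, target, lower, upper):
    """Map a candidate value to the -3..+3 scale from its distance to the target."""
    return round((abs(value - target) / (upper - lower)) * 6 - 3)

if __name__ == "__main__":
    for v in (1.0333, 0.9333, 0.9, 1.1, 1.0):
        print(v, "->", rating_scale(v, target=1.1, lower=0.9, upper=1.1))
    # ratings: -1, 2, 3, -3, 0 (matching the example above)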

Non-linear Programming by Quadratic Lagrangian (NLPQL)


NLPQL (Non-linear Programming by Quadratic Lagrangian) is a mathematical optimization algorithm developed by Klaus Schittkowski. This method solves constrained nonlinear programming problems of the form:

Minimize:
$f = f(\{x\})$

Subject to:
$g_k(\{x\}) \le 0, \quad k = 1, 2, \ldots, K$
$h_l(\{x\}) = 0, \quad l = 1, 2, \ldots, L$

where
$\{x_L\} \le \{x\} \le \{x_U\}$
It is assumed that the objective function and constraints are continuously differentiable. The idea is to generate a sequence of quadratic programming subproblems obtained by a quadratic approximation of the Lagrangian function and a linearization of the constraints. Second-order information is updated by a quasi-Newton formula, and the method is stabilized by an additional (Armijo) line search.
The method presupposes that the problem size is not very large (fewer than 2000 input variables) and that it is well scaled. Also, the accuracy of the method depends on the accuracy of the gradients. Since, for most practical problems, analytical gradients are unavailable, it is imperative that the numerical (finite-difference-based) gradients be as accurate as possible.

Newton's iterative method


Before the actual derivation of the NLPQL equations, Newton's iterative method for the solution of nonlinear equation sets is reviewed. Let f(x) be a multivariable function such that it can be expanded about the point x in a Taylor series.

$f(x + \Delta x) \approx f(x) + \{\Delta x\}^T \{\nabla f(x)\} + \frac{1}{2}\{\Delta x\}^T [\nabla^2 f(x)]\{\Delta x\}$        (24)

where it is assumed that the Taylor series models a local area of the function by a quadratic approximation. The objective is to devise an iterative scheme by linearizing the vector Equation 24 (p. 86). To this end, it is assumed that at the end of the iterative cycle, Equation 24 (p. 86) would be exactly valid. This implies that the first variation of the following expression with respect to {Δx} must be zero.

$\psi(\Delta x) = f(x + \Delta x) - \left[ f(x) + \{\Delta x\}^T \{\nabla f(x)\} + \frac{1}{2}\{\Delta x\}^T [\nabla^2 f(x)]\{\Delta x\} \right]$        (25)

which implies that


$f(x + \Delta x)_{,\Delta x} - \left( \{\nabla f(x)\} + [\nabla^2 f(x)]\{\Delta x\} \right) = 0$        (26)

The first expression indicates the first variation of the converged solution with respect to the increment in
the independent variable vector. This gradient is necessarily zero since the converged solution clearly does
not depend on the step-length. Thus, Equation 26 (p. 86) can be written as the following:
$\{x_{j+1}\} = \{x_j\} - [\nabla^2 f(x_j)]^{-1}\{\nabla f(x_j)\}$        (27)

where the index "j" indicates the iteration. Equation 27 (p. 86) is thus used in the main quadratic programming
scheme.
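
For a function of a single variable, where the gradient and Hessian reduce to the first and second derivatives, the update of Equation 27 can be sketched as follows (illustrative code only, not part of the product):

# Sketch of the Newton update x_{j+1} = x_j - f''(x_j)^-1 * f'(x_j) in 1-D.
def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = grad(x) / hess(x)    # [Hessian]^-1 * gradient in one dimension
        x -= step
        if abs(step) < tol:
            break
    return x

if __name__ == "__main__":
    # minimize f(x) = (x - 2)^2 + 1, whose minimum is at x = 2
    x_star = newton_minimize(grad=lambda x: 2.0 * (x - 2.0),
                             hess=lambda x: 2.0,
                             x0=10.0)
    print(x_star)   # -> 2.0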

NLPQL derivation:
Consider the following single-objective nonlinear optimization problem. It is assumed that the problem is
smooth and analytic throughout and is a problem of N decision variables.
Minimize:
$f = f(\{x\})$

Subject to:
$g_k(\{x\}) \le 0, \quad k = 1, 2, \ldots, K$
$h_l(\{x\}) = 0, \quad l = 1, 2, \ldots, L$

where
$\{x_L\} \le \{x\} \le \{x_U\}$        (28)

Where K and L are the numbers of inequality and equality constraints. In many cases the inequality constraints are bounded above and below; in such cases, it is customary to split the constraint into two inequality constraints. In order to approximate the quadratic sub-problem, assuming the presence of only equality constraints, the Lagrangian for Equation 28 (p. 86) is written as:
$L(\{x\}, \{\lambda\}) = f(\{x\}) + \{\lambda\}^T \{h(\{x\})\}$        (29)

Where {λ} is an L-dimensional vector of (non-zero) Lagrange multipliers. Thus, Equation 29 (p. 87) becomes a functional which depends on two sets of independent vectors. In order to minimize this expression, we seek the stationarity of this functional with respect to the two sets of vectors. This gives rise to two sets of vector expressions, as the following:
$L(\{x\},\{\lambda\})_{,\{x\}} = \{\nabla L\} = \{\nabla f\} + [H]\{\lambda\} = \{0\}$
$L(\{x\},\{\lambda\})_{,\{\lambda\}} = \{h(\{x\})\} = \{0\}$        (30)

Equation 30 (p. 87) defines the Karush-Kuhn-Tucker (KKT) conditions which are the necessary conditions for
the existence of the optimal point. The first equation comprises "N" nonlinear algebraic equations and the
second one comprises "L" nonlinear algebraic equations. The matrix [H] is an (N × L) matrix defined as the following:
$[H]_{(N \times L)} = \left[ \{\nabla h_1(\{x\})\},\; \{\nabla h_2(\{x\})\},\; \{\nabla h_3(\{x\})\}, \ldots \right]$        (31)

Thus, in Equation 31 (p. 87), each column is a gradient of the corresponding equality constraint. For our
convenience, the nonlinear equations of Equation 30 (p. 87) can be written as the following:
$\{F(\{Y\})\} = \{0\}$        (32)

Where, Equation 32 (p. 87) is a (N+L) system of nonlinear equations. The independent variable set {Y} can
be written as:
$\{Y\} = \begin{Bmatrix} \{x\} \\ \{\lambda\} \end{Bmatrix}$        (33)

while the functional set {F} is written as the following:


$\{F\} = \begin{Bmatrix} \{\nabla L\} \\ \{h\} \end{Bmatrix}$        (34)

Referring to the section on Newton based methods, the vector {Y} in Equation 33 (p. 87) is updated as the
following:
$\{Y_{j+1}\} = \{Y_j\} + \{\Delta Y_j\}$        (35)

The increment of the vector is given by the iterative scheme in Equation 27 (p. 86). Referring to Equation 32 (p. 87), Equation 33 (p. 87), and Equation 34 (p. 87), the iterative equation is expressed as the following:

$[\nabla F_j]\{\Delta Y_j\} = -\{F(\{Y_j\})\}$        (36)

Note that this is only a first order approximation of the Taylor expansion of Equation 32 (p. 87). This is in
contrast to Equation 27 (p. 86) where a quadratic approximation is done. This is because in Equation 33 (p. 87),
a first order approximation has already been done. The matrices and the vectors in Equation 36 (p. 88) can
be expanded as the following:
$\{\Delta Y\} = \begin{Bmatrix} \{\Delta x\} \\ \{\Delta\lambda\} \end{Bmatrix}$        (37)

and
$[\nabla F] = \begin{bmatrix} [\nabla^2 L]_{(N \times N)} & [H]_{(N \times L)} \\ [H]^T_{(L \times N)} & [0]_{(L \times L)} \end{bmatrix}_{((N+L) \times (N+L))}$        (38)

This is obtained by taking the gradients of Equation 34 (p. 87) with respect to the two variable sets. The submatrix $[\nabla^2 L]_{(N \times N)}$ is the (N × N) Hessian of the Lagrange function in implicit form.

To demonstrate how Equation 38 (p. 88) is formed, let us consider the following simple case. Given a vector
of two variables x and y, we write:
$\{V\} = \begin{Bmatrix} \{a(x, y)\} \\ \{b(x, y)\} \end{Bmatrix} = \begin{Bmatrix} a_1(x, y) \\ a_2(x, y) \\ b_1(x, y) \\ b_2(x, y) \end{Bmatrix}$
It is required to find the gradient (i.e., the Jacobian of the vector function V). To this effect, the derivation of the
Jacobian is evident because in the present context, the vector V indicates a set of nonlinear (algebraic)
equations and the Jacobian is the coefficient matrix which "links" the increment of the independent variable
vector to the dependent variable vector. Thus, we can write:
$\{\nabla V\} = \nabla \begin{Bmatrix} \{a(x, y)\} \\ \{b(x, y)\} \end{Bmatrix} = \begin{bmatrix} \partial a_1 / \partial x & \partial a_1 / \partial y \\ \partial a_2 / \partial x & \partial a_2 / \partial y \\ \partial b_1 / \partial x & \partial b_1 / \partial y \\ \partial b_2 / \partial x & \partial b_2 / \partial y \end{bmatrix}$

Thus, the Jacobian matrix is formed. In some applications, the equation is written in the following form:

$\{\nabla V\} = \nabla \begin{Bmatrix} \{a(x, y)\} \\ \{b(x, y)\} \end{Bmatrix} = \begin{bmatrix} \partial a_1 / \partial x & \partial a_2 / \partial x & \partial b_1 / \partial x & \partial b_2 / \partial x \\ \partial a_1 / \partial y & \partial a_2 / \partial y & \partial b_1 / \partial y & \partial b_2 / \partial y \end{bmatrix}$

Where, the column "i" indicates the gradient of the "i-th" component of the dependent variable vector with
respect to the independent vector. This is the formalism we use in determining Equation 38 (p. 88).
Equation 36 (p. 88) may be rewritten as the following:
$\begin{bmatrix} [\nabla^2 L]_{(N \times N)} & [H]_{(N \times L)} \\ [H]^T_{(L \times N)} & [0]_{(L \times L)} \end{bmatrix}^{(j)} \begin{Bmatrix} \{\Delta x\} \\ \{\Delta\lambda\} \end{Bmatrix}^{(j)} = -\begin{Bmatrix} \{\nabla L\} \\ \{h\} \end{Bmatrix}^{(j)}$        (39)

Solving Equation 39 (p. 89) iteratively solves the nonlinear equation set in a series of linear steps, until the point when the increment is negligible. The update schemes for the independent variable and Lagrange multiplier vectors x and λ are written as the following:
$\{x_{j+1}\} = \{x_j\} + \{\Delta x_j\}, \qquad \{\lambda_{j+1}\} = \{\lambda_j\} + \{\Delta\lambda_j\}$        (40)

The individual equations of Equation 39 (p. 89) are now written separately. The first equation (corresponding to minimization with respect to x) may be written as:

$[\nabla^2 L]_j \{\Delta x\}_j + [H]_j \{\Delta\lambda\}_j = -\{\nabla L\}_j$
$[\nabla^2 L]_j \{\Delta x\}_j + [H]_j \{\Delta\lambda\}_j = -\{\nabla f\}_j - [H]_j \{\lambda\}_j$        (41)
$[\nabla^2 L]_j \{\Delta x\}_j + [H]_j \{\lambda\}_{j+1} = -\{\nabla f\}_j$

The last step in Equation 41 (p. 89) is done by using Equation 40 (p. 89). Thus, using Equation 41 (p. 89) and
Equation 39 (p. 89), the iterative scheme can be rewritten in a simplified form as:
$\begin{bmatrix} [\nabla^2 L]_{(N \times N)} & [H]_{(N \times L)} \\ [H]^T_{(L \times N)} & [0]_{(L \times L)} \end{bmatrix}^{(j)} \begin{Bmatrix} \{\Delta x\}^{(j)} \\ \{\lambda\}^{(j+1)} \end{Bmatrix} = -\begin{Bmatrix} \{\nabla f\} \\ \{h\} \end{Bmatrix}^{(j)}$        (42)

Thus, Equation 42 (p. 89) can be used directly to compute the (j+1)th value of the Lagrange multiplier vector.
Note that by using Equation 42 (p. 89), it is possible to compute the update of x and the new value of the
Lagrange multiplier vector in the same iterative step. Equation 42 (p. 89) shows the general scheme by
which the KKT optimality condition can be solved iteratively for a generalized optimization problem.
Now, consider the following quadratic approximation problem as given by:

$Q = \{\nabla f\}^T \{\Delta x\} + \frac{1}{2}\{\Delta x\}^T [\nabla^2 L]\{\Delta x\}$        (43)

Subject to the equality constraints given by the following:


$\{h\}_{(L \times 1)} + [H]^T_{(L \times N)} \{\Delta x\} = \{0\}$        (44)

Where the definitions of the matrices in Equation 43 (p. 90) and Equation 44 (p. 90) were given earlier. In order to solve the quadratic minimization problem, let us form the Lagrangian as:

$L = \{\nabla f\}^T \{\Delta x\} + \frac{1}{2}\{\Delta x\}^T [\nabla^2 L]\{\Delta x\} + \{\lambda\}^T \{h\} + \{\lambda\}^T [H]^T \{\Delta x\}$        (45)

Now, the KKT conditions may be derived (as done earlier) by taking gradients of the Lagrangian in Equation 45 (p. 90), as the following:

$L_{,\{\Delta x\}} = \{\nabla f\} + [\nabla^2 L]\{\Delta x\} + [H]\{\lambda\} = \{0\}$
$L_{,\{\lambda\}} = \{h\} + [H]^T \{\Delta x\} = \{0\}$        (46)

In a condensed matrix form Equation 46 (p. 90) may be written as the following:
$\begin{bmatrix} [\nabla^2 L] & [H] \\ [H]^T & [0] \end{bmatrix} \begin{Bmatrix} \{\Delta x\} \\ \{\lambda\} \end{Bmatrix} = -\begin{Bmatrix} \{\nabla f\} \\ \{h\} \end{Bmatrix}$        (47)

Equation 47 (p. 90) is the same as Equation 42 (p. 89). This implies that the iterative scheme in Equation 42 (p. 89) actually solves a quadratic subproblem (Equation 43 (p. 90) and Equation 44 (p. 90)) in the domain Δx. If the real problem is quadratic, then this iterative scheme solves the exact problem.
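
As a concrete illustration, the linear KKT system of Equation 47 can be assembled and solved directly for a small equality-constrained quadratic subproblem. The sketch below assumes NumPy is available and uses arbitrary illustrative matrices; it is not the NLPQL implementation.

import numpy as np

# Sketch only: assemble and solve the KKT system of Equation 47.
d2L = np.array([[4.0, 1.0],
                [1.0, 3.0]])            # [grad^2 L], symmetric positive definite (illustrative)
H   = np.array([[1.0],
                [1.0]])                 # one equality constraint: x1 + x2 = 1 (illustrative)
grad_f = np.array([1.0, 2.0])           # {grad f} at the current point
h      = np.array([-1.0])               # constraint residual at the current point

n, l = d2L.shape[0], H.shape[1]
kkt = np.block([[d2L, H],
                [H.T, np.zeros((l, l))]])
rhs = -np.concatenate([grad_f, h])

sol = np.linalg.solve(kkt, rhs)
dx, lam = sol[:n], sol[n:]
print("step  :", dx)
print("lambda:", lam)
# the step satisfies the linearized constraint: H^T dx + h = 0
print("residual:", H.T @ dx + h)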
On addition of inequality constraints, the Lagrangian of the actual problem can be written as the following:
$L(\{x\}, \{\lambda\}, \{\mu\}, \{y\}) = f(\{x\}) + \{\lambda\}^T \{h(\{x\})\} + \{\mu\}^T \{g(\{x\})\} + \{\mu\}^T \{y^2\}$        (48)

Note that the inequality constraints have been converted to equality constraints by using a set of slack
variables y as the following:
$g_k(\{x\}) + (y_k)^2 = 0, \quad k = 1, 2, \ldots, M$        (49)

The squared term is used to ensure that the slack variable remains positive which is required to satisfy
Equation 49 (p. 90). The Lagrangian in Equation 48 (p. 90) acts as an enhanced objective function. It is seen
that the only case where the additional terms may be active is when the constraints are not satisfied.

The KKT conditions as derived from Equation 48 (p. 90) (by taking first variations with respect to the independent variable vectors) are:
$L_{,\{x\}} = \{\nabla L\} = \{\nabla f\} + [H]\{\lambda\} + [G]\{\mu\} = \{0\}_{(N \times 1)}$
$L_{,\{\lambda\}} = \{h(\{x\})\} = \{0\}_{(L \times 1)}$
$L_{,\{\mu\}} = \{g(\{x\})\} + \{y^2\} = \{0\}_{(M \times 1)}$        (50)
$L_{,\{y\}} = \{2\mu y\} = \{0\}_{(M \times 1)}$

From the KKT conditions in Equation 50 (p. 91), it is evident that there are (N + L + 2M) equations for a similar number of unknowns, thus this equation set possesses a unique solution. Let this (optimal) solution be denoted x*. At this point, a certain number of constraints will be active and some others will be inactive. Let the number of active inequality constraints be p and the number of active equality constraints be q. By an active constraint it is assumed that the constraint is at its threshold value of zero. Thus, let J1 and J2 be the sets of active and inactive equality constraints (respectively) and K1 and K2 be the sets of active and inactive inequality constraints, respectively. Thus, we can write the following relations:

$J_1 \cap J_2 = \varnothing;\;\; \mathrm{meas}(J_1 \cup J_2) = L;\;\; \mathrm{meas}(J_1) = q;\;\; \mathrm{meas}(J_2) = (L - q)$
$K_1 \cap K_2 = \varnothing;\;\; \mathrm{meas}(K_1 \cup K_2) = M;\;\; \mathrm{meas}(K_1) = p;\;\; \mathrm{meas}(K_2) = (M - p)$        (51)

Where meas(·) indicates the count of the elements of the set under consideration. These sets thus partition the constraints into active and inactive sets. Thus, we can write:

$h_k(\{x\}) = 0, \quad k \notin J_2$
$g_j(\{x\}) = 0, \;\; y_j = 0, \;\; \mu_j \ne 0, \quad j \in K_1$        (52)
$g_j(\{x\}) < 0, \;\; y_j \ne 0, \;\; \mu_j = 0, \quad j \notin K_1$

Thus, it can be stated that the last three equations in Equation 50 (p. 91) may be represented by Equation 52 (p. 91). These are the optimality conditions for constraint satisfaction. From these equations, we can now eliminate y such that the Lagrangian in Equation 48 (p. 90) will depend on only three independent variable vectors. From the last two conditions in Equation 52 (p. 91), we can write the following condition, which is always valid for an optimal point:

$\mu_j\, g_j(\{x\}) = 0$        (53)

Using Equation 53 (p. 91) in the set Equation 50 (p. 91), the KKT optimality conditions may be written as the
following:
$\{\nabla L\} = \{\nabla f\} + [H]\{\lambda\} + [G]\{\mu\} = \{0\}_{(N \times 1)}$
$\{h(\{x\})\} = \{0\}_{(L \times 1)}$        (54)
$\{\mu\, g(\{x\})\} = \{0\}_{(M \times 1)}$

Thus, the new set contains only (N + L + M) unknowns. Now, following the same logic as in Equation 33 (p. 87) as done earlier, let us express Equation 54 (p. 91) in the same form as in Equation 32 (p. 87). This represents
an (N + L + M) system of nonlinear equations. The independent variable set may be written in vector form as the following:
$\{Y\} = \begin{Bmatrix} \{x\} \\ \{\lambda\} \\ \{\mu\} \end{Bmatrix}$        (55)

Newton's iterative scheme is also used here, thus the same equations as in Equation 35 (p. 87) and Equation 36 (p. 88) also apply. Thus, following Equation 36 (p. 88), we can write:
$\{\Delta Y\} = \begin{Bmatrix} \{\Delta x\} \\ \{\Delta\lambda\} \\ \{\Delta\mu\} \end{Bmatrix}$        (56)

Taking the first variation of the KKT equations in Equation 54 (p. 91) and equating to zero, the sub-quadratic
equation is formulated as the following:
$\begin{bmatrix} [\nabla^2 L] & [H] & [G] \\ [H]^T & [0] & [0] \\ \mathrm{diag}(\mu)[G]^T & [0] & \mathrm{diag}(g) \end{bmatrix} \begin{Bmatrix} \{\Delta x\} \\ \{\Delta\lambda\} \\ \{\Delta\mu\} \end{Bmatrix} = -\begin{Bmatrix} \{\nabla L\} \\ \{h\} \\ \{\mu\, g\} \end{Bmatrix}$        (57)

where diag(·) indicates a diagonal matrix.
At the jth step, the first equation can be written (by linearization) as:

$[\nabla^2 L]_j \{\Delta x\}_j + [H]_j \{\Delta\lambda\}_j + [G]_j \{\Delta\mu\}_j = -\{\nabla f\}_j - [H]_j \{\lambda\}_j - [G]_j \{\mu\}_j$        (58)

which simplifies to:

$[\nabla^2 L]_j \{\Delta x\}_j + [H]_j \{\lambda\}_{j+1} + [G]_j \{\mu\}_{j+1} = -\{\nabla f\}_j$        (59)

Thus, the linearized set of equations for Newtons method to be applied can be written in an explicit form
as:

$\begin{bmatrix} [\nabla^2 L]_j & [H]_j & [G]_j \\ [H]^T_j & [0] & [0] \\ \mathrm{diag}(\mu)_j [G]^T_j & [0] & \mathrm{diag}(g)_j \end{bmatrix} \begin{Bmatrix} \{\Delta x\}_j \\ \{\lambda\}_{j+1} \\ \{\mu\}_{j+1} \end{Bmatrix} = -\begin{Bmatrix} \{\nabla f\}_j \\ \{h\}_j \\ \mathrm{diag}(\mu)_j \{g\}_j \end{Bmatrix}$        (60)

So, in the presence of both equality and inequality constraints, Equation 60 (p. 93) can be used in a quasi-Newton framework to determine the increments {Δx}_j and the Lagrange multipliers {λ}_{j+1} and {μ}_{j+1} when stepping from iteration j to j+1. The Hessian matrix $[\nabla^2 L]$ is not computed directly but is estimated and updated with a BFGS-type scheme.


Understanding Six Sigma Analysis


A Six Sigma Analysis allows you to determine the extent to which uncertainties in the model affect the results
of an analysis. An uncertainty (or random quantity) is a parameter whose value is impossible to determine
at a given point in time (if it is time-dependent) or at a given location (if it is location-dependent). An example
is ambient temperature; you cannot know precisely what the temperature will be one week from now in a
given city.
A Six Sigma Analysis uses statistical distribution functions (such as the Gaussian, or normal, distribution, the
uniform distribution, etc.) to describe uncertain parameters.
Six Sigma Analysis allows you to determine whether your product satisfies Six Sigma quality criteria. A
product has Six Sigma quality if only 3.4 parts out of every 1 million manufactured fail. This quality definition
is based on the assumption that an output parameter relevant to the quality and performance assessment
follows a Gaussian distribution, as shown below.
An output parameter that characterizes product performance is typically used to determine whether a
product's performance is satisfactory. The parameter must fall within the interval bounded by the lower
specification limit (LSL) and the upper specification limit (USL). Sometimes only one of these limits exists.

An example of this is a case when the maximum von Mises stress in a component must not exceed the yield
strength. The relevant output parameter is, of course, the maximum von Mises stress and the USL is the
yield strength. The lower specification limit is not relevant. The area below the probability density function
falling outside the specification interval is a direct measure of the probability that the product does not
conform to the quality criteria, as shown above. If the output parameter does follow a Gaussian distribution,
then the product satisfies a Six Sigma quality criterion if both specification limits are at least six standard
deviations away from the mean value.
In reality, an output parameter rarely exactly follows a Gaussian distribution. However, the definition of Six
Sigma quality is inherently probabilistic -- it represents an admissible probability that parts do not conform
to the quality criteria defined by the specified limits. The nonconformance probability can be calculated no
matter which distribution the output parameter actually follows. For distributions other than Gaussian, the

Release 12.1 - 2009 SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information
of ANSYS, Inc. and its subsidiaries and affiliates.

95

Understanding Six Sigma Analysis


Six Sigma level is not really six standard deviations away from the mean value, but it does represent a
probability of 3.4 parts per million, which is consistent with the definition of Six Sigma quality.
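
For a Gaussian output parameter, the nonconformance probability can be evaluated directly from the mean, the standard deviation, and the specification limits. The following sketch uses only the Python standard library, and the numbers in the example are illustrative.

# Sketch: probability of nonconformance for a Gaussian output parameter.
from math import erf, sqrt

def gaussian_cdf(x, mean, std):
    """Cumulative distribution function of a normal distribution."""
    return 0.5 * (1.0 + erf((x - mean) / (std * sqrt(2.0))))

def nonconformance(mean, std, lsl=None, usl=None):
    """Probability that the output falls below the LSL or above the USL."""
    p = 0.0
    if lsl is not None:
        p += gaussian_cdf(lsl, mean, std)
    if usl is not None:
        p += 1.0 - gaussian_cdf(usl, mean, std)
    return p

if __name__ == "__main__":
    # illustrative case: stress mean 300 MPa, standard deviation 10 MPa,
    # USL (yield strength) three standard deviations above the mean
    print(nonconformance(mean=300.0, std=10.0, usl=330.0))   # ~1.35e-3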
This section describes the Six Sigma Analysis tool and how to use it to perform a Six Sigma Analysis.

Principles (SSA)
Computer models are described with specific numerical and deterministic values: material properties are
entered using certain values, the geometry of the component is assigned a certain length or width, etc. An
analysis based on a given set of specific numbers and values is called a deterministic analysis. The accuracy
of a deterministic analysis depends upon the assumptions and input values used for the analysis.
While scatter and uncertainty naturally occur in every aspect of an analysis, deterministic analyses do not
take them into account. To deal with uncertainties and scatter, use Six Sigma Analysis, which allows you to
answer the following questions:

If the input variables of a finite element model are subject to scatter, how large is the scatter of the
output parameters? How robust are the output parameters? Here, output parameters can be any parameter that ANSYS Workbench can calculate. Examples are the temperature, stress, strain, or deflection
at a node, the maximum temperature, stress, strain, or deflection of the model, etc.

If the output is subject to scatter due to the variation of the input variables, then what is the probability
that a design criterion given for the output parameters is no longer met? How large is the probability
that an unexpected and unwanted event takes place (i.e., what is the failure probability)?

Which input variables contribute the most to the scatter of an output parameter and to the failure
probability? What are the sensitivities of the output parameter with respect to the input variables?

Six Sigma Analysis can be used to determine the effect of one or more variables on the outcome of the
analysis. In addition to the Six Sigma Analysis techniques available, ANSYS Workbench offers a set of strategic
tools to enhance the efficiency of the Six Sigma Analysis process. For example, you can graph the effects of
one input parameter versus an output parameter, and you can easily add more samples and additional
analysis loops to refine your analysis.
In traditional deterministic analyses, uncertainties are either ignored or accounted for by applying conservative
assumptions. You would typically ignore uncertainties if you know for certain that the input parameter has
no effect on the behavior of the component under investigation. In this case, only the mean values or some
nominal values are used in the analysis. However, in some situations, the influence of uncertainties exists but is still neglected; for example, the scatter of the thermal expansion coefficient is usually ignored.

Example 1 Accounting for Uncertainties


If you are performing a thermal analysis and want to evaluate the thermal stresses, the equation is:
$\sigma_{\mathrm{therm}} = E\,\alpha\,\Delta T$
because the thermal stresses are directly proportional to the Young's modulus as well as to the thermal expansion coefficient of the material.
The table below shows the probability that the thermal stresses will be higher than expected, taking uncertainty variables into account.

Uncertainty variables taken into account | Probability that the thermal stresses are more than 5% higher than expected | Probability that the thermal stresses are more than 10% higher than expected

Young's modulus (Gaussian distribution with 5% standard deviation) | ~16% | ~2.3%

Young's modulus and thermal expansion coefficient (each with Gaussian distribution with 5% standard deviation) | ~22% | ~8%
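
Statements like the ones in this table can be checked with a direct Monte Carlo sketch. The nominal values below are arbitrary and the estimates are approximate (the product of two Gaussian variables is not itself Gaussian), so the printed probabilities will be close to, but not exactly, the tabulated figures.

# Sketch: Monte Carlo estimate of P(thermal stress exceeds nominal by 5% / 10%)
# for stress = E * alpha * dT with Gaussian scatter (5% standard deviation).
import random

def prob_exceed(n_samples, scatter_alpha, seed=1):
    random.seed(seed)
    E_nom, a_nom, dT = 2.0e11, 1.2e-5, 100.0     # illustrative nominal values
    nominal = E_nom * a_nom * dT
    over_5 = over_10 = 0
    for _ in range(n_samples):
        E = random.gauss(E_nom, 0.05 * E_nom)
        a = random.gauss(a_nom, 0.05 * a_nom) if scatter_alpha else a_nom
        stress = E * a * dT
        over_5 += stress > 1.05 * nominal
        over_10 += stress > 1.10 * nominal
    return over_5 / n_samples, over_10 / n_samples

if __name__ == "__main__":
    print("E only     :", prob_exceed(200_000, scatter_alpha=False))
    print("E and alpha:", prob_exceed(200_000, scatter_alpha=True))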

Reliability, Quality, and Safety Issues


Use Six Sigma Analysis when issues of reliability, quality, and safety are paramount.
Reliability is typically a concern when product or component failures have significant financial consequences
(costs of repair, replacement, warranty, or penalties) or worse, can result in injury or loss of life.
If you use a conservative assumption, the difference in thermal stresses shown above tells you that uncertainty
or randomness is involved. Conservative assumptions are usually expressed in terms of safety factors.
Sometimes regulatory bodies demand safety factors in certain procedural codes. If you are not faced with
such restrictions or demands, then using conservative assumptions and safety factors can lead to inefficient
and costly over-design. By using Six Sigma Analysis methods, you can avoid over-design while still ensuring
the safety of the component.
Six Sigma Analysis methods even enable you to quantify the safety of the component by providing a probability that the component will survive operating conditions. Quantifying a goal is the necessary first step
toward achieving it.

Guidelines for Selecting SSA Variables


This section presents useful guidelines for defining your Six Sigma Analysis variables.

Choosing and Defining Uncertainty Variables


First, you should:

Specify a reasonable range of values for each uncertainty variable.

Set reasonable limits on the variability for each uncertainty variable.

More information about choosing and defining uncertainty variables can be found in the following sections.
Uncertainty Variables for Response Surface Analyses
Choosing a Distribution for a Random Variable
Distribution Functions

Uncertainty Variables for Response Surface Analyses


The number of simulation loops that are required for a Response Surface analysis depends on the number
of uncertainty variables. Therefore, you want to select the most important input variable(s), the ones you
know have a significant impact on the result parameters. If you are unsure which uncertainty variables are
important, include all of the random variables you can think of and then perform a Monte Carlo analysis.

After you learn which uncertainty variables are important and should be included in your Response Surface
Analysis, you can eliminate those that are unnecessary.

Choosing a Distribution for a Random Variable


The type and source of the data you have determines which distribution functions can be used or are best
suited to your needs.
Measured Data
Mean Values, Standard Deviation, Exceedance Values
No Data

Measured Data
If you have measured data, then you must first know how reliable that data is. Data scatter is not just an
inherent physical effect, but also includes inaccuracy in the measurement itself. You must consider that the
person taking the measurement might have applied a "tuning" to the data. For example, if the data measured
represents a load, the person measuring the load may have rounded the measurement values; this means
that the data you receive are not truly the measured values. The amount of this tuning could provide a deterministic bias in the data that you need to address separately. If possible, you should discuss any bias that
might have been built into the data with the person who provided that data to you.
If you are confident about the quality of the data, then how you proceed depends on how much data you
have. In a single production field, the amount of data is typically sparse. If you have only a small amount of
data, use it only to evaluate a rough figure for the mean value and the standard deviation. In these cases,
you could model the uncertainty variable as a Gaussian distribution if the physical effect you model has no
lower and upper limit, or use the data and estimate the minimum and maximum limit for a uniform distribution.
In a mass production field, you probably have a lot of data. In these cases you could use a commercial
statistical package that will allow you to actually fit a statistical distribution function that best describes the
scatter of the data.

Mean Values, Standard Deviation, Exceedance Values


The mean value and the standard deviation are most commonly used to describe the scatter of data. Frequently, information about a physical quantity is given in a form such as "100 ± 5.5." Often, this form means
that the value "100" is the mean value and "5.5" is the standard deviation. Data in this form implies a Gaussian distribution, but you must verify this (a mean value and standard deviation can be provided for any
collection of data regardless of the true distribution type). If you have more information, for example, you
know that the data is lognormal distributed, then Six Sigma Analysis allows you to use the mean value and
standard deviation for a lognormal distribution.
Sometimes the scatter of data is also specified by a mean value and an exceedance confidence limit. The
yield strength of a material is sometimes given in this way; for example, a 99% exceedance limit based on
a 95% confidence level is provided. This means that, from the measured data, we can be sure by 95% that
in 99% of all cases the property values will exceed the specified limit and only in 1% of all cases will they
drop below the specified limit. The supplier of this information is using the mean value, the standard deviation,
and the number of samples of the measured data to derive this kind of information. If the scatter of the
data is provided in this way, the best way to pursue this further is to ask for more details from the data
supplier. Because the given exceedance limit is based on the measured data and its statistical assessment,
the supplier might be able to provide you with the details that were used.

If the data supplier does not give you any further information, then you could consider assuming that the number of measured samples was large. If the given exceedance limit is denoted with $x_{1-\alpha/2}$ and the given mean value is denoted with $\bar{x}$, then the standard deviation can be derived from the equation:

$\sigma = \dfrac{\left| x_{1-\alpha/2} - \bar{x} \right|}{C}$
where the values for the coefficient C are:

Exceedance Probability | C
99.5% | 2.5758
99.0% | 2.3263
97.5% | 1.9600
95.0% | 1.6449
90.0% | 1.2816
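
A short sketch of this derivation follows; the material values used in the example are purely illustrative.

# Sketch: derive a standard deviation from a mean value and an exceedance limit.
C = {99.5: 2.5758, 99.0: 2.3263, 97.5: 1.9600, 95.0: 1.6449, 90.0: 1.2816}

def std_from_exceedance(mean, exceedance_limit, exceedance_percent):
    """sigma = |exceedance limit - mean| / C for the stated exceedance probability."""
    return abs(exceedance_limit - mean) / C[exceedance_percent]

if __name__ == "__main__":
    # e.g., yield strength given as mean 250 MPa with a 99% exceedance limit of 215 MPa
    print(std_from_exceedance(250.0, 215.0, 99.0))   # ~15.0 MPa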

No Data
In situations where no information is available, there is never just one right answer. Below are hints about
which physical quantities are usually described in terms of which distribution functions. This information
might help you with the particular physical quantity you have in mind. Also below is a list of which distribution
functions are usually used for which kind of phenomena. Keep in mind that you might need to choose from
multiple options.

Geometric Tolerances

If you are designing a prototype, you could assume that the actual dimensions of the manufactured
parts would be somewhere within the manufacturing tolerances. In this case it is reasonable to use a
uniform distribution, where the tolerance bounds provide the lower and upper limits of the distribution
function.

If the manufacturing process generates a part that is outside the tolerance band, one of two things may
happen: the part must either be fixed (reworked) or scrapped. These two cases are usually on opposite
ends of the tolerance band. An example of this is drilling a hole. If the hole is outside the tolerance
band, but it is too small, the hole can just be drilled larger (reworked). If, however, the hole is larger
than the tolerance band, then the problem is either expensive or impossible to fix. In such a situation,
the parameters of the manufacturing process are typically tuned to hit the tolerance band closer to the
rework side, steering clear of the side where parts need to be scrapped. In this case, a Beta distribution
is more appropriate.

Often a Gaussian distribution is used. The fact that the normal distribution has no bounds (it spans
minus infinity to infinity), is theoretically a severe violation of the fact that geometrical extensions are
described by finite positive numbers only. However, in practice, this lack of bounds is irrelevant if the
standard deviation is very small compared to the value of the geometric extension, as is typically true
for geometric tolerances.

Material Data

Very often the scatter of material data is described by a Gaussian distribution.

In some cases the material strength of a part is governed by the "weakest-link theory." The "weakest-link theory" assumes that the entire part will fail whenever its weakest spot fails. For material properties
where the "weakest-link" assumptions are valid, the Weibull distribution might be applicable.

For some cases, it is acceptable to use the scatter information from a similar material type. For example,
if you know that a material type very similar to the one you are using has a certain material property
with a Gaussian distribution and a standard deviation of 5% around the measured mean value, then
you can assume that for the material type you are using, you only know its mean value. In this case,
you could consider using a Gaussian distribution with a standard deviation of 5% around the given
mean value.

Load Data
For loads, you usually only have a nominal or average value. You could ask the person who provided the
nominal value the following questions: Out of 1000 components operated under real life conditions, what
is the lowest load value any one of the components sees? What is the most likely load value? That is, what
is the value that most of these 1000 components are subject to? What is the highest load value any one
component would be subject to? To be safe you should ask these questions not only of the person who
provided the nominal value, but also to one or more experts who are familiar with how your products are
operated under real-life conditions. From all the answers you get, you can then consolidate what the minimum, the most likely, and the maximum value probably is. As verification, compare this picture with the
nominal value that you would use for a deterministic analysis. The nominal value should be close to the
most likely value unless using a conservative assumption. If the nominal value includes a conservative assumption (is biased), then its value is probably close to the maximum value. Finally, you can use a triangular
distribution using the minimum, most likely, and maximum values obtained.

Distribution Functions
Beta Distribution
[Plot: Beta distribution probability density function fX(x) with shape parameters r and t over the range xmin to xmax]
You provide the shape parameters r and t and the lower and the upper limit xmin and xmax of the random
variable x.
The Beta distribution is very useful for random variables that are bounded at both sides. If linear operations
are applied to random variables that are all subjected to a uniform distribution, then the results can usually
be described by a Beta distribution. For example, if you are dealing with tolerances and assemblies where
the components are assembled and the individual tolerances of the components follow a uniform distribution
(a special case of the Beta distribution), the overall tolerances of the assembly are a function of adding or
subtracting the geometrical extension of the individual components (a linear operation). Hence, the overall
tolerances of the assembly can be described by a Beta distribution. Also, as previously mentioned, the Beta
distribution can be useful for describing the scatter of individual geometrical extensions of components as
well.

Exponential Distribution
[Plot: Exponential distribution probability density function fX(x) with lower limit xmin]
You provide the decay parameter and the shift (or lower limit) xmin of the random variable x.
The exponential distribution is useful in cases where there is a physical reason that the probability density
function is strictly decreasing as the uncertainty variable value increases. The distribution is mostly used to
describe time-related effects; for example, it describes the time between independent events occurring at
a constant rate. It is therefore very popular in the area of systems reliability and lifetime-related systems reliability, and it can be used for the life distribution of non-redundant systems. Typically, it is used if the lifetime
is not subjected to wear-out and the failure rate is constant with time. Wear-out is usually a dominant lifelimiting factor for mechanical components that would preclude the use of the exponential distribution for
mechanical parts. However, where preventive maintenance exchanges parts before wear-out can occur, then
the exponential distribution is still useful to describe the distribution of the time until exchanging the part
is necessary.

Gaussian (Normal) Distribution


[Plot: Gaussian (normal) distribution probability density function fX(x)]
You provide values for the mean value and the standard deviation of the random variable x.
The Gaussian, or normal, distribution is a fundamental and commonly-used distribution for statistical matters.
It is typically used to describe the scatter of the measurement data of many physical phenomena. Strictly
speaking, every random variable follows a normal distribution if it is generated by a linear combination of
a very large number of other random effects, regardless which distribution these random effects originally
follow. The Gaussian distribution is also valid if the random variable is a linear combination of two or more
other effects if those effects also follow a Gaussian distribution.

Lognormal Distribution
[Plot: Lognormal distribution probability density function fX(x)]
You provide values for the logarithmic mean value ξ and the logarithmic deviation δ. The parameters ξ and δ are the mean value and standard deviation of ln(x):

$f(x; \xi, \delta) = \dfrac{1}{x\,\delta\sqrt{2\pi}} \exp\!\left[ -\dfrac{1}{2}\left( \dfrac{\ln x - \xi}{\delta} \right)^{2} \right]$

The lognormal distribution is another basic and commonly-used distribution, typically used to describe the
scatter of the measurement data of physical phenomena, where the logarithm of the data would follow a
normal distribution. The lognormal distribution is suitable for phenomena that arise from the multiplication
of a large number of error effects. It is also used for random variables that are the result of multiplying two
or more random effects (if the effects that get multiplied are also lognormally distributed). It is often used
for lifetime distributions such as the scatter of the strain amplitude of a cyclic loading that a material can
endure until low-cycle-fatigue occurs.
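
The density written above can be evaluated directly; the following sketch uses only the Python standard library and illustrative parameter values.

# Sketch: lognormal density with logarithmic mean xi and logarithmic deviation delta.
from math import exp, log, pi, sqrt

def lognormal_pdf(x, xi, delta):
    """Probability density of a lognormal variable at x > 0."""
    return exp(-0.5 * ((log(x) - xi) / delta) ** 2) / (x * delta * sqrt(2.0 * pi))

if __name__ == "__main__":
    # ln(x) ~ N(xi=0, delta=0.25): the density peaks slightly below x = 1
    for x in (0.5, 0.9, 1.0, 1.5):
        print(x, lognormal_pdf(x, xi=0.0, delta=0.25))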

Uniform Distribution
[Plot: Uniform distribution probability density function fX(x) between the limits xmin and xmax]

You provide the lower and the upper limit xmin and xmax of the random variable x.
The uniform distribution is a fundamental distribution for cases where the only information available is a
lower and an upper limit. It is also useful to describe geometric tolerances. It can also be used in cases where
any value of the random variable is as likely as any other within a certain interval. In this sense, it can be
used for cases where "lack of engineering knowledge" plays a role.

Triangular Distribution
[Plot: Triangular distribution probability density function fX(x) with minimum xmin, most likely value xmlv, and maximum xmax]

You provide the minimum value xmin, the most likely value limit xmlv and the maximum value xmax.
The triangular distribution is most helpful to model a random variable when actual data is not available. It
is very often used to capture expert opinions, as in cases where the only data you have are the well-founded
opinions of experts. However, regardless of the physical nature of the random variable you want to model,
you can always ask experts questions like "Out of 1000 components, what are the lowest and highest load
values for this random variable?" and other similar questions. You should also include an estimate for the
random variable value derived from a computer program, as described above. For more details, see Choosing
a Distribution for a Random Variable (p. 98).

Truncated Gaussian (Normal) Distribution


[Plot: Truncated Gaussian distribution probability density function fX(x) with mean μG, standard deviation σG, and truncation limits xmin and xmax]

You provide the mean value and the standard deviation of the non-truncated Gaussian distribution and
the truncation limits xmin and xmax.
The truncated Gaussian distribution typically appears where the physical phenomenon follows a Gaussian
distribution, but the extreme ends are cut off or are eliminated from the sample population by quality control
measures. As such, it is useful to describe the material properties or geometric tolerances.

Weibull Distribution
[Plot: Weibull distribution probability density function fX(x) with exponent m, characteristic value xchr, and minimum value xmin]
You provide the Weibull characteristic value xchr , the Weibull exponent m, and the minimum value xmin.
There are several special cases. For xmin = 0 the distribution coincides with a two-parameter Weibull distribution. The Rayleigh distribution is a special case of the Weibull distribution with scale parameter xchr - xmin and m = 2.
In engineering, the Weibull distribution is most often used for strength or strength-related lifetime parameters,
and is the standard distribution for material strength and lifetime parameters for very brittle materials (for
these very brittle materials, the "weakest-link theory" is applicable). For more details, see Choosing a Distribution for a Random Variable (p. 98).

Sample Generation
For Six Sigma Analysis, the sample generation by default is based on the Latin Hypercube Sampling (LHS)
technique. (You can also select Weighted Latin Hypercube Sampling (WLHS) as the Sampling Type in the
Six Sigma Properties view.) The LHS technique is a more advanced and efficient form of Monte Carlo analysis methods. The only difference between LHS and the Direct Monte Carlo Sampling technique is that LHS
has a sample "memory," meaning it avoids repeating samples that have been evaluated before (it avoids
clustering samples). It also forces the tails of a distribution to participate in the sampling process. Generally,
the LHS technique requires 20% to 40% fewer simulation loops than the Direct Monte Carlo analysis technique
to deliver the same results with the same accuracy. However, that number is largely problem-dependent.
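
The idea behind LHS, one sample per equal-probability interval of each variable with the intervals shuffled independently, can be illustrated with a short sketch for uniform variables on [0, 1). This is an illustration of the concept, not the product's sampler.

# Minimal sketch of Latin Hypercube Sampling for uniform variables on [0, 1).
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    random.seed(seed)
    columns = []
    for _ in range(n_vars):
        # one point in each of the n_samples equal-probability bins
        column = [(i + random.random()) / n_samples for i in range(n_samples)]
        random.shuffle(column)          # decouple the ordering between variables
        columns.append(column)
    return list(zip(*columns))          # one tuple of coordinates per sample

if __name__ == "__main__":
    for sample in latin_hypercube(5, 2):
        print(["%.3f" % c for c in sample])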

Weighted Latin Hypercube Sampling


In some cases, a very small probability of failure (Pf), e.g., on the order of 10^-6, is of interest in engineering design. Resolving it would require about 100/Pf runs of regular Latin Hypercube samples. To reduce the number of runs, Weighted Latin Hypercube Sampling (WLHS) may be used to generate more (biased) samples in the region of interest.
In WLHS, the input variables are discretized unevenly in their design space. The cell (or hypercube, in multiple dimensions) size, in terms of probability of occurrence, is evaluated according to the topology of the output response: cells are made relatively smaller around the minimum and maximum of the response. Because the cell sizing is response-oriented, WLHS is somewhat unsymmetrical (biased).
In general, WLHS is intended to stretch the sampled distribution farther out into the tails (lower and upper) with fewer runs than Latin Hypercube Sampling (LHS). In other words, given the same number of runs, WLHS is expected to resolve a smaller Pf than LHS. Due to this bias, however, the evaluated Pf may differ somewhat from the LHS result.



Postprocessing SSA Results

Histogram (p. 105)

Cumulative Distribution Function (p. 105)

Probability Table (p. 106)

Statistical Sensitivities in a Six Sigma Analysis (p. 107)

Six Sigma Analysis Theory (p. 109)

Histogram
A histogram plot is most commonly used to visualize the scatter of a Six Sigma Analysis variable. A histogram
is derived by dividing the range between the minimum value and the maximum value into intervals of equal
size. Then Six Sigma Analysis determines how many samples fall within each interval, that is, how many
"hits" landed in each interval.
Six Sigma Analysis also allows you to plot histograms of your uncertainty variables so you can double-check
that the sampling process generated the samples according to the distribution function you specified. For
uncertainty variables, Six Sigma Analysis not only plots the histogram bars, but also a curve for values derived
from the distribution function you specified. Visualizing histograms of the uncertainty variables is another
way to verify that enough simulation loops have been performed. If the number of simulation loops is sufficient, the histogram bars will:

Be close to the curve that is derived from the distribution function

Be "smooth" (without large "steps")

Not have major gaps (no hits in an interval where neighboring intervals have many hits)

However, if the probability density function flattens out at the far ends of a distribution (for example, the exponential distribution flattens out for large values of the uncertainty variable), then gaps can legitimately occur there. Hit counts are integers, and as the expected number of hits per interval gradually gets smaller, an interval can end up with zero hits.
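The same visual check can be reproduced with generic tools. The sketch below overlays a histogram of hypothetical samples with the specified density; numpy, scipy, and matplotlib are assumed, and the Gaussian parameters are made up for illustration.

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Hypothetical uncertainty variable: Gaussian with mean 10.0 and std 0.5
rng = np.random.default_rng(0)
samples = rng.normal(10.0, 0.5, size=1000)

# Histogram scaled as a density so it is directly comparable to the PDF curve
plt.hist(samples, bins=20, density=True, alpha=0.6, label="sampled histogram")
x = np.linspace(samples.min(), samples.max(), 200)
plt.plot(x, stats.norm.pdf(x, loc=10.0, scale=0.5), label="specified distribution")
plt.legend()
plt.show()
```

If the bars hug the curve, are smooth, and show no unexpected gaps, the sample set is behaving as intended.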

Cumulative Distribution Function


The cumulative distribution function is a primary review tool if you want to assess the reliability or the failure
probability of your component or product. Reliability is defined as the probability that no failure occurs.
Hence, in a mathematical sense, reliability and failure probability are different ways of examining the same
problem and numerically they complement each other (they sum to 1.0). The cumulative distribution function
value at any given point expresses the probability that the respective parameter value will remain below
that point.
The value of the cumulative distribution function at the location x0 is the probability that the values of X
stay below x0. Whether this probability represents the failure probability or the reliability of your component
depends on how you define failure; for example, if you design a component such that a certain deflection
should not exceed a certain admissible limit, then a failure event occurs if the critical deflection exceeds this
limit. Thus, for this example, the cumulative distribution function is interpreted as the reliability curve of the
component. On the other hand, if you design a component such that the eigenfrequencies are beyond a
certain admissible limit, then a failure event occurs if an eigenfrequency drops below this limit. So for this
example, the cumulative distribution function is interpreted as the failure probability curve of the component.
The cumulative distribution function also lets you visualize what the reliability or failure probability would
be if you chose to change the admissible limits of your design.



A cumulative distribution function plot is an important tool to quantify the probability that the design of
your product does or does not satisfy quality and reliability requirements. The value of a cumulative distribution function of a particular output parameter represents the probability that the output parameter will
remain below a certain level as indicated by the values on the x-axis of the plot.

Example 2 Illustration of Cumulative Distribution Function


The probability that the Shear Stress Maximum will remain less than a limit value of 1.71E+5 is about 93%,
which also means that there is a 7% probability that the Shear Stress Maximum will exceed the limit value
of 1.71E+5.

Figure: Cumulative Distribution Function

For more information, see Six Sigma Analysis Theory (p. 109).
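The probability read from such a curve is essentially the fraction of samples below the limit. The following sketch uses made-up Shear Stress Maximum samples (not the data behind the figure) to illustrate the calculation; numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
# Made-up Shear Stress Maximum samples; mean and scatter chosen only for illustration
shear_max = rng.normal(1.55e5, 1.1e4, size=1000)

limit = 1.71e5
p_below = np.mean(shear_max <= limit)   # empirical P(Shear Stress Maximum <= limit)
print(f"P(below limit)  = {p_below:.3f}")
print(f"P(exceed limit) = {1.0 - p_below:.3f}")
```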

Probability Table
Instead of reading data from the cumulative distribution chart, you can also obtain important information
about the cumulative distribution function in tabular form. A Probability Table is available that is designed
to provide probability values for an even spread of levels of an input or output parameter. You can view
the table in either Quantile-Percentile (Probability) mode or Percentile-Quantile (Inverse Probability) mode.
The Probability Table lets you find out the parameter levels corresponding to probability levels that are
typically used for the design of reliable products. If you want to see the probability of a value that is not
listed, you can add it to the table. Likewise, you can add a probability or sigma-level and see the corresponding
values. You can also delete values from the table. For more information, see Using Statistical Postprocessing (p. 70).



Figure: Probability Tables

Note
Both tables will have more rows, i.e., include more data, if the number of samples is increased. If you are designing for high product reliability, i.e., a low probability that the product does not conform to quality or performance requirements, then the sample size must be adequately large to address those low probabilities. Typically, if the probability that your product does not conform to the requirements is denoted "Preq," then the minimum number of samples should be determined by Nsamp = 10.0 / Preq.
For example, if your product has a probability of Preq = 1.0e-4 that it does not conform to the requirements, then the minimum number of samples should be Nsamp = 10.0 / 1.0e-4 = 1.0e+5.

Statistical Sensitivities in a Six Sigma Analysis


The available sensitivities plots allow you to efficiently improve your design toward a more reliable and
better quality design, or to save money in the manufacturing process while maintaining the reliability and quality of your product. You can view a sensitivities plot for any output parameter in your model. See Sensitivities Chart (SSA) (p. 31) for more information.
The sensitivities available under the Six Sigma Analysis and the Goal Driven Optimization views are statistical sensitivities. Statistical sensitivities are global sensitivities, whereas the single parameter sensitivities
available under the Responses view are local sensitivities. The global, statistical sensitivities are based on a
correlation analysis using the generated sample points, which are located throughout the entire space of
input parameters. The local single parameter sensitivities are based on the difference between the minimum
and maximum value obtained by varying one input parameter while holding all other input parameters
constant. As such, the values obtained for single parameter sensitivities depend on the values of the input
parameters that are held constant. Global, statistical sensitivities do not depend on the settings for the input
parameters, because all possible values for the input parameters are already taken into account when determining the sensitivities.
Design Exploration displays sensitivities as both a bar chart and pie chart. The charts describe the sensitivities
in an absolute fashion (taking the signs into account); a positive sensitivity indicates that increasing the
value of the uncertainty variable increases the value of the result parameter for which the sensitivities are
plotted. Conversely, a negative sensitivity indicates that increasing the uncertainty variable value reduces
the result parameter value.
Using a sensitivity plot, you can answer the following important questions.

How can I make the component more reliable or improve its quality?
If the results for the reliability or failure probability of the component do not reach the expected levels, or
if the scatter of an output parameter is too wide and therefore not robust enough for a quality product,
then you should make changes to the important input variables first. Modifying an input variable that is
insignificant would be a waste of time.
Of course, you are not in control of all uncertainty parameters. A typical example where you have very limited
means of control involves material properties. For example, if it turns out that the environmental temperature
(outdoor) is the most important input parameter, then there is probably nothing you can do. However, even
if you find out that the reliability or quality of your product is driven by parameters that you cannot control,
this data has importance -- it is likely that you have a fundamental flaw in your product design! You should
watch for influential parameters like these.
If the input variable you want to tackle is a geometry-related parameter or a geometric tolerance, then improving the reliability and quality of your product means that it might be necessary to change to a more
accurate manufacturing process or use a more accurate manufacturing machine. If it is a material property,
then there might be nothing you can do about it. However, if you only had a few measurements for a material property and consequently used only a rough guess about its scatter, and the material property turns
out to be an important driver of product reliability and quality, then it makes sense to collect more raw data.

How can I save money without sacrificing the reliability or the quality of the product?
If the results for the reliability or failure probability of the component are acceptable or if the scatter of an
output parameter is small and therefore robust enough for a quality product, then there is usually the
question of how to save money without reducing the reliability or quality. In this case, you should first make
changes to the input variables that turned out to be insignificant, because they do not affect the reliability
or quality of your product. If it is the geometrical properties or tolerances that are insignificant, you can
consider applying a less expensive manufacturing process. If a material property turns out to be insignificant,
then this is not typically a good way to save money, because you are usually not in control of individual material properties. However, the loads or boundary conditions can offer a potential for saving money, although how this can be exploited is highly problem-dependent.

Six Sigma Analysis Theory


The purpose of a Six Sigma Analysis is to gain an understanding of the impact of the uncertainties associated with the input parameters of your design. This goal is achieved using a variety of statistical measures and
postprocessing tools.

Statistical Postprocessing
Convention: the statistical measures below are defined for a set of n sampled data values.
1. Mean Value
Mean is a measure of average for a set of observations. The mean of a set of n observations is defined as follows:

\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} y_i    (1)

2. Standard Deviation
Standard deviation is a measure of dispersion from the mean for a set of observations. The standard deviation of a set of n observations is defined as follows:

\hat{\sigma} = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (y_i - \hat{\mu})^2}    (2)

3. Sigma Level
Sigma level is calculated as the inverse cumulative distribution function of a standard Gaussian distribution at a given percentile. Sigma level is used in conjunction with standard deviation to measure
data dispersion from the mean. For example, for a pair of quantile X and sigma level n, it means that
value X is about n standard deviations away from the sample mean.
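As a quick numerical illustration of these first three measures, the sketch below uses made-up sample values; numpy and scipy are assumed.

```python
import numpy as np
from scipy.stats import norm

y = np.array([9.7, 10.1, 10.4, 9.9, 10.2, 10.0])   # hypothetical output samples

mean = y.mean()                  # Equation (1)
std = y.std(ddof=1)              # Equation (2): (n - 1) in the denominator

# Sigma level of a percentile: inverse CDF of the standard Gaussian.
# The 99.865th percentile corresponds to roughly +3 sigma.
sigma_level = norm.ppf(0.99865)

print(mean, std, sigma_level)
```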

4. Skewness
Skewness is a measure of degree of asymmetry around the mean for a set of observations. The observations are symmetric if distribution of the observations looks the same to the left and right of the mean. Negative skewness indicates the distribution of the observations being left-skewed. Positive skewness indicates the distribution of the observations being right-skewed. The skewness of a set of n observations is defined as follows:

\hat{\gamma} = \frac{n}{(n-1)(n-2)} \sum_{i=1}^{n} \left( \frac{y_i - \hat{\mu}}{\hat{\sigma}} \right)^3    (3)

5. Kurtosis
Kurtosis is a measure of relative peakedness/flatness of distribution for a set of observations. It is generally a relative comparison with the normal distribution. Negative kurtosis indicates a relatively flat distribution of the observations compared to the normal distribution, while positive kurtosis indicates a relatively peaked distribution of the observations. As such, the kurtosis of a set of n observations is defined with calibration to the normal distribution as follows:

\hat{\kappa} = \frac{n(n+1)}{(n-1)(n-2)(n-3)} \sum_{i=1}^{n} \left( \frac{y_i - \hat{\mu}}{\hat{\sigma}} \right)^4 - \frac{3(n-1)^2}{(n-2)(n-3)}    (4)

where \hat{\mu} and \hat{\sigma} represent mean and standard deviation, respectively.

Table 1 Different Types of Kurtosis
\hat{\kappa} < 0 : flat distribution
\hat{\kappa} = 0 : normal distribution
\hat{\kappa} > 0 : peaked distribution
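The sketch below is a direct transcription of Equations (3) and (4) as reconstructed above; numpy is assumed.

```python
import numpy as np

def skewness(y):
    """Sample skewness per Equation (3)."""
    n = len(y)
    mu, s = y.mean(), y.std(ddof=1)
    return n / ((n - 1) * (n - 2)) * np.sum(((y - mu) / s) ** 3)

def kurtosis(y):
    """Sample (excess) kurtosis per Equation (4); near 0 for a normal distribution."""
    n = len(y)
    mu, s = y.mean(), y.std(ddof=1)
    term = n * (n + 1) / ((n - 1) * (n - 2) * (n - 3)) * np.sum(((y - mu) / s) ** 4)
    return term - 3.0 * (n - 1) ** 2 / ((n - 2) * (n - 3))

y = np.random.default_rng(0).normal(size=500)
print(skewness(y), kurtosis(y))  # both should be close to 0 for Gaussian samples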
6. Shannon Entropy
Entropy is a concept first introduced in classical thermodynamics. It is a measure of disorder in a system. The concept of entropy was later extended and recognized by Claude Shannon in the domain of information/communication as a measure of uncertainty over the true content of a message. In statistics, the entropy is further reformulated as a function of mass density, as follows:

H = -\sum_{i} P_i \ln P_i    (5)

where Pi represents the mass density of a parameter. In the context of statistics/probability, entropy becomes a measure of the complexity and predictability of a parameter. A more complex, and less predictable, parameter carries higher entropy, and vice versa. For example, a parameter characterized by a uniform distribution has higher entropy than one characterized by a truncated triangular distribution (within the same bounds). Hence, the parameter characterized by the uniform distribution is less predictable than the one with the triangular distribution.

P_i = \frac{N_i}{N \, w}    (6)

The probability mass is used in a normalized fashion, such that not only the shape, but also the range of variability of the distribution is accounted for. This is shown in Equation 6 (p. 111), where Ni/N is the relative frequency of the parameter falling into a certain interval, and w is the width of the interval. As a result, Shannon entropy can have a negative value. Below are some comparisons of the Shannon entropy, where S2 is smaller than S1.
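As a rough numerical illustration (separate from the comparisons referenced above), the sketch below is a literal reading of Equations (5) and (6), with Pi taken as the per-interval value Ni/(N w); the exact normalization used by Design Exploration may differ, and numpy is assumed.

```python
import numpy as np

def shannon_entropy(samples, bins=20):
    """Literal reading of Equations (5) and (6): H = -sum(Pi * ln Pi)."""
    counts, edges = np.histogram(samples, bins=bins)
    w = np.diff(edges)                      # interval widths
    p = counts / (counts.sum() * w)         # Equation (6): Ni / (N * w)
    p = p[counts > 0]                       # empty intervals contribute nothing
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
print(shannon_entropy(rng.uniform(0.0, 1.0, 5000)))          # uniform: higher entropy
print(shannon_entropy(rng.triangular(0.0, 0.5, 1.0, 5000)))  # triangular: lower entropy
```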

7. Taguchi Signal-to-Noise Ratios
Three signal-to-noise ratios are provided in the statistics of each output in your SSA. These ratios are as follows:

Nominal is Best

Smaller is Better

Larger is Better

Signal-to-noise (S/N) ratios are measures used to optimize control parameters in order to achieve a robust design. These measures were first proposed by Dr. Taguchi of Nippon Telephone and Telegraph Company, Japan, to reduce design noises in manufacturing processes. These design noises are normally expressed in statistical terms such as mean and standard deviation (or variance). In computer aided engineering (CAE), these ratios have been widely used to achieve a robust design in computer simulations. For a design to be robust, the simulations are carried out with an objective to minimize the variances. The minimum variance of designs/simulations can be sought with or without targeting a certain mean. In Design Exploration, minimum variance targeted at a certain mean (called Nominal is Best) is provided, and is given as follows:

S/N = 20 \log \left( \frac{\hat{\mu}}{\hat{\sigma}} \right)    (7)

where \hat{\mu} and \hat{\sigma} represent mean and standard deviation, respectively.

Nominal is Best is a measure used for characterizing design parameters such as model dimension in
a tolerance design, in which a specific dimension is required, with an acceptable standard deviation.
In some designs, however, the objectives are to seek a minimum or a maximum possible at the price
of any variance.


For the cases of minimum possible (Smaller is Better), which is a measure used for characterizing
output parameters such as model deformation, the S/N is expressed as follows:
S/N = -10 \log \left( \frac{1}{n} \sum_{i=1}^{n} y_i^2 \right)    (8)

For the cases of maximum possible (Larger is Better), which is a measure used for characterizing
output parameters such as material yield, the S/N is formulated as follows:
S/N = -10 \log \left( \frac{1}{n} \sum_{i=1}^{n} \frac{1}{y_i^2} \right)    (9)

These three S/N ratios are mutually exclusive. Only one of the ratios can be optimized for any given
parameter. For a better design, these ratios should be maximized.
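The sketch below transcribes the three ratios as reconstructed above, assuming the conventional base-10 logarithm; numpy is assumed and the sample values are made up.

```python
import numpy as np

def sn_nominal_is_best(y):
    """S/N for 'Nominal is Best', Equation (7)."""
    return 20.0 * np.log10(np.mean(y) / np.std(y, ddof=1))

def sn_smaller_is_better(y):
    """S/N for 'Smaller is Better', Equation (8)."""
    return -10.0 * np.log10(np.mean(np.asarray(y) ** 2))

def sn_larger_is_better(y):
    """S/N for 'Larger is Better', Equation (9)."""
    return -10.0 * np.log10(np.mean(1.0 / np.asarray(y) ** 2))

y = np.array([10.2, 9.8, 10.1, 10.0, 9.9])
print(sn_nominal_is_best(y), sn_smaller_is_better(y), sn_larger_is_better(y))
```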
8. Minimum and Maximum Values
The minimum and maximum values of a set of n observations are:

x_{min} = \min(x_1, x_2, \ldots, x_n)    (10)

x_{max} = \max(x_1, x_2, \ldots, x_n)    (11)

Note
The minimum and maximum values strongly depend on the number of samples. If you
generate a new sample set with more samples, then chances are that the minimum value
will be lower in the larger sample set. Likewise, the maximum value of the larger sample
set will most likely be higher than for the original sample set. Hence, the minimum and
maximum values should not be interpreted as absolute physical bounds.
9. Statistical Sensitivity Measures


The sensitivity charts displayed under the Six Sigma Analysis view are global sensitivities based on
statistical measures (see Single Parameter Sensitivities). Generally, the impact of an input parameter
on an output parameter is driven by:

The amount by which the output parameter varies across the variation range of an input parameter.

The variation range of an input parameter. Typically, the wider the variation range is, the larger
the impact of the input parameter will be.

The statistical sensitivities used under Six Sigma Analysis are based on the Spearman-Rank Order Correlation coefficients that take both those aspects into account at the same time.

Basing sensitivities on correlation coefficients follows the concept that the more strongly an output
parameter is correlated with a particular input parameter, the more sensitive it is with respect to
changes of that input parameter.
10. Spearman Rank-Order Correlation Coefficient
The Spearman rank-order correlation coefficient is:

r_s = \frac{\sum_{i} (R_i - \bar{R})(S_i - \bar{S})}{\sqrt{\sum_{i} (R_i - \bar{R})^2} \, \sqrt{\sum_{i} (S_i - \bar{S})^2}}    (12)

where:
R_i = rank of x_i within the set of observations x_1, x_2, ..., x_n
S_i = rank of y_i within the set of observations y_1, y_2, ..., y_n
\bar{R}, \bar{S} = average ranks of the R_i and S_i, respectively
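The rank-based computation can be checked against a library routine, as in the sketch below; numpy and scipy are assumed, and the monotonic relation between x and y is made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(size=200)                       # input parameter samples
y = x**3 + rng.normal(scale=0.2, size=200)     # monotonically related output

# Equation (12) computed directly from the ranks
R = stats.rankdata(x)
S = stats.rankdata(y)
r_s = np.sum((R - R.mean()) * (S - S.mean())) / np.sqrt(
    np.sum((R - R.mean()) ** 2) * np.sum((S - S.mean()) ** 2))

print(r_s, stats.spearmanr(x, y).correlation)  # the two values should agree
```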

11. Significance of Correlation Coefficients


Since the sample size n is finite, the correlation coefficient rp is a random variable itself. Hence, the
correlation coefficient between two random variables X and Y usually yields a small, but nonzero value,
even if X and Y are not correlated at all in reality. In this case, the correlation coefficient would be insignificant. Therefore, we need to find out if a correlation coefficient is significant or not. To determine
the significance of the correlation coefficient, we assume the hypothesis that the correlation between
X and Y is not significant at all, i.e., they are not correlated and rp = 0 (null hypothesis). In this case
the variable:
t = r_p \sqrt{\frac{n-2}{1-r_p^2}}    (13)

is approximately distributed like the Student's t-distribution with \nu = n - 2 degrees of freedom. The cumulative distribution function of the Student's t-distribution is:

A(t \mid \nu) = \frac{1}{\sqrt{\nu} \, B\!\left(\frac{1}{2}, \frac{\nu}{2}\right)} \int_{-t}^{t} \left( 1 + \frac{x^2}{\nu} \right)^{-\frac{\nu+1}{2}} dx    (14)

where:
B(...) = complete Beta function



There is no closed-form solution available for Equation 14 (p. 113). See Abramowitz and Stegun (Pocketbook of Mathematical Functions, abridged version of the Handbook of Mathematical Functions, Harry Deutsch, 1984) for more details.
The larger the correlation coefficient rp, the less likely it is that the null hypothesis is true. Also, the larger the correlation coefficient rp, the larger the value of t from Equation 13 (p. 113), and consequently the probability A(t|ν) also increases. Therefore, the probability that the null hypothesis is true is given by 1 - A(t|ν). If 1 - A(t|ν) exceeds a certain significance level, for example 2.5%, then we can assume that the null hypothesis is true. However, if 1 - A(t|ν) is below the significance level, then it can be assumed that the null hypothesis is not true and that consequently the correlation coefficient rp is significant. This limit can be changed in Design Exploration Options (p. 11).
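The test can be sketched with a standard t-distribution routine (scipy assumed); the 2.5% significance level below mirrors the example in the text and is only a default.

```python
import numpy as np
from scipy import stats

def correlation_significant(r_p, n, significance=0.025):
    """Test the 'no correlation' null hypothesis per Equations (13) and (14)."""
    t = r_p * np.sqrt((n - 2) / (1.0 - r_p**2))     # Equation (13)
    a = 1.0 - 2.0 * stats.t.sf(abs(t), df=n - 2)    # A(t | nu) from Equation (14)
    p_null = 1.0 - a                                # probability the null hypothesis is true
    return p_null < significance, p_null

print(correlation_significant(r_p=0.35, n=100))
```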
12. Cumulative Distribution Function
The cumulative distribution function of sampled data is also called the empirical distribution function.
To determine the cumulative distribution function of sampled data, you need to order the sample
values in ascending order. Let xi be the sampled value of the random variable X having a rank of i,
i.e., being the ith smallest out of all n sampled values. The cumulative distribution function Fi that
corresponds to xi is the probability that the random variable X has values below or equal to xi. Since
we have only a limited amount of samples, the estimate for this probability is itself a random variable.
According to Kececioglu (Reliability Engineering Handbook, Vol. 1, 1991, Prentice-Hall, Inc.), the cumulative distribution function Fi associated with xi is:
\sum_{k=i}^{n} \frac{n!}{(n-k)! \, k!} \, F_i^{k} (1 - F_i)^{n-k} = 50\%    (15)

Equation 15 (p. 114) must be solved numerically.
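One way to solve it is with a root finder, noting that the left-hand side is the binomial survival function; the sketch below assumes numpy and scipy.

```python
import numpy as np
from scipy import optimize, stats

def median_rank(i, n):
    """Solve Equation (15) numerically for Fi, the cumulative probability
    associated with the i-th smallest of n sampled values."""
    # sum_{k=i}^{n} C(n,k) F^k (1-F)^(n-k) equals the binomial survival function at i-1
    equation = lambda f: stats.binom.sf(i - 1, n, f) - 0.5
    return optimize.brentq(equation, 1e-12, 1.0 - 1e-12)

n = 10
print([round(median_rank(i, n), 4) for i in range(1, n + 1)])
```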


13. Probability and Inverse Probability
The cumulative distribution function of sampled data can only be given at the individual sampled
values x1, x2, ..., xi, xi+1, ..., xn using Equation 15 (p. 114). Hence, the evaluation of the probability that
the random variable is less than or equal to an arbitrary value x requires an interpolation between the
available data points.
If x is, for example, between xi and xi+1, then the probability that the random variable X is less than or equal
to x is:
P(X \le x) = F_i + (F_{i+1} - F_i) \, \frac{x - x_i}{x_{i+1} - x_i}    (16)

The cumulative distribution function of sampled data can only be given at the individual sampled values x1, x2, ..., xi, xi+1, ..., xn using Equation 15 (p. 114). Hence, the evaluation of the inverse cumulative distribution function for any arbitrary probability value requires an interpolation between the available data points.
The evaluation of the inverse of the empirical distribution function is most important in the tails of the distribution, where the slope of the empirical distribution function is flat. In this case, a direct interpolation between the points of the empirical distribution function similar to Equation 16 (p. 114) can lead to inaccurate results. Therefore, the inverse standard normal distribution function \Phi^{-1} is applied for all probabilities involved in the interpolation. If p is the requested probability for which we are looking for the inverse cumulative distribution function value, and p is between Fi and Fi+1, then the inverse cumulative distribution function value can be calculated using:
x = x_i + (x_{i+1} - x_i) \, \frac{\Phi^{-1}(p) - \Phi^{-1}(F_i)}{\Phi^{-1}(F_{i+1}) - \Phi^{-1}(F_i)}    (17)
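Equations (16) and (17) amount to linear interpolation in physical space and in standard-normal space, respectively. The sketch below uses illustrative Fi values roughly equal to the median ranks for n = 5; numpy and scipy are assumed.

```python
import numpy as np
from scipy.stats import norm

# Sorted sample values and their cumulative probabilities Fi (e.g., from Equation 15)
x = np.array([1.2, 1.5, 1.9, 2.4, 3.1])
F = np.array([0.13, 0.31, 0.50, 0.69, 0.87])

def probability(xq):
    """Equation (16): P(X <= xq) by linear interpolation between the Fi."""
    return np.interp(xq, x, F)

def inverse_probability(p):
    """Equation (17): quantile for probability p, interpolated in normal space."""
    return np.interp(norm.ppf(p), norm.ppf(F), x)

print(probability(2.0), inverse_probability(0.8))
```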


Troubleshooting
Table view and Charts view become unreadable because of the number of parameters and the length
of their names
In order to make your charts and tables more readable, consider using the short parameter name option:
Open the Tools>Options dialog, select Design Exploration, and uncheck the Display Parameter Full
Name preference.
A Design Exploration Update operation returns an error message saying that one or several Design
Points have failed to update
Click the Show Details button in the error dialog to see the list of Design Points that failed to update
and the associated error messages. Each failed Design Point is automatically preserved for you at the
project level, so you can Return to the Project and Edit the Parameter Set cell in order to see the failed
Design Points and investigate the reason for the failures.
This error means that the project failed to update correctly for the parameter values defined in the listed
Design Points. There can be a variety of failure reasons, ranging from a missing license to a CAD model generation failure. If you try to update the system again, only the failed Design Points will be
attempted for a new Update, and if the update is successful, the operation will complete. If there are
persistent failures with a Design Point, copy its values to the Current Design Point, attempt a Project
Update, and edit the cells in the project that are not correctly updated in order to investigate.


Appendices

Glossary of General Terms (p. 119)

Glossary of General Terms


Design of Experiments (DOE)
Optimization method (used in Design Exploration) where each change of the value of any input variable requires a new analysis. To perform a "what-if study" where several input variables are varied in a certain range, a considerable number of analyses may be required to satisfactorily evaluate the results over the space of the input variables. The DOE method generates a Response Surface using curve and surface fitting algorithms to "fit" output data as a function of input data. This requires a group of design points where each point is generated via a solve.

Design Point
A design point in which Parameter values were calculated directly in an Analysis System, a CAD system, or DesignModeler.

Input Parameters
Those model parameters exposed by the Analysis System (geometry, boundary conditions, materials, etc.) for Design Exploration to use in the parametric analysis.

Mean Value

A measure of location often used to describe the general location of the bulk
of the scattering data of a random output parameter or of a statistical distribution function.
Mathematically, the mean value is the arithmetic average of the data. The
mean value also represents the center of gravity of the data points. Another
name for the mean value is the expected value.

Median Value

The statistical point where 50% of the data is below the value and 50% is
above.
For symmetrical distribution functions (Gaussian, uniform, etc.) the median
value and the mean value are identical, but for nonsymmetrical distributions,
they are different.

Output Parameters

Parameters that define the response outputs (e.g., volume) from the analysis.

Pareto Set
A concept used in multi-objective optimization, especially where some or all of the objectives are mutually conflicting. In such a case, there is no single point which simultaneously yields the "best" value of all the objectives. Instead, the best solutions, called a Pareto or non-dominated set, are a group of solutions such that selecting any one of them in place of another will always sacrifice the quality of at least one objective while improving at least one other.

Response Point
A design point which has parameter values calculated in Design Exploration. As such, the parameter values are approximate and calculated from Response Surfaces.

Response Surface

The empirical relationship between a variable of interest, y, and a set of independent variables, x1, x2, x3, ... Usually the function y = F(x1, x2, x3, . . . ) is a
polynomial or some other well-defined relationship, which forms the response
surface model of y. When applied to design and analysis in the realm of CAD
and CAE, a response surface is the representation of the physical behavior of
a structure in terms of its independent variables. For example, a response
surface can be devised for the fundamental frequency (F) of a structure as a
function of CAD geometry parameters (R, L) and the modulus of elasticity (E);
that is, F = G(R,L,E).

Sample
A unique set of parameter values that represents a particular model configuration. A sample is characterized by uncertainty variable values. Think of a sample as one virtual prototype. Every component manufactured represents one sample, because you can measure its particular properties (material, geometry, etc.) and obtain specific values for each.
In statistics, however, sample also has a wider and more general use. For example, any single measured value of any physical property is considered to be one sample. Because a Six Sigma Analysis is based on a statistical evaluation of the result parameters, the values of the result parameters are also called samples.

Standard Deviation
A measure of variability (i.e., dispersion or spread) about the arithmetic mean value, often used to describe the width of the scatter of a random output parameter or of a statistical distribution function.
The larger the standard deviation, the wider the scatter, and the more likely it is that there are data values further apart from the mean value.

Uncertainty Variables
Quantities that influence the result of an analysis. In a Six Sigma Analysis, uncertainty variables are often called "drivers" because they drive the result of an analysis. You must specify the type of statistical distribution the uncertainty variables follow and the parameter values of their distribution functions.


Index
Symbols
2-D contour graph , 55
3-D contour graph , 56

measured data, 98
no data, 99
types, 100
DOE
definition, 119
DOE solution, 36

C
contour graphs
2-D, 55
3-D, 56
correlation
parameters, 20
correlation matrix
object reference, 21
correlation scatter chart
object reference, 21
correlation sensitivities chart
object reference, 22
cumulative distribution function, 105

D
database
resuming, 4
decision support, 81
Design Exploration, 17
introduction, 1
options, 11
overview, 17
user interface, 5
workflow, 9
design goals
defining, 67
design of experiments, 18
definition, 119
six sigma, 29
design points, 9
adding, 60, 69
definition, 119
exporting, 60
importing, 60
uses, 60
using, 35
design points vs parameter plot
object reference, 20
design values, 33
determination matrix
object reference, 22
distribution
choosing for random variables, 98
data scatter values, 98

error messages, 117

G
genetic algorithm, 81
glossary, 119
goal driven optimization, 73
creating design points, 69
decision support, 81
defining goals, 67
genetic algorithm, 81
guidelines, 74
MOGA option, 65
NLPQL option, 66
options, 63
Pareto dominance, 80
postprocessing, 76
principles, 73
sampling method, 79
screening option, 64
sensitivities, 69
theory, 79
tradeoff studies, 69
using, 63
goodness of fit, 53

H
histogram , 105

I
input parameters
definition, 119
using, 32

L
local sensitivities
description, 57
local sensitivity chart
description, 57
object reference, 25

M
mean value
definition, 119

median value
definition, 119
min-max search, 53
object reference, 24

optimization, 63
defining goals, 67
MOGA option, 65
NLPQL option, 66
object reference, 26
screening option, 64
options, 11
output parameters, 34
definition, 119

response surface
definition, 119
Kriging, 45
meta model refinement, 50
meta model types, 42
neural network, 49
non-parametric regression, 46
object reference, 22
standard, 43
responses
min-max search, 53
overview, 42
response charts, 54
result parameter
definition, 119

parallel coordinate plot


description, 78
parameters, 7, 31
design variables, 33
input, 32
output, 34
uncertainty variables, 33
parameters correlation, 20, 62
parameters parallel plot
object reference, 19
Pareto dominance, 80
Pareto set
definition, 119
postprocessing overview, 77
postprocessing six sigma analysis results , 105
cumulative distribution function, 105
histogram, 105
probability table, 106
sensitivities, 107
probability table, 106

R
response chart
object reference, 25
response charts, 54
2-D contour graph, 55
3-D contour graph, 56
goodness of fit, 53
local sensitivities, 57
spider charts, 56
x-y graph, 55
response points, 9
adding, 60, 69
definition, 119


sample
definition, 119
sample generation
MOGA, 65
NLPQL, 66
screening, 64, 79
six sigma analysis, 104
samples chart
object reference, 28
sensitivities
global, 59, 69, 71
goal driven optimization, 69
local, 57, 59
six sigma analysis, 71, 107
sensitivities chart
global, 28, 31
goal driven optimization, 28, 69
object reference, 28, 31
six sigma, 31
six sigma analysis, 71
sensitivity chart
description, 77
local, 25, 57
response surface, 25, 57
six sigma
object reference, 28
six sigma analysis, 30, 95
choosing variables, 97
guidelines, 97
performing, 70
postprocessing, 70, 105
sample generation, 104
statistical measures, 71
theory, 109
understanding, 96

using, 70
six sigma analysis postprocessing, 70
charts, 71
sensitivities, 71
tables, 71
solution
DOE, 36
spider chart
description, 56
object reference, 26
standard deviation
definition, 119

T
tradeoff chart
description, 77
object reference, 27
tradeoff studies, 69
troubleshooting, 117

U
uncertainty variable
definition, 119
uncertainty variables , 33
choosing, 97
choosing distribution, 98
response surface analyses, 97
user interface, 5

W
workflow, 9

X
x-y graph, 55
