ANSYS, Inc.
Southpointe
275 Technology Drive
Canonsburg, PA 15317
ansysinfo@ansys.com
http://www.ansys.com
(T) 724-746-3304
(F) 724-514-9494
Release 12.1
November 2009
ANSYS, Inc. is certified to ISO 9001:2008.
Disclaimer Notice
THIS ANSYS SOFTWARE PRODUCT AND PROGRAM DOCUMENTATION INCLUDE TRADE SECRETS AND ARE CONFIDENTIAL
AND PROPRIETARY PRODUCTS OF ANSYS, INC., ITS SUBSIDIARIES, OR LICENSORS. The software products and documentation are furnished by ANSYS, Inc., its subsidiaries, or affiliates under a software license agreement that contains provisions concerning non-disclosure, copying, length and nature of use, compliance with exporting laws, warranties,
disclaimers, limitations of liability, and remedies, and other provisions. The software products and documentation may
be used, disclosed, transferred, or copied only in accordance with the terms and conditions of that software license
agreement.
ANSYS, Inc. is certified to ISO 9001:2008.
Third-Party Software
See the legal information in the product help files for the complete Legal Notice for ANSYS proprietary software and
third-party software. If you are unable to access the Legal Notice, please contact ANSYS, Inc.
Published in the U.S.A.
Table of Contents
Overview .................................................................................................................................................... 1
Introduction to Design Exploration ......................................................................................................... 1
Available Tools .................................................................................................................................. 1
ANSYS Design Exploration: a Typical Session, What to Look For? ......................................................... 2
Initial Simulation: Analysis of the Product Under all Operating Conditions .......................................... 2
Identify Design Candidates ............................................................................................................... 2
Assess the Robustness of the Design Candidates ............................................................................... 3
Determine the Number of Parameters ............................................................................................... 4
Increase the Response Surface Accuracy ........................................................................................... 4
Limitations ............................................................................................................................................. 4
Design Exploration Licensing Requirements ............................................................................................ 5
The User Interface .................................................................................................................................. 5
Parameters ............................................................................................................................................. 7
Design Points ......................................................................................................................................... 9
Response Points ..................................................................................................................................... 9
Workflow ................................................................................................................................................ 9
Adding Design Exploration Templates to the Schematic .................................................................... 9
Duplicating Existing Design Exploration Systems ............................................................................. 10
Running Design Exploration Analyses ............................................................................................. 10
Monitoring Design Exploration Analyses ......................................................................................... 11
Design Exploration Options .................................................................................................................. 11
Using Design Exploration ......................................................................................................................... 17
What is Design Exploration? .................................................................................................................. 17
Design of Experiments .......................................................................................................................... 18
Parameters Parallel Chart ................................................................................................................ 19
Design Points vs Parameter Chart .................................................................................................... 20
Parameters Correlation ......................................................................................................................... 20
Correlation Scatter Chart ................................................................................................................. 21
Correlation Matrix ........................................................................................................................... 21
Determination Matrix ..................................................................................................................... 22
Sensitivities Chart ........................................................................................................................... 22
Response Surface ................................................................................................................................. 22
Min-Max Search .............................................................................................................................. 24
Response Chart .............................................................................................................................. 25
Local Sensitivity Chart ..................................................................................................................... 25
Spider Chart ................................................................................................................................... 26
Goal Driven Optimization ..................................................................................................................... 26
Tradeoff Chart ................................................................................................................................ 27
Samples Chart ................................................................................................................................ 28
Sensitivities Chart (GDO) ................................................................................................................. 28
Six Sigma Analysis ................................................................................................................................ 28
Design of Experiments (SSA) ........................................................................................................... 29
Six Sigma Analysis .......................................................................................................................... 30
Sensitivities Chart (SSA) .................................................................................................................. 31
Working with Parameters ...................................................................................................................... 31
Input Parameters ............................................................................................................................ 32
Setting Up Design Variables ............................................................................................................ 33
Setting Up Uncertainty Variables ..................................................................................................... 33
Output Parameters ......................................................................................................................... 34
Design Points ....................................................................................................................................... 35
Release 12.1 - 2009 SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information
of ANSYS, Inc. and its subsidiaries and affiliates.
The Design of Experiments Solution ...................................................................................................... 36
Design Criteria Considered in DOE .................................................................................................. 36
DOE Using a Central Composite Design ........................................................................................... 39
Upper and Lower Locations of DOE Points ....................................................................................... 41
DOE Matrix Generation ................................................................................................................... 42
Using Response Surfaces ...................................................................................................................... 42
Meta Model Types .......................................................................................................................... 42
Standard Response Surface - Full 2nd-Order Polynomials ........................................................... 43
Kriging ..................................................................................................................................... 45
Non-Parametric Regression ....................................................................................................... 46
Suggested Best Practices .................................................................................................... 49
Neural Network ........................................................................................................................ 49
Meta Model Refinement ................................................................................................................. 50
Change Meta Model Types ........................................................................................................ 51
Perform a Manual Refinement ................................................................................................... 51
Refine the Model Using the Kriging Meta Model ........................................................................ 51
Goodness of Fit ............................................................................................................................... 53
Min-Max Search .............................................................................................................................. 53
Response Surface Charts ................................................................................................................. 54
Using the Response Chart ......................................................................................................... 55
Using the Spider Chart .............................................................................................................. 56
Using the Local Sensitivity Chart ............................................................................................... 57
Global Sensitivities ............................................................................................................................... 59
Adding Design Points and Response Points ........................................................................................... 60
Importing and Exporting Design Points ........................................................................................... 60
Inserting Design Points and Response Points ................................................................................... 61
Using Parameters Correlation ................................................................................................................ 61
Running a Parameters Correlation ................................................................................................... 62
Purpose of Parameters Correlation .................................................................................................. 62
Sample Generation ......................................................................................................................... 62
Viewing the Quadratic Correlation Information ............................................................................... 63
Determining Significance ................................................................................................................ 63
Viewing Significance and Correlation Values .................................................................................... 63
Using Goal Driven Optimization ............................................................................................................ 63
Optimization Options ..................................................................................................................... 63
Screening ................................................................................................................................. 64
MOGA ...................................................................................................................................... 65
NLPQL ...................................................................................................................................... 66
Defining Design Objectives ............................................................................................................. 67
Creating New Design Points and Response Points from Candidates ................................................. 69
Goal Driven Optimization Charts ..................................................................................................... 69
Using the Sensitivities Chart (GDO) ........................................................................................... 69
Using the Tradeoff Chart ........................................................................................................... 69
Using the Samples Chart ........................................................................................................... 70
Using Six Sigma Analysis ....................................................................................................................... 70
Performing a Six Sigma Analysis ...................................................................................................... 70
Using Statistical Postprocessing ...................................................................................................... 70
Tables (SSA) .............................................................................................................................. 71
Using Parameter Charts (SSA) .................................................................................................... 71
Using the Sensitivities Chart (SSA) ............................................................................................. 71
Statistical Measures ........................................................................................................................ 71
Understanding Goal Driven Optimization ............................................................................................... 73
Principles (GDO) ................................................................................................................................... 73
Guidelines and Best Practices (GDO) ...................................................................................................... 74
Postprocessing GDO Results ................................................................................................................. 76
Postprocessing Overview ................................................................................................................ 77
Understanding Sensitivities Charts .................................................................................................. 77
Understanding Tradeoff Charts ....................................................................................................... 77
Understanding Samples Charts ....................................................................................................... 78
Goal Driven Optimization Theory .......................................................................................................... 79
Shifted Hammersley Sampling Method ........................................................................................... 79
Pareto Dominance in Multi-Objective Optimization ......................................................................... 80
MOGA (Multi-Objective Genetic Algorithm) ..................................................................................... 81
Decision Support Process ................................................................................................................ 81
Non-linear Programming by Quadratic Lagrangian (NLPQL) ............................................................. 85
Understanding Six Sigma Analysis ........................................................................................................... 95
Principles (SSA) ..................................................................................................................................... 96
Guidelines for Selecting SSA Variables ................................................................................................... 97
Choosing and Defining Uncertainty Variables .................................................................................. 97
Uncertainty Variables for Response Surface Analyses ................................................................. 97
Choosing a Distribution for a Random Variable .......................................................................... 98
Measured Data ................................................................................................................... 98
Mean Values, Standard Deviation, Exceedance Values ........................................................... 98
No Data .............................................................................................................................. 99
Distribution Functions ............................................................................................................ 100
Sample Generation ............................................................................................................................. 104
Weighted Latin Hypercube Sampling .................................................................................................. 104
Postprocessing SSA Results ................................................................................................................. 105
Histogram .................................................................................................................................... 105
Cumulative Distribution Function .................................................................................................. 105
Probability Table ........................................................................................................................... 106
Statistical Sensitivities in a Six Sigma Analysis ................................................................................ 107
Six Sigma Analysis Theory ................................................................................................................... 109
Troubleshooting ..................................................................................................................................... 117
Appendices ............................................................................................................................................. 119
Glossary of General Terms ................................................................................................................... 119
Index ........................................................................................................................................................ 121
Overview
The following links provide quick access to information concerning Design Exploration and its use:
Limitations (p. 4)
Parameters (p. 7)
Workflow (p. 9)
Available Tools
ANSYS Design Exploration software offers a powerful suite of DOE schemes: Central Composite Design (CCD) and Optimal Space-Filling. CCD provides a traditional DOE sampling set, while Optimal Space-Filling's objective is to gain the maximum insight from the fewest number of points. This feature is especially useful when the computation time available to the user is limited.
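To make the CCD scheme concrete, the following sketch (illustrative Python, not ANSYS code) enumerates the points of a face-centered central composite design in coded units, assuming each factor is scaled to the interval [-1, 1]:

```python
from itertools import product

def central_composite(n_factors, alpha=1.0, center_points=1):
    """Face-centered CCD in coded units: factorial corner points,
    axial (star) points at +/- alpha on each axis, plus center points."""
    corners = [list(p) for p in product((-1.0, 1.0), repeat=n_factors)]
    axial = []
    for i in range(n_factors):
        for a in (-alpha, alpha):
            pt = [0.0] * n_factors
            pt[i] = a
            axial.append(pt)
    center = [[0.0] * n_factors for _ in range(center_points)]
    return corners + axial + center

# For 2 factors: 4 corners + 4 axial + 1 center point
print(len(central_composite(2)))  # 9
```

The coded points would then be mapped back to each parameter's physical range before the design points are solved.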
After sampling, ANSYS Design Exploration software provides four different meta-models to represent the simulation responses: Full Second-Order Polynomial, Kriging, Non-Parametric Regression, and Neural Network. These meta-models can accurately represent highly nonlinear responses, such as those found in high-frequency electromagnetics.
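As an illustration of the simplest of these meta-models, the following sketch (illustrative Python/NumPy, not ANSYS code) fits a full second-order polynomial to DOE samples by least squares and evaluates it at new points:

```python
import numpy as np

def quadratic_features(X):
    """Full second-order basis: 1, x_i, and x_i * x_j for i <= j."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i, k)]
    return np.column_stack(cols)

def fit_response_surface(X, y):
    """Least-squares coefficients of the quadratic meta-model."""
    A = quadratic_features(X)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    """Evaluate the fitted meta-model at new sample points."""
    return quadratic_features(X) @ coef
```

A polynomial of this form is exact for responses that are themselves quadratic; Kriging or Non-Parametric Regression would be used when the response is more strongly nonlinear.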
Once the simulation responses are characterized, ANSYS Design Exploration supplies three different types of optimization algorithms: Screening (Shifted Hammersley), Multi-Objective Genetic Algorithm (MOGA), and Non-linear Programming by Quadratic Lagrangian (NLPQL).
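The multi-objective approaches (Screening and MOGA) both rely on the notion of Pareto dominance to rank candidate designs. A minimal sketch (illustrative Python, assuming all objectives are minimized):

```python
def dominates(a, b):
    """Minimization: a dominates b if a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated points (the Pareto front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (1, 2) and (2, 1) trade off against each other; (2, 2) and (3, 3) are dominated
print(pareto_front([(1, 2), (2, 1), (3, 3), (2, 2)]))  # [(1, 2), (2, 1)]
```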
Several graphical tools are available to investigate a design: sensitivity plots, correlation matrices, curves,
surfaces, trade-off plots and parallel charts with Pareto Front display, and spider charts.
Correlation matrix techniques are also provided to help the user identify the key parameters of a design
before creating response surfaces.
differences due to manufacturing imperfections quantified by manufacturing tolerances. Probabilistic characterization provides a probability of success or failure, not just a simple yes/no evaluation. For instance, a probabilistic analysis could determine that one part in one million would fail, or estimate the probability of a product surviving its expected useful life.
Inserting a Six Sigma system in the schematic provides the robustness analysis. The process is very similar to response surface creation. The main difference is in the definition of the parameters: instead of minimum and maximum values defining the parameters' ranges, the parameters are described in terms of statistical distribution curves and their associated parameters. Once the parameters are defined, a new DOE is computed, as well as the corresponding response surfaces. Additional results in the form of probabilities and statistical distributions of the output parameters are provided, along with the sensitivity charts and the response surfaces.
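The probabilistic idea can be sketched in a few lines: instead of holding a fixed value, an input is drawn from its statistical distribution, and the probability of exceeding a limit is estimated from the samples. This is an illustrative Monte Carlo sketch in Python; the thickness/stress relation is hypothetical and is not part of the product:

```python
import random

def probability_of_failure(limit, n=100_000):
    """Monte Carlo sketch: sample an uncertain input from its
    statistical distribution and count exceedances of the limit.
    The thickness/stress model below is purely illustrative."""
    failures = 0
    for _ in range(n):
        thickness = random.gauss(2.0, 0.05)  # e.g. mean 2.0 mm, sigma 0.05 mm
        stress = 500.0 / thickness           # hypothetical response
        if stress > limit:
            failures += 1
    return failures / n
```

In the actual product, the samples are evaluated on the response surface rather than by direct simulation, which is what makes estimating very small failure probabilities affordable.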
Limitations
Suppressed properties are not available for the Design of Experiments (DOE) method.
Note
You cannot resume a DesignXplorer database that was created in a previous release of Workbench.
You can resume a project that was created in an earlier version of Workbench, but any associated
DesignXplorer database will be ignored. However, all parameters originally defined by the project
will still be available to Design Exploration, and the Design Exploration components can be used
to rebuild and re-solve the DesignXplorer analyses.
- when the user Updates the cells that generate Design Points (Design of Experiments, Parameters Correlation, Response Surface Refinement)
- when the user issues a New or Open command to switch to a project without any Design Exploration systems
Note
If you do not have an ANSYS DesignXplorer license, you can still resume an existing Design Exploration project and review the already generated results, with some interaction possible on charts. However, as soon as a Design Exploration Update or a Response Surface evaluation is needed, a license is required and an error dialog is displayed.
In order to use the ANSYS Simultaneous Design Point Update capability you must have one
solver license for each simultaneous solve. Be aware that the number of Design Points that
can be solved simultaneously may be limited by hardware, by your RSM configuration, or by
available solver licenses.
Project Schematic
You will initially create your project containing analysis systems, models, and other systems in the Project
Schematic. Then you can choose which of the parameters in those systems will be exposed to the Design
Exploration systems through the Parameter Set bar. Once the analysis systems and Parameter Set are defined,
you can add the Design Exploration systems that you need. A project schematic containing Design Exploration
systems would look something like the following:
Toolbox
When you are viewing the Project Schematic, the following system templates are available from the Design
Exploration Toolbox.
To perform a particular type of analysis, drag the template from the Toolbox onto your Project Schematic
below the Parameter Set bar.
When you are viewing a workspace, the Toolbox will display items that can be added to the Outline based
on the currently selected cell. For example, if you are in a Response Surface workspace and you select a
Response Point cell in the outline, the Toolbox will contain chart templates that you can add to that Response
Point. Double-click a template or drag it onto the selected cell in the Outline view to add it to the Outline.
Workspace
Once a Design Exploration system is added to the schematic, you can edit the cells to open the Workspace,
which allows you to set up the analysis options, run the analysis, and view the results. For example, if you
right click on the Design of Experiments cell and select Edit ..., your workspace will show the following
configuration:
Parameters
You will see this window configuration for any Design Exploration system cell that you edit. The four panels
visible are:
Outline: provides a summary of all of the items that make up the cell you are editing.
Table: provides a list of either the design points or response points associated with that cell.
Properties: provides access to the properties of the various cells in the currently displayed Outline. For
example, you can set parameter limits, analysis types, and chart options from the properties panel.
Chart: displays the various charts available for the different Design Exploration system cells.
In general, you select a cell in the Outline view and either set up its Properties or review the Chart and/or
Table associated with that cell.
Toolbar
When you are editing a Design Exploration cell, additional buttons are added to the Workbench toolbar:
a button that allows you to update the particular object you are editing (for example, Update Design
of Experiments)
for some objects, a button that allows you to preview the results of the update (for example, Preview
Design of Experiments)
Parameters
The following types of parameters are used in Design Exploration:
See Working with Parameters and Design Points for more information about parameters.
Input Parameters
Input parameters are those parameters that define the inputs to the analysis for the model under investigation.
Input parameters have predefined ranges which can be changed. These include (and are not limited to) CAD
parameters, Analysis parameters, and DesignModeler parameters. CAD and DesignModeler input parameters
may include length, radius, etc.; Analysis input parameters may include pressure, material properties, materials, sheet thickness, etc.
When you start a Design of Experiments analysis in Design Exploration, a default of +/- 10% of the current value of each input parameter is used to initialize the parameter range. If a parameter has a current value of 0.0, the initial parameter range is computed as 0.0 +/- 10.0. Because Design Exploration is not aware of the physical limits of parameters, you will need to check that the assigned range is compatible with the physical limits of the parameter. Initially, the current value of the parameter within Design Exploration will be the midpoint between the upper and lower bounds.
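The documented default can be expressed as a small helper (a sketch of the behavior described above, not ANSYS code):

```python
def default_range(current_value, fraction=0.10, zero_span=10.0):
    """Documented default: +/- 10% of the current value,
    or 0.0 +/- 10.0 when the current value is 0."""
    if current_value == 0.0:
        lower, upper = -zero_span, zero_span
    else:
        delta = abs(current_value) * fraction
        lower, upper = current_value - delta, current_value + delta
    return lower, upper

print(default_range(50.0))  # (45.0, 55.0)
print(default_range(0.0))   # (-10.0, 10.0)
```

Note that the current value is the midpoint of the resulting bounds, consistent with the text above.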
Input parameters can be discrete or continuous, and each of these has specific forms.

Parameter Type          Description                                      Example
Discrete Parameters     Distinct configurations or states of the model   Number of holes in a geometry
Continuous Parameters   Continuous, non-interrupted range of values      CAD dimension, load magnitude
Continuous parameters physically vary in a continuous manner between a lower and an upper bound defined by the user. Examples are a CAD dimension or a load magnitude.
Discrete parameters physically represent different configurations or states of the model. An example is the
number of holes in a geometry. You can set a parameter to be discrete by selecting Discrete as the Classification for the parameter in the Properties view. You would then enter the values for the Levels that you
want the parameter to have in the Table view for the parameter. Levels are limited to integer values and
will not be sorted as you enter them.
Usability parameters are continuous parameters for which a post-processing filter is defined. You can enter
a restricted list of levels that are the only values authorized for post-processing. You can change a continuous
parameter to a usability parameter by selecting Usability as the Classification for the parameter in the
Properties view. You would then enter the values for the Levels that you want the parameter to have in the
Table view for the parameter. Levels are sorted numerically as you enter them. You are not restricted to only
entering integers for Usability Levels. Once you have the levels defined, all of the analysis done on the
sample set, from charts to optimization, will only use the usability levels. For instance, the optimization will not suggest a design where the usability parameter has a value not contained in the list of levels.
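The usability filter can be pictured as snapping a continuous value to the nearest authorized level; an illustrative sketch (not ANSYS code):

```python
def snap_to_level(value, levels):
    """Usability filter sketch: restrict a continuous value to the
    nearest authorized level (levels are kept sorted, as in the Table view)."""
    return min(sorted(levels), key=lambda lv: abs(lv - value))

print(snap_to_level(2.7, [4.0, 1.0, 2.5]))  # 2.5
```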
Output Parameters
Output parameters are those parameters that result from the geometry or are the response outputs from
the analysis. These include (and are not limited to) volume, mass, frequency, stress, heat flux, and so forth.
Derived Parameters
Derived parameters are specific output parameters defined as an analytical combination of output or input
parameters. As the definition suggests, derived parameters are calculated from other parameters by using
equations that you provide. They are created in the analysis system and passed into Design Exploration as
output parameters. See Working with Parameters and Design Points for more information.
Design Points
A Design Point is defined by a snapshot of parameter values where output parameter values were calculated
directly by a project update. Design Points are created by Design Exploration; for instance, when processing
a Design of Experiments or a Correlation Matrix, or refining a Response Surface.
It is also possible to insert a Design Point at the project level from an optimization Candidate design, in order
to perform a validation update. Note that the output parameter values are not copied to the created Design
Point since they were calculated by Design Exploration and are, by definition, approximated. Actual output
parameters will be calculated from the Design Point input parameters when a project update is done. See
Adding Design Points and Response Points (p. 60) for more information on adding Design Points to the project.
Response Points
A Response Point is defined by a snapshot of parameter values where output parameter values were calculated
in Design Exploration from a Response Surface. As such, the parameter values are approximate and calculated
from response surfaces, and the most promising designs should be verified by a solve in the analysis system
using the same parameter values.
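Because response-point outputs are interpolated, a simple way to judge such a verification solve is the relative deviation between the estimate and the recomputed value. The numbers below are purely hypothetical:

```python
def relative_error(predicted, solved):
    """Relative deviation of a response-surface estimate from a solver result."""
    return abs(predicted - solved) / abs(solved)

predicted = 104.2   # hypothetical output read from the response surface
solved = 101.8      # hypothetical output recomputed by the analysis system
print(round(relative_error(predicted, solved), 4))  # 0.0236
```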
Workflow
To run a parametric analysis in Design Exploration, you will need to:
create your model (either in DesignModeler or other CAD software) and load it into one of the Analysis
systems available in Workbench
add to your project the Design Exploration features that you want to use
1. Create parts and assemblies in either the DesignModeler solid modeler or any of the supported CAD
systems. Features in the geometry that will be important to the analysis should be exposed as parameters. Such parameters can be passed to Design Exploration.
2. Drag an analysis system into the Project Schematic, connecting it to the DesignModeler or CAD file.
3.
4. In the Outline view, click on the various input parameters that you need to modify. Set the limits of
the selected parameter in the Properties view.
5. Drag the Design Exploration template that you would like to use into the Project Schematic below the
Parameter Set bar.
For any Design Exploration system, right click on the B1 cell of the system and select Duplicate. A new
system of that type will be added to the schematic under the Parameter Set bar. No data is shared with
the original system.
For a Design Exploration system that has a DOE cell, click on the DOE cell and select Duplicate. A new
system of that type will be added to the schematic under the Parameter Set bar. No data is shared with
the original system.
For a Design Exploration system that has a Response Surface cell, click on the Response Surface cell
and select Duplicate. A new system of that type will be added to the schematic under the Parameter
Set bar. The DOE data will be shared from the original system.
For a GDO or SSA Design Exploration system, if you click on those cells and select Duplicate, a new
system of that type will be added to the schematic under the Parameter Set bar. The DOE and Response
Surface data will be shared from the original system.
Any cell in the duplicated Design Exploration system that contains data that is not shared from the original
system will be marked as Update Required.
1. Right click on each cell in each of the Design Exploration systems to open the workspace for that cell.
2. In the workspace, set up any analysis options that are needed (parameter limits, optimization objectives,
optimization type, six sigma parameter distributions, etc.).
3. To run the analysis for a cell from its workspace, click on the Update button in the toolbar for that cell
type (Update Optimization, Update Design of Experiments, etc.). You can also click on the Update
Project button to update the entire project.
4. Make sure that you set up and solve each cell in a Design Exploration system to complete the analysis
for that system.
5. View the results of each analysis from the workspaces of the various cells in the Design Exploration
systems (charts, tables, statistics, etc.).
Progress View
There is a Progress view that you can open from the View menu or from the Show Progress button at the
lower right corner of the window. During execution, a progress execution status bar appears in the Progress
cell. The name of the task that is executing appears in the Status cell and information about the execution
appears in the Details cell. This view continuously reports the status of the execution, until it is complete.
You can stop an execution by clicking the Stop button to the right of the progress bar and you can restart
it at a later time by using any of the Update methods that you would use to start a new Update for the cell
or workspace.
Messages View
If solution errors exist and the Messages view is not open, the Show Messages button in the lower right
of the window will flash orange and indicate the number of messages. Click this button to examine solution
error information. You can also open the Messages view from the View menu in the toolbar.
1. From the main menu, choose Tools>Options. An Options dialog box appears. Expand the Design
Exploration item in the tree.
2.
3. Change any of the option settings by clicking directly in the option field on the right. You may need
to scroll down to see all of the fields. You will see a visual indication for the kind of interaction required
in the field (examples are drop-down menus and direct text entries). Some sections of the screens
may be grayed out unless a particular option above them is selected.
4.
Design Exploration
The Design Points category includes:
Preserve Design Points After DX Run: If checked, after the solution of a Design Exploration cell completes, the design points created for the solution are saved to the project Table of Design Points. If this
option is selected, the following option is available:
Retain Files for Preserved Design Points: If checked, in addition to saving the design points to
the project Table of Design Points, each of the Design Exploration design points is exported as a separate project.
Number of Gridlines: Changes the number of points used by the individual continuous input parameter
axes in the 2D/3D response surface charts. This enhances the viewing resolution of these charts. The
range is from 2 to 100. The default is 10.
Number of Colors: Controls the number of colors used by 2D and 3D Tradeoff charts. Choosing more
colors provides a better color quality display at the expense of slower chart rendering. Choosing fewer
colors provides faster chart rendering at the expense of lower color quality. The following choices are
available:
[8]
[16]
[32] (default)
[64]
Significance Level: Controls the relative importance or significance of input variables. The allowable
range is from 0.0, meaning all input variables are "assumed" to be insignificant, to 1.0, meaning all input
variables are "assumed" to be significant. The default is 0.025.
Correlation Coefficient Calculation Type: Specifies the calculation method for determining sensitivity
correlation coefficients. The following choices are available:
Rank Order (Spearman) (default): Correlation coefficients are evaluated based on the rank of
samples.
Linear (Pearson): Correlation coefficients are evaluated based on the sample values.
Display Parameter Full Name: If checked (default), displays the full parameter name everywhere in the Design
Exploration user interface. If unchecked, displays the short parameter name everywhere in the Design
Exploration user interface, except for the Outline view which always displays the full parameter name.
Parameter Naming Convention: Sets the naming style of input parameters within Design Exploration.
The following choices are available:
Taguchi Style: Names the parameters as Continuous Variables and Noise Variables.
Uncertainty Management Style (default): Names the parameters as Design Variables and Uncertainty
Variables.
Reliability Based Optimization Style: Names the parameters as Design Variables and Random
Variables.
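The difference between the two Correlation Coefficient Calculation Types above can be sketched in plain Python (for illustration only; ANSYS computes these coefficients internally, and ties are ignored in this ranking sketch):

```python
import math

def pearson(x, y):
    """Linear (Pearson) correlation evaluated from the sample values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Rank-order (Spearman) correlation: Pearson applied to the ranks."""
    def ranks(v):  # ties ignored for brevity
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))

x = [1, 2, 3, 4, 5]
y = [1, 4, 9, 16, 25]              # monotonic but nonlinear
print(round(spearman(x, y), 3))    # 1.0 (the rank order is perfect)
print(round(pearson(x, y), 3))     # 0.981 (a straight line fits imperfectly)
```

A monotonic but nonlinear relationship like this is why the rank-order method is the default: it reports the perfect monotonic association that the linear coefficient understates.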
Design of Experiments
The Design of Experiments Type category includes the following settings. Descriptions of each setting are
included in the Design Points (p. 35) section under "Using Design Exploration" (p. 17).
Central Composite Design (default): Note that changing the CCD type in the Options dialog box will
generate new design points provided the study has not yet been solved.
G-Optimality
VIF-Optimality
Rotatable
Face Centered
Enhanced Template: Select this option to enable the enhanced template for the Rotatable and Face
Centered design types. Selecting it with other design types will have no effect.
Centered L2
Maximum Entropy
Max Number of Cycles: Indicates the maximum number of iterations which the base DOE undergoes
for the final sample locations to conform to the chosen DOE type.
Sample Type: Indicates the specific method chosen to determine the number of samples. The
following choices are available:
CCD Samples (default): Number of samples is the same as that of a corresponding CCD design.
Linear Model Samples: Number of samples is the same as that of a design of linear resolution.
Pure Quadratic Model Samples: Number of samples is the same as that of a design of pure quadratic (constant and quadratic terms) resolution.
Full Quadratic Model Samples: Number of samples is the same as that of a design of full quadratic
(all constant, quadratic and linear terms) resolution.
User Defined Samples: User determines the number of DOE samples to be generated.
Number of Samples: For a Sample Type of User Defined Samples, specifies the default number of
samples.
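For reference, the sample count of a full (non-fractional) central composite design follows from its construction: one center point, two axis points per parameter, and the factorial corners. This sketch assumes no fractional factorial, which is typically applied for larger parameter counts to reduce the total:

```python
def ccd_sample_count(k):
    """Samples in a full central composite design with k input parameters:
    1 center point + 2*k axis points + 2**k factorial (corner) points."""
    return 1 + 2 * k + 2 ** k

print(ccd_sample_count(2))  # 9
print(ccd_sample_count(3))  # 15
```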
Response Surface
The Type category includes:
Non-Parametric Regression: Provides improved response quality. Initialized with one of the available
DOE types.
Neural Network: Non-linear statistical approximation method inspired by biological neural processing.
Neural Network Options include:
Number of Cells: The number of cells controls the quality of the meta model. The higher it is, the
better the network can capture parameter interactions. The recommended range is from 1 to 10.
The default is 3.
Once a meta model is solved, it is possible to switch to another meta model type or change the options for
the current meta model in the Properties view for that Response Surface workspace in the analysis system.
After changing the options in the Properties view, you must Update the Response Surface to obtain the new
fitting.
Repeatability: When checked, seeds the random number generator to the same value each time you
generate uncertainty analysis samples. With Repeatability unchecked, the random number generator
is seeded with the system time every time you generate samples. This applies to all methods in Design
Exploration where random numbers are needed, as in Six Sigma Analysis or Goal Driven Optimization.
The default is checked.
Sampling Magnification: Indicates the reduction factor applied to regular Latin Hypercube samples
while still achieving a certain probability of failure (Pf). For example, the lowest probability of failure for
1000 Latin Hypercube samples is approximately 1/1000; a magnification of 5 means that 200
weighted/biased Latin Hypercube samples are used to approach that same lowest probability of 1/1000. It is recommended not to use a magnification greater than 5. A greater magnification may result in significant Pf error
due to highly biased samples. The default is 5.
As Goals (default): Implies that the upper, lower and equality constrained objectives of the candidate
designs shown in the Table of Optimization are treated as goals; thus any violation of the objectives
is still considered feasible.
As Hard Constraints: When chosen, the upper, lower and equality constrained objectives are treated
as hard constraints; that is, if any of them are violated then the candidate is no longer displayed.
So, in some cases no candidate designs may be displayed depending on the extent of constraint
violation.
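The Sampling Magnification description above rests on a property of Latin Hypercube sampling: with n samples, one per equal-probability stratum, the smallest resolvable probability of failure is about 1/n. A minimal, unweighted sketch (the weighted/biased variant used by the product is more involved):

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Place one sample in each of n equal-probability strata per dimension."""
    rng = random.Random(seed)
    columns = []
    for _ in range(dims):
        strata = [(i + rng.random()) / n for i in range(n)]  # one point per stratum
        rng.shuffle(strata)                                  # random pairing across dimensions
        columns.append(strata)
    return list(zip(*columns))  # n points, each with dims coordinates

pts = latin_hypercube(1000, 2)
print(1 / len(pts))  # 0.001 -- roughly the smallest resolvable Pf
```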
parameterize your solution and view an interpolated response surface for the parameter ranges
view the parameters associated with the minimum and maximum values of your outputs
create a correlation matrix that shows you the sensitivity of outputs to changes in your input parameters
set output objectives and see what input parameters will meet those objectives
Design of Experiments
Design of Experiments is a technique used to determine the location of sampling points and is included as
part of the Response Surface, Goal Driven Optimization, and Six Sigma systems. There are several versions
of design of experiments available in engineering literature. These techniques all have one common characteristic: they try to locate the sampling points such that the space of random input parameters is explored
in the most efficient way, or they obtain the required information with a minimum of sampling points.
Parameters Correlation
Probabilistic sensitivities are based on a statistical correlation analysis between the individual probabilistic
design variables. You can review the Parameters Correlation data that has been used to derive sensitivities
and decide if individual sensitivity values are significant or not. This information is collected in the correlation
matrix of the random output parameters versus the random input variables. You can also review the correlations that have been sampled between random input variables. The correlations between random output
parameters are important if you want to use the probabilistic results of your probabilistic analysis as input
for another probabilistic analysis.
Response Surface
There is one Response Surface or Curve for every output parameter. Output parameters are represented in
terms of the input parameters, which are treated as independent variables.
For the deterministic method, response surfaces for all output parameters are generated in two steps:
Solving the output parameters for all design points as defined by a design of experiments
Fitting the output parameters as a function of the input parameters using regression analysis techniques
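The two steps can be sketched in one input dimension. This is for illustration only: `true_model` stands in for the analysis solve, and the fit is an exact quadratic through three design points via Lagrange interpolation, whereas real response surfaces use regression over many points and parameters:

```python
def true_model(x):                     # stands in for the solver (step 1)
    return x ** 2 + 2 * x + 1

doe_x = [0.0, 1.0, 2.0]                # design points from a 1-D DOE
doe_y = [true_model(x) for x in doe_x]

def response_surface(x):               # fitted surrogate (step 2)
    """Quadratic through the three DOE points (Lagrange form)."""
    y = 0.0
    for i, xi in enumerate(doe_x):
        li = 1.0
        for j, xj in enumerate(doe_x):
            if i != j:
                li *= (x - xj) / (xi - xj)
        y += doe_y[i] * li
    return y

print(response_surface(1.5))  # 6.25, matching the true model at an unsampled point
```

Once fitted, the surrogate is evaluated instead of the solver, which is what makes response points cheap but approximate.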
Design of Experiments
A Design of Experiments cell is part of the Response Surface, Goal Driven Optimization, and Six Sigma systems
(although the DOE for a six sigma system treats the inputs and outputs differently and cannot share data
with the other types of DOEs). The DOE workspace allows you to preview (generates the points but does
not solve them) or generate and solve a DOE matrix. Within the DOE workspace, you can set the input
parameter limits, set the properties of the DOE solve, view the design points table, and view several
parameter charts.
Select the output parameters and view their minimum and maximum values.
Select one of the available charts for display. You can insert directly into the Outline as many new charts
from the chart toolbox as you want. When a chart is selected, you can change the data properties of
the chart (chart type and parameters displayed).
Properties: (See Design Criteria Considered in DOE (p. 36) for complete descriptions of DOE options.)
Preserve Design Points After DX Run: check this box if you want to retain design points at the project
level from each update of this DOE you perform.
DOE Type
Custom
Table: Displays a data grid of the design points that is populated automatically during the solving of the points.
It displays input and output parameters for each design point. You can add points manually if the DOE Type
is set to Custom.
Chart: Displays the available charts described below.
Related Topics:
Click on a line on the chart to display the values of the input and output parameters for that line in the
Parameters section. Use the Enabled checkbox to enable or disable the display of parameter axes on
the chart.
You can also look at this chart data a different way. Right click on the chart background and select Edit
Properties, then change Chart Type to Spider Chart, which will show all input and output parameters arranged in parallel axes.
X-Axis (top, bottom), Y-Axis (right, left): Design points (X top and bottom axes) can be plotted against
any of the input and output parameters (any of the axes).
Parameters Correlation
A linear association between parameters is evaluated using Spearman's rank or Pearson's product-moment
correlation coefficient. A correlation/sensitivity matrix is generated to demonstrate the correlation between input
parameters, and the sensitivity of output to input parameters.
A nonlinear (quadratic) association between parameters is evaluated using the coefficient of determination of
a quadratic regression between parameters. A determination matrix is generated between the parameters to
convey information about quadratic correlation, if any exists, that isn't detectable with the linear Spearman's or
Pearson's correlation coefficients.
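A determination matrix entry is the R² of a quadratic regression between a parameter pair. The sketch below (plain Python, illustrative only) shows a purely quadratic relationship whose linear correlation is zero but whose quadratic coefficient of determination is 1:

```python
def r2_quadratic(x, y):
    """Coefficient of determination of a least-squares quadratic fit
    y ~ a + b*x + c*x**2 (normal equations solved by elimination)."""
    n = len(x)
    s = [sum(xi ** k for xi in x) for k in range(5)]
    A = [[s[0], s[1], s[2]], [s[1], s[2], s[3]], [s[2], s[3], s[4]]]
    v = [sum(yi * xi ** k for xi, yi in zip(x, y)) for k in range(3)]
    # Gaussian elimination (no pivoting; adequate for this well-posed demo)
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            A[j] = [aj - f * ai for aj, ai in zip(A[j], A[i])]
            v[j] -= f * v[i]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (v[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    pred = [coef[0] + coef[1] * xi + coef[2] * xi ** 2 for xi in x]
    ym = sum(y) / n
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ym) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [xi ** 2 for xi in x]            # purely quadratic: linear correlation is 0
print(round(r2_quadratic(x, y), 6))  # 1.0
```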
The following views are available in the Parameters Correlation workspace to allow you to customize your
search and view the results:
Outline: allows you to:
Select the Parameters Correlation cell and change its properties and view the number of samples generated for this correlation.
Select the output parameters and view their minimum and maximum values.
Select one of the available charts for display. When a chart is selected, you can change the data properties of the chart (chart type and parameters displayed).
Properties:
Preserve Design Points After DX Run: check this box if you want to retain design points at the project
level from this parameters correlation.
Note
There is no way to view the design points created for a parameters correlation other than
preserving them at the project level.
Reuse the samples already generated: check this box if you want to reuse the samples generated
in a previous correlation.
Spearman
Pearson
Mean Value Accuracy (if Auto Stop is enabled): set the desired accuracy for the mean value of the
sample set.
Standard Deviation Accuracy (if Auto Stop is enabled): set the desired accuracy for the standard
deviation of the sample set.
Convergence Check Frequency (if Auto Stop is enabled): the number of simulations to execute before
checking for convergence.
Size of Generated Sample Set: the number of samples generated for the correlation solution.
Table: Displays both a correlation matrix and a determination matrix for the input and output parameters.
Chart: Displays the available charts described below.
Related Topics:
Enable or disable the display of the Quadratic and Linear trend lines.
View the Coefficient of Determination for the Quadratic and Linear trend lines.
Related Topics:
Correlation Matrix
Provides information about the linear correlation between a parameter pair, if any. The degree of correlation
is indicated by color in the matrix. Placing your cursor over a particular square will show you the correlation
value for the two parameters associated with that square. Select the Correlation Matrix chart cell in the
Outline to display the chart in the Chart view. Use the Properties view as follows:
Related Topics:
Determination Matrix
Provides information about the nonlinear (quadratic) correlation between a parameter pair, if any. The degree
of correlation is indicated by color in the matrix. Placing your cursor over a particular square will show you
the correlation value for the two parameters associated with that square. Select the Determination Matrix
chart cell in the Outline to display the chart in the Chart view. Use the Properties view as follows:
Properties:
Related Topics:
Sensitivities Chart
Allows you to graphically view the global sensitivities of each output parameter with respect to the input
parameters. Select the Sensitivities chart cell under Charts in the Outline to display the chart in the Chart
view. Use the Properties view as follows:
Properties:
Related Topics:
Response Surface
Builds a response surface from the DOE Design Points input and output values based on the chosen Response
Surface Type. Within the Response Surface workspace, you can view the input parameter limits and initial
values, set the properties of the response surface algorithm, view the response points table, and view several
types of response charts. The following views are available in the Response Surface workspace to allow you
to customize your Response Surface and view the results:
Outline: Allows you to:
Select the Response Surface cell and change its properties, including Response Surface Type.
Select input parameters to view their properties. You can also enable and disable the use of input
parameters by selecting or deselecting the Enable box next to the parameter.
Select the output parameters and view their minimum and maximum values, set their transformation
type, and view their goodness of fit.
Enable or Disable the Min-Max Search and select the Min-Max Search cell to view the table of results.
Select a Response Point and view its input and output parameter properties.
You can change the input values in the Properties view and see the corresponding output values.
From the right click menu, you can reset the input parameter values of the selected Response Point
to the initial values that were used to solve for the Response Surface.
In the Properties view, you can enter Notes about the selected Response Point.
Select added Response Points and view their properties. Drag charts from the toolbox to the Response
Points to view data for the added Response Points.
Select one of the available charts for display. When a chart is selected, you can change the data properties of the chart (chart type and parameters displayed).
Properties:
Preserve Design Points After DX Run: check this box if you want to retain, at the project level, design
points that are created when refinements are run for this Response Surface.
Note
Checking this box will not preserve any design points unless you run either a manual refinement or one of the Kriging refinements, since the Response Surface uses the design points
generated by the DOE. If the DOE of the Response Surface does not preserve design points,
when you perform a refinement, only the refinement design points will be preserved at the
project level. If the DOE is set to preserve design points, and the Response Surface is set to
preserve design points, when you perform a refinement the project will contain the DOE
design points and the refinement design points.
Kriging
Non-Parametric Regression
Neural Network
Refinement Type: Manual is available for all Response Surface Types, allowing you to enter Refinement
Design Points manually; Auto and Interactive are also available options for the Kriging Response Surface
Type.
Table:
Displays a grid of the Refinement Design Points created for this Response Surface. Both manual Refinement Design Points and Refinement Design Points created by one of the Kriging refinement algorithms
can be found in this table. Add a manual Refinement Design Point by entering a value for one of the
inputs on the New Refinement Design Point row of the table.
Displays a data grid of the Response Points for that response surface. It displays input and output parameters for each response point. You can change the points in the table and add new ones in several
different ways:
You can change the input values of a Response Point and see the corresponding output value.
You can reset the input parameter values of a Response Point to the initial values that were used
to solve for the Response Surface by right-clicking on the Response Point in the table and selecting
Reset Response Point.
You can add Response Points to the table by right-clicking on the Response Surface or a row in
several different tables, including the Min-Max Search and optimization tables, and selecting Insert
as Response Points.
You can create new Response Points by directly entering values into the input parameter cells of
the row designated with an asterisk (*).
Chart: Displays the available charts described below for each response point.
Related Topics:
Refine the Model Using the Kriging Meta Model (p. 51)
Min-Max Search
The Min-Max Search examines the entire output parameter space from a Response Surface to approximate
the minimum and maximum values of each output parameter. Select the Min-Max Search cell in the Outline
to display the sample points that contain the calculated minimum and maximum values for each output
parameter (they will be shown in the Response Surface Table). The maximum and minimum values in the
output parameter properties view will also be updated based on the results of the search.
The following views are available in the Min-Max Search workspace to allow you to customize your search
and view the results:
Outline: Check the Enable box to perform a Min-Max Search each time the Response Surface is updated.
Uncheck the box to disable the Min-Max Search feature. You may want to disable this feature if you have a
large number of input parameters and the search would be time consuming.
Properties:
Enter the Number of Start Points to use for the Min-Max search algorithm. The more start points
entered, the longer the search time.
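The effect of multiple start points can be sketched with a crude derivative-free search (for illustration only; `surface` stands in for a response surface, and the actual Min-Max Search algorithm differs):

```python
import random

def local_search(f, x0, lo, hi, step=0.1, iters=200):
    """Greedy hill-climb toward a local minimum of f on [lo, hi]."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        improved = False
        for dx in (-step, step):
            xn = min(hi, max(lo, x + dx))
            if f(xn) < fx:
                x, fx, improved = xn, f(xn), True
        if not improved:
            step /= 2            # refine until the step is negligible
            if step < 1e-8:
                break
    return x, fx

def min_max_search(f, lo, hi, n_starts=5, seed=0):
    """Run local searches from several start points; keep the extremes."""
    rng = random.Random(seed)
    starts = [lo + (hi - lo) * rng.random() for _ in range(n_starts)]
    fmin = min(local_search(f, s, lo, hi)[1] for s in starts)
    fneg = min(local_search(lambda x: -f(x), s, lo, hi)[1] for s in starts)
    return fmin, -fneg

surface = lambda x: (x - 1.0) ** 2     # stand-in response surface output
fmin, fmax = min_max_search(surface, -2.0, 3.0)
print(round(fmin, 6), round(fmax, 6))  # 0.0 9.0
```

With a single start point a local search can stall at the wrong boundary (here, x = 3 instead of x = -2 for the maximum); more start points make finding the true extremes more likely at the cost of search time.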
Related Topics:
Response Chart
Allows you to graphically view the impact that changing each input parameter has on the displayed output
parameter. Select the Response chart cell in the Outline to display the response chart in the Chart view.
You can add response points to the Table of Response Surface by right-clicking on the response chart and
selecting Insert as Response Points. You can add design points to the Project by right-clicking on the response chart and selecting Insert as Design Points. Use the Properties view as follows:
Properties:
Number of Points on X (and Y, in 3D): set the number of points that you want to appear on the response curve.
Choose the input parameter(s) to display on the X Axis and Y Axis (3D).
Choose the output parameter to display on the Z Axis (3D) or Y Axis (2D).
Use the sliders to change the values of the input parameters that are not displayed on the curve to see
how they affect the values for the parameters that are displayed on the curve. You may also enter
specific values in the boxes above the sliders.
View the interpolated output parameter values for the selected set of input parameter values.
You can view Goodness of Fit information for the output parameters on the generated response curve. Select
one of the output parameter cells in the Outline to display its properties. For each output, you can also
change the transformation function to improve the quality of the approximation.
Related Topics:
Use the sliders to change the values of the input parameters to see how the sensitivity is changed for
each output.
View the interpolated output parameter values for the selected set of input parameter values.
Related Topics:
Spider Chart
Allows you to visualize the impact that changing the input parameters has on all of the output parameters
simultaneously. Select the Spider chart cell in the Outline to display the chart in the Chart view. Use the
Properties view as follows:
Properties:
Use the sliders to change the values of the input parameters to see how they affect the output parameters.
View the interpolated output parameter values for the selected set of input parameter values.
You can also look at this chart data a different way. Right click on the chart background and select Edit
Properties, and change Chart Type to Parallel Coordinate. Then click on the Spider cell under Response
Point in the Outline view and use the input parameter sliders in Properties view to see how specific settings
of various input parameters change the output parameters, which are arranged in parallel 'Y' axes in the
chart.
Related Topics:
Select the Optimization cell and change its properties, and view the size of the generated sample set.
Select the inputs and outputs to view the input values used in the analysis and the calculated maximum
and minimum values of each of the outputs.
Select one of the available charts for display. When a chart is selected, you can change the data properties of the chart (x- and y-axis parameters, parameters displayed on bar chart, etc.).
Properties: (See Optimization Options (p. 63) for complete descriptions of GDO options.)
Optimization Method
MOGA
NLPQL
Related Topics:
Screening
As Goals (default)
As Hard Constraints
Table: Before the update, provides fields to allow you to set your objective for each input and output, or
set a target value, and indicate the relative importance of that objective. (See Defining Design Objectives (p. 67)
for complete descriptions of GDO options.) After the update, displays up to three of the best candidate points for
the stated objectives. A number of gold stars or red crosses are displayed next to each objective-driven
parameter to indicate how well it meets the stated objective, from three red crosses (the worst) to three
gold stars (the best).
Chart: Displays the available charts described below.
Related Topics:
Tradeoff Chart
Allows you to view the Pareto fronts created from the samples generated in the GDO. Select the Tradeoff
chart cell under Charts in the Outline to display the chart in the Chart view. Use the Properties view as follows:
Properties:
Number of Pareto Fronts to Show Set the number of Pareto fronts that are displayed on the chart.
Show infeasible points Enable or disable the display of infeasible points. (Available when some of
the objectives are constraints.)
Click on a point on the chart to display a Parameters section that shows the values of the input and
output parameters for that point.
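The Pareto fronts shown in the Tradeoff chart can be understood as successive "peels" of non-dominated points. The sketch below is illustrative only (it assumes two minimized objectives and represents sample points as tuples; it is not the DX implementation):

```python
# Sketch: ranking a sample set into successive Pareto fronts for two
# minimized objectives. Point format (tuples) is assumed for illustration.

def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_fronts(points):
    """Peel off successive fronts; front 1 is the non-dominated set."""
    remaining = list(points)
    fronts = []
    while remaining:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        fronts.append(front)
        remaining = [p for p in remaining if p not in front]
    return fronts

samples = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (5.0, 5.0)]
fronts = pareto_fronts(samples)
# Front 1 is the trade-off curve: (1,4), (2,2), (4,1)
```

Setting Number of Pareto Fronts to Show then simply controls how many of these successive fronts are drawn.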
Related Topics:
Samples Chart
Allows you to visually explore a sample set given defined objectives. Select the Samples chart cell under
Charts in the Outline to display the chart in the Chart view. Use the Properties view as follows:
Properties:
Chart Mode set to Candidates or Pareto Fronts. For Pareto Fronts, the following options can be set:
Number of Pareto Fronts to Show either enter the value or use the slider to select the number
of Pareto Fronts displayed.
Show infeasible points Enable or disable the display of infeasible points. (Available when some of
the objectives are constraints.)
Click on a line on the chart to display the values of the input and output parameters for that line in the
Parameters section. Use the Enabled checkbox to enable or disable the display of parameter axes on
the chart.
Related Topics:
View the Skewness and Kurtosis properties for each input parameter distribution.
View the calculated Mean and Standard Deviation for all Distribution Types except Normal and Truncated
Normal, where you can set those values.
Properties: (The properties for the uncertainty input parameters are described here.)
Uniform
Triangular
Normal
Truncated Normal
Lognormal
Exponential
Beta
Weibull
Quantile: the input parameter value for the given PDF and CDF values.
PDF: (Probability Density Function) the density of the input parameter along the X-axis.
CDF: (Cumulative Distribution Function) the integral of the PDF along the X-axis.
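The Quantile/PDF/CDF relationship can be sketched for one concrete distribution type. The example below uses an exponential distribution with an assumed rate parameter; the CDF is the integral of the PDF and the quantile function inverts the CDF:

```python
import math

# Sketch of the Quantile/PDF/CDF relationship for an exponential
# distribution (rate parameter lam is an assumed example value).

lam = 0.01  # assumed rate

def pdf(x):
    return lam * math.exp(-lam * x)

def cdf(x):
    return 1.0 - math.exp(-lam * x)      # closed-form integral of pdf

def quantile(p):
    return -math.log(1.0 - p) / lam      # inverse of cdf

# The quantile is the parameter value whose CDF equals the given probability.
assert abs(cdf(quantile(0.5)) - 0.5) < 1e-12
```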
Chart: When an input parameter is selected in the Outline view, displays the Probability Density Function
and the Cumulative Distribution Function for the Distribution Type chosen for the input parameter. There
are also Parameters Parallel and Design Points vs Parameter charts as you would have in a standard DOE.
Related Topics:
Select each input parameter and view its distribution properties, statistics, upper and lower bounds,
initial value, and distribution chart.
Select each output parameter and view its calculated maximum and minimum values, statistics and
distribution chart.
Set the table display format for each parameter to Quantile-Percentile or Percentile-Quantile.
Properties:
For the Six Sigma Analysis, the following properties can be set:
Sampling Type Type of sampling used for the Six Sigma Analysis
LHS
WLHS
Number of Samples
Probability Table Manner in which the analysis information is displayed in the Table view.
Quantile-Percentile
Percentile-Quantile
Table: Displays a table that shows for each of the samples in the set:
Probability: Probability that the parameter will be less than or equal to the parameter value shown.
Sigma Level: The approximate number of standard deviations away from the sample mean for the
given sample value.
If Probability Table is set to Quantile-Percentile in the Statistics section of the Properties view, you can edit
a parameter value and see the corresponding Probability and Sigma Level values. If it is set to Percentile-Quantile, the columns are reversed and you can enter a Probability or Sigma Level value and see the corresponding changes in the other columns.
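The conversion between Probability and Sigma Level can be sketched through the standard normal CDF (this is an assumed definition for illustration; it matches the usual sigma-level convention but is not quoted from the manual):

```python
import math

# Sketch: converting between Probability and Sigma Level via the
# standard normal CDF, using only the standard library.

def probability_from_sigma(sigma):
    """P(Z <= sigma) for a standard normal Z."""
    return 0.5 * (1.0 + math.erf(sigma / math.sqrt(2.0)))

def sigma_from_probability(p, lo=-10.0, hi=10.0):
    """Invert the normal CDF by bisection (no SciPy dependency)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if probability_from_sigma(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(probability_from_sigma(3.0), 5))  # 0.99865 at 3 sigma
```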
Chart: When a parameter is selected in the Outline view, displays its Probability Density Function and the
Cumulative Distribution Function. A global Sensitivities chart is available in the outline.
Related Topics:
Note
If you modify your analysis database after a Design Exploration solution (causing the parameters
to change), your Design Exploration studies may not reflect your analysis anymore. If you modify
your analysis, the Update Required symbol will appear on the Design Exploration cells to indicate
that they are out of date.
Note
You can change the way that units are displayed in your Design Exploration systems from the Units
menu. Changing the units display in this manner simply causes the existing data in each system to be
displayed in the new units system, and does not require an update of the Design Exploration systems.
If the user changes a parameter's unit from the Parameters cell on the schematic, all Design Exploration
systems become out of date and will be marked as Update Required.
Input Parameters
You can make changes to an input parameter in the first cell of the system (DOE, DOE (SSA), or Parameters
Correlation cell). In the Outline view of your cell workspace, select the input parameter that you want to
edit. Most changes will be made using the Properties view for that parameter. Changes made for parameters
will only affect the parameter information for the current system and any systems that share a project cell
(DOE, Response Surface) from that system.
An input parameter may be selected or unselected for use in that system by checking the box to the
right of it in the DOE Outline view.
In the Properties view, specify further attributes for the selected input parameter, such as a range (upper
and lower bounds) of values for a continuous input parameter or values for a discrete input parameter.
When you make any of the following changes to an input parameter in the first cell of the system (DOE or
Parameters Correlation cell):
change the nature of an input parameter from continuous to usability or discrete, etc.
Caution
Using the DOE Method, the number of generated design points in the DOE Matrix is directly related to the number of selected input parameters. Specifying many input parameters therefore makes heavy demands on computer time and resources,
including system analysis, DesignModeler geometry generation, and/or CAD system generation.
Also, large input parameter ranges may lead to inaccurate results.
Uniform
Triangular
Normal
Truncated Normal
Lognormal
Exponential
Beta
Weibull
For more information on the distribution types, see Distribution Functions (p. 100).
In the example below of a beam supporting a roof with a snow load, you could measure the snow height
on both ends of the beam 30 different times. Suppose the histograms from these measurements look like
the figures given below.
[Figure: relative frequency histograms of the measured snow heights, Snow Height H1 and Snow Height H2.]
From these histograms, you can conclude that an exponential distribution is suitable to describe the scatter
of the snow height data for H1 and H2. Suppose from the measured data we can evaluate that the average
snow height of H1 is 100 mm and the average snow height of H2 is 200 mm. The distribution parameter λ can
be derived directly as 1.0 divided by the mean value, which leads to λ = 1/100 = 0.01 for H1, and λ = 1/200 = 0.005
for H2.
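The rule above (exponential rate λ = 1 / mean) can be checked with a quick round trip: fit λ from the measured means, then sample from the fitted distributions and confirm the sample means come back near the measurements. This is an illustrative sketch, not part of the DX workflow:

```python
import random
import statistics

# lambda = 1 / mean for an exponential distribution:
mean_h1, mean_h2 = 100.0, 200.0        # measured average snow heights (mm)
lam_h1 = 1.0 / mean_h1                 # 0.01
lam_h2 = 1.0 / mean_h2                 # 0.005

# Round-trip check: sample from the fitted distribution and verify the
# sample mean is close to the measured average.
random.seed(0)
sample_h1 = [random.expovariate(lam_h1) for _ in range(100_000)]
print(lam_h1, lam_h2)                  # 0.01 0.005
print(statistics.mean(sample_h1))      # close to 100
```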
Output Parameters
Each output parameter corresponds to a response surface which is expressed as a function of the input
parameters. Some typical output parameters are: Equivalent Stress, Displacement, Maximum Shear Stress,
etc.
Design Points
When you solve a Design Exploration system, Design Points are generated based on the number of input
parameters in the system. Before solving a Design of Experiments or Response Surface (if performing a refinement) cell for a Six Sigma or non-Six Sigma system, you can preview the generated Design Points by
clicking the corresponding Preview button in the toolbar. You are not able to preview the generated Design
Points for a Parameters Correlation solution.
You can solve for the output parameter values for the generated design points in several ways:
From the workspace, click the corresponding Update button in the toolbar to solve for all of the generated design points.
From the Project Schematic, right click the cell and select Update from the menu.
From the Project Schematic, click the Update Project button in the toolbar to update all of the systems
in the project.
The Design Points are sent to the analysis system for solution, and will be solved simultaneously if the analysis system is configured to perform simultaneous solutions; otherwise they will be solved sequentially.
For the deterministic method, the locations of the generated Design Points are determined according to a
design of experiments method that is by default a central composite design with a fractional factorial design.
The number of generated Design Points is calculated using the number of input parameters selected in the
DOE. Response Surfaces will be generated from the associated DOE solution.
The user has selected Preserve Design Points After DX Run in the Design Exploration cell's Properties
view
The user has selected Preserve Design Points After DX Run in the Tools>Options>Design Exploration
dialog
You may want to preserve generated design points for the following reasons:
Preserving design points has the advantage of letting the user see the exact design points used by a
Design Exploration cell. In a Parameters Correlation analysis for instance, it is the only way to see the
actual design points used to calculate the correlation since there is no preview for that cell.
If you preserve the design points, a subsequent update of that Design Exploration system can reuse
the existing Design Points, which will reduce the CPU cost.
Note
When a Design Exploration design point fails to update, it is retained automatically in the project,
even if the preserve option is not checked.
In addition to preserving the Design Points in the project, you can also export the preserved Design Points
from the project. If you check the Retain Files for Preserved Design Points option in the Properties under
the Preserve Design Points After DX Run property, in addition to saving the design points to the project
Table of Design Points, each of the Design Exploration Design Points is exported as a separate project.
Note
If you run a Design Exploration analysis and export Design Points, and then you uncheck the
option to export them, any Design Points that were marked for export and are reused in ensuing
analyses will still be exported, although newly created Design Points will not be exported.
Note
If you change the Design of Experiments Type after doing an initial analysis and preview the
Design of Experiments Table, any Design Points generated for the new algorithm that are the
same as Design Points solved for a previous algorithm will appear as up-to-date. Only the Design
Points that are different from any previously submitted Design Points need to be solved.
Central Composite Design (default): Provides a screening set to determine the overall trends of the
Meta Model to better guide the choice of options in the Optimal Space-Filling Design.
A Design Type setting specifies a Central Composite Design (CCD) type to help improve the response surface
fit for DOE studies. For each CCD type, the alpha value is defined as the location of the sampling point
that accounts for all quadratic main effects. The following choices are available:
Auto Defined (default): Design Exploration automatically selects the Design Type based on the
number of input variables. This option is recommended for most cases, as it automatically
selects G-Optimality (if the number of input variables is 5) or VIF-Optimality otherwise.
However, the Rotatable design may be used if the default option does not provide good
goodness-of-fit values from the response surface plots. Also, the Enhanced template may be used if
the default Standard template does not fit the response surfaces well.
Face Centered: A three-level design with no rotatability. The alpha value equals 1.0. A Template
Type setting automatically appears, with Standard and Enhanced options. Choose Enhanced for
a possible better fit for the response surfaces.
Rotatable: A five-level design that includes rotatability. The alpha value is calculated based on the
number of input variables and a fraction of the factorial part. A design with rotatability has the same
variance of the fitted value regardless of the direction from the center point. A Template Type
setting automatically appears, with Standard and Enhanced options. Choose Enhanced for a possible
better fit for the response surfaces.
VIF-Optimality: A five-level design in which the alpha value is calculated by minimizing a measure
of non-orthogonality known as the Variance Inflation Factor (VIF). The more highly correlated the
input variable with one or more terms in a regression model, the higher the Variance Inflation Factor.
G-Optimality: Minimizes a measure of the expected error in a prediction and minimizes the largest
expected variance of prediction over the region of interest.
See DOE Using a Central Composite Design (p. 39) for more information.
Optimal Space-Filling Design: Creates optimal space-filling design of experiments (DOE) plans according
to specified criteria. The objective is to fill the design space efficiently with as few points as
possible while keeping the discrepancy as low as possible. This is a more efficient DOE scheme
when a more complex Meta Modeling technique such as Kriging, Non-Parametric Regression, or Neural
Networks is desired. In these situations, traditional CCD-based DOEs are often not suitable because they have
poor space-filling capabilities.
Note
If you have several parameters and one is discrete (having 4 values, for example) and you
ask for an Optimal Space-Filling analysis, User-Defined with 20 points, you will actually get
20 x 4 = 80 points. This is by design; a different DOE must be created for each combination of the
discrete parameters, which may quickly result in a large number of design points being generated.
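The point count in the note multiplies because a separate continuous DOE is generated per combination of discrete levels. A minimal sketch (the names below are illustrative, not the DX API):

```python
from itertools import product

# Sketch: total DOE size = (continuous DOE size) x (number of discrete
# level combinations). Names are illustrative placeholders.

continuous_doe_points = 20                 # User-Defined sample count
discrete_levels = {"p_discrete": [1, 2, 3, 4]}  # one discrete parameter

combinations = list(product(*discrete_levels.values()))
total_points = continuous_doe_points * len(combinations)
print(total_points)  # 20 x 4 = 80
```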
The following options are available with Optimal Space-Filling Design:
Maximum Number of Cycles: Determines the number of optimization loops the algorithm needs,
which in turn determines the discrepancy of the DOE. The optimization is essentially combinatorial,
so a large number of cycles will slow down the process; however, it will also make the discrepancy of
the DOE smaller. For practical purposes, 10 cycles is usually good for up to 20 variables. Must be
greater than 0. The default is 10.
Samples Type: Determines the number of DOE points the algorithm should generate. This option
is suggested if you have some advance knowledge about the nature of the Meta Model. The following
choices are available:
CCD Samples (default): Generates the same number of samples a CCD DOE would generate for
the same number of inputs. You may want to use this to generate a space filling design which
has the same cost as a corresponding CCD design.
Linear Model Samples: Generates the number of samples as needed for a linear Meta Model.
Pure Quadratic Model Samples: Generates the number of samples as needed for a pure quadratic Meta Model (no cross terms).
Full Quadratic Samples: Generates the number of samples needed to generate a full quadratic
model.
User-Defined Samples: Expects the user to enter the desired number of samples. When selected
with any design type, a Number of Samples option is available below the Samples Type entry.
You must enter a positive number. The default is 10.
Custom: Allows definition of a custom DOE Table. You can manually add new Design Points, entering
the input and (optionally) output parameter values directly into the table. If you previously solved the
DOE using one of the other algorithms, those Design Points will be retained and you can add new
Design Points to the table. You can also import and export design points into the custom DOE Table
from the Parameter Set.
The table can be in one of two modes: All Outputs Calculated, or All Outputs Editable. The default
mode for the table is All Outputs Calculated. To change table modes, right click on the table and select
the desired mode. The mode behavior is as follows:
All Outputs Calculated allows you to enter values for any or all of the inputs in a row, and the outputs
for that row will be calculated when the DOE is updated. In this mode, individual rows in the table
can be overridden so that the outputs are editable. To override a row, right click on it and select
Set Output Values as Editable.
All Outputs Editable allows you to enter values for any or all of the inputs and outputs in a row. In
this mode, individual rows in the table can be overridden so that the outputs are calculated. To
override a row, right click on it and select Set Output Values as Calculated. If a row is changed
from editable to calculated, it invalidates the DOE, requiring an Update. Rows with editable outputs
are not computed during an Update of the DOE.
You can add new rows by entering values in the * row of the table.
If the table is in All Outputs Calculated mode, you may enter values in the input parameter columns.
Once you have entered a value in one column in the * row, the row is added to the table and the
values for the remaining input parameters will be set to the initial values of the parameters. You
can then edit that row in the table and change any of the other input parameter values if needed.
Output parameter values will be calculated when the DOE is updated.
If the table is in All Outputs Editable mode, you can enter values in the input or output parameter
columns. If you enter a value in an input column in the * row, the row is added to the table and the
values for the remaining input parameters will be set to the initial values of the parameters.
Note
You can generate a DOE matrix using one of the other design options provided, then
switch to Custom to modify the matrix. The opposite however is not possible; that is, if
you define a Custom matrix first, then change to one of the other design options provided,
the matrix is cleared.
The table can contain derived parameters. Derived parameters are always calculated by
the system, even if the table mode is All Output Values Editable.
Editing output values for a row changes the DOE cell's state to UpdateRequired. The DOE
will need to be updated, even though no calculations are done.
If several rows are selected in the table and all of their outputs are calculated, the right
click menu will only offer the Set Output Values as Editable option. If all selected rows
are editable, the right click menu will only offer the Set Output Values as Calculated option.
If the selected rows are both Calculated and Editable, both options will be available on
the right click menu.
If the table is in All Outputs Editable mode, once all of the outputs are entered, the DOE
will need to be Updated even though no calculations are done.
If the Design Points are solved and you change the table to All Output Values Editable
and then change it back to All Output Values Calculated without making any changes,
the outputs will be marked Out of Date and you must Update the DOE. The Design Points
will be recalculated.
You can copy/paste in the table (to/from a spreadsheet, text file, or Design Point table,
for example). The copy/paste adapts to the current state of the table, pasting only inputs
or inputs and outputs if outputs are editable.
The DOE charts do not reflect the points added manually using the Custom option until
the DOE is Updated.
It is expected that the Custom option will be used to enter DOE plans that were built
externally. If you use this feature to enter all of the points in the matrix manually, you
must make sure to enter enough points so that a good fitting can be created for the response surface. This is an advanced feature that should be used with caution; always
verify your results with a direct solve.
The degree of non-orthogonality of regression terms can inflate the variance of model coefficients.
A sample point's position in the design, relative to the other sample points, can give it an abnormally
large influence on the fit over a subset of the entire set of observations.
An optimal CCD design should minimize both the degree of non-orthogonality of term coefficients and the
opportunity of sample points having abnormal influence. In minimizing the degree of non-orthogonality,
the Variation Inflation Factor (VIF) of regression terms is used. For a VIF-Optimality design, the maximum
VIF of the regression terms is to be minimized, and the minimum value is 1.0. In minimizing the opportunity
of influential sample points, the leverage value of each sample point is used. Leverages are the diagonal
elements of the Hat matrix, which is a function of the design matrix. For a G-Optimality design, the maximum
leverage value of sample points is to be minimized.
For a VIF-Optimality design, the alpha value/level is selected such that the maximum VIF is minimum.
Likewise, for a G-Optimality design, the alpha value/level is selected such that the maximum leverage is
minimum. The rotatable design is found to be a poor design in terms of VIF- and G-Efficiencies.
For an optimal CCD, the alpha value/level is selected such that both the maximum VIF and the maximum
leverage are the minimum possible. For the Auto Defined design, the alpha value is selected from either
the VIF- or G-Optimality design that meets the criteria. Since it is a multi-objective optimization problem, in
many cases, there is no unique alpha value such that both criteria reach their minimum. However, the alpha
value is evaluated such that one criterion reaches minimum while another approaches minimum.
For the current Auto Defined setup, a problem with five variables uses the G-Optimality design; all other
multi-variable problems use VIF-Optimality. In some cases, even though Auto Defined
provides an optimal alpha meeting the criteria, an Auto Defined design might not give as good a response
surface as anticipated due to the nature of the physical data used for fitting in the regression process. In
that case, you should try other design types that might give a better response surface approximation.
Note
Default values for CCD Types can be set in the Design Exploration > Design of Experiments
section of the Options dialog box, which is accessible from the Tools menu.
It is good practice to always verify some selected points on the response surface with an actual simulation
evaluation to confirm its validity for further analyses such as Design for Six Sigma. In some cases, a good
response surface does not mean a good representation of the underlying physical problem, since the response
surface is generated from predetermined sampling points in the design space and can sometimes
miss an unexpected change in some regions of the design space. In that case, you should try
Extended DOE. In Extended DOE, a mini CCD is appended to a standard CCD design: a second alpha
value is added, set to half the alpha value of the Standard CCD. The mini CCD is set up so that
the essences of the CCD design (rotatability and symmetry) are still maintained. The appended mini CCD serves
two purposes:
1.
2.
The two purposes seem to be conflicting in some cases where the response surface might not be as good
as that of the Standard DOE due to the limitation of a quadratic response surface in capturing a drastic
change within the design space.
The location of the generated design points for the deterministic method is based on a central composite
design. If N is the number of input parameters, then a central composite design consists of:
1 center point.
2*N axis points located at the -α and +α positions on each axis of the selected input parameters.
2^(N-f) factorial points located at the -1 and +1 positions along the diagonals of the input parameter
space.
The fraction f of the factorial design and the resulting number of design points are given in the following
table:
Table 1 Number of Generated Design Points as a Function of the Number of Input Parameters

Number of input parameters | Factorial number f | Number of design points
 1 |  0 |   5
 2 |  0 |   9
 3 |  0 |  15
 4 |  0 |  25
 5 |  1 |  27
 6 |  1 |  45
 7 |  1 |  79
 8 |  2 |  81
 9 |  2 | 147
10 |  3 | 149
11 |  4 | 151
12 |  4 | 281
13 |  5 | 283
14 |  6 | 285
15 |  7 | 287
16 |  8 | 289
17 |  9 | 291
18 |  9 | 549
19 | 10 | 551
20 | 11 | 553
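The counts in the table follow from summing one center point, 2·N axis points, and 2^(N−f) factorial points. A quick sketch reproducing a few rows (the closed-form count is inferred from the design description, not quoted from the manual):

```python
# Sketch: CCD design-point count as
#   1 center point + 2*N axis points + 2^(N - f) factorial points.

def ccd_points(n_inputs, f):
    """Total design points for N inputs and factorial fraction f."""
    return 1 + 2 * n_inputs + 2 ** (n_inputs - f)

# A few rows of the table above:
print(ccd_points(5, 1))    # 27
print(ccd_points(9, 2))    # 147
print(ccd_points(20, 11))  # 553
```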
For a design variable, the upper and lower levels of the DOE range coincide with the bounds specified
for the input parameter.
It often happens in optimization that the optimum point is at one end of the range specified for one
or more input parameters.
For an uncertainty variable, the upper and lower levels of the DOE range are the quantile values corresponding to a probability of 0.1% and 99.9%, respectively.
This is the standard procedure whether the input parameter follows a bounded (e.g., uniform) or unbounded (e.g., Normal) distribution, because the probability that the input variable value will exactly
coincide with the upper or lower bound (for a bounded distribution) is exactly zero. That is, failure can
never occur when the value of the input variable is equal to the upper or lower bound. Failure typically
occurs in the tails of a distribution, so the DOE points should be located there, but not at the very end
of the distribution.
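The 0.1% and 99.9% quantile bounds described above can be sketched for a Normal distribution by inverting its CDF; the mean and standard deviation below are assumed example values:

```python
import math

# Sketch: DOE range for an uncertainty variable taken at the 0.1% and
# 99.9% quantiles of a Normal(mean, std) distribution, inverting the
# CDF by bisection (standard library only).

def normal_cdf(x, mean, std):
    return 0.5 * (1.0 + math.erf((x - mean) / (std * math.sqrt(2.0))))

def normal_quantile(p, mean, std):
    lo, hi = mean - 10.0 * std, mean + 10.0 * std
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if normal_cdf(mid, mean, std) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

mean, std = 100.0, 10.0           # assumed distribution parameters
lower = normal_quantile(0.001, mean, std)  # ~ mean - 3.09 * std
upper = normal_quantile(0.999, mean, std)  # ~ mean + 3.09 * std
```

Note that both bounds sit in the tails rather than at the (infinite) ends of the distribution, which is the point made above.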
Note
The Design Points will be solved simultaneously if the analysis system is configured to perform
simultaneous solutions; otherwise they will be solved sequentially.
To clear the design points generated for the DOE matrix, return to the Project Schematic, right click on the
DOE cell, and select Clear Generated Data. You can clear data from any Design Exploration cell in the Project
Schematic in this way, and regenerate your solution for that cell with changes to the parameters if you so
desire.
Solving the output parameters for all design points as defined by a design of experiments.
Fitting the output parameters as a function of the input parameters using regression analysis techniques.
As many charts as needed can be generated and inserted in the Charts section of the Outline view for the
Response Surface. You can also save individual charts as graphic files.
Kriging
General Definitions
Non-Parametric Regression
Neural Network
General Definitions
The error sum of squares SSE is:

SSE = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 = (\{y\} - \{\hat{y}\})^T (\{y\} - \{\hat{y}\})    (1)

where:
y_i = value of the output parameter at the ith sampling point
\hat{y}_i = value of the regression model at the ith sampling point

The regression sum of squares SSR is:

SSR = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2    (2)

where:

\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i

The total sum of squares SST is:

SST = \sum_{i=1}^{n} (y_i - \bar{y})^2    (3)

For linear regression analysis the relationship between these sums of squares is:

SST = SSR + SSE    (4)

The regression model evaluated at the ith sampled location is:

\hat{y}_i = [t_i] \{c\}    (5)

where:
[t_i] = row vector of regression terms of the response surface model at the ith sampled location
\{c\} = [c_1 \; c_2 \; \ldots \; c_p]^T = vector of the regression parameters of the regression model
p = total number of regression parameters. For linear regression analysis, the number of regression
parameters is identical to the number of regression terms.

The regression coefficients \{c\} can be calculated from:

\{c\} = ([d]^T [d])^{-1} [d]^T \{y\}    (6)

where [d] is the design matrix whose ith row is [t_i].
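Equations (1)-(6) can be illustrated with a one-variable linear fit: solve the normal equations for {c}, then verify the decomposition SST = SSR + SSE. The data values are made up for illustration; the 2x2 solve keeps the sketch dependency-free:

```python
# Sketch of Equations (1)-(6): least-squares fit y = c0 + c1*x, then the
# sum-of-squares identity SST = SSR + SSE. Data values are illustrative.

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 1.9, 3.2, 3.8]

n = len(xs)
# Normal equations for the design matrix d with rows [1, x_i]:
s_x = sum(xs); s_xx = sum(x * x for x in xs)
s_y = sum(ys); s_xy = sum(x * y for x, y in zip(xs, ys))
det = n * s_xx - s_x * s_x
c0 = (s_y * s_xx - s_x * s_xy) / det      # intercept
c1 = (n * s_xy - s_x * s_y) / det         # slope

y_hat = [c0 + c1 * x for x in xs]         # regression model values
y_bar = s_y / n
sse = sum((y - yh) ** 2 for y, yh in zip(ys, y_hat))
ssr = sum((yh - y_bar) ** 2 for yh in y_hat)
sst = sum((y - y_bar) ** 2 for y in ys)
assert abs(sst - (ssr + sse)) < 1e-9      # Equation (4)
```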
Forward-Stepwise-Regression
In the forward-stepwise-regression, the individual regression terms are iteratively added to the regression
model if they are found to cause a significant improvement of the regression results. For DOE, a partial F
test is used to determine the significance of the individual regression terms.
Transformation Families
You can select between Box-Cox, Yeo-Johnson, and None for the transformation used in standard response
surface regression. None means that the standard response surface regression is computed without any
transformation. It is an option, set on a per-output basis, used to transform output parameter samples. The
default is the Yeo-Johnson transformation, since it is more numerically stable in its back-transformation.
The Box-Cox transformation is less numerically stable in its back-transformation, but it gives a better
fit in some cases. It is recommended to try the Box-Cox transformation if the goodness of fit of the response
surface is poor. All applications use the response surface to generate samples; for example, Goal
Driven Optimization (GDO) and Design for Six Sigma (DFSS) samples. To access, click on an output parameter
in the Outline view of a response surface. In the Properties view under Output Settings, select the Transformation Type you want to apply and click Update to re-solve for a new response surface.
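A Box-Cox transform and its back-transformation can be sketched as below (the lambda value is an assumed constant here; in practice it would be estimated during the regression). The back-transformation's sensitivity, visible in the power term, is the numerical-stability issue mentioned above:

```python
import math

# Sketch of the Box-Cox transform on a positive output sample and its
# back-transformation. lam is an assumed, fixed example value.

def box_cox(y, lam):
    if lam == 0.0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

def box_cox_inverse(z, lam):
    if lam == 0.0:
        return math.exp(z)
    return (lam * z + 1.0) ** (1.0 / lam)   # unstable when lam*z + 1 -> 0

y = 50.0
z = box_cox(y, 0.5)
assert abs(box_cox_inverse(z, 0.5) - y) < 1e-9   # round trip recovers y
```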
Kriging
Use the Kriging option to provide a comprehensive goodness of fit report of a standard response surface
to determine the adequacy to be applied in Goal Driven Optimization (GDO) and Design for Six Sigma (DFSS)
samples. To view the goodness of fit information, click on one of the output parameters in the Outline view
of your Response Surface. A Goodness-of-Fit Information section will be shown in the Properties view for
the parameter. Kriging is an accurate interpolation method.
Kriging postulates a combination of a polynomial model plus departures of the form given by Equation 7 (p. 45):
y(x) = f(x) + Z(x)    (7)
where y(x) is the unknown function of interest, f(x) is a polynomial function of x, and Z(x) is the realization
of a normally distributed Gaussian random process with mean zero, variance \sigma^2, and non-zero covariance.
The f(x) term in Equation 7 (p. 45) is similar to the polynomial model in a response surface and provides a
global model of the design space.
While f(x) globally approximates the design space, Z(x) creates localized deviations so that the kriging
model interpolates the N sample data points. The covariance matrix of Z(x) is given by Equation 8 (p. 45).
Cov[ Z(xi), Z(xj) ] = σ² R([ r(xi, xj) ])    (8)
In Equation 8 (p. 45), R is the correlation matrix, and r(xi, xj) is the spatial correlation of the function between
any two of the N sample points xi and xj. R is an N×N symmetric, positive definite matrix with ones along
the diagonal. The correlation function r(xi, xj) is a Gaussian correlation function:
r(xi, xj) = exp( − Σ_{k=1}^{M} θk | xk^i − xk^j |² )    (9)
The θk in Equation 9 (p. 45) are the unknown parameters used to fit the model, M is the number of design
variables, and xk^i and xk^j are the kth components of sample points xi and xj. In some cases, using a single
correlation parameter gives sufficiently good results; the user can specify the use of a single correlation
parameter, or one correlation parameter for each design variable (Tools>Options>Design Exploration>Response
Surface>Kriging Options>Kernel Variation Type: Variable or Constant).
The deviation term Z(x) is then expressed in terms of the correlation functions as:

Z(x) = Σ_{i=1}^{N} λi r(xi, x)    (10)
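The interpolation property of kriging can be sketched in a few lines, assuming a zero polynomial term f(x) and a fixed θ per design variable (the real implementation fits θ, for example by maximum likelihood; the data below are hypothetical):

```python
# Sketch (not the ANSYS implementation): kriging-style interpolation with the
# Gaussian correlation of Equation 9 and fixed theta values.
import numpy as np

def gauss_corr(X1, X2, theta):
    # r(xi, xj) = exp(-sum_k theta_k * (xk_i - xk_j)^2)
    d2 = (X1[:, None, :] - X2[None, :, :]) ** 2
    return np.exp(-(d2 * theta).sum(axis=2))

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (8, 2))          # N = 8 sample points, M = 2 design variables
y = np.sin(3.0 * X[:, 0]) + X[:, 1] ** 2   # hypothetical responses
theta = np.array([5.0, 5.0])               # one theta per design variable ("Variable")

R = gauss_corr(X, X, theta)                # N x N correlation matrix, ones on diagonal
lam = np.linalg.solve(R, y)                # weights of the Z(x) expansion (Equation 10)

def predict(Xnew):
    return gauss_corr(Xnew, X, theta) @ lam

print(np.abs(predict(X) - y).max())        # near zero: the sample points are interpolated
```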
Non-Parametric Regression
Non-parametric regression provides improved response quality and is initialized with one of the available
DOE types.
The non-parametric regression (NPR) algorithm is implemented in ANSYS Workbench Design Exploration
as a metamodeling technique intended for outputs that exhibit highly nonlinear behavior with respect
to the inputs. The NPR feature is accessible via Tools>Options>Design Exploration>Response Surface>Type:
Non-Parametric Regression.
NPR belongs to a general class of Support Vector Method (SVM) type techniques. These are data classification
methods which use hyperplanes to separate groups of data. The regression method works in a similar manner,
the main difference being that the hyperplane is used to categorize a subset of the input sample vectors
which are deemed sufficient to represent the output in question. This subset is called the support vector
set. The objective is to find a minimal, sparse support vector set from the input samples provided. Support
vector methods are preferred over other non-parametric regression methods because of two inherent advantages. First, SVM-type approaches map the input space to a feature space by a kernel map before doing
the fitting; this ensures that the curse of dimensionality suffered by high dimensional input spaces no
longer exists. Second, the training objective function is quadratic in the feature space; thus, QP-based methods
may be used to determine the support vectors without the risk of the solution being trapped in a
local minimum.
Let the input sample (as generated from a DOE method) be X = {x1, x2, x3, …, xM}, where each xi is an N-dimensional vector and represents an input variable. The objective is to determine an equation of the form:

Y = ⟨W, X⟩ + b    (11)
where W is a weighting vector. In the generic non-parametric case, Equation 11 (p. 46) is rewritten
as the following:

Y = Σ_{i=1}^{N} (Ai − Ai*) K(Xi, X) + b    (12)

where K(Xi, X) is the kernel map and the quantities Ai and Ai* are Lagrange multipliers whose derivation
will be shown in later sections.
In order to determine the Lagrange multipliers, we start with the assumption that the weight vector W must
be minimized such that all (or most) of the sample points lie within an error zone around the fitted surface.
For a simple demonstration of this concept, see Figure: Fitting a regression line for a group of sample
points with a tolerance of ε which is characterized by slack variables ξ and ξ* (p. 47).
Figure: Fitting a regression line for a group of sample points with a tolerance of ε which is
characterized by slack variables ξ and ξ*
Minimize L = 0.5 ||W||² + C Σ_{i=1}^{N} ( ξi + ξi* )    (13)

S.T.
yi − (⟨W, Xi⟩ + b) ≤ ε + ξi
(⟨W, Xi⟩ + b) − yi ≤ ε + ξi*
ξi, ξi* ≥ 0
where C is an arbitrary (> 0) constant. In order to characterize the tolerance properly, we define a loss
function in the interval (−ε, ε) which de facto becomes the residual of the solution. In the present implementation, we use the ε-insensitive loss function, which is given by the equation:

Lε = 0                   if | f(x) − y | < ε
Lε = | f(x) − y | − ε    otherwise    (14)
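The ε-insensitive loss is simple to evaluate directly; the following sketch (illustrative, not ANSYS code) shows it being zero inside the tube and linear outside:

```python
# Sketch: the epsilon-insensitive loss of Equation 14 in NumPy.
import numpy as np

def eps_insensitive_loss(f_x, y, eps):
    """Zero inside the (-eps, eps) tube around the fit, linear outside it."""
    return np.maximum(np.abs(f_x - y) - eps, 0.0)

residuals = eps_insensitive_loss(np.array([1.0, 2.5, 4.0]),
                                 np.array([1.2, 2.5, 3.0]),
                                 eps=0.5)
print(residuals)  # only the third point falls outside the tube
```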
The primal problem in Equation 13 (p. 47) can thus be rewritten as the following using generalized loss
functions as:
Minimize L = 0.5 ||W||² + C Σ_{i=1}^{N} ( l(ξi) + l(ξi*) )    (15)

S.T.
⟨W, Xi⟩ + b − yi ≤ ε + ξi
yi − ⟨W, Xi⟩ − b ≤ ε + ξi*
ξi, ξi* ≥ 0
In order to solve this efficiently, a Lagrangian dual formulation is done on this to yield the following expression:

L = 0.5 ||W||² + C Σ_{i=1}^{N} ( l(ξi) + l(ξi*) )
    + Σ_{i=1}^{N} Ai [ yi − ⟨W, Xi⟩ − b − (ε + ξi) ]
    + Σ_{i=1}^{N} Ai* [ ⟨W, Xi⟩ + b − yi − (ε + ξi*) ]
    − Σ_{i=1}^{N} ( ηi ξi + ηi* ξi* )    (16)
After some simplification, the dual Lagrangian can be written as the following Equation 17 (p. 48):
min L = 0.5 Σ_{i=1}^{N} Σ_{j=1}^{N} (Ai − Ai*)(Aj − Aj*) ⟨Xi, Xj⟩
        + Σ_{i=1}^{N} [ ε (Ai + Ai*) − yi (Ai − Ai*) ]
        + C Σ_{i=1}^{N} ( l(ξi) − ξi ∂l(ξi)/∂ξi )
        + C Σ_{i=1}^{N} ( l(ξi*) − ξi* ∂l(ξi*)/∂ξi* )    (17)

S.T.
Σ_{i=1}^{N} ( Ai − Ai* ) = 0
Ai(*) ≤ C ∂l(ξi(*))/∂ξi(*)
ξi(*) = inf { ξ : C ∂l(ξ)/∂ξ ≥ Ai(*) }
Ai(*), ξi(*) ≥ 0
Equation 17 (p. 48) is a quadratic constrained optimization problem, and the design variables are the vectors
A and A*. Once these are computed, the constant b in Equation 11 (p. 46) can be obtained by the application of the Karush-Kuhn-Tucker (KKT) conditions. When the ε-insensitive loss function is used instead of the generic loss
functions l() in Equation 17 (p. 48), the problem can be rewritten in a much simpler form as the following:
min_{A, A*} L = 0.5 Σ_{i=1}^{N} Σ_{j=1}^{N} (Ai − Ai*)(Aj − Aj*) ⟨Xi, Xj⟩
              + Σ_{i=1}^{N} [ ε (Ai + Ai*) − yi (Ai − Ai*) ]    (18)

S.T.
0 ≤ Ai(*) ≤ C
Σ_{i=1}^{N} ( Ai − Ai* ) = 0
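For illustration only: scikit-learn's SVR solves an ε-insensitive dual of this same form and exposes the resulting sparse support vector set directly. This is not the ANSYS NPR implementation, and the data are hypothetical:

```python
# Sketch (not the ANSYS NPR code): epsilon-SVR via scikit-learn, showing the
# sparse support vector subset of the input samples.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, (40, 2))           # DOE-style input samples
y = X[:, 0] ** 2 + 0.5 * X[:, 1]              # hypothetical output

model = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y)
print(len(model.support_), "of", len(X), "samples kept as support vectors")
```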
Neural Network
This mathematical technique is inspired by the natural neural networks in the human brain.
In order to interpolate a function, we build a network with three levels (Input, Hidden, Output) where the connections between them are weighted:
Each arrow is associated with a weight (w) and each ring is called a cell (like a neuron).
If the inputs are xi, the hidden level contains functions gj(xi), and the output is:

fk(xi) = K( Σj wjk gj(xi) )
where K is a predefined function, such as the hyperbolic tangent or an exponential based function, in order
to obtain something similar to the binary behavior of the electrical brain signal (like a step function). The
function is continuous and differentiable.
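A forward pass through such a three-level network can be sketched as follows (a toy illustration; the layer sizes, random weights, and the choice of tanh for K are assumptions):

```python
# Sketch: forward pass of a three-level (input, hidden, output) network,
# with tanh playing the role of the function K.
import numpy as np

def forward(x, W_in, W_out):
    g = np.tanh(W_in @ x)        # hidden cells g_j(x)
    return np.tanh(W_out @ g)    # outputs f_k = K(sum_j w_jk * g_j)

rng = np.random.default_rng(2)
W_in = rng.normal(size=(5, 3))   # 3 inputs -> 5 hidden cells
W_out = rng.normal(size=(2, 5))  # 5 hidden cells -> 2 outputs

out = forward(np.array([0.1, -0.4, 0.7]), W_in, W_out)
print(out.shape)
```

In training, the weights would be adjusted to minimize the distance between these outputs and the known design point values, as the text describes.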
The weights (wjk) are obtained from an algorithm which minimizes (as in the least squares method) the
distance between the interpolation and the known values (design points). This is called the learning. The
error is checked at each algorithm iteration against the design points which are not used for the learning. We
therefore need to separate learning design points from error-checking design points.
The error decreases and then increases when the interpolation order is too high. The minimization algorithm
is stopped when the error is the lowest.
This method uses a limited number of design points to build the approximation. It works better when the
number of design points and the number of intermediate cells are high, and it can give interesting results
with several parameters.
Note
Any new Refinement Design Points that are generated by any of the refinement methods below
will be retained by the Response Surface and used for any further analysis of this Response Surface
(using a different Response Surface Type, etc.).
Change Meta Model Types
Perform a Manual Refinement
Refine the Model Using the Kriging Meta Model
1.
First do an initial solve of the response surface using any of the response surface types.
2.
Switch to the response surface type that you want to use for the refinement.
3.
In the Refinement Design Points section of the table, enter a value in one of the input parameter fields
in the New Refinement Design Point row of the table.
4.
Click Update Response Surface in the toolbar.
5.
View the results to see how they were affected by the new point.
Interactive Refinement
The following Properties can be set for Kriging Interactive refinement.
Maximum Number of Points: Maximum number of design points that will be generated during the
refinement.
Maximum Relative Error: Can be set between 1% and 99%. If the user sets 5% as the maximum relative
error, it means that he will accept a Response Surface with an error of less than 5%.
Output Parameter: Select an output parameter. Interactive refinement can be associated with only one
output parameter. If the user needs to refine the response surface for several output parameters, he
needs to do several interactive refinements.
The interactive refinement can be done in one step (Update Response Surface) or in several steps:
1.
Click the Preview Response Surface button in the toolbar to see the new design points that are going
to be inserted (only input parameter values are shown).
2.
If the user wants to delete or modify certain points before computing them, he can edit the table. To
do so, change the refinement type from Interactive to Manual, which makes the table editable.
3.
Click Update Response Surface in the toolbar to compute the new design points.
The design points generated for the refinement will appear in the Response Surface Table view under Design
Set of Response Surface. If the response surface is accurate enough, interactive refinement does not provide
new points to improve the response surface.
Auto Refinement
Auto refinement enables the user to refine the Kriging response surface automatically by choosing certain
refinement criteria and allowing an automated optimization-type procedure to add more DOE points to the
design domain where they are most needed. The number and the spread of the DOE points are determined
by the following Response Surface Properties.
Maximum Number of Points: Maximum number of Design Points that will be generated during the
refinement.
Maximum Output - the maximum mean square error of all output parameters is minimized.
All Outputs - the maximum product of the output and the mean square error is minimized.
Sum of Outputs - the maximum sum of the output and the mean square error is minimized.
Mean Square Error - only the output with the largest mean square error is considered.
Sum of Outputs - the mean square errors of all outputs are considered; thus, multiple DOE points are
generated in each iteration depending on the number of outputs.
Crowding Distance Separation Percentage: Minimum distance between 2 new DOE points in the input
space, implemented as a constraint in the search for the DOE points.
When the refinement type is Auto, the output parameters will have an additional option, Stop Criterion,
added to their properties. If the Stop Criterion box is checked for an output, a Relative Error property can
be set that corresponds to the maximum relative error that the user will accept for that parameter (0.01 to
0.99).
The Relative Error works in the following manner. At each step of the automatic refinement:
1.
The refinement finds new Design Points to improve the response surface; the output parameter values
are evaluated with the previous response surface.
2.
The new Design Points are solved to obtain the real output parameter values.
3.
For direct output parameters with Stop Criterion enabled, the values from step 1 are compared with the real
values from step 2. If the relative error is greater than the Relative Error specified, then the auto refinement is
continued (a new Design Point needs to be added -> go to step 1).
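The comparison in the last step amounts to a simple relative-error check, which can be sketched as follows (illustrative; the guard against division by zero is an assumption):

```python
# Sketch: the Relative Error stop criterion as a check between the
# response-surface predictions (step 1) and the solved values (step 2).
import numpy as np

def should_continue(predicted, real, relative_error):
    rel = np.abs(predicted - real) / np.maximum(np.abs(real), 1e-12)
    return bool(np.any(rel > relative_error))   # True -> add a new Design Point

print(should_continue(np.array([10.2, 4.9]), np.array([10.0, 5.0]), 0.05))  # prints False
```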
Once you have entered all of the properties, click Update Response Surface in the toolbar to complete the
refinement. The design points generated for the refinement will appear in the Response Surface Table view
under Design Set of Response Surface.
The automatic refinement stops when the maximum number of Design Points is reached or if the response
surface is accurate enough for direct output parameters with the Stop Criterion enabled. If there is no direct
output parameter with Stop Criterion enabled, automatic refinement executes all Design Points.
Manual Refinement
If you set the refinement type to Manual you can add, delete, or modify the additional refinement design
points used to generate the response surface. If you choose manual after doing an auto or interactive refinement, you can edit the points that were generated from those refinements. If you do a manual refinement
as your initial refinement, you can enter as many new refinement design points as you would like into
the Table view.
Goodness of Fit
You can view goodness of fit information for any of the output parameters in a response surface. To do so,
edit the Response Surface cell from the Project Schematic. Then click on one of the output parameter cells
in the Outline view. The Goodness-of-Fit Information section will be displayed in the Properties view.
The following Goodness of Fit information is displayed:
Coefficient of Determination: The percent of the variation of the output parameter that can be explained
by the response surface regression equation. That is, the coefficient of determination is the ratio of the explained variation to the total variation.
The design points used to create the response surface are likely to contain variation for each output
parameter (unless all the output values are the same, which will result in a flat response surface). This
variation is explained by the response surface that the DOE generates. If the response surface were to
pass directly through each design point, the coefficient of determination would be 1 or 100%, meaning
that all variation is explained.
Adjusted Coefficient of Determination: The Adjusted Coefficient of Determination takes the sample
size into consideration when computing the Coefficient of Determination. Usually this is more reliable
than the usual coefficient of determination when the number of samples is small ( < 30).
Maximum Relative Residual: The maximum relative distance from the calculated response surface to
any of the generated design points.
Root Mean Square Error: This is the square root of the average square of the residuals at the DOE
points for regression methods. For Kriging based methods, this is done on a set of testing points and
the prediction error is used instead.
Relative Root Mean Square Error: This is the square root of the average square of the residuals scaled
by the actual output values at the DOE points for regression methods. For Kriging based methods, this
is done on a set of testing points and the relative prediction error is used instead.
Relative Maximum Absolute Error: This is the absolute maximum residual value relative to the
standard deviation of the actual outputs.
Relative Average Absolute Error: This is the average of the absolute residual values relative to the standard
deviation of the actual output data, modified by the number of samples. This is useful when the number
of samples is low ( < 30).
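The regression-based variants of these quantities can be sketched in a few lines (illustrative formulas on hypothetical data; the exact scaling used by the product may differ):

```python
# Sketch: goodness-of-fit quantities from design-point outputs y and
# response-surface predictions y_hat.
import numpy as np

def goodness_of_fit(y, y_hat, p):
    """p = number of regression terms (used by the adjusted R^2)."""
    n = len(y)
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot                          # coefficient of determination
    adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)   # sample-size adjusted
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))           # root mean square error
    rel_max_abs = np.max(np.abs(y - y_hat)) / y.std()   # residual vs. output std. dev.
    return r2, adj_r2, rmse, rel_max_abs

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
r2, adj_r2, rmse, rel_max = goodness_of_fit(y, y + 0.1, p=1)
print(round(r2, 3), round(rmse, 3))
```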
Min-Max Search
When a Response Surface is solved, if the Min-Max Search cell is enabled, it will examine the entire output
parameter space to approximate the minimum and maximum of each output parameter. If the response
surface is updated in any way, including changing the fitting for an output parameter or performing a
refinement, a new Min-Max Search is performed.
Note
If the Design of Experiments object is solved, but the Response Surface is not solved, the
minimum and maximum values for each output parameter are extracted from the DOE
solutions design points and displayed in the Properties for the output parameters.
For discrete/usability parameters, there is only one Minimum and Maximum value per output
parameter even if a discrete parameter has many levels (there is not one Min-Max value set
per combination of discrete/usability parameter values).
You can change the parameter values to explore other designs by moving the slider bars, or entering specific values in the accessible value boxes.
The triad control at the bottom left in the 3D Response chart window allows you to rotate the chart in
freehand mode or quickly view the chart from a particular plane. To zoom any part of the chart, use the
normal zooming controls (Shift-middle mouse button, or scroll wheel). See Using the Triad for more information on chart manipulation.
You can use the slider bars in the chart's Properties view to adjust the value for the input parameter(s) to
visualize different designs. You can also enter specific values in the value boxes. The parameter legend box
at the top left in the Chart view allows you to select the parameter that is in the primary (top) position. Only
the axis of the primary parameter will be labeled with values.
For discrete parameters, you can change the parameter values to explore other designs by moving the slider
bars, or entering specific values in the value boxes.
The Local Sensitivity chart can be a powerful exploration tool. On the Local Sensitivity chart, you see for
each output the weight of the different inputs. By default, all outputs are shown on the chart. If you consider
only one output, the resulting chart provides an independent sensitivity analysis for each single input
parameter.
The following is an explanation of how the bar chart is built. To determine the sensitivity of P1 vs. P4, we
plot the response curve P4 = f(P1) and compute (Max(P4) − Min(P4)) / Avg(P4) ≈ 0.72. If P4 increases while P1
increases, the sign is positive; otherwise it is negative. In our example we get -0.72, which is the red bar above.
Now we plot P4 = f(P2); (Max(P4) − Min(P4)) / Avg(P4) ≈ 1.35, which is the yellow bar above.
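This computation can be sketched directly (illustrative; the sign convention follows the description above, and the response curve is hypothetical):

```python
# Sketch: the local sensitivity measure (Max - Min) / Avg computed from a
# response curve, with the sign set by the direction of change.
import numpy as np

def local_sensitivity(p_in, p_out):
    s = (p_out.max() - p_out.min()) / p_out.mean()
    # positive if the output increases with the input, negative otherwise
    return s if p_out[np.argmax(p_in)] >= p_out[np.argmin(p_in)] else -s

p1 = np.linspace(0.0, 1.0, 50)
p4 = 2.0 - p1                     # hypothetical curve: P4 decreases as P1 increases
print(round(local_sensitivity(p1, p4), 3))  # prints -0.667
```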
On each of these curves, only one input is varying and the other input parameters are constant, but you
can change the value of any input and get an updated curve. The same possibility applies to the Local
Sensitivity chart: all the sensitivity values are recalculated when you change the value of an input parameter.
If parameters are correlated, then you'll see the sensitivity varying; in other words, the relative weights of
the inputs may vary depending on the design.
Sensitivity charts are not available if ALL input parameters are discrete.
Global Sensitivities
The sensitivity plots available in Design Exploration allow you to efficiently improve your design toward a
more reliable and better quality design, or to save money in the manufacturing process while maintaining
the reliability and quality of your product. You can request a sensitivity plot for any output parameter in
your model.
The sensitivities available under the Six Sigma Analysis and the Goal Driven Optimization views are statistical
sensitivities. Statistical sensitivities are global sensitivities, whereas the single parameter sensitivities available
under the Responses view are local sensitivities. The global, statistical sensitivities are based on a correlation
analysis using the generated sample points, which are located throughout the entire space of input parameters. The local single parameter sensitivities are based on the difference between the minimum and
maximum value obtained by varying one input parameter while holding all other input parameters constant.
As such, the values obtained for single parameter sensitivities depend on the values of the input parameters
that are held constant. Global, statistical sensitivities do not depend on the settings for the input parameters,
because all possible values for the input parameters are already taken into account when determining the
sensitivities.
Sensitivity charts are not available if ALL input parameters are discrete.
Related Topics:
Using the Local Sensitivity Chart (p. 57)
Design Points can be exported from your Parameter Set to the DOE Table or imported to the DOE Table
from the Parameter Set.
Design Points can be added to the Parameter Set from various tables and charts in the DOE, DOE (SSA),
Response Surface, Response Surface (SSA), or Goal Driven Optimization workspaces.
Response Points can be added to the Response Surface Table from the tables and charts in the Response
Surface or Goal Driven Optimization workspaces.
In addition, Design Points can be added by hand or inserted by copy/paste into a Custom DOE Table.
To Import all Design Points from the Parameter Set into a DOE Table:
1.
From the Project Schematic, double click on the Design of Experiments cell where you want to import
the Design Points.
2.
Right click on the Design of Experiments object in the Outline and select Copy all Design Points from
the Parameter Set. This option is only available if the DOE Type is Custom.
All Design Points from the Parameter Set will be copied to the DOE Table if the values of the parameters in
the Design Points fall within the defined ranges of the parameters in the DOE. If some of the parameter
values in the Design Points are out of range, you will be given the option to automatically expand the
parameter ranges in the DOE to accommodate the out of range values. If you choose not to expand the
parameter ranges, only the Design Points where all parameters fall within the existing ranges will be copied
into the DOE Table. You may also cancel the import action.
To Export Design Points from the Parameter Set into a DOE Table:
1.
Select one or more Design Points from the Table of Design Points.
2.
Right click and select Copy Design Points to:, and then select one of the DOE cells listed in the sub-menu. Only DOE cells with a DOE Type of Custom will be listed.
All selected Design Points will be copied from the Parameter Set to the selected DOE Table if the values of
the parameters in the Design Points fall within the defined ranges of the parameters in the DOE. If some of
the parameter values in the Design Points are out of range, you will be given the option to automatically
expand the parameter ranges in the DOE to accommodate the out of range values. If you choose not to
expand the parameter ranges, only the Design Points containing parameters that all fall within the existing
ranges will be copied into the DOE Table. You may also cancel the export action.
Note
If an unsolved Design Point was previously copied to a custom DOE, and subsequently this Design
Point is solved in the Parameter Set, you can copy it to the custom DOE again to push its output
values to the Custom DOE Table.
Insert as Response Points inserts new Response Points into the Table of Response Surface by copying
the input and output parameter values of the selected Candidate Design(s).
Copy Values to Response Point copies the input and output parameter values of the selected Candidate
Design(s) to the current Response Point.
Insert as Design Points creates new Design Points at the project level (edit the Parameter Set in the
Schematic view to list the existing Design Points) by copying the input parameter values of the selected
Candidate Design(s). The output parameter values are not copied because they are approximated values
provided by the Response Surface.
The above insertion operations are available from Optimization charts, depending on the context. For instance,
it is possible to right click on a point in a Tradeoff chart in order to insert the corresponding sample as a
Response Point or a Design Point. The same operation is available from a Samples chart, unless it is being
displayed as a spider chart.
Note
If your cell is out of date, the information (charts and tables) you see is out of date, but you can
still manipulate those charts and tables with the old data (change the navigator to explore different
Response Points, insert Response Points, etc.).
1.
With the Project Schematic displayed, drag a Parameters Correlation template from the Design Exploration area of the toolbox directly under the Parameter Set bar of the schematic.
2.
After placing the Parameters Correlation system on the Schematic, double click on the Parameters
Correlation cell to open the workspace and set up the correlation options that are described in Parameters Correlation (p. 20).
3.
Click the Update Parameters Correlation button on the toolbar. When the update is finished, examine
the results using the various charts shown in the Outline view. See Parameters Correlation (p. 20) for
more information about the available charts.
Sample Generation
The importance of an input to an output is determined from their correlation. The samples used for the correlation
calculation are generated with the Latin Hypercube Sampling method. The Latin Hypercube samples are generated
in such a way that the correlations among the input parameters are less than or equal to 5%. Also, the Latin
Hypercube samples are generated in such a way that each sample has an equal probability of being picked (uniformly
distributed). Two correlation calculation methods are provided for the user to select from: Spearman's rank
correlation and Pearson's linear correlation. Pearson's uses the actual data for the correlation evaluation while
Spearman's uses the ranks of the data, which is deemed to be more accurate, and is therefore recommended.
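Both ingredients are available in SciPy and can be sketched as follows (illustrative; the sampler here does not enforce the 5% cross-correlation constraint described above, and the output function is hypothetical):

```python
# Sketch: Latin Hypercube sampling plus the two correlation measures.
import numpy as np
from scipy.stats import qmc, spearmanr, pearsonr

sampler = qmc.LatinHypercube(d=2, seed=4)
X = sampler.random(n=100)                 # each column uniformly covers [0, 1)
y = X[:, 0] ** 3 + 0.1 * X[:, 1]          # hypothetical output

rho_s, _ = spearmanr(X[:, 0], y)          # rank-based (recommended)
rho_p, _ = pearsonr(X[:, 0], y)           # linear, on the actual data
print(round(rho_s, 2), round(rho_p, 2))
```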
If Auto Stop Type is set to Enable Auto Stop for the correlation, the number of samples required to calculate
the correlation is determined according to the convergence of the means and standard deviations of the output parameters.
At each iteration of simulation runs (finite element runs in this case), the convergence of the means and standard
deviations is checked against the accuracy provided by the user. The iteration stops when the accuracy is met, or
the number of samples exceeds the maximum value set by the user, whichever comes first.
Determining Significance
Significance of an input parameter to an output parameter is determined using a statistical hypothesis test.
In the hypothesis test, a NULL hypothesis of insignificance is assumed, and it is tested whether it is statistically true
according to the significance level (or acceptable risk) set by the user. From the hypothesis test, a p-Value
(probability value) is calculated and compared with the significance level. If the p-Value is greater than
the significance level, it is concluded that the NULL hypothesis is true and that the input parameter is insignificant to the output parameter, and vice versa. See the Six Sigma Theory section for more information.
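The decision rule can be sketched as follows (illustrative; the data, the Spearman test, and the 2.5% significance level are assumptions):

```python
# Sketch: declaring an input significant or insignificant from a test's p-Value.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(5)
x = rng.uniform(size=200)
y_signal = 3.0 * x + rng.normal(scale=0.1, size=200)   # clearly related to x
y_noise = rng.normal(size=200)                          # unrelated to x

alpha = 0.025  # significance level (acceptable risk) set by the user
for name, y in [("signal", y_signal), ("noise", y_noise)]:
    _, p = spearmanr(x, y)
    verdict = "significant" if p <= alpha else "insignificant"
    print(name, verdict)
```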
Optimization Options
To create a GDO system on your project schematic, do the following:
1.
With the Project Schematic displayed, drag a Goal Driven Optimization template from the Design
Exploration area of the toolbox to the schematic. You may drag it:
directly under the Parameter Set bar, in which case it will not share any data with any other systems
on the schematic.
onto the Design of Experiments cell of a Response Surface, in which case it will share all of the
data generated by the DOE.
onto the Response Surface cell of a Response Surface, in which case it will share all of the data
generated for the DOE and Response Surface cells.
2.
After placing the GDO system on the Schematic, if you are not sharing the DOE and Response Surface
cells, edit the DOE and set it up as described previously in Design of Experiments (p. 18) and solve the
DOE and Response Surface cells.
3.
Edit the Optimization cell. Make sure that cell A2 is selected in the Outline view, and set up the options
for your optimization in the Properties view.
Screening
The Screening option allows you to generate a new sample set and sort its samples based on objectives. It
is available for all types of input parameters. Usually the Screening approach is used for preliminary design,
which may lead you to apply the MOGA or NLPQL options for more refined optimization results. See the
Principles section under Goal Driven Design for more information.
To perform a Screening optimization, in the Properties view of the Optimization:
1.
Specify the Constraint Handling (GDO). This option can be used for any optimization application
and is best thought of as a "constraint satisfaction" filter on samples generated from the Screening,
MOGA, or NLPQL runs. This is especially useful for Screening samples to detect the edges of solution
feasibility for highly constrained nonlinear optimization problems. The following choices are available:
As Goals (default): Implies that the upper, lower, and equality constrained objectives of the
candidate designs shown in the Table of Optimization are treated as goals; thus any violation
of the objectives is still considered feasible.
As Hard Constraints: When chosen, the upper, lower and equality constrained objectives are
treated as hard constraints; that is, if any of them are violated then the candidate is no longer
displayed. So, in some cases no candidate designs may be displayed depending on the extent
of constraint violation.
2.
Set input and output parameter Objectives in the Table of Optimization view. See Defining Design
Objectives (p. 67) for more information.
3.
Note
The Tradeoff chart for a screening optimization where some objectives are constraints will
show all of the feasible points (that meet the constraints) as the first Pareto front. Points
from the sample set that do not meet the constraints will be shown as infeasible points.
The Samples chart will also indicate the infeasible points by color.
MOGA
The MOGA (Multi-Objective Genetic Algorithm) option allows you to generate a new sample set or use an
existing set for providing a more refined approach than the Screening method. It is available for continuous
input parameters only and can handle multiple goals. See the MOGA (Multi-objective Genetic Algorithm)
section for more information on this algorithm.
To perform a MOGA optimization:
1. Number of Initial Samples: Complete this field if you wish to start from a new set. This number
should be at least 10 times the number of continuous input parameters, but not more than 300.
If you want to use an existing Screening sample set, you cannot edit the Number of Initial Samples
field.
Number of Samples Per Iteration: The number of samples that are iterated and updated at each
iteration. This setting must be less than or equal to the number of initial samples.
Maximum Allowable Pareto Percentage: The approximate maximum percentage of first Pareto
front points (computed with respect to the Number of Samples Per Iteration) that you wish to
compute. For example, if the Maximum Allowable Pareto Percentage is set to 70 and the Number
of Samples Per Iteration is set to 200, then you can expect to have, at most, approximately 140
first Pareto front points. If the Maximum Allowable Pareto Percentage is too low (below 30), then
there is a chance that the iterations will converge slowly. If the value is too high (above 80), then
the problem may converge prematurely. The value of the Maximum Allowable Pareto Percentage
depends on the number of output and input parameters of the problem, in addition to the nature
of the design space itself. Using a value between 55 and 75 works best for most problems.
Maximum Number of Iterations: The maximum possible number of iterations the algorithm executes. If convergence happens before this number is reached, the iterations will cease. This also
provides an idea of the maximum possible number of function evaluations that are needed for the
full cycle, as well as the maximum possible time it may take to run the optimization. For example,
the absolute maximum number of evaluations is given by: Number of Initial Samples + Number of
Samples Per Iteration * (Maximum Number of Iterations - 1).
Constraint Handling (GDO): This option can be used for any optimization application and is best
thought of as a "constraint satisfaction" filter on samples generated from the Screening, MOGA, or
NLPQL runs. This is especially useful for Screening samples to detect the edges of solution feasibility
for highly constrained nonlinear optimization problems. The following choices are available:
As Goals (default): Implies that the upper, lower, and equality constrained objectives of the
candidate designs shown in the Table of Optimization are treated as goals; thus any violation
of the objectives is still considered feasible.
As Hard Constraints: When chosen, the upper, lower, and equality constrained objectives are
treated as hard constraints; that is, if any of them are violated then the candidate is no longer
2. Set input and output parameter objectives in the Table of Optimization view. At least one of the
output parameter objectives must be a goal. Multiple output parameter goals can be set. See Defining
Design Objectives (p. 67) for more information.
3. Click Update Optimization to generate the candidate designs.
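The evaluation bound quoted in the Maximum Number of Iterations description above can be checked with a one-line helper (a sketch; the function name is ours, not part of the product):

```python
def max_evaluations(initial_samples: int, samples_per_iteration: int,
                    max_iterations: int) -> int:
    """Upper bound on design-point evaluations for a MOGA run, per the
    formula in the text: the initial sample set plus one batch of
    samples for each subsequent iteration."""
    return initial_samples + samples_per_iteration * (max_iterations - 1)

# 100 initial samples, 50 per iteration, at most 20 iterations:
print(max_evaluations(100, 50, 20))  # 100 + 50*19 = 1050
```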
NLPQL
The NLPQL (Non-Linear Programming by Quadratic Lagrangian) option allows you to generate a new sample
set or use an existing set to provide a more refined approach than the Screening method. It is available for
continuous input parameters only and can only handle one output parameter goal (other output parameters
can be defined as constraints). See the Non-Linear Programming by Quadratic Lagrangian section for more
information on this algorithm.
To generate samples and perform an NLPQL optimization:
1. Allowable Convergence Percentage: The tolerance to which the Karush-Kuhn-Tucker (KKT) optimality
criterion is satisfied during the NLPQL process. (See the NLPQL theory section for more information.)
A smaller value means more convergence iterations and a more accurate (but slower) solution; a larger
value means fewer convergence iterations and a less accurate (but faster) solution. The typical default
is 1.0e-06, which is consistent across all problem types since the inputs, outputs, and the gradients
are scaled during the NLPQL solution.
Maximum Number of Iterations: The maximum possible number of iterations the algorithm executes. If convergence happens before this number is reached, the iterations will cease. This also
provides an idea of the maximum possible number of function evaluations that are needed for the
full cycle, as well as the maximum possible time it may take to run the optimization. For example,
the absolute maximum number of evaluations is given by: Number of Initial Samples + Number of
Samples Per Iteration * (Maximum Number of Iterations - 1).
Constraint Handling (GDO): This option can be used for any optimization application and is best
thought of as a "constraint satisfaction" filter on samples generated from the Screening, MOGA, or
NLPQL runs. This is especially useful for Screening samples to detect the edges of solution feasibility
for highly constrained nonlinear optimization problems. The following choices are available:
As Goals (default): Implies that the upper, lower, and equality constrained objectives of the
candidate designs shown in the Table of Optimization are treated as goals; thus any violation
of the objectives is still considered feasible.
As Hard Constraints: When chosen, the upper, lower, and equality constrained objectives are
treated as hard constraints; that is, if any of them are violated then the candidate is no longer
displayed. So, in some cases no candidate designs may be displayed depending on the extent
of constraint violation.
2. Set input and output parameter objectives in the Table of Optimization view. See Defining Design
Objectives (p. 67) for more information. Remember, you can only have one output parameter goal for
NLPQL.
Because NLPQL is a gradient-based method, the candidates found depend on the starting point in the
parameter space. NLPQL could find a local optimum which is not the global optimum. To give control
to the user, the Optimization table has an additional row titled Initial Value when using NLPQL. This
allows the user to determine the starting point of the optimization by entering the value of each input
parameter in the Initial Value row.
4. Click Update Optimization to generate the candidate designs.
Table 2 Objectives and Their Meanings for Continuous Input Parameters

Goal            Meaning                                                      Mathematical Meaning
No Objective    Keep the input parameter within its upper and lower bounds   XLower <= X <= XUpper
Minimize        Seek the lower bound of the input parameter                  X -> XLower
Seek Midpoint   Seek the middle of the input parameter's range               X -> (XLower + XUpper) / 2
Maximize        Seek the upper bound of the input parameter                  X -> XUpper

(X = an input parameter, XLower = lower limit, XUpper = Upper Limit)
Table 3 Objectives and Their Meanings for Discrete or Usability Input Parameters

Goal                Meaning                                                  Mathematical Meaning
No Objective        Uses the full discrete range allowed for the parameter   XLower <= X <= XUpper
Values = Target     Seek the target value                                    X -> XTarget
Values <= Target    Keep the parameter at or below the target value          X <= XTarget
Values >= Target    Keep the parameter at or above the target value          X >= XTarget

(X = an input parameter, XLower = lower limit, XUpper = Upper Limit, XTarget = Target Value)
Table 4 Objectives and Their Meanings for Output Parameters

Goal                Meaning                        Mathematical Meaning   GDO Treatment
No Objective        No objective is specified      N/A                    N/A
Minimize            Seek the minimum value         Y -> YMinimum          Goal
Maximize            Seek the maximum value         Y -> YMaximum          Goal
Seek Target         Seek the target value          Y -> YTarget           Goal
Values <= Target    Stay at or below the target    Y <= YTarget           Inequality constraint
Values >= Target    Stay at or above the target    Y >= YTarget           Inequality constraint
Values = Target     Match the target value         Y = YTarget            Constraint

(Y = an output parameter, YTarget = Target Value)
Insert as Response Points inserts new Response Points into the Table of Response Surface by copying
the input and output parameter values of the selected Candidate Design(s).
Insert as Design Points creates new Design Points at the project level (edit the Parameter Set in the
Schematic view to list the existing Design Points) by copying the input parameter values of the selected
Candidate Design(s). The output parameter values are not copied because they are approximated values
provided by the Response Surface.
Both of the above insertion operations are also available from Optimization charts, depending on the context.
For instance, it is possible to right click on a point in a Tradeoff chart in order to insert the corresponding
sample as a Response Point or a Design Point. The same operation is available from a Samples chart.
Note
If the p-Value calculated for a particular input parameter is above the Significance Level specified
in the Design Exploration section of the Tools > Options dialog, the bar for that parameter will
be shown as a flat line on the chart. See Determining Significance (p. 63) for more information.
The chart can be displayed in 2D or 3D by changing the Mode in the chart's Properties view.
You can also select which parameter to display on each axis of the chart by selecting the parameter
from the drop down menu next to the axis name in the Properties view.
Limit the Pareto fronts shown by moving the slider or entering a value in the field above the slider in
the chart's Properties view.
For more information, see Six Sigma Analysis (p. 28) and "Understanding Six Sigma Analysis" (p. 95).
You can update a Six Sigma Analysis by:
updating each individual cell in the analysis, either from the right-click menu in the Project Schematic
or the Update button in the workspace toolbar while editing the cell.
right clicking on the Six Sigma Analysis cell in the Project Schematic and selecting Update to update
the entire analysis at once.
clicking the Update Project button in the toolbar to update the entire project.
Tables (SSA)
In the Six Sigma Analysis workspace, you are able to view the probability tables of any input or output
variable. To select which table view is displayed, select the desired value (Quantile-Percentile or Percentile-Quantile) for the Probability Table field in the parameter's Properties view.
You can modify both types of tables by adding or deleting values. To add a value to the Quantile-Percentile
table, type the desired value into the New Parameter Value cell at the end of the table. A row with the
value you entered will be added to the table in the appropriate location. To add a new value to the Percentile-Quantile
at the end of the table. To delete a row from either table, right click on the row and select Remove Level
from the menu. The row will be deleted from the table. You can also overwrite any value in an editable
column and corresponding values will be displayed in the other columns in that row.
Statistical Measures
When the Six Sigma Analysis is updated, the following statistical measures are displayed in the Properties
view for each parameter:
Mean
Standard Deviation
Skewness
Kurtosis
Sigma Minimum
Sigma Maximum
Principles (GDO)
GDO can be used for design optimization in three ways: the Screening approach, the MOGA approach, or
the NLPQL approach. The Screening approach is a non-iterative direct sampling method by a quasi-random
number generator based on the Hammersley algorithm. The MOGA approach is an iterative Multi-Objective
Genetic Algorithm, which can optimize problems with continuous input parameters. NLPQL is a gradient
based single objective optimizer which is based on quasi-Newton methods.
MOGA is better for calculating the global optima while NLPQL is a gradient-based algorithm ideally suited
for local optimization. So you can start with Screening or MOGA to locate the multiple tentative optima and
then refine with NLPQL to zoom in on the individual local maximum or minimum value. Problems with
mixed parameter types (i.e., usability, discrete, or scenario parameters with continuous parameters) or discrete
problems cannot currently be handled by the MOGA or NLPQL techniques, and in these cases you will only
be able to use the Screening technique.
The GDO framework uses a Decision Support Process (DSP) based on satisficing criteria as applied to the
parameter attributes using a weighted aggregate method. In effect, the DSP can be viewed as a postprocessing
action on the Pareto fronts as generated from the results of the MOGA, NLPQL, or Screening process.
Usually the Screening approach is used for preliminary design, which may lead you to apply the MOGA or
NLPQL approaches for more refined optimization results.
In either approach, the TradeOff chart, as applied to the resulting sample set, shows the Pareto-dominant
solutions. However, in the MOGA approach, the Pareto fronts are better articulated and most of the feasible
solutions lie on the first front, as opposed to the usual results of the Screening approach where the solutions
are distributed across all the Pareto fronts. This is illustrated in the following two figures. Figure : 6,000 Sample
Points Generated by Screening Method (p. 74) shows the sample as generated by the Screening method,
and Figure : Final Sample Set After 5,100 Evaluations by MOGA Method (p. 74) shows a sample set generated
by the MOGA method for the same problem.
Figure : Pareto Optimal Front Showing Two Non-dominated Solutions (p. 74) shows the necessity of generating
Pareto fronts. The two axes represent two output parameters that are conflicting by nature. Two optimal
solutions, marked "1" and "2," are shown. Both of these solutions are "non-dominated." That is, they are
equally good in terms of Pareto optimality. Point 1 corresponds to a better solution in terms of the output
marked "v." Point 2 corresponds to a better solution in terms of the output marked "buck."
As shown in Figure : Case Where GDO Solves a Constraint Satisfaction Problem (p. 76), some combinations of
Objective and Target Value settings specify that the problem is one of constraint satisfaction. The results
obtained from these settings indicate only the boundaries of the feasible region in the design space. Currently
the MOGA and NLPQL options cannot be used to solve a pure constraint satisfaction problem.
The Target and Objective settings of the input parameters do not affect the sample generation part of the
MOGA and NLPQL methods. However, these settings will affect the ranking of the samples which generate
the three candidate designs, if the Optimization is updated.
Sample generation using the Screening method is not affected by any parameter setting. However, the
parameter settings do still affect the generation of candidate designs.
Setting the Objective to Maximize, Minimize, or Seek Target will always treat the objective as a goal.
Setting the Objective to = Target, >= Target or <= Target treats the objectives as constraints if Constraint
Handling (GDO) in the Optimization Properties view is set to As Hard Constraints. If Constraint Handling
(GDO) is set to As Goals, those objectives are treated as goals and not constraints. If all objectives are constraints, Screening is your only optimization option. If at least one of the objectives is a goal, MOGA and
NLPQL are optimization options.
Typically, the Screening method is best suited for conducting a preliminary design study, because it is a
low-resolution, fast, and exhaustive study that can be useful in locating approximate solutions quickly. The
solutions may then be used as a starting sample for the MOGA and NLPQL methods. Since the Screening
method does not depend on any parameter objectives, you can change the objectives after performing the
Screening analysis to view the candidates that meet different objective sets, allowing you to quickly perform
preliminary design studies. It's easy to keep changing the goals and constraints to view the different corresponding candidates, which are drawn from the original sample set.
For example, you may run the Screening process for 3,000 samples, then use the Tradeoff or Samples
charts to view the Pareto fronts. The solutions slider can be used in the chart Properties to display only the
prominent points you may be interested in (usually the first few fronts). When you do a MOGA optimization,
you can limit the number of Pareto fronts that are computed in the analysis.
Once all of your objectives are defined, click Update Optimization to generate up to three candidate designs.
You may save any of the candidates by right-clicking and selecting Insert as Response Points or Insert as
Design Points. See Adding Design Points and Response Points (p. 60) for more information.
Because the GDO algorithm works off of response surfaces, you should validate the best obtained candidate
results by saving the corresponding Design Points, solving them at the Project level, and comparing the
results.
Postprocessing Overview
Postprocessing of GDO optimization results is explained below. The postprocessing phase follows the generation
of samples. Once you have Updated your optimization, candidates are displayed in the table and
one instance of each available chart is created for you in the outline (you can create more charts by
double-clicking in the toolbox). Select a chart and view the results, changing the chart's options to see different
views of the parameter space.
It is also important to discuss the representation of feasible and infeasible points (infeasible points are
available if any of the objectives are defined as constraints). The MOGA algorithm always ensures that
feasible points are shown as being of better quality than the infeasible points, and different markers are
used to indicate them in the chart. In the 2-D and 3-D tradeoff chart, rectangles denote feasible points and
circles denote infeasible ones. You can enable or disable the display of infeasible points in the Chart Properties view.
Also, in both types of charts, the best Pareto front is blue and the fronts gradually transition to red (worst
Pareto front). Figure : Two-dimensional TradeOff Plot Showing Feasible and Infeasible Points (p. 78) is a typical
2-D tradeoff chart with feasible and infeasible points.
For example, consider the integer 687459, which can be represented this way with n0 = 6, n1 = 8, and so on.
Because this integer is represented with radix 10, we can write it as 687459 = 9 + 5 * 10 + 4 * 100 and so
on. In general, for a radix R representation, the equation is:

n = n_m + n_{m-1} * R + ... + n_0 * R^m    (2)
The inverse radical function is defined as the function which generates a fraction in (0, 1) by reversing the
order of the digits in Equation 2 (p. 79) about the decimal point, as shown below:

Φ_R(n) = 0.n_m n_{m-1} n_{m-2} ... n_0
       = n_m * R^{-1} + n_{m-1} * R^{-2} + ... + n_0 * R^{-(m+1)}    (3)
Thus, for a k-dimensional search space, the Hammersley points are given by the following expression:

H_k(i) = [ i/N, Φ_{R1}(i), Φ_{R2}(i), ..., Φ_{R(k-1)}(i) ]    (4)
where i = 0, ..., N indicates the sample points. Now, from the plot of these points, it is seen that the first row
(corresponding to the first sample point) of the Hammersley matrix is zero and the last row is not 1. This
implies that, for the k-dimensional hypercube, the Hammersley sampler generates a block of points that are
skewed toward the origin of the hypercube. To compensate, each point is shifted by half of the average
sample spacing:

1 / (2N)    (5)

This moves the point set more toward the center of the search space and avoids unnecessary bias. Thus,
the initial population always provides unbiased, low-discrepancy coverage of the search space.
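The radical-inverse and Hammersley constructions of Equations 3 through 5 can be sketched in a few lines of Python. This is an illustration, not DesignXplorer's implementation; in particular, the choice of the first primes as radices is a common convention that the text does not spell out:

```python
def radical_inverse(n: int, radix: int) -> float:
    """Reverse the digits of n (base `radix`) about the decimal point,
    as in Equation 3: e.g. 687459 with radix 10 -> 0.954786."""
    inv, denom = 0.0, 1.0
    while n > 0:
        n, digit = divmod(n, radix)
        denom *= radix
        inv += digit / denom
    return inv

def hammersley(N: int, k: int):
    """N points in the k-dimensional unit hypercube (Equation 4),
    each shifted by 1/(2N) toward the interior (Equation 5)."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][: k - 1]
    shift = 1.0 / (2 * N)
    return [[i / N + shift] + [radical_inverse(i, p) + shift for p in primes]
            for i in range(N)]

print(radical_inverse(687459, 10))  # 0.954786
```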
The goal is to find a point x* in X such that for all objective functions x* is optimal. This, for i = 1, ..., k, is
expressed:

f_i(x*) <= f_i(x)  for all x in X    (8)

This indicates that x* is certainly a desirable solution. Unfortunately, this is a utopian situation that rarely
exists, as it is unlikely that all f_i(x) will have minimum values for X at a common point (x*). The question is
left: What solution should be used? That is, how should an "optimal" solution be defined? First, consider the
so-called ideal (utopian) solution. In order to define this solution, separately attainable minima must be
found for all objective functions. Assuming there is one, let x_i* be the solution of the scalar optimization
problem:

min_{x in X} f_i(x) = f_i*    (9)
Here f_i* is called the individual minimum for the scalar problem i; the vector f* = (f_1*, ..., f_k*) is called ideal
for a multi-objective optimization problem; and the point in X which determined this vector is the ideal
solution.
A point x* in X is Pareto optimal if there exists no x in X such that, for all i:

f_i(x) <= f_i(x*)    (11)

and, for at least one i in {1, ..., k}:

f_i(x) < f_i(x*)    (12)
This definition is based on the intuitive conviction that the point x* X is chosen as the optimal if no criterion
can be improved without worsening at least one other criterion. Unfortunately, the Pareto optimum almost
always gives not a single solution, but a set of solutions. Usually Pareto optimality is spoken of as being
global or local depending on the neighborhood of the solutions X, and in this case, almost all traditional
algorithms can at best guarantee a local Pareto optimality. However, this MOGA-based system, which incorporates global Pareto filters, yields the global Pareto front.
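The dominance test of Equations 11 and 12 can be illustrated with a brute-force non-dominated filter (a sketch assuming minimization of every objective; the function names are ours):

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective (minimization)
    and strictly better in at least one (Equations 11 and 12)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

samples = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
print(pareto_front(samples))  # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```

The two conflicting objectives here mirror the figure's points "1" and "2": (1.0, 5.0) and (4.0, 1.0) are both non-dominated, each better in one output.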
The weighted aggregate objective function is:

Φ = Σ_{i=1}^{n} w_i N_i + Σ_{j=1}^{m} w_j M_j    (13)

where:
w_i and w_j = weights defined in Equation 16 (p. 82)
N_i and M_j = normalized objectives for input and output parameters, respectively

The normalized objectives (metrics) are:

N_i = (x_t - x) / (x_u - x_l)    (14)

M_j = (y_t - y) / (y_max - y_min)    (15)

where:
x = current value for input parameter i
x_t, y_t = corresponding "target value"
y = current value for output parameter j
x_l and x_u = lower and upper values, respectively, for input parameter i
y_min and y_max = corresponding lower and upper bounds, respectively, for output parameter j

The fuzziness of the combined objective function derives from the weights w, which are simply defined as
follows:

w_i = w_j = { 1.000, 0.666, 0.333 }    (16)

The labels used are defined in Defining Design Objectives (p. 67).
The targets represent the desired values of the parameters, and are defined for the continuous input parameters
as follows:

x_t = { x,                 if No Objective is specified
        x_l,               if the Objective is "Minimize"
        (x_l + x_u) / 2,   if the Objective is "Seek Midpoint"
        x_u,               if the Objective is "Maximize" }    (17)
and, for the output parameters we have the following desired values:

y_t = { y_t*,   if the Objective is "Seek Target" or "Values = Target"
        y,      if the Objective is "Values >= Target", a "Target" is defined, and y >= y_t*
        y_t*,   if the Objective is "Values >= Target", a "Target" is defined, and y < y_t*
        y_max,  if the Objective is "Maximize" and a "Target" value is not defined }    (18)

where:
y_t* = the Target value specified for the output parameter
Thus, Equation 16 (p. 82) and Equation 17 (p. 82) constitute the input parameter goals for the continuous
input parameters and Equation 16 (p. 82) and Equation 18 (p. 83) constitute the output parameter goals.
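A minimal sketch of the aggregate objective of Equations 13 through 16 follows. The function name is ours, and the use of absolute values (so that a smaller Φ always means closer to the targets) is an assumption, since the sign convention is not shown in the text:

```python
def phi(inputs, outputs):
    """Weighted aggregate objective of Equation 13. Each entry is a
    tuple (current, target, lower, upper, weight); the weights follow
    Equation 16 (1.000 / 0.666 / 0.333 by importance)."""
    total = 0.0
    for x, xt, xl, xu, w in inputs:          # Equation 14
        total += w * abs(xt - x) / (xu - xl)
    for y, yt, ymin, ymax, w in outputs:     # Equation 15
        total += w * abs(yt - y) / (ymax - ymin)
    return total

# One input to be minimized (target = lower bound, per Equation 17) and
# one output with a Seek Target goal at default importance (w = 0.666):
print(phi([(2.0, 0.0, 0.0, 10.0, 1.0)],
          [(8.0, 5.0, 0.0, 10.0, 0.666)]))
```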
The following section considers the case where the continuous input parameters have discrete Usability
values ("Levels" in the parameters' Table view) and there may be discrete input parameters of type "Discrete."
Let us consider the case where the Usable Values of the continuous input parameter are defined as the following:
{X} = { x_0, x_1, ..., x_{l-1} }    (19)
The corresponding norm is:

P_l = (x_t - x) / (x_MAX - x_MIN)    (20)

where, as before,
x_MAX = upper bound of the usable values
x_MIN = lower bound of the usable values
The target value xt is given by the following:
x_t = { x,     if no Objective or Target is specified
        x_t*,  otherwise, where x_t* is the usable value closest to the specified Target }    (21)
Thus, the GDO objective equation becomes the following (for parameters with discrete Usable Values):

Φ = Σ_{i=1}^{n} w_i N_i + Σ_{j=1}^{m} w_j M_j + Σ_{l} w_l P_l    (22)
Therefore, Equation 15 (p. 82), Equation 16 (p. 82), and Equation 20 (p. 83) constitute the input parameter
goals for parameters which may be continuous or possess discrete usable values.
The norms and goals in Equation 20 (p. 83) and Equation 21 (p. 84) are also adopted to define
the input goals for the discrete input parameters of the type "Discrete"; i.e., those discrete parameters whose
Usable Alternatives indicate a whole number of some particular design feature (number of holes in a plate,
number of stiffeners, etc.).
Thus, Equation 15 (p. 82), Equation 16 (p. 82), and Equation 20 (p. 83) constitute the input parameter goals for "Discrete" parameters.
Therefore, the GDO objective function equation for the most general case (where there are continuous and
discrete parameters) can be written as the following:

Φ = Σ_{i=1}^{n} w_i N_i   (Continuous Input)
  + Σ_{j=1}^{m} w_j M_j   (Continuous Output)
  + Σ_{f=1}^{l} w_f P_f   (Usability + Discrete)    (23)
where:
n = number of Continuous Input parameters
m = number of Continuous Output parameters
l = number of Continuous Input parameters with Usability Levels and "Discrete" parameters
From the normed values it is obvious that the lower the value of Φ, the better the design with respect to
the desired values and importances. Thus, a quasi-random uniform sampling of design points is done by a
Hammersley algorithm and the samples are sorted in ascending order of Φ. The desired number of designs
are then drawn from the top of the sorted list. A crowding technique is employed to ensure that any two
sampled design points are not very close to each other in the space of the input parameters.
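The sort-and-draw selection with a crowding filter described above might look like this. This is a stand-in sketch with our own names; the actual crowding technique is not documented here, so a simple minimum-distance rule is used in its place:

```python
import math

def select_candidates(samples, phi, count, min_dist):
    """Sort samples (input-parameter vectors) by ascending phi and draw
    from the top of the list, skipping any sample closer than min_dist
    to an already-selected one."""
    chosen = []
    for s in sorted(samples, key=phi):
        if all(math.dist(s, c) >= min_dist for c in chosen):
            chosen.append(s)
        if len(chosen) == count:
            break
    return chosen

points = [(0.1, 0.1), (0.11, 0.1), (0.9, 0.9), (0.5, 0.5)]
best = select_candidates(points, phi=lambda p: p[0] + p[1],
                         count=2, min_dist=0.05)
print(best)  # (0.11, 0.1) is skipped: too close to (0.1, 0.1)
```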
1. Design Candidate value of 0.9 (the worst): the rating scale is 6 - 3 = +3 [three crosses]
2. Design Candidate value of 1.1 (the best): the rating scale is 0 - 3 = -3 [three stars]
3. Design Candidate value of 1.0 (neutral): the rating scale is 3 - 3 = 0 [dash]
{∇f(x)} = {0}    (24)

where it is assumed that the Taylor series actually models a local area of the function by a quadratic approximation.
The objective is to devise an iterative scheme by linearizing the vector Equation 24 (p. 86). To this
end, it is assumed that at the end of the iterative cycle, Equation 24 (p. 86) would be exactly valid. This
implies that the first variation of the following expression with respect to {Δx} must be zero:

φ(Δx) = f(x + Δx) = f(x) + {Δx}^T {∇f(x)} + (1/2) {Δx}^T [∇²f(x)] {Δx}    (25)
{∇f(x + Δx)} = {∇f(x)} + [∇²f(x)]{Δx} = {0}    (26)

The first expression indicates the first variation of the converged solution with respect to the increment in
the independent variable vector. This gradient is necessarily zero since the converged solution clearly does
not depend on the step-length. Thus, Equation 26 (p. 86) can be written as the following:
{x_{j+1}} = {x_j} - [∇²f(x_j)]^{-1} {∇f(x_j)}    (27)

where the index "j" indicates the iteration. Equation 27 (p. 86) is thus used in the main quadratic programming
scheme.
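In its scalar form, Equation 27 is ordinary Newton iteration on the gradient. A minimal sketch (function names are ours):

```python
def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Scalar form of Equation 27: x_{j+1} = x_j - f''(x_j)^-1 f'(x_j).
    Iterates until the gradient condition of Equation 24 is met to
    within tol."""
    x = x0
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        x -= g / hess(x)
    return x

# Minimize f(x) = (x - 3)^2 + 1, with f'(x) = 2(x - 3) and f''(x) = 2.
# The quadratic is matched exactly, so one step reaches the minimum:
print(newton_minimize(lambda x: 2 * (x - 3), lambda x: 2.0, x0=0.0))  # 3.0
```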
NLPQL derivation:
Consider the following single-objective nonlinear optimization problem. It is assumed that the problem is
smooth and analytic throughout and is a problem of N decision variables.

Minimize:

f = f({x})

Subject to:

g_k({x}) >= 0,  k = 1, 2, ..., K
h_l({x}) = 0,   l = 1, 2, ..., L

where

{x_L} <= {x} <= {x_U}    (28)
where K and L are the numbers of inequality and equality constraints. In many cases the inequality constraints
are bounded above and below; in such cases, it is customary to split the constraint into two inequality
constraints. In order to approximate the quadratic sub-problem, assuming the presence of only equality constraints,
the Lagrangian for Equation 28 (p. 86) is written as:
L({x}, {λ}) = f({x}) + {λ}^T {h({x})}    (29)

where {λ} is an L-dimensional (non-zero) vector of Lagrange multipliers. Thus, Equation 29 (p. 87) becomes
a functional which depends on two sets of independent vectors. In order to minimize this expression, we
seek the stationarity of this functional with respect to the two sets of vectors. This gives rise to two sets of
vector expressions, as the following:

∂L/∂{x} = {∇f} + [H]{λ} = {0}
∂L/∂{λ} = {h({x})} = {0}    (30)
Equation 30 (p. 87) defines the Karush-Kuhn-Tucker (KKT) conditions which are the necessary conditions for
the existence of the optimal point. The first equation comprises "N" nonlinear algebraic equations and the
second one comprises "L" nonlinear algebraic equations. The matrix [H] is an (N x L) matrix defined as the
following:

[H]_{N×L} = [ {∇h_1({x})}, {∇h_2({x})}, {∇h_3({x})}, ... ]    (31)
Thus, in Equation 31 (p. 87), each column is a gradient of the corresponding equality constraint. For our
convenience, the nonlinear equations of Equation 30 (p. 87) can be written as the following:
{F({Y})} = {0}    (32)
where Equation 32 (p. 87) is a (N+L) system of nonlinear equations. The independent variable set {Y} can
be written as:

{Y} = { {x} ; {λ} }    (33)

and, collecting the two sets of equations from Equation 30 (p. 87):

{F({Y})} = { {∇f({x})} + [H]{λ} ; {h({x})} }    (34)
Referring to the section on Newton based methods, the vector {Y} in Equation 33 (p. 87) is updated as the
following:

{Y_{j+1}} = {Y_j} + {ΔY_j}    (35)
The increment of the vector is given by the iterative scheme in Equation 27 (p. 86). Referring to Equation 32 (p. 87), Equation 33 (p. 87), and Equation 34 (p. 87), the iterative equation is expressed as the following:
[\nabla F]_j \{\Delta Y_j\} = -\{F(\{Y_j\})\}
(36)
Note that this is only a first-order approximation of the Taylor expansion of Equation 32 (p. 87). This is in contrast to Equation 27 (p. 86), where a quadratic approximation is done, because in Equation 33 (p. 87) a first-order approximation has already been done. The matrices and the vectors in Equation 36 (p. 88) can be expanded as the following:
\{\Delta Y\} = \begin{Bmatrix} \{\Delta x\} \\ \{\Delta\lambda\} \end{Bmatrix}
(37)
and
[\nabla F] = \begin{bmatrix} [\nabla^2\mathcal{L}]_{(N\times N)} & [H]_{(N\times L)} \\ [H]^T_{(L\times N)} & [0]_{(L\times L)} \end{bmatrix}_{((N+L)\times(N+L))}
(38)
This is obtained by taking the gradients of Equation 34 (p. 87) with respect to the two variable sets. The submatrix [\nabla^2\mathcal{L}]_{(N\times N)} is the Hessian of the Lagrangian with respect to {x}.
To demonstrate how Equation 38 (p. 88) is formed, let us consider the following simple case. Given a vector
of two variables x and y, we write:
\{V\} = \begin{Bmatrix} \{a(x,y)\} \\ \{b(x,y)\} \end{Bmatrix} = \begin{Bmatrix} a_1(x,y) \\ a_2(x,y) \\ b_1(x,y) \\ b_2(x,y) \end{Bmatrix}
It is required to find the gradient (i.e., the Jacobian) of the vector function V. In the present context, the vector V represents a set of nonlinear (algebraic) equations, and the Jacobian is the coefficient matrix that "links" the increment of the independent variable vector to the dependent variable vector. Thus, we can write:
\{\nabla V\} = \begin{bmatrix} \partial a_1/\partial x & \partial a_1/\partial y \\ \partial a_2/\partial x & \partial a_2/\partial y \\ \partial b_1/\partial x & \partial b_1/\partial y \\ \partial b_2/\partial x & \partial b_2/\partial y \end{bmatrix}
Thus, the Jacobian matrix is formed. In some applications, the equation is written in the following form:
\{\nabla V\} = \begin{bmatrix} \partial a_1/\partial x & \partial a_2/\partial x & \partial b_1/\partial x & \partial b_2/\partial x \\ \partial a_1/\partial y & \partial a_2/\partial y & \partial b_1/\partial y & \partial b_2/\partial y \end{bmatrix}
Where, the column "i" indicates the gradient of the "i-th" component of the dependent variable vector with
respect to the independent vector. This is the formalism we use in determining Equation 38 (p. 88).
Equation 36 (p. 88) may be rewritten as the following:
\begin{bmatrix} [\nabla^2\mathcal{L}]_{(N\times N)} & [H]_{(N\times L)} \\ [H]^T_{(L\times N)} & [0]_{(L\times L)} \end{bmatrix}^{(j)} \begin{Bmatrix} \{\Delta x\} \\ \{\Delta\lambda\} \end{Bmatrix}^{(j)} = -\begin{Bmatrix} \{\nabla\mathcal{L}\} \\ \{h\} \end{Bmatrix}^{(j)}
(39)
Solving Equation 39 (p. 89) iteratively solves the nonlinear system of Equation 32 (p. 87) in a series of linear steps, until the point when the increment is negligible. The update schemes for the independent variable and Lagrange multiplier vectors x and λ are written as the following:
\{x_{j+1}\} = \{x_j\} + \{\Delta x_j\}
\{\lambda_{j+1}\} = \{\lambda_j\} + \{\Delta\lambda_j\}
(40)
The individual equations of Equation 39 (p. 89) are now written separately. The first equation (corresponding to minimization with respect to x) may be written as:
[\nabla^2\mathcal{L}]^j\{\Delta x\}^j + [H]^j\{\Delta\lambda\}^j = -\{\nabla\mathcal{L}\}^j
[\nabla^2\mathcal{L}]^j\{\Delta x\}^j + [H]^j\{\Delta\lambda\}^j = -\{\nabla f\}^j - [H]^j\{\lambda\}^j
[\nabla^2\mathcal{L}]^j\{\Delta x\}^j + [H]^j\{\lambda\}^{j+1} = -\{\nabla f\}^j
(41)
The last step in Equation 41 (p. 89) is done by using Equation 40 (p. 89). Thus, using Equation 41 (p. 89) and
Equation 39 (p. 89), the iterative scheme can be rewritten in a simplified form as:
\begin{bmatrix} [\nabla^2\mathcal{L}]_{(N\times N)} & [H]_{(N\times L)} \\ [H]^T_{(L\times N)} & [0]_{(L\times L)} \end{bmatrix}^{(j)} \begin{Bmatrix} \{\Delta x\}^{(j)} \\ \{\lambda\}^{(j+1)} \end{Bmatrix} = -\begin{Bmatrix} \{\nabla f\} \\ \{h\} \end{Bmatrix}^{(j)}
(42)
Thus, Equation 42 (p. 89) can be used directly to compute the (j+1)th value of the Lagrange multiplier vector.
Note that by using Equation 42 (p. 89), it is possible to compute the update of x and the new value of the
Lagrange multiplier vector in the same iterative step. Equation 42 (p. 89) shows the general scheme by
which the KKT optimality condition can be solved iteratively for a generalized optimization problem.
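To make the scheme concrete, the following self-contained sketch (illustrative only, not ANSYS code) applies the iterative scheme of Equation 42 to a small hypothetical problem: minimize f(x) = x1² + x2² subject to h(x) = x1 + x2 − 1 = 0. Because f is quadratic and h is linear, a single step of the scheme reaches the exact optimum.

```python
# One step of the KKT scheme (Equation 42) for an equality-constrained problem.
# Illustrative sketch only; the example problem and all numbers are hypothetical.

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        z[r] = (M[r][n] - sum(M[r][k] * z[k] for k in range(r + 1, n))) / M[r][r]
    return z

def kkt_step(x):
    """Assemble and solve [[W, H], [H^T, 0]] {dx; lam} = -{grad f; h}."""
    grad_f = [2.0 * x[0], 2.0 * x[1]]   # gradient of f(x) = x1^2 + x2^2
    W = [[2.0, 0.0], [0.0, 2.0]]        # Hessian of the Lagrangian (h is linear)
    H = [[1.0], [1.0]]                  # [H]: gradient of h as a column
    h = x[0] + x[1] - 1.0
    K = [W[0] + H[0], W[1] + H[1], [H[0][0], H[1][0], 0.0]]
    dx1, dx2, lam = solve(K, [-grad_f[0], -grad_f[1], -h])
    return [x[0] + dx1, x[1] + dx2], lam

x, lam = kkt_step([0.0, 0.0])
print(x, lam)   # prints [0.5, 0.5] -1.0, the exact optimum and multiplier
```

A second call of `kkt_step` leaves the solution unchanged, which is the negligible-increment stopping condition described above.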
Now, consider the following quadratic approximation problem as given by:
\min_{\{\Delta x\}}\; Q(\{\Delta x\}) = \{\nabla f\}^T\{\Delta x\} + \tfrac{1}{2}\{\Delta x\}^T[\nabla^2\mathcal{L}]\{\Delta x\}
(43)
\text{subject to: } \{h\} + [H]^T\{\Delta x\} = \{0\}
(44)
where the definitions of the matrices in Equation 43 (p. 90) and Equation 44 (p. 90) are given earlier. In order to solve the quadratic minimization problem, let us form the Lagrangian as:
\bar{\mathcal{L}} = \{\nabla f\}^T\{\Delta x\} + \tfrac{1}{2}\{\Delta x\}^T[\nabla^2\mathcal{L}]\{\Delta x\} + \{\lambda\}^T\{h\} + \{\lambda\}^T[H]^T\{\Delta x\}
(45)
Now, the KKT conditions may be derived (as done earlier) by taking gradients of the Lagrangian in Equation 45 (p. 90) as the following:
\bar{\mathcal{L}}_{,\{\Delta x\}} = \{\nabla f\} + [\nabla^2\mathcal{L}]\{\Delta x\} + [H]\{\lambda\} = \{0\}
\bar{\mathcal{L}}_{,\{\lambda\}} = \{h\} + [H]^T\{\Delta x\} = \{0\}
(46)
In a condensed matrix form Equation 46 (p. 90) may be written as the following:
\begin{bmatrix} [\nabla^2\mathcal{L}] & [H] \\ [H]^T & [0] \end{bmatrix} \begin{Bmatrix} \{\Delta x\} \\ \{\lambda\} \end{Bmatrix} = -\begin{Bmatrix} \{\nabla f\} \\ \{h\} \end{Bmatrix}
(47)
Equation 47 (p. 90) is the same as Equation 42 (p. 89). This implies that the iterative scheme of Equation 42 (p. 89) actually solves a quadratic subproblem (Equation 43 (p. 90) and Equation 44 (p. 90)) in the domain Δx. If the real problem is quadratic, then this iterative scheme solves the exact problem.
On addition of inequality constraints, the Lagrangian of the actual problem can be written as the following:
\mathcal{L}(\{x\},\{\lambda\},\{\mu\},\{y\}) = f(\{x\}) + \{\lambda\}^T\{h(\{x\})\} + \{\mu\}^T\{g(\{x\})\} + \{\mu\}^T\{y^2\}
(48)
Note that the inequality constraints have been converted to equality constraints by using a set of slack
variables y as the following:
g_k(\{x\}) + (y_k)^2 = 0, \quad k = 1, 2, \dots, M
(49)
The squared term is used to ensure that the slack-variable contribution remains positive, which is required to satisfy Equation 49 (p. 90). The Lagrangian in Equation 48 (p. 90) acts as an enhanced objective function. It is seen that the only case where the additional terms may be active is when the constraints are not satisfied.
The KKT conditions as derived from Equation 48 (p. 90) (by taking first variations with respect to the independent variable vectors) are:
\mathcal{L}_{,\{x\}} = \{\nabla\mathcal{L}\} = \{\nabla f\} + [H]\{\lambda\} + [G]\{\mu\} = \{0\}_{(N\times 1)}
\mathcal{L}_{,\{\lambda\}} = \{h(\{x\})\} = \{0\}_{(L\times 1)}
\mathcal{L}_{,\{\mu\}} = \{g(\{x\})\} + \{y^2\} = \{0\}_{(M\times 1)}
\mathcal{L}_{,\{y\}} = \{2\mu y\} = \{0\}_{(M\times 1)}
(50)
From the KKT conditions in Equation 50 (p. 91), it is evident that there are (N + L + 2M) equations for a similar number of unknowns; thus this equation set possesses a unique solution. Let this (optimal) solution be marked as x*. At this point, a certain number of constraints will be active and some others will be inactive. Let the number of active inequality constraints be p and the total number of active equality constraints be q. By an active constraint it is meant that the constraint is at its threshold value of zero. Thus, let J1 and J2 be the sets of active and inactive equality constraints (respectively), and K1 and K2 be the sets of active and inactive inequality constraints, respectively. Thus, we can write the following relations:
J_1 \cap J_2 = \varnothing;\quad \mathrm{meas}(J_1 \cup J_2) = L;\quad \mathrm{meas}(J_1) = q;\quad \mathrm{meas}(J_2) = (L - q)
K_1 \cap K_2 = \varnothing;\quad \mathrm{meas}(K_1 \cup K_2) = M;\quad \mathrm{meas}(K_1) = p;\quad \mathrm{meas}(K_2) = (M - p)
(51)
Where, meas() indicates the count of the elements of the set under consideration. These sets thus partition
the constraints into active and inactive sets. Thus, we can write:
h_k(\{x\}) = 0, \quad k \in J_1
g_j(\{x\}) = 0,\; y_j = 0,\; \mu_j \neq 0; \quad j \in K_1
g_j(\{x\}) < 0,\; y_j \neq 0,\; \mu_j = 0; \quad j \in K_2
(52)
Thus, it can be stated that the last three equations in Equation 50 (p. 91) may be represented by Equation 52 (p. 91). These are the optimality conditions for constraint satisfaction. From these equations, we can now eliminate y, so that the Lagrangian in Equation 48 (p. 90) will depend on only three independent variable vectors. From the last two conditions in Equation 52 (p. 91), we can write the following condition, which is always valid for an optimal point:
\mu_j\, g_j(\{x\}) = 0
(53)
Using Equation 53 (p. 91) in the set Equation 50 (p. 91), the KKT optimality conditions may be written as the
following:
\{\nabla\mathcal{L}\} = \{\nabla f\} + [H]\{\lambda\} + [G]\{\mu\} = \{0\}_{(N\times 1)}
\{h(\{x\})\} = \{0\}_{(L\times 1)}
\{\mu\, g(\{x\})\} = \{0\}_{(M\times 1)}
(54)
Thus, the new set contains only (N + L + M) unknowns. Now, following the same logic as in Equation 33 (p. 87), as done earlier, let us express Equation 54 (p. 91) in the same form as in Equation 32 (p. 87). This represents:
\{F(\{Y\})\} = \{0\}_{((N+L+M)\times 1)}
(55)
Newton's iterative scheme is also used here; thus, the same equations as in Equation 35 (p. 87) and Equation 36 (p. 88) also apply. Thus, following Equation 36 (p. 88), we can write:
\{\Delta Y\} = \begin{Bmatrix} \{\Delta x\} \\ \{\Delta\lambda\} \\ \{\Delta\mu\} \end{Bmatrix}
(56)
Taking the first variation of the KKT equations in Equation 54 (p. 91) and equating it to zero, the quadratic sub-problem is formulated as the following:
\begin{bmatrix} [\nabla^2\mathcal{L}] & [H] & [G] \\ [H]^T & [0] & [0] \\ \lceil\mu\rfloor[G]^T & [0] & \lceil g\rfloor \end{bmatrix} \begin{Bmatrix} \{\Delta x\} \\ \{\Delta\lambda\} \\ \{\Delta\mu\} \end{Bmatrix} = -\begin{Bmatrix} \{\nabla\mathcal{L}\} \\ \{h\} \\ \lceil\mu\rfloor\{g\} \end{Bmatrix}
(57)
where ⌈·⌋ indicates a diagonal matrix.
At the jth step, the first equation can be written as (by linearization)
[\nabla^2\mathcal{L}]^j\{\Delta x\}^j + [H]^j\{\Delta\lambda\}^j + [G]^j\{\Delta\mu\}^j = -\{\nabla f\}^j - [H]^j\{\lambda\}^j - [G]^j\{\mu\}^j
(58)
Which simplifies to
[\nabla^2\mathcal{L}]^j\{\Delta x\}^j + [H]^j\{\lambda\}^{j+1} + [G]^j\{\mu\}^{j+1} = -\{\nabla f\}^j
(59)
Thus, the linearized set of equations for Newton's method can be written in explicit form as:
\begin{bmatrix} [\nabla^2\mathcal{L}]_j & [H]_j & [G]_j \\ [H]^T_j & [0] & [0] \\ \lceil\mu\rfloor_j[G]^T_j & [0] & \lceil g\rfloor_j \end{bmatrix} \begin{Bmatrix} \{\Delta x\}_j \\ \{\lambda\}_{j+1} \\ \{\mu\}_{j+1} \end{Bmatrix} = -\begin{Bmatrix} \{\nabla f\}_j \\ \{h\}_j \\ \lceil\mu\rfloor_j\{g\}_j - \lceil g\rfloor_j\{\mu\}_j \end{Bmatrix}
(60)
So, in the presence of both equality and inequality constraints, Equation 60 (p. 93) can be used in a quasi-Newton framework to determine the increments {Δx}j and the Lagrange multipliers {λ}j+1 and {μ}j+1 when stepping from iteration j to j+1.
The Hessian matrix [∇²ℒ] is not computed directly, but is estimated and updated in a BFGS-type line search.
An example of this is a case when the maximum von Mises stress in a component must not exceed the yield
strength. The relevant output parameter is, of course, the maximum von Mises stress and the USL is the
yield strength. The lower specification limit is not relevant. The area below the probability density function
falling outside the specification interval is a direct measure of the probability that the product does not
conform to the quality criteria, as shown above. If the output parameter does follow a Gaussian distribution,
then the product satisfies a Six Sigma quality criterion if both specification limits are at least six standard
deviations away from the mean value.
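As a hedged illustration of this criterion (the stress and strength values below are hypothetical, and this is not part of any ANSYS workflow), the nonconformance probability of a Gaussian output parameter against an upper specification limit can be sketched with Python's standard library:

```python
# Sketch: nonconformance probability of a Gaussian output parameter against an
# upper specification limit (USL). All numerical values are hypothetical.
from statistics import NormalDist

mean_stress = 300.0   # MPa, assumed mean of the maximum von Mises stress
std_stress = 15.0     # MPa, assumed standard deviation
usl = 420.0           # MPa, yield strength used as the USL

dist = NormalDist(mean_stress, std_stress)
p_nonconform = 1.0 - dist.cdf(usl)                 # area above the USL
sigma_level = (usl - mean_stress) / std_stress     # distance of USL in std devs

print(f"nonconformance probability: {p_nonconform:.3e}")
print(f"USL is {sigma_level:.1f} standard deviations above the mean")
print("meets Six Sigma criterion:", sigma_level >= 6.0)
```

For a two-sided specification interval the same check would be applied to the lower specification limit as well.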
In reality, an output parameter rarely exactly follows a Gaussian distribution. However, the definition of Six
Sigma quality is inherently probabilistic -- it represents an admissible probability that parts do not conform
to the quality criteria defined by the specified limits. The nonconformance probability can be calculated no
matter which distribution the output parameter actually follows.
Principles (SSA)
Computer models are described with specific numerical and deterministic values: material properties are
entered using certain values, the geometry of the component is assigned a certain length or width, etc. An
analysis based on a given set of specific numbers and values is called a deterministic analysis. The accuracy
of a deterministic analysis depends upon the assumptions and input values used for the analysis.
While scatter and uncertainty naturally occur in every aspect of an analysis, deterministic analyses do not
take them into account. To deal with uncertainties and scatter, use Six Sigma Analysis, which allows you to
answer the following questions:
• If the input variables of a finite element model are subject to scatter, how large is the scatter of the output parameters? How robust are the output parameters? Here, output parameters can be any parameter that ANSYS Workbench can calculate, such as the temperature, stress, strain, or deflection at a node, or the maximum temperature, stress, strain, or deflection of the model.
• If the output is subject to scatter due to the variation of the input variables, then what is the probability that a design criterion given for the output parameters is no longer met? How large is the probability that an unexpected and unwanted event takes place (i.e., what is the failure probability)?
• Which input variables contribute the most to the scatter of an output parameter and to the failure probability? What are the sensitivities of the output parameter with respect to the input variables?
Six Sigma Analysis can be used to determine the effect of one or more variables on the outcome of the
analysis. In addition to the Six Sigma Analysis techniques available, ANSYS Workbench offers a set of strategic
tools to enhance the efficiency of the Six Sigma Analysis process. For example, you can graph the effects of
one input parameter versus an output parameter, and you can easily add more samples and additional
analysis loops to refine your analysis.
In traditional deterministic analyses, uncertainties are either ignored or accounted for by applying conservative
assumptions. You would typically ignore uncertainties if you know for certain that the input parameter has
no effect on the behavior of the component under investigation. In this case, only the mean values or some
nominal values are used in the analysis. However, in some situations the influence of uncertainties exists but is still neglected, as with the thermal expansion coefficient, whose scatter is usually ignored.
More information about choosing and defining uncertainty variables can be found in the following sections.
Uncertainty Variables for Response Surface Analyses
Choosing a Distribution for a Random Variable
Distribution Functions
Measured Data
If you have measured data, then you must first know how reliable that data is. Data scatter is not just an
inherent physical effect, but also includes inaccuracy in the measurement itself. You must consider that the
person taking the measurement might have applied a "tuning" to the data. For example, if the data measured
represents a load, the person measuring the load may have rounded the measurement values; this means
that the data you receive are not truly the measured values. The amount of this tuning could provide a deterministic bias in the data that you need to address separately. If possible, you should discuss any bias that
might have been built into the data with the person who provided that data to you.
If you are confident about the quality of the data, then how you proceed depends on how much data you
have. In a single production field, the amount of data is typically sparse. If you have only a small amount of
data, use it only to evaluate a rough figure for the mean value and the standard deviation. In these cases,
you could model the uncertainty variable as a Gaussian distribution if the physical effect you model has no
lower and upper limit, or use the data and estimate the minimum and maximum limit for a uniform distribution.
In a mass production field, you probably have a lot of data. In these cases you could use a commercial
statistical package that will allow you to actually fit a statistical distribution function that best describes the
scatter of the data.
Material Data
If the data supplier does not give you any further information, then you could consider assuming that the number of measured samples was large. If the given exceedance limit is denoted with x_{1-α/2} and the given mean value is denoted with x̄, then the standard deviation can be derived from the equation:

\sigma = \frac{x_{1-\alpha/2} - \bar{x}}{C}

where the factor C depends on the exceedance probability with which the limit is specified:

Exceedance probability    C
99.5%                     2.5758
99.0%                     2.3263
97.5%                     1.9600
95.0%                     1.6449
90.0%                     1.2816
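The tabulated factors are quantiles of the standard normal distribution, so the conversion can be sketched as follows (illustrative only; the material values are hypothetical):

```python
# Sketch: derive the standard deviation from a quoted exceedance limit,
# assuming a Gaussian distribution and a large number of measured samples.
from statistics import NormalDist

def std_from_exceedance(mean, limit, confidence):
    """confidence = probability that a sample lies below `limit` (e.g. 0.99)."""
    c = NormalDist().inv_cdf(confidence)   # tabulated factor C, e.g. 2.3263 at 99%
    return (limit - mean) / c

# Hypothetical material example: mean strength 250 MPa, 99% exceedance limit 285 MPa.
sigma = std_from_exceedance(250.0, 285.0, 0.99)
print(f"estimated standard deviation: {sigma:.2f} MPa")
```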
No Data
In situations where no information is available, there is never just one right answer. Below are hints about
which physical quantities are usually described in terms of which distribution functions. This information
might help you with the particular physical quantity you have in mind. Also below is a list of which distribution
functions are usually used for which kind of phenomena. Keep in mind that you might need to choose from
multiple options.
Geometric Tolerances
If you are designing a prototype, you could assume that the actual dimensions of the manufactured
parts would be somewhere within the manufacturing tolerances. In this case it is reasonable to use a
uniform distribution, where the tolerance bounds provide the lower and upper limits of the distribution
function.
If the manufacturing process generates a part that is outside the tolerance band, one of two things may
happen: the part must either be fixed (reworked) or scrapped. These two cases are usually on opposite
ends of the tolerance band. An example of this is drilling a hole. If the hole is outside the tolerance
band, but it is too small, the hole can just be drilled larger (reworked). If, however, the hole is larger
than the tolerance band, then the problem is either expensive or impossible to fix. In such a situation,
the parameters of the manufacturing process are typically tuned to hit the tolerance band closer to the
rework side, steering clear of the side where parts need to be scrapped. In this case, a Beta distribution
is more appropriate.
Often a Gaussian distribution is used. The fact that the normal distribution has no bounds (it spans minus infinity to infinity) is theoretically a severe violation of the fact that geometrical extensions are described by finite positive numbers only. However, in practice this lack of bounds is irrelevant if the standard deviation is very small compared to the value of the geometric extension, as is typically true for geometric tolerances.
Material Data
In some cases the material strength of a part is governed by the "weakest-link theory," which assumes that the entire part will fail whenever its weakest spot fails. For material properties where the "weakest-link" assumptions are valid, the Weibull distribution might be applicable.
For some cases, it is acceptable to use the scatter information from a similar material type. For example,
if you know that a material type very similar to the one you are using has a certain material property
with a Gaussian distribution and a standard deviation of 5% around the measured mean value, then
you can assume that for the material type you are using, you only know its mean value. In this case,
you could consider using a Gaussian distribution with a standard deviation of 5% around the given
mean value.
Load Data
For loads, you usually only have a nominal or average value. You could ask the person who provided the
nominal value the following questions: Out of 1000 components operated under real life conditions, what
is the lowest load value any one of the components sees? What is the most likely load value? That is, what
is the value that most of these 1000 components are subject to? What is the highest load value any one
component would be subject to? To be safe you should ask these questions not only of the person who
provided the nominal value, but also to one or more experts who are familiar with how your products are
operated under real-life conditions. From all the answers you get, you can then consolidate what the minimum, the most likely, and the maximum value probably is. As verification, compare this picture with the
nominal value that you would use for a deterministic analysis. The nominal value should be close to the
most likely value unless using a conservative assumption. If the nominal value includes a conservative assumption (is biased), then its value is probably close to the maximum value. Finally, you can use a triangular
distribution using the minimum, most likely, and maximum values obtained.
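A minimal sketch of this consolidation step (the load values are hypothetical, and this is not part of the ANSYS workflow):

```python
# Sketch: triangular distribution built from consolidated expert estimates,
# e.g. minimum 800 N, most likely 1000 N, maximum 1500 N (all hypothetical).
import random

random.seed(0)   # reproducible sampling
# random.triangular(low, high, mode) draws from the triangular distribution
samples = [random.triangular(800.0, 1500.0, 1000.0) for _ in range(10_000)]
mean_load = sum(samples) / len(samples)
print(round(mean_load, 1))   # close to (800 + 1000 + 1500) / 3 = 1100
```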
Distribution Functions
Beta Distribution
[Figure: probability density fX(x) of the Beta distribution, with shape parameters r and t and limits xmin and xmax]
You provide the shape parameters r and t and the lower and the upper limit xmin and xmax of the random
variable x.
The Beta distribution is very useful for random variables that are bounded at both sides. If linear operations
are applied to random variables that are all subjected to a uniform distribution, then the results can usually
be described by a Beta distribution. For example, if you are dealing with tolerances and assemblies where
the components are assembled and the individual tolerances of the components follow a uniform distribution
(a special case of the Beta distribution), the overall tolerances of the assembly are a function of adding or
subtracting the geometrical extension of the individual components (a linear operation). Hence, the overall
tolerances of the assembly can be described by a Beta distribution. Also, as previously mentioned, the Beta
distribution can be useful for describing the scatter of individual geometrical extensions of components as
well.
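A quick Monte Carlo sketch (with hypothetical dimensions, not ANSYS code) illustrates why such a stack-up is bounded at both sides while its density is bell-shaped:

```python
# Sketch: assembly stack-up of four hypothetical component lengths, each varying
# uniformly within its tolerance band; the assembly length is their sum.
import random

random.seed(3)

def assembly_length():
    # nominal 25.0 mm each, tolerance +/-0.1 mm, uniform within the band
    return sum(random.uniform(24.9, 25.1) for _ in range(4))

samples = [assembly_length() for _ in range(50_000)]
print(min(samples) >= 99.6, max(samples) <= 100.4)   # prints True True
# A histogram of `samples` is bell-shaped yet strictly bounded at both sides,
# which is the kind of behavior a Beta distribution can represent.
```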
Exponential Distribution
[Figure: probability density fX(x) of the exponential distribution, with decay parameter λ and shift xmin]
You provide the decay parameter λ and the shift (or lower limit) xmin of the random variable x.
The exponential distribution is useful in cases where there is a physical reason that the probability density
function is strictly decreasing as the uncertainty variable value increases. The distribution is mostly used to
describe time-related effects; for example, it describes the time between independent events occurring at
a constant rate. It is therefore very popular in the area of systems reliability and lifetime-related systems reliability, and it can be used for the life distribution of non-redundant systems. Typically, it is used if the lifetime
is not subjected to wear-out and the failure rate is constant with time. Wear-out is usually a dominant lifelimiting factor for mechanical components that would preclude the use of the exponential distribution for
mechanical parts. However, where preventive maintenance exchanges parts before wear-out can occur, then
the exponential distribution is still useful to describe the distribution of the time until exchanging the part
is necessary.
Gaussian (Normal) Distribution
[Figure: probability density fX(x) of the Gaussian distribution, defined by its mean value and standard deviation]
You provide values for the mean value and the standard deviation of the random variable x.
The Gaussian, or normal, distribution is a fundamental and commonly-used distribution for statistical matters.
It is typically used to describe the scatter of the measurement data of many physical phenomena. Strictly
speaking, every random variable follows a normal distribution if it is generated by a linear combination of
a very large number of other random effects, regardless which distribution these random effects originally
follow. The Gaussian distribution is also valid if the random variable is a linear combination of two or more
other effects if those effects also follow a Gaussian distribution.
Lognormal Distribution
[Figure: probability density fX(x) of the lognormal distribution]
You provide values for the logarithmic mean value ξ and the logarithmic deviation δ. The parameters ξ and δ are the mean value and standard deviation of ln(x):

f(x, \xi, \delta) = \frac{1}{x\,\delta\sqrt{2\pi}} \exp\!\left[-\frac{1}{2}\left(\frac{\ln x - \xi}{\delta}\right)^{2}\right]
The lognormal distribution is another basic and commonly-used distribution, typically used to describe the
scatter of the measurement data of physical phenomena, where the logarithm of the data would follow a
normal distribution. The lognormal distribution is suitable for phenomena that arise from the multiplication
of a large number of error effects. It is also used for random variables that are the result of multiplying two
or more random effects (if the effects that get multiplied are also lognormally distributed). It is often used
for lifetime distributions such as the scatter of the strain amplitude of a cyclic loading that a material can
endure until low-cycle-fatigue occurs.
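As a sketch (with hypothetical parameter values), Python's standard library samples a lognormal variable directly from the logarithmic mean and deviation described above:

```python
# Sketch: random.lognormvariate(mu, sigma) takes exactly the parameters described
# above -- the mean and standard deviation of ln(x). Values here are hypothetical.
import math
import random

random.seed(1)
xi, delta = 5.0, 0.1   # logarithmic mean and logarithmic deviation
samples = [random.lognormvariate(xi, delta) for _ in range(20_000)]

# ln(x) follows a normal distribution with mean xi and standard deviation delta
logs = [math.log(s) for s in samples]
mean_ln = sum(logs) / len(logs)
print(round(mean_ln, 2))
```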
Uniform Distribution
[Figure: probability density fX(x) of the uniform distribution, with limits xmin and xmax]
You provide the lower and the upper limit xmin and xmax of the random variable x.
The uniform distribution is a fundamental distribution for cases where the only information available is a
lower and an upper limit. It is also useful to describe geometric tolerances. It can also be used in cases where
any value of the random variable is as likely as any other within a certain interval. In this sense, it can be
used for cases where "lack of engineering knowledge" plays a role.
Triangular Distribution
[Figure: probability density fX(x) of the triangular distribution, with minimum xmin, most likely value xmlv, and maximum xmax]
You provide the minimum value xmin, the most likely value xmlv, and the maximum value xmax.
The triangular distribution is most helpful to model a random variable when actual data is not available. It
is very often used to capture expert opinions, as in cases where the only data you have are the well-founded
opinions of experts. However, regardless of the physical nature of the random variable you want to model,
you can always ask experts questions like "Out of 1000 components, what are the lowest and highest load
values for this random variable?" and other similar questions. You should also include an estimate for the
random variable value derived from a computer program, as described above. For more details, see Choosing
a Distribution for a Random Variable (p. 98).
Truncated Gaussian Distribution
[Figure: probability density fX(x) of the truncated Gaussian distribution, with the non-truncated mean μG and standard deviation σG and the truncation limits xmin and xmax]
You provide the mean value μG and the standard deviation σG of the non-truncated Gaussian distribution and the truncation limits xmin and xmax.
The truncated Gaussian distribution typically appears where the physical phenomenon follows a Gaussian
distribution, but the extreme ends are cut off or are eliminated from the sample population by quality control
measures. As such, it is useful to describe the material properties or geometric tolerances.
Weibull Distribution
[Figure: probability density fX(x) of the Weibull distribution, with characteristic value xchr, exponent m, and minimum value xmin]
You provide the Weibull characteristic value xchr, the Weibull exponent m, and the minimum value xmin.
There are several special cases. For xmin = 0 the distribution coincides with a two-parameter Weibull distribution. The Rayleigh distribution is a special case of the Weibull distribution with m = 2 (and Rayleigh parameter xchr − xmin).
In engineering, the Weibull distribution is most often used for strength or strength-related lifetime parameters,
and is the standard distribution for material strength and lifetime parameters for very brittle materials (for
these very brittle materials, the "weakest-link theory" is applicable). For more details, see Choosing a Distribution for a Random Variable (p. 98).
Sample Generation
For Six Sigma Analysis, the sample generation by default is based on the Latin Hypercube Sampling (LHS)
technique. (You can also select Weighted Latin Hypercube Sampling (WLHS) as the Sampling Type in the
Six Sigma Properties view.) The LHS technique is a more advanced and efficient form of Monte Carlo analysis methods. The only difference between LHS and the Direct Monte Carlo Sampling technique is that LHS
has a sample "memory," meaning it avoids repeating samples that have been evaluated before (it avoids
clustering samples). It also forces the tails of a distribution to participate in the sampling process. Generally,
the LHS technique requires 20% to 40% fewer simulation loops than the Direct Monte Carlo analysis technique to deliver the same results with the same accuracy. However, that number is largely problem-dependent.
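A minimal sketch of the stratification idea behind LHS (illustrative only, not the ANSYS implementation; the two input variables are hypothetical):

```python
# Sketch of Latin Hypercube Sampling for two independent uncertainty variables:
# each variable's probability range is split into n equally probable strata, one
# sample is drawn per stratum, and the strata are shuffled so the pairing
# between variables is random.
import random
from statistics import NormalDist

def lhs_uniform(n):
    """n stratified samples of a uniform(0, 1) variable, in random order."""
    # one hit per stratum; kept slightly away from the edges for numerical safety
    strata = [(k + random.uniform(0.001, 0.999)) / n for k in range(n)]
    random.shuffle(strata)
    return strata

random.seed(2)
n = 8
u1 = lhs_uniform(n)                                              # a uniform input
x2 = [NormalDist(10.0, 2.0).inv_cdf(u) for u in lhs_uniform(n)]  # a Gaussian input

# Every interval [k/n, (k+1)/n) contains exactly one sample: no clustering, and
# the tails of the distribution are forced to participate.
counts = [sum(1 for u in u1 if k / n <= u < (k + 1) / n) for k in range(n)]
print(counts)   # prints [1, 1, 1, 1, 1, 1, 1, 1]
```

Mapping the stratified uniforms through the inverse cumulative distribution function (as done for `x2`) is the standard way to extend the idea to arbitrary input distributions.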
Histogram
A histogram plot is most commonly used to visualize the scatter of a Six Sigma Analysis variable. A histogram
is derived by dividing the range between the minimum value and the maximum value into intervals of equal
size. Then Six Sigma Analysis determines how many samples fall within each interval, that is, how many
"hits" landed in each interval.
Six Sigma Analysis also allows you to plot histograms of your uncertainty variables so you can double-check
that the sampling process generated the samples according to the distribution function you specified. For
uncertainty variables, Six Sigma Analysis not only plots the histogram bars, but also a curve for values derived
from the distribution function you specified. Visualizing histograms of the uncertainty variables is another
way to verify that enough simulation loops have been performed. If the number of simulation loops is sufficient, the histogram bars will not have major gaps (that is, no intervals with zero hits while neighboring intervals have many hits).
However, if the probability density function is flattening out at the far ends of a distribution (for example,
the exponential distribution flattens out for large values of the uncertainty variable) then there might logically
be gaps. Hits are counted only as positive integer numbers and as these numbers gradually get smaller, a
zero hit can happen in an interval.
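The interval-counting procedure described above can be sketched as follows (illustrative only; the sample data is hypothetical):

```python
# Sketch of histogram construction: divide the range between the minimum and
# maximum sample into intervals of equal size and count the hits per interval.
import random

random.seed(4)
samples = [random.gauss(0.0, 1.0) for _ in range(2_000)]   # hypothetical samples

def histogram(data, n_bins):
    lo, hi = min(data), max(data)
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for v in data:
        k = min(int((v - lo) / width), n_bins - 1)   # clamp the maximum value
        counts[k] += 1
    return counts

counts = histogram(samples, 20)
print(sum(counts))   # every sample lands in exactly one interval
```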
For more information, see Six Sigma Analysis Theory (p. 109).
Probability Table
Instead of reading data from the cumulative distribution chart, you can also obtain important information
about the cumulative distribution function in tabular form. A Probability Table is available that is designed
to provide probability values for an even spread of levels of an input or output parameter. You can view
the table in either Quantile-Percentile (Probability) mode or Percentile-Quantile (Inverse Probability) mode.
The Probability Table lets you find out the parameter levels corresponding to probability levels that are
typically used for the design of reliable products. If you want to see the probability of a value that is not
listed, you can add it to the table. Likewise, you can add a probability or sigma-level and see the corresponding
values. You can also delete values from the table. For more information, see Using Statistical Postprocessing (p. 70).
Note
Both tables will have more rows, i.e., include more data, if the number of samples is increased. If
you are designing for high product reliability, i.e., a low probability that it does not conform to
quality or performance requirements, then the sample size must be adequately large to address
those low probabilities. Typically, if the probability that your product does not conform to the requirements is denoted with "Preq," then the minimum number of samples should be determined by: Nsamp = 10.0 / Preq.
For example, if your product has a probability of Preq = 1.0e-4 that it does not conform to the requirements, then the minimum number of samples should be Nsamp = 10.0 / 1.0e-4 = 1.0e+5.
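The rule of thumb from the note can be expressed directly (a trivial sketch of the formula above):

```python
# Sketch: minimum sample size for a target nonconformance probability,
# using the rule of thumb Nsamp = 10.0 / Preq from the note above.
def min_samples(p_req):
    return round(10.0 / p_req)

print(min_samples(1.0e-4))   # -> 100000
```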
How can I make the component more reliable or improve its quality?
If the results for the reliability or failure probability of the component do not reach the expected levels, or
if the scatter of an output parameter is too wide and therefore not robust enough for a quality product,
then you should make changes to the important input variables first. Modifying an input variable that is
insignificant would be a waste of time.
Of course, you cannot control all uncertainty parameters. Typical examples where you have very limited
means of control are material properties and the environment: if it turns out that the outdoor temperature
is the most important input parameter, then there is probably nothing you can do about it. However, even
if you find out that the reliability or quality of your product is driven by parameters that you cannot control,
this information is still important -- it may indicate a fundamental flaw in your product design. You should
watch for influential parameters like these.
If the input variable you want to tackle is a geometry-related parameter or a geometric tolerance, then improving the reliability and quality of your product may require changing to a more
accurate manufacturing process or machine. If it is a material property,
then there might be nothing you can do about it. However, if you had only a few measurements of a material property and consequently used only a rough guess about its scatter, and that material property turns
out to be an important driver of product reliability and quality, then it makes sense to collect more raw data.
How can I save money without sacrificing the reliability or the quality of the product?
If the results for the reliability or failure probability of the component are acceptable, or if the scatter of an
output parameter is small and therefore robust enough for a quality product, then the question is usually
how to save money without reducing the reliability or quality. In this case, you should first make
changes to the input variables that turned out to be insignificant, because they do not affect the reliability
or quality of your product. If geometrical properties or tolerances are insignificant, you can
consider applying a less expensive manufacturing process. If a material property turns out to be insignificant,
then this is not typically a good way to save money, because you are usually not in control of individual
material properties. However, the loads or boundary conditions can offer a potential for saving money;
how this can be exploited is highly problem-dependent.
Statistical Postprocessing
Convention: the following statistical measures are defined for a set of n sampled observations y_i.
1. Mean Value

   Mean is a measure of average for a set of observations. The mean of a set of n observations is defined
   as follows:

   \hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} y_i    (1)

2. Standard Deviation

   Standard deviation is a measure of dispersion from the mean for a set of observations. The standard
   deviation of a set of n observations is defined as follows:

   \hat{\sigma} = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} (y_i - \hat{\mu})^2 }    (2)

3. Sigma Level

   Sigma level is calculated as the inverse cumulative distribution function of a standard Gaussian distribution at a given percentile. Sigma level is used in conjunction with standard deviation to measure
   data dispersion from the mean. For example, for a pair of quantile X and sigma level n, it means that
   value X is about n standard deviations away from the sample mean.
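These first three measures map directly onto Python's standard library; the following sketch (illustrative only, with hypothetical sample values, and not ANSYS code) reproduces Equations 1 and 2 plus a sigma level:

```python
import statistics

# Hypothetical sample observations y_i, for illustration only.
y = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

mu = statistics.fmean(y)     # Equation (1): (1/n) * sum(y_i)
sigma = statistics.stdev(y)  # Equation (2): sqrt(sum((y_i - mu)^2) / (n - 1))

# Sigma level: inverse CDF of a standard Gaussian at a given percentile.
# The 97.72nd percentile sits about 2 standard deviations above the mean.
level = statistics.NormalDist().inv_cdf(0.9772498680518208)
```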
4. Skewness

   Skewness is a measure of the degree of asymmetry around the mean for a set of observations. The observations are symmetric if the distribution of the observations looks the same to the left and right of the
   mean. Negative skewness indicates that the distribution of the observations is left-skewed. Positive
   skewness indicates that the distribution of the observations is right-skewed. The skewness of a set of
   n observations is defined as follows:
   \hat{\gamma} = \frac{n}{(n-1)(n-2)} \sum_{i=1}^{n} \left( \frac{y_i - \hat{\mu}}{\hat{\sigma}} \right)^3    (3)

5. Kurtosis
Kurtosis is a measure of relative peakedness/flatness of distribution for a set of observations. It is
generally a relative comparison with the normal distribution. Negative kurtosis indicates a relatively
flat distribution of the observations compared to the normal distribution, while positive kurtosis indicates
a relatively peaked distribution of the observations. As such, the kurtosis of a set of n observations is
defined with calibration to the normal distribution as follows:
   \hat{\kappa} = \frac{n(n+1)}{(n-1)(n-2)(n-3)} \sum_{i=1}^{n} \left( \frac{y_i - \hat{\mu}}{\hat{\sigma}} \right)^4 - \frac{3(n-1)^2}{(n-2)(n-3)}    (4)

   where \hat{\mu} and \hat{\sigma} represent the mean and standard deviation, respectively.
   (Figure: example distributions with kurtosis < 0, = 0, and > 0.)
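Equations 3 and 4 can be sketched in plain Python (an illustrative implementation using the sample statistics above, not ANSYS code):

```python
import statistics

def skewness(y):
    # Equation (3): (n / ((n-1)(n-2))) * sum(((y_i - mu) / sigma)^3)
    n = len(y)
    mu, sd = statistics.fmean(y), statistics.stdev(y)
    return n / ((n - 1) * (n - 2)) * sum(((v - mu) / sd) ** 3 for v in y)

def kurtosis(y):
    # Equation (4): calibrated so a Gaussian sample tends toward zero;
    # negative = flatter than normal, positive = more peaked.
    n = len(y)
    mu, sd = statistics.fmean(y), statistics.stdev(y)
    s4 = sum(((v - mu) / sd) ** 4 for v in y)
    return (n * (n + 1) / ((n - 1) * (n - 2) * (n - 3)) * s4
            - 3 * (n - 1) ** 2 / ((n - 2) * (n - 3)))
```

A symmetric sample has zero skewness; a long right tail gives a positive value.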
6. Shannon Entropy
Entropy is a concept first introduced in classical thermodynamics. It is a measure of disorder in a system.
The concept was later extended by Claude Shannon to the domain of information and communication as a measure of uncertainty about the true content of a message. In statistics,
entropy is further reformulated as a function of mass density, as follows:
   H = -\sum_i P_i \ln P_i    (5)
where Pi represents mass density of a parameter. In the context of statistics/probability, entropy becomes
a measure of complexity and predictability of a parameter. A more complex, and less predictable,
parameter carries higher entropy, and vice versa. For example, a parameter characterized with uniform
distribution has higher entropy than that with truncated triangular distribution (within the same
bounds). Hence, the parameter characterized with uniform distribution is less predictable compared
to the triangular distribution.
   P_i = \frac{N_i}{N\,w}    (6)
The probability mass is used in normalized form, so that not only the shape but also the range of
variability of the distribution is accounted for. This is shown in Equation 6 (p. 111), where Ni/N is the
relative frequency of the parameter falling into a certain interval, and w is the width of the interval.
As a result, Shannon entropy can have a negative value. (Figure: comparisons of the Shannon
entropy for several distributions, where S2 is smaller than S1.)
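A minimal sketch of Equations 5 and 6 (the equal-width binning over the sample range is an assumption for illustration, not the product's exact discretization):

```python
import math

def shannon_entropy(samples, bins=10):
    # Equation (6): width-normalized mass density P_i = (N_i / N) / w.
    lo, hi = min(samples), max(samples)
    w = (hi - lo) / bins
    counts = [0] * bins
    for v in samples:
        counts[min(int((v - lo) / w), bins - 1)] += 1
    n = len(samples)
    # Equation (5): H = -sum(P_i * ln P_i); because P_i is a density,
    # the result can be negative for tightly concentrated samples.
    return -sum((c / n / w) * math.log(c / n / w) for c in counts if c)

# A uniform spread carries higher entropy (is less predictable) than a
# peaked sample over the same bounds.
uniform = [i / 100 for i in range(101)]
peaked = [0.0, 1.0] + [0.45 + 0.001 * i for i in range(99)]
```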
7. Signal-to-Noise Ratios: Nominal is Best, Smaller is Better, Larger is Better
Signal-to-noise (S/N) ratios are measures used to optimize control parameters in order to achieve a
robust design. These measures were first proposed by Dr. Taguchi of Nippon Telephone and Telegraph
Company, Japan, to reduce design noises in manufacturing processes. These design noises are normally
expressed in statistical terms such as mean and standard deviation (or variance). In computer aided
engineering (CAE), these ratios have been widely used to achieve a robust design in computer simulations. For a design to be robust, the simulations are carried out with the objective of minimizing the
variances. The variance can be minimized with or without targeting a
certain mean. In Design Exploration, minimizing the variance while targeting a certain mean (called Nominal is
Best) is provided, and is given as follows:
   \eta = 20 \log_{10} \left( \frac{\hat{\mu}}{\hat{\sigma}} \right)    (7)

   where \hat{\mu} and \hat{\sigma} represent the mean and standard deviation, respectively.
Nominal is Best is a measure used for characterizing design parameters such as model dimension in
a tolerance design, in which a specific dimension is required, with an acceptable standard deviation.
In some designs, however, the objectives are to seek a minimum or a maximum possible at the price
of any variance.
For the cases of minimum possible (Smaller is Better), which is a measure used for characterizing
output parameters such as model deformation, the S/N is expressed as follows:
   \eta = -10 \log_{10} \left( \frac{1}{n} \sum_{i=1}^{n} y_i^2 \right)    (8)
For the cases of maximum possible (Larger is Better), which is a measure used for characterizing
output parameters such as material yield, the S/N is formulated as follows:
   \eta = -10 \log_{10} \left( \frac{1}{n} \sum_{i=1}^{n} \frac{1}{y_i^2} \right)    (9)
These three S/N ratios are mutually exclusive. Only one of the ratios can be optimized for any given
parameter. For a better design, these ratios should be maximized.
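The three ratios can be sketched as follows (illustrative only; base-10 logarithms are assumed, as is conventional for decibel-style S/N measures):

```python
import math
import statistics

def sn_nominal_is_best(y):
    # Equation (7): eta = 20 * log10(mean / stdev);
    # rewards small scatter relative to the mean.
    return 20 * math.log10(statistics.fmean(y) / statistics.stdev(y))

def sn_smaller_is_better(y):
    # Equation (8): eta = -10 * log10((1/n) * sum(y_i^2))
    return -10 * math.log10(statistics.fmean([v * v for v in y]))

def sn_larger_is_better(y):
    # Equation (9): eta = -10 * log10((1/n) * sum(1 / y_i^2))
    return -10 * math.log10(statistics.fmean([1.0 / (v * v) for v in y]))
```

All three are maximized for a better design: shrinking scatter raises Equation 7, shrinking magnitudes raises Equation 8, and growing magnitudes raises Equation 9.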
8. Minimum and Maximum Values

   The minimum and maximum values of a set of n observations are:

   \hat{y}_{min} = \min_i y_i    (10)

   \hat{y}_{max} = \max_i y_i    (11)
Note
The minimum and maximum values strongly depend on the number of samples. If you
generate a new sample set with more samples, then chances are that the minimum value
will be lower in the larger sample set. Likewise, the maximum value of the larger sample
set will most likely be higher than for the original sample set. Hence, the minimum and
maximum values should not be interpreted as absolute physical bounds.
9. Statistical Sensitivities

   The impact of an input parameter on an output parameter depends on two aspects:

   - The amount by which the output parameter varies across the variation range of the input parameter.
   - The variation range of the input parameter. Typically, the wider the variation range is, the larger
     the impact of the input parameter will be.

   The statistical sensitivities used under Six Sigma Analysis are based on the Spearman rank-order correlation coefficients, which take both of those aspects into account at the same time.
Basing sensitivities on correlation coefficients follows the concept that the more strongly an output
parameter is correlated with a particular input parameter, the more sensitive it is with respect to
changes of that input parameter.
10. Spearman Rank-Order Correlation Coefficient
The Spearman rank-order correlation coefficient is:
   r_s = \frac{\sum_{i} (R_i - \bar{R})(S_i - \bar{S})}{\sqrt{\sum_{i} (R_i - \bar{R})^2} \sqrt{\sum_{i} (S_i - \bar{S})^2}}    (12)
where:

   R_i = rank of x_i within the set of observations x_1, x_2, ..., x_n
   S_i = rank of y_i within the set of observations y_1, y_2, ..., y_n
   \bar{R}, \bar{S} = average ranks of R_i and S_i, respectively
The statistic

   t = r_s \sqrt{\frac{n - 2}{1 - r_s^2}}    (13)

is approximately distributed like Student's t-distribution with \nu = n - 2 degrees of freedom. The
cumulative distribution function of Student's t-distribution is:
   A(t \mid \nu) = \frac{1}{\sqrt{\nu}\, B\!\left(\frac{1}{2}, \frac{\nu}{2}\right)} \int_{-t}^{t} \left( 1 + \frac{x^2}{\nu} \right)^{-\frac{\nu + 1}{2}} dx    (14)
where:
B(...) = complete Beta function
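Equation 12 is simply the Pearson correlation applied to rank vectors; a self-contained sketch (stdlib only, illustrative and not ANSYS code):

```python
import statistics

def ranks(values):
    # Rank each observation (1-based); tied values share the average rank.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Equation (12): correlation of the rank vectors R_i and S_i.
    rx, ry = ranks(x), ranks(y)
    mx, my = statistics.fmean(rx), statistics.fmean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Any strictly monotone relationship, linear or not, yields r_s = ±1, which is what makes the rank-order coefficient robust for sensitivity rankings.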
   \sum_{k=i}^{n} \frac{n!}{(n-k)!\,k!} F_i^k (1 - F_i)^{n-k} = 50\%    (15)
   F(x) = F_i + (F_{i+1} - F_i) \frac{x - x_i}{x_{i+1} - x_i}    (16)
The cumulative distribution function of sampled data can only be given at the individual sampled
values x1, x2, ..., xi, xi+1, ..., xn using Equation 15 (p. 114). Hence, the evaluation of the inverse cumulative
distribution function for any arbitrary probability value requires an interpolation between the available
data points.
The evaluation of the inverse of the empirical distribution function is most important in the tails of
the distribution, where the slope of the empirical distribution function is flat. In this case, a direct interpolation between the points of the empirical distribution function similar to Equation 16 (p. 114) can
lead to inaccurate results. Therefore, the inverse standard normal distribution function \Phi^{-1} is applied
for all probabilities involved in the interpolation. If p is the requested probability for which we are
looking for the inverse cumulative distribution function value, and p is between Fi and Fi+1, then the
inverse cumulative distribution function value can be calculated using:
   x = x_i + (x_{i+1} - x_i) \frac{\Phi^{-1}(p) - \Phi^{-1}(F_i)}{\Phi^{-1}(F_{i+1}) - \Phi^{-1}(F_i)}    (17)
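Equation 17 can be sketched with the standard library's NormalDist (illustrative only; assumes p lies within the sampled CDF range):

```python
from statistics import NormalDist

PHI_INV = NormalDist().inv_cdf  # standard normal inverse CDF

def inv_empirical_cdf(p, xs, Fs):
    # Equation (17): interpolate between neighboring sampled points
    # (x_i, F_i) in standard-normal-transformed probability space, which
    # behaves far better than linear interpolation in the flat CDF tails.
    for i in range(len(xs) - 1):
        if Fs[i] <= p <= Fs[i + 1]:
            t = ((PHI_INV(p) - PHI_INV(Fs[i]))
                 / (PHI_INV(Fs[i + 1]) - PHI_INV(Fs[i])))
            return xs[i] + (xs[i + 1] - xs[i]) * t
    raise ValueError("p lies outside the sampled range of the CDF")
```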
Troubleshooting
Table view and Charts view become unreadable because of the number of parameters and the length
of their names
In order to make your charts and tables more readable, consider using the short parameter name option:
Open the Tools>Options dialog, select Design Exploration, and uncheck the Display Parameter Full
Name preference.
A Design Exploration Update operation returns an error message saying that one or several Design
Points have failed to update
Click the Show Details button in the error dialog to see the list of Design Points that failed to update
and the associated error messages. Each failed Design Point is automatically preserved for you at the
project level, so you can Return to the Project and Edit the Parameter Set cell in order to see the failed
Design Points and investigate the reason for the failures.
This error means that the project failed to update correctly for the parameter values defined in the listed
Design Points. Failures can have a variety of causes, from a missing license to a CAD model generation failure. If you try to update the system again, only the failed Design Points will be
attempted in the new Update, and if the update is successful, the operation will complete. If failures
persist for a Design Point, copy its values to the Current Design Point, attempt a Project
Update, and edit the cells in the project that are not correctly updated in order to investigate.
Appendices
Design Point
Input Parameters
Mean Value
A measure of location often used to describe the general location of the bulk
of the scattering data of a random output parameter or of a statistical distribution function.
Mathematically, the mean value is the arithmetic average of the data. The
mean value also represents the center of gravity of the data points. Another
name for the mean value is the expected value.
Median Value
The statistical point where 50% of the data is below the value and 50% is
above.
For symmetrical distribution functions (Gaussian, uniform, etc.) the median
value and the mean value are identical, but for nonsymmetrical distributions,
they are different.
Output Parameters
Parameters that define the response outputs (e.g., volume) from the analysis.
Pareto Set
Response Point
Response Surface
The empirical relationship between a variable of interest, y, and a set of independent variables, x1, x2, x3, ... Usually the function y = F(x1, x2, x3, . . . ) is a
polynomial or some other well-defined relationship, which forms the response
surface model of y. When applied to design and analysis in the realm of CAD
and CAE, a response surface is the representation of the physical behavior of
a structure in terms of its independent variables. For example, a response
surface can be devised for the fundamental frequency (F) of a structure as a
function of CAD geometry parameters (R, L) and the modulus of elasticity (E);
that is, F = G(R,L,E).
Sample
Standard Deviation
Uncertainty Variables
Index
Symbols
2-D contour graph, 55
3-D contour graph, 56
C
contour graphs
2-D, 55
3-D, 56
correlation
parameters, 20
correlation matrix
object reference, 21
correlation scatter chart
object reference, 21
correlation sensitivities chart
object reference, 22
cumulative distribution function, 105
D
database
resuming, 4
decision support, 81
Design Exploration, 17
introduction, 1
options, 11
overview, 17
user interface, 5
workflow, 9
design goals
defining, 67
design of experiments, 18
definition, 119
six sigma, 29
design points, 9
adding, 60, 69
definition, 119
exporting, 60
importing, 60
uses, 60
using, 35
design points vs parameter plot
object reference, 20
design values, 33
determination matrix
object reference, 22
distribution
choosing for random variables, 98
data scatter values, 98
measured data, 98
no data, 99
types, 100
DOE
definition, 119
DOE solution, 36
G
genetic algorithm, 81
glossary, 119
goal driven optimization, 73
creating design points, 69
decision support, 81
defining goals, 67
genetic algorithm, 81
guidelines, 74
MOGA option, 65
NLPQL option, 66
options, 63
Pareto dominance, 80
postprocessing, 76
principles, 73
sampling method, 79
screening option, 64
sensitivities, 69
theory, 79
tradeoff studies, 69
using, 63
goodness of fit, 53
H
histogram, 105
I
input parameters
definition, 119
using, 32
L
local sensitivities
description, 57
local sensitivity chart
description, 57
object reference, 25
M
mean value
definition, 119
median value
definition, 119
min-max search, 53
object reference, 24
O
optimization, 63
defining goals, 67
MOGA option, 65
NLPQL option, 66
object reference, 26
screening option, 64
options, 11
output parameters, 34
definition, 119
R
response chart
object reference, 25
response charts, 54
2-D contour graph, 55
3-D contour graph, 56
goodness of fit, 53
local sensitivities, 57
spider charts, 56
x-y graph, 55
response points, 9
adding, 60, 69
definition, 119
response surface
definition, 119
Kriging, 45
meta model refinement, 50
meta model types, 42
neural network, 49
non-parametric regression, 46
object reference, 22
standard, 43
responses
min-max search, 53
overview, 42
response charts, 54
result parameter
definition, 119
S
sample
definition, 119
sample generation
MOGA, 65
NLPQL, 66
screening, 64, 79
six sigma analysis, 104
samples chart
object reference, 28
sensitivities
global, 59, 69, 71
goal driven optimization, 69
local, 57, 59
six sigma analysis, 71, 107
sensitivities chart
global, 28, 31
goal driven optimization, 28, 69
object reference, 28, 31
six sigma, 31
six sigma analysis, 71
sensitivity chart
description, 77
local, 25, 57
response surface, 25, 57
six sigma
object reference, 28
six sigma analysis, 30, 95
choosing variables, 97
guidelines, 97
performing, 70
postprocessing, 70, 105
sample generation, 104
statistical measures, 71
theory, 109
understanding, 96
using, 70
six sigma analysis postprocessing, 70
charts, 71
sensitivities, 71
tables, 71
solution
DOE, 36
spider chart
description, 56
object reference, 26
standard deviation
definition, 119
T
tradeoff chart
description, 77
object reference, 27
tradeoff studies, 69
troubleshooting, 117
U
uncertainty variable
definition, 119
uncertainty variables, 33
choosing, 97
choosing distribution, 98
response surface analyses, 97
user interface, 5
W
workflow, 9
X
x-y graph, 55