
Optimization

Introduction
What is optimisation? And why?

Optimization is the act of obtaining the best result under given circumstances. Optimization can be
defined as the process of finding the conditions that give the maximum or minimum value of a
function.

If a point x* corresponds to the minimum value of function f(x), the same point also corresponds to
the maximum value of the negative of the function, - f (x). Thus, without loss of generality,
optimization can be taken to mean minimization since the maximum of a function can be found by
seeking the minimum of the negative of the same function. There is no single method available for
solving all optimization problems efficiently. Hence a number of optimization methods have been
developed for solving different types of optimization problems.

Optimisation Design Formulation


Statement of an Optimization Problem

Design Vector
Any engineering system or component is defined by a set of quantities, and the design is sensitive to changes in the values of these quantities. Some of these quantities, which largely affect the design of the engineering system or component, are treated as variables during the design process. In general, certain quantities are fixed at the outset; these are called pre-assigned parameters. All the other quantities are treated as variables in the design process and are called the design variables or decision variables. The design variables are collectively represented as the design vector X = (x1, x2, ..., xn).

Constraints
In many practical problems, the design variables cannot be chosen arbitrarily; rather, they have to
satisfy certain specified functional and other requirements. The restrictions that must be satisfied to
produce an acceptable design are collectively called design constraints. For illustration, consider an
optimization problem with only inequality constraints gj (X) <=0. The set of values of X that satisfy
the equation forms a hypersurface in the design space and is called a constraint surface. Thus the
points lying on the hypersurface will satisfy the constraint gj (X) critically, whereas the points lying in
the region where gj (X) > 0 are infeasible or unacceptable, and the points lying in the region where gj
(X) < 0 are feasible or acceptable. The collection of all the constraint surfaces gj (X) = 0, j = 1,2, ... ,m,
which separates the acceptable region is called the composite constraint surface.

The next figure shows a hypothetical two-dimensional design space where the infeasible region is indicated by hatched lines. A design point that lies on one or more constraint surfaces is called a bound point, and the associated constraint is called an active constraint. Design points that do not lie on any constraint surface are known as free points.

Objective Function
The conventional design procedures aim at finding an acceptable or adequate design which merely satisfies the functional and other requirements of the problem. In general, there will be more than one acceptable design, and the purpose of optimization is to choose the best of the many acceptable designs available. Thus a criterion has to be chosen for comparing the different alternative acceptable designs and for selecting the best one. The criterion with respect to which the design is optimized, when expressed as a function of the design variables, is known as the criterion, merit, or objective function. The choice of objective function is governed by the nature of the problem: in aircraft and aerospace structural design problems, the objective function for minimization is generally taken as the weight; in civil engineering structural designs, the objective is usually taken as the minimization of cost; and the maximization of mechanical efficiency is the obvious choice of an objective in mechanical engineering systems design. The selection of the objective function can be one of the most important decisions in the whole optimum design process.

Variable Bounds
Generally, a maximum and a minimum limit are set on the value of each design variable. These bounds ensure that the search is confined to a closed and bounded region of the design space.
An optimization problem is then often formulated as:

Minimize f(X)
subject to g_j(X) ≤ 0, j = 1, 2, ..., m
h_k(X) = 0, k = 1, 2, ..., p
x_i^(L) ≤ x_i ≤ x_i^(U), i = 1, 2, ..., n

which is called the Non-Linear Programming (NLP) format.
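
As a minimal sketch of how this format maps onto code (assuming SciPy is available; the objective, constraint, and bounds below are purely illustrative, not taken from the notes), the same structure can be expressed with scipy.optimize.minimize. Note that SciPy's 'ineq' constraints expect fun(x) >= 0, so a constraint g(X) <= 0 is passed as -g(X):

import numpy as np
from scipy.optimize import minimize

# Hypothetical objective: f(X) = (x1 - 1)^2 + (x2 - 2)^2
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

# Hypothetical inequality constraint g(X) = x1 + x2 - 2 <= 0.
# SciPy's 'ineq' type means fun(x) >= 0, so we pass -g(X).
cons = [{"type": "ineq", "fun": lambda x: -(x[0] + x[1] - 2.0)}]

# Variable bounds x_i^(L) <= x_i <= x_i^(U)
bounds = [(0.0, 5.0), (0.0, 5.0)]

res = minimize(f, x0=np.array([0.5, 0.5]), method="SLSQP",
               bounds=bounds, constraints=cons)
print(res.x, res.fun)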


SINGLE VARIABLE OPTIMIZATION
The objective function is a function of a single variable.

Optimality Criteria:
(i) Local optimal point: A point or solution x* is said to be a local optimal point, if there exists no
point in the neighbourhood of x* which is better than x*. In the parlance of minimization problems,
a point x* is a locally minimal point if no point in the neighbourhood has a function value smaller
than f(x*).
(ii) Global optimal point: A point or solution x** is said to be a global optimal point, if there exists no
point in the entire search space which is better than the point x**. Similarly, a point x** is a global
minimal point if no point in the entire search space has a function value smaller than f(x**).
(iii) Inflection point: A point x* is said to be an inflection point if the function value increases locally as x increases through x* and decreases locally as x decreases below x*, or vice versa; in other words, the function is locally monotonic through x* even though the first derivative may vanish there.
Certain characteristics of the underlying objective function can be exploited to check whether a
point is either a local minimum or a global minimum, or an inflection point.

Necessary Condition: If x* is an optimum (minimum or maximum) of f(x), then the first derivative vanishes there, f'(x*) = 0. A point satisfying f'(x*) = 0 is called a stationary point and may be a minimum, a maximum, or an inflection point.

Sufficient Condition: Suppose that at a point x* the first derivative is zero, and let n denote the order of the first nonzero higher-order derivative, f^(n)(x*) ≠ 0; then

• If n is odd, x* is an inflection point.


• If n is even, x* is a local optimum.
1. If f^(n)(x*) is positive, x* is a local minimum.
2. If f^(n)(x*) is negative, x* is a local maximum.
It is important to note that even though a point can be tested for local optimality using the above conditions, the global optimality of a point cannot be established with them. The common procedure for obtaining the global optimal point is to find a number of local optimal points using the above conditions and to choose the best among them.
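
As a small sketch of this derivative test (assuming SymPy is available; the helper name classify_stationary_point is illustrative only, not part of the notes):

import sympy as sp

x = sp.symbols('x')

def classify_stationary_point(f_expr, x_star, max_order=6):
    # Assumes f'(x_star) = 0; inspect the first nonzero higher-order derivative.
    for n in range(2, max_order + 1):
        d_n = sp.diff(f_expr, x, n).subs(x, x_star)
        if d_n != 0:
            if n % 2 == 1:
                return "inflection point"           # n odd
            return "local minimum" if d_n > 0 else "local maximum"  # n even
    return "inconclusive up to order {}".format(max_order)

# f(x) = x**4: f''(0) = f'''(0) = 0 and f''''(0) = 24 > 0, so n = 4 (even) -> local minimum
print(classify_stationary_point(x**4, 0))
# f(x) = x**3: first nonzero derivative at 0 has order n = 3 (odd) -> inflection point
print(classify_stationary_point(x**3, 0))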
Direct Search Method: Golden Section Method
In this algorithm, the search space (a, b) is first linearly mapped onto the unit interval (0, 1). Thereafter, two points located a fraction τ from either end of the search space are chosen, so that at every iteration a fraction (1 − τ) of the current interval is eliminated. Requiring that one of the two interior points can be reused in the next iteration leads to the condition 1 − τ = τ², which yields the golden section ratio τ ≈ 0.618.

Algorithm
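
A minimal Python sketch of a golden section search, assuming a unimodal objective on (a, b); the test function and tolerance are illustrative only:

def golden_section_search(f, a, b, tol=1e-6):
    # Minimize a unimodal f on (a, b) using the golden section ratio tau = 0.618.
    tau = (5 ** 0.5 - 1) / 2            # satisfies 1 - tau = tau**2
    x1 = b - tau * (b - a)              # interior point nearer to a
    x2 = a + tau * (b - a)              # interior point nearer to b
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:
        if f1 > f2:                     # minimum cannot lie in (a, x1]; discard it
            a, x1, f1 = x1, x2, f2
            x2 = a + tau * (b - a)
            f2 = f(x2)
        else:                           # minimum cannot lie in [x2, b); discard it
            b, x2, f2 = x2, x1, f1
            x1 = b - tau * (b - a)
            f1 = f(x1)
    return (a + b) / 2

# Example: minimum of (x - 2)^2 + 1 on (0, 5) is at x = 2
print(golden_section_search(lambda x: (x - 2) ** 2 + 1, 0.0, 5.0))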

Gradient-Based Method: Bisection Method


A simple bisection procedure iteratively converges on a solution which is known to lie inside some interval (a, b). It proceeds by evaluating the derivative of the function at the midpoint x_m of the current interval and testing, from the sign of f'(x_m), in which of the subintervals (a, x_m) or (x_m, b) the solution lies. The procedure is then repeated with the new interval as often as needed to locate the solution to the desired accuracy.

Algorithm
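
A minimal sketch in Python, assuming the derivative f'(x) is supplied as a separate function and changes sign from negative to positive across the minimum (the test function is illustrative):

def bisection_minimize(df, a, b, tol=1e-6):
    # Locate a minimum of f in (a, b) by bisecting on the sign of df = f'.
    while (b - a) > tol:
        mid = (a + b) / 2
        slope = df(mid)
        if slope == 0:                  # stationary point found exactly
            return mid
        if slope > 0:                   # f is increasing at mid: minimum lies to the left
            b = mid
        else:                           # f is decreasing at mid: minimum lies to the right
            a = mid
    return (a + b) / 2

# Example: f(x) = (x - 2)^2 + 1, f'(x) = 2(x - 2); minimum at x = 2
print(bisection_minimize(lambda x: 2 * (x - 2), 0.0, 5.0))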
MULTI-VARIABLE OPTIMIZATION
The objective function is a function of more than one variable.

Optimality Criteria:
As in the single-variable case, there exist minimum, maximum, and saddle (inflection) points, and the conditions are as follows.

Necessary Condition: If f(X) has an extreme point (maximum or minimum) at X = X* and if the first partial derivatives of f(X) exist at X*, then

∂f/∂x_1 (X*) = ∂f/∂x_2 (X*) = ... = ∂f/∂x_n (X*) = 0, i.e. the gradient ∇f(X*) = 0.

Sufficient condition: A sufficient condition for a stationary point X* to be an extreme point is that the matrix of second partial derivatives (the Hessian matrix) of f(X) evaluated at X* is (i) positive definite when X* is a relative/local minimum point, and (ii) negative definite when X* is a relative/local maximum point. If the Hessian is indefinite, X* is a saddle point; if it is only semi-definite, the test is inconclusive and higher-order terms must be examined.

Hessian Matrix
The Hessian matrix H of f(X) is the n × n symmetric matrix of second partial derivatives, with entries H_ij = ∂²f/∂x_i ∂x_j.

POSITIVE DEFINITE MATRIX: a symmetric matrix is positive definite if all its eigenvalues are positive (equivalently, if all its leading principal minors are positive); it is negative definite if all its eigenvalues are negative.
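
As an illustrative check (assuming NumPy; the quadratic example is hypothetical), the definiteness of the Hessian at a stationary point can be tested through its eigenvalues:

import numpy as np

def classify_by_hessian(H, tol=1e-10):
    # Classify a stationary point from the eigenvalues of its (symmetric) Hessian matrix.
    eig = np.linalg.eigvalsh(H)
    if np.all(eig > tol):
        return "local minimum (positive definite)"
    if np.all(eig < -tol):
        return "local maximum (negative definite)"
    if np.any(eig > tol) and np.any(eig < -tol):
        return "saddle point (indefinite)"
    return "test inconclusive (semi-definite)"

# Example: f(x1, x2) = x1^2 + 2*x2^2 has Hessian [[2, 0], [0, 4]] at (0, 0)
print(classify_by_hessian(np.array([[2.0, 0.0], [0.0, 4.0]])))   # local minimum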


Example:

Find the displacements corresponding to the minimum potential energy condition.
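
As a hedged sketch of the idea (the two-spring system, stiffnesses, and loads below are assumed for illustration and are not the data of the original example), the potential energy Π(u) = ½ uᵀK u − Fᵀu is quadratic, so the minimizing displacements satisfy ∇Π = K u − F = 0:

import numpy as np

# Hypothetical two-spring system: stiffnesses k1, k2 and nodal forces F1, F2 (assumed values)
k1, k2 = 100.0, 200.0                  # N/mm
F = np.array([10.0, 25.0])             # N

# Illustrative stiffness matrix for two springs in series fixed at one end
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])

# Potential energy PI(u) = 0.5 * u^T K u - F^T u is minimized where grad PI = K u - F = 0
u = np.linalg.solve(K, F)
print("displacements:", u)
print("potential energy at minimum:", 0.5 * u @ K @ u - F @ u)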


We will just do one algorithm for this,
Gradient Descent Method: Steepest Descent Method
At every iteration, the derivative (gradient) is computed at the current point and a unidirectional search is performed along the negative of this derivative, the steepest descent direction, to find the minimum point along that direction. The minimum point found becomes the current point and the search is continued from there. The algorithm terminates when a point having a sufficiently small gradient vector is found. This algorithm guarantees improvement in the function value at every iteration.

Algorithm
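
A minimal sketch of the steepest descent algorithm in Python (assuming NumPy; the golden-section line search and the quadratic test function are illustrative choices, not prescribed by the notes):

import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=200):
    # Steepest descent with a golden-section line search along -grad f.
    tau = (5 ** 0.5 - 1) / 2
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:     # small enough gradient: stop
            break
        d = -g                          # search direction: negative gradient
        # Unidirectional (line) search for the best step length in (0, 1]
        a, b = 0.0, 1.0
        while (b - a) > 1e-8:
            s1, s2 = b - tau * (b - a), a + tau * (b - a)
            if f(x + s1 * d) > f(x + s2 * d):
                a = s1
            else:
                b = s2
        x = x + (a + b) / 2 * d
    return x

# Example: minimize f(x1, x2) = (x1 - 1)^2 + 10*(x2 - 2)^2
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] - 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] - 2)])
print(steepest_descent(f, grad, [0.0, 0.0]))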
CONSTRAINED OPTIMIZATION

A good way to approach this is by using the concept of Lagrange multipliers.

Quick reminder: what is a Lagrange multiplier?
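
In its standard equality-constrained form: to minimize f(X) subject to h_k(X) = 0, k = 1, 2, ..., p, one forms the Lagrangian

L(X, λ) = f(X) + Σ_k λ_k h_k(X),

and candidate optima are the stationary points of L, i.e. points where ∂L/∂x_i = 0 for all i and ∂L/∂λ_k = h_k(X) = 0 for all k. Each multiplier λ_k measures the sensitivity of the optimum value of f to a change in the corresponding constraint.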


Optimality Conditions: Kuhn-Tucker Conditions

The Kuhn-Tucker (Karush-Kuhn-Tucker) conditions are the most general first-order conditions for identifying optimal (minimum) points of a constrained mathematical optimization problem.
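
Stated for the NLP form given earlier (minimize f(X) subject to g_j(X) ≤ 0, j = 1, ..., m, and h_k(X) = 0, k = 1, ..., p), the Kuhn-Tucker conditions at a candidate minimum X* with multipliers u_j and v_k are:

∇f(X*) + Σ_j u_j ∇g_j(X*) + Σ_k v_k ∇h_k(X*) = 0      (stationarity)
g_j(X*) ≤ 0 and h_k(X*) = 0                            (feasibility)
u_j g_j(X*) = 0 for all j                              (complementary slackness)
u_j ≥ 0 for all j                                      (non-negativity of the inequality multipliers)

For convex problems these conditions are also sufficient for a global minimum.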

Note:


• Inequality constraints are changed to equality constraints by adding slack variables where necessary (see the example after this list).
• If the objective function and the constraints are all linear and the variables satisfy non-negativity conditions, then the NLP reduces to a Linear Programming (LP) problem. A very famous algorithm for solving LP problems is the Simplex Method.
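
For example, an inequality constraint g_j(X) ≤ 0 can be converted into the equality g_j(X) + s_j² = 0 by introducing a slack variable s_j; in the linear case, a constraint such as 2x_1 + 3x_2 ≤ 12 becomes 2x_1 + 3x_2 + s = 12 with s ≥ 0.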

Simplex Method
Pdf provided
