
1 INTRODUCTION

Optimization means finding the best way to act in a decision making

situation.
People optimize:
How to spend your time in the best way?
How to get to a destination as fast as possible?
How to invest your savings to maximize returns?
Which job to apply for to maximize future happiness?
Companies and organizations optimize:
How to design operations to maximize profit or minimize cost?
Nature optimizes:
Systems of particles tend to settle in a state of minimum internal
energy
1.1 DEFINITION OF OPTIMIZATION PROBLEMS
OPTIMIZATION PROBLEM:
How to find the best solution x* among all possible choices x ∈ Ω?
Criterion for goodness:
objective function f(x) to be minimized or maximized
Examples of minimization:
cost, weight, distance, energy, loss, waste, processing time, raw
material consumption.
Examples of maximization:
profit, return, yield, utility, performance, efficiency, capacity,
strength, endurance.
The objective function depends on some variables x = (x1, ..., xn).
The decision maker sets the values for these variables in order to
minimize or maximize the objective f(x).
The variables might be e.g.
dimensions of a construction
control parameters of a chemical process
allocation of resources in a production plant
capital investments of a company etc.
Choice of x can be constrained e.g. by natural laws (energy
conservation, Newtonian mechanics, entropy, chemical reactions), raw
material properties, scarce resources, machine availability, work
force, industrial standards, quality requirements, structural stability,
resistance, operational environment.
Fields of application for nonlinear optimization:
mechanical engineering
chemical engineering
sciences
industry, production
economics
medicine, agriculture, government
Applications: data analysis and model fitting, CAD/CAM,
equilibrium models, minimum energy problems, structural design
problems, traffic equilibrium models, data networks and routing,
production planning, resource allocation, hydro-electric power
scheduling, computed tomography (image reconstruction), models for
alternative energy sources
Mathematical modeling
Problem formulation: required for solution procedures
Modeling process:
1. Definition of the optimization problem
2. Definition of the objective to be minimized or maximized.
3. Definition of the decision variables.
4. Definition of constraints.
5. Model parameters.
6. Formulation of the model: presenting the objective as a function of
the variables and the constraints as functional relationships,
equalities and/or inequalities of the variables.
Nonlinear optimization =
nonlinear programming, mathematical programming
Example 1.1
In food industry, massive volumes of soft drink and beer cans are
manufactured. Material costs should be minimized.
Design a closed cylindrical can with volume 500 ml (cm^3) that is
comfortable to handle and uses a minimum amount of aluminum.
Specifications:
- diameter of the end disk between 5 and 7.5 cm
- height of the can should be at least twice the diameter.
The can is made of aluminum sheet of constant thickness, so the
surface area of the can is to be minimized.
Decision variables
d = diameter of the cylinder ends
h = height of the can
Optimization problem:

minimize f(d, h) = πdh + πd²/2
s. t.  πd²h/4 − 500 = 0
       h − 2d ≥ 0
       5 ≤ d ≤ 7.5

Constrained nonlinear optimization problem with two variables.
Optimum solution: d = 6.83 cm, h = 13.66 cm
Surface area f = 366.15 cm²
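The solution can be verified numerically. A minimal sketch in Python (not part of the original notes): the volume equality gives h = 2000/(πd²), eliminating h, and the constraint h ≥ 2d then bounds the feasible diameters, so the one-variable surface area can be minimized directly.

```python
import math

def surface(d: float) -> float:
    """Total area pi*d*h + pi*d^2/2 of the closed can, with the height
    eliminated via the volume equality h = 2000 / (pi * d^2)."""
    return 2000.0 / d + math.pi * d * d / 2.0

# h >= 2d combined with the volume equality gives d^3 <= 1000/pi,
# so the feasible diameters are 5 <= d <= (1000/pi)**(1/3) ~ 6.83.
d_max = (1000.0 / math.pi) ** (1.0 / 3.0)

# The unconstrained minimizer of surface(d) is d = (2000/pi)**(1/3) ~ 8.60,
# above d_max, so the constrained minimum sits at the bound d = d_max.
d_opt = d_max
h_opt = 2000.0 / (math.pi * d_opt ** 2)
print(round(d_opt, 2), round(h_opt, 2), round(surface(d_opt), 2))
# -> 6.83 13.66 366.15
```

The closed form confirms that the constraint h = 2d is active at the optimum; a general NLP solver would return the same point.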
GENERAL FORMULATION OF AN OPTIMIZATION PROBLEM
WITH CONSTRAINTS
Decision variables, design variables (the quantities to be decided):
x = (x1, x2, ..., xn)^T ∈ R^n

minimize f(x)                          (1.0)
subject to
c_i(x) = 0,  i = 1, ..., m_e           (1.1)
c_i(x) ≥ 0,  i = m_e + 1, ..., m       (1.2)

No open bounds c_i(x) > 0.
The inequality constraints could be given in the form c_i(x) ≤ 0 as well.
Upper and lower bounds for variables (box constraints) may be
given separately as
lb ≤ x ≤ ub
Different constraint types may be given separately, e.g. linear and
nonlinear constraints. Input format depends on the software.
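The formulation (1.0)-(1.2) maps directly onto code. A minimal sketch (the particular constraint functions below are invented for illustration): store the equality and inequality constraints as lists of functions and test a candidate point for feasibility.

```python
# Constraints in the standard forms c_i(x) = 0 and c_i(x) >= 0;
# the specific functions are illustrative only.
equalities = [lambda x: x[0] + x[1] - 2.0]           # x1 + x2 = 2
inequalities = [lambda x: x[0],                       # x1 >= 0
                lambda x: 4.0 - x[0]**2 - x[1]**2]    # x1^2 + x2^2 <= 4

def is_feasible(x, tol=1e-8):
    """Check all constraints of form (1.1)-(1.2) at the point x."""
    return (all(abs(c(x)) <= tol for c in equalities)
            and all(c(x) >= -tol for c in inequalities))

print(is_feasible([1.0, 1.0]))   # True: on the line, inside the circle
print(is_feasible([3.0, -1.0]))  # False: violates x1^2 + x2^2 <= 4
```

This is exactly the convention many solvers expect, up to the sign of the inequalities, which is why the input format always has to be checked against the software's documentation.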
Maximization problems may be treated as minimization problems:
maximizing f gives the same solution as minimizing −f.
A symbolic optimization model may contain several parameters
(given by numbers or symbols). Model parameter values are given,
not to be chosen, and are considered constant in the process of
optimization.
Definitions:
Solution x = (x1, ..., xn)^T is feasible if it satisfies all constraints.
Feasible set Ω = the set of all feasible solutions.
If Ω = R^n, the problem is an unconstrained optimization problem.
The problem can be stated as
(NLP)  minimize f(x)
       subject to x ∈ Ω
Solution x* = (x1, ..., xn)^T ∈ Ω is a local minimum point
or local minimizer of (NLP) if for some positive ε
f(x*) ≤ f(x) for all x ∈ Ω such that ||x − x*|| < ε.
Solution x* = (x1, ..., xn)^T ∈ Ω is a global (or absolute) minimum point
or global minimizer of (NLP) if
f(x*) ≤ f(x) for all x ∈ Ω.
Maximum points are defined correspondingly: f(x*) ≥ f(x)
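The local and global definitions can be checked numerically on a one-variable example. A sketch (the function is invented for illustration; it has one local and one global minimum on [−2, 2]):

```python
# Illustrative function with a global minimum near x = -1.30 and a
# merely local minimum near x = 1.13.
def f(x):
    return x**4 - 3*x**2 + x

xs = [i / 1000.0 for i in range(-2000, 2001)]      # grid on [-2, 2]
x_global = min(xs, key=f)                          # grid argmin, near -1.30
x_local = min((x for x in xs if x > 0), key=f)     # argmin on x > 0, near 1.13

def is_local_min(x_star, eps=0.05):
    """Definition check: f(x*) <= f(x) for grid x with |x - x*| < eps."""
    return all(f(x_star) <= f(x) for x in xs if abs(x - x_star) < eps)

print(is_local_min(x_local), is_local_min(x_global))  # both are local minima
print(f(x_global) < f(x_local))                       # only x_global is global
```

Both points satisfy the local definition; only x_global satisfies the global one, which is the distinction the two definitions draw.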
Special cases (not treated in this course):
1) Discrete optimization problems: the feasible set is discrete, typically
finite.
2) Optimal control theory, calculus of variations: the feasible set Ω is a
function space (infinite-dimensional).
1.2 CLASSIFICATION OF OPTIMIZATION
PROBLEMS
Assume that the variables are continuous, i.e. they can take any real
value subject to the constraints, and that all functions are continuous.
Properties of the objective function f(x):
function of a single variable
smooth nonlinear function
nonsmooth nonlinear function
linear function
sum of squares of functions
convex function
Properties of the constraint functions c_i(x):
no constraints
simple bounds
linear functions
smooth nonlinear functions
nonsmooth nonlinear functions
Special cases:
LP problem, linear programming problem: f(x) and the c_i(x):s linear.
QP problem, quadratic programming problem: f(x) quadratic, the
c_i(x):s linear.
Convex programming problem: f(x) convex, equality constraint
functions linear and inequality constraint functions concave
(=> feasible set is convex).
We want solutions as fast as possible:
How to choose the most efficient method?
(In other words: How to get a good solution with as little computation
as possible?)
Some remarks:
Linear and nonlinear optimization: totally different algorithms.
Nonlinear optimization problems:
If a differentiable (smooth) function f is minimized, gradient based
optimization algorithms are most suitable.
Constrained optimization: treat nonlinear and linear constraints
separately, by appropriate strategies.
Newer, unconventional methods like genetic or evolutionary
algorithms have proven successful in solving difficult optimization
problems.
No general optimization method is best or most efficient for all
types of problems!
Pay attention to the special structure of the problem when
choosing a solution method.
Before solving the model, anticipate what could be the expected
solution and what is the magnitude / scale of the variables.
Why?
The model may be inadequate or inconsistent, people make typing
errors, but the computer solver may be unable to detect these faults.
After solving the model, check if the solution is plausible and
realizable.
1.3 MATHEMATICAL DEFINITIONS AND
PREREQUISITES
Gradient of a differentiable function f: R^n → R:
g(x) = ∇f(x) = (∂f/∂x1, ∂f/∂x2, ..., ∂f/∂xn)^T
Hessian matrix of a twice differentiable function, i.e. the n × n
matrix of second partial derivatives:

H(x) = ∇²f(x) =
  [ ∂²f/∂x1²       ∂²f/∂x1∂x2   ...   ∂²f/∂x1∂xn
    ∂²f/∂x2∂x1     ∂²f/∂x2²     ...   ∂²f/∂x2∂xn
    ...            ...          ...   ...
    ∂²f/∂xn∂x1     ∂²f/∂xn∂x2   ...   ∂²f/∂xn²   ]
For a quadratic function f(x) = (1/2) x^T Q x + c^T x + d:
∇f(x) = Qx + c
∇²f(x) = Q
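These formulas are easy to check numerically against a finite-difference gradient. A sketch with an illustrative symmetric Q and c (not from the notes):

```python
# Check grad f = Qx + c for the example quadratic
# f(x) = 1/2 x^T Q x + c^T x + d with symmetric Q.
Q = [[2.0, 1.0], [1.0, 4.0]]
c = [1.0, -1.0]

def f(x):
    quad = sum(x[i] * Q[i][j] * x[j] for i in range(2) for j in range(2))
    return 0.5 * quad + sum(c[i] * x[i] for i in range(2)) + 3.0

def grad_fd(x, h=1e-6):
    """Central-difference gradient, accurate to O(h^2)."""
    g = []
    for i in range(2):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

x = [1.0, 2.0]
analytic = [sum(Q[i][j] * x[j] for j in range(2)) + c[i] for i in range(2)]
print(analytic)    # Qx + c = [5.0, 8.0]
print(grad_fd(x))  # agrees up to floating-point rounding
```

For a quadratic the central difference is in fact exact up to rounding, since the O(h^2) remainder involves only third derivatives, which vanish.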
Important definitions:
Point x is a stationary point of f if ∇f(x) = 0.
The symmetric n × n matrix A and the corresponding quadratic form
x^T A x are said to be
1) positive definite, if (any of the conditions)
   x^T A x > 0 for all x ≠ 0
   all its eigenvalues are positive
   all leading principal minors
   D_i = det of the upper-left i × i submatrix
         [ a_11 a_12 ... a_1i
           a_21 a_22 ... a_2i
           ...
           a_i1 a_i2 ... a_ii ],  1 ≤ i ≤ n,
   are positive, that is
   D_1 = a_11 > 0,
   D_2 = det [ a_11 a_12
               a_21 a_22 ] > 0,
   ...,
   D_n = det(A) > 0
2) negative definite, if (any of the conditions)
   x^T A x < 0 for all x ≠ 0
   all its eigenvalues are negative
   D_i < 0 for odd indices i, and D_i > 0 for even indices i
3) indefinite, if (any of the conditions)
   x^T A x assumes both positive and negative values
   A has both positive and negative eigenvalues
   det(A) ≠ 0 but A is neither positive nor negative definite
4) positive semidefinite, if (any of the conditions)
   x^T A x ≥ 0 for all x
   all its eigenvalues are nonnegative
5) negative semidefinite, if (any of the conditions)
   x^T A x ≤ 0 for all x
   all its eigenvalues are nonpositive.
6) If det(A) = 0, A may be semidefinite or indefinite.
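The determinant conditions can be checked directly in code. A sketch (cofactor-expansion determinant, fine for small matrices; the example matrices are illustrative):

```python
# Positive definiteness via the leading principal minors D_1, ..., D_n.
def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    total = 0.0
    for j in range(len(A)):
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

def leading_minors(A):
    """D_i = determinant of the upper-left i x i submatrix."""
    return [det([row[:k] for row in A[:k]]) for k in range(1, len(A) + 1)]

def is_positive_definite(A):
    """A symmetric A is positive definite iff all D_i > 0."""
    return all(D > 0 for D in leading_minors(A))

A = [[2.0, -1.0], [-1.0, 2.0]]   # eigenvalues 1 and 3: positive definite
B = [[1.0, 2.0], [2.0, 1.0]]     # eigenvalues 3 and -1: indefinite
print(leading_minors(A), is_positive_definite(A))   # [2.0, 3.0] True
print(leading_minors(B), is_positive_definite(B))   # [1.0, -3.0] False
```

Note that the minor test as written applies to definiteness only; semidefiniteness (case 6 above) needs the eigenvalue conditions instead.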
Order notation (ordo symbol):
A univariate function f(x) is said to be of order x^p as x → 0, written
f(x) = O(x^p) as x → 0,
if there exists a positive number M such that, as |x| approaches zero,
|f(x)| ≤ M |x^p|.
TAYLOR'S FORMULA
Let f: R R be r times continuously differentiable.
Then there exists a scalar θ ∈ [0, 1] such that

f(x + h) = f(x) + h f'(x) + (1/2) h² f''(x) + ...
           + (1/(r−1)!) h^(r−1) f^(r−1)(x) + (1/r!) h^r f^(r)(x + θh)

or

f(x + h) = f(x) + h f'(x) + (1/2) h² f''(x) + ...
           + (1/(r−1)!) h^(r−1) f^(r−1)(x) + O(h^r)
Names: Taylor's formula, Taylor expansion, Taylor series,
Taylor's polynomial.
Taylor's polynomial of second order for a multivariate function:
Let f: R^n → R be a twice continuously differentiable function, x a given
point in R^n, p a unit vector in R^n, and h a scalar. Then

f(x + hp) = f(x) + h p^T ∇f(x) + (1/2) h² p^T H(x) p + O(h³)

For any vector d in R^n:

f(x + d) = f(x) + d^T ∇f(x) + (1/2) d^T H(x) d + O(||d||³)
FINITE DIFFERENCE APPROXIMATIONS TO DERIVATIVES
Taylor's formula gives
1) the forward-difference approximation for the derivative f'(x):

   f'(x) = [f(x + h) − f(x)] / h + O(h)

2) the central-difference approximation for the derivative f'(x):

   f'(x) = [f(x + h) − f(x − h)] / (2h) + O(h²)
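The difference between O(h) and O(h²) is visible numerically: halving h roughly halves the forward-difference error but quarters the central-difference error. A sketch on f(x) = sin x (an illustrative choice):

```python
import math

def forward(f, x, h):
    """Forward difference: error O(h)."""
    return (f(x + h) - f(x)) / h

def central(f, x, h):
    """Central difference: error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x, exact = 1.0, math.cos(1.0)   # d/dx sin(x) = cos(x)
for h in (1e-2, 5e-3, 2.5e-3):
    ef = abs(forward(math.sin, x, h) - exact)
    ec = abs(central(math.sin, x, h) - exact)
    print(f"h={h:.4g}  forward error={ef:.2e}  central error={ec:.2e}")
```

In practice h cannot be taken arbitrarily small either, since floating-point cancellation in f(x + h) − f(x) eventually dominates the truncation error.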
RATE OF CONVERGENCE OF ITERATIVE SEQUENCES
Assume the sequence {x_k} converges to x*, i.e.

  lim_{k→∞} ||x_k − x*|| = 0

The sequence {x_k} converges with order r when there are positive
constants c and N such that

  ||x_{k+1} − x*|| ≤ c ||x_k − x*||^r  when k ≥ N

or

  lim_{k→∞} ||x_{k+1} − x*|| / ||x_k − x*||^r < ∞  (i.e. finite)

When

  lim_{k→∞} ||x_{k+1} − x*|| / ||x_k − x*|| = 0

the convergence is superlinear.
Linear convergence: r = 1
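As an illustration of convergence order, Newton's iteration for solving x² = 2 (an illustrative example, not from the notes) converges with order r = 2: the error is roughly squared at every step, in contrast with a linearly convergent method such as bisection, whose error only shrinks by a constant factor per step.

```python
import math

root = math.sqrt(2.0)

# Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k) for f(x) = x^2 - 2.
x = 2.0
newton_errors = []
for _ in range(5):
    x = x - (x * x - 2.0) / (2.0 * x)
    newton_errors.append(abs(x - root))

# Errors fall roughly like e, e^2, e^4, ... down to machine precision:
# about 8.6e-2, 2.5e-3, 2.1e-6, 1.6e-12, then rounding level.
print(newton_errors)
```

The ratio ||x_{k+1} − x*|| / ||x_k − x*||² stays near the constant 1/(2√2) ≈ 0.354, matching the order-r definition with r = 2, and the ratio ||x_{k+1} − x*|| / ||x_k − x*|| tends to 0, so the convergence is in particular superlinear.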