argmin_{x∈S} f(x) := {x₀ ∈ S | f(x₀) ≤ f(x), ∀x ∈ S}     (1)

and

argmax_{x∈S} f(x) := {x₀ ∈ S | f(x₀) ≥ f(x), ∀x ∈ S}.     (2)
The function f and the set S, involved in a general optimization problem of type (1)
or (2), are called objective function and feasible set, respectively. Thus by a feasible
point of these problems we mean any element of S.
Remark 2.1 It is easily seen that maximization problems can be converted into
minimization problems, and vice-versa, since
argmax_{x∈S} f(x) = argmin_{x∈S} (−f)(x)  and  argmin_{x∈S} f(x) = argmax_{x∈S} (−f)(x).     (3)
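As a quick numerical sanity check of identity (3), the two sides can be compared on a small finite feasible set. The sketch below is illustrative only; the helper names (`argmin_set`, `argmax_set`) and the sample data are our own choices, not from the text.

```python
# Sanity check of (3): argmax f = argmin(-f) and argmin f = argmax(-f)
# on a finite feasible set S (all names and data are illustrative).

def argmin_set(f, S):
    m = min(f(x) for x in S)
    return {x for x in S if f(x) == m}

def argmax_set(f, S):
    M = max(f(x) for x in S)
    return {x for x in S if f(x) == M}

S = {-3, -1, 0, 2, 5}
f = lambda x: (x - 1) ** 2

assert argmax_set(f, S) == argmin_set(lambda x: -f(x), S)
assert argmin_set(f, S) == argmax_set(lambda x: -f(x), S)
```

Note that both sides are computed independently, so the asserts genuinely exercise the identity rather than restating it.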
(a) If g(x₁) ≤ g(x₂) for all x₁, x₂ ∈ S with f(x₁) ≤ f(x₂), then

    argmin_{x∈S} f(x) ⊆ argmin_{x∈S} g(x)  and  argmax_{x∈S} f(x) ⊆ argmax_{x∈S} g(x).

(b) If g(x₁) < g(x₂) for all x₁, x₂ ∈ S with f(x₁) < f(x₂), then

    argmin_{x∈S} g(x) ⊆ argmin_{x∈S} f(x)  and  argmax_{x∈S} g(x) ⊆ argmax_{x∈S} f(x).
(c) If the hypotheses of (a) and (b) both hold, then

    argmin_{x∈S} f(x) = argmin_{x∈S} g(x)  and  argmax_{x∈S} f(x) = argmax_{x∈S} g(x).

(d) If g(x₁) ≥ g(x₂) for all x₁, x₂ ∈ S with f(x₁) ≤ f(x₂), and g(x₁) > g(x₂) for all x₁, x₂ ∈ S with f(x₁) < f(x₂), then

    argmin_{x∈S} g(x) = argmax_{x∈S} f(x)  and  argmax_{x∈S} g(x) = argmin_{x∈S} f(x).
Solution: (a) Assume that for all x₁, x₂ ∈ S with f(x₁) ≤ f(x₂) we have g(x₁) ≤ g(x₂). Then it is easily seen that S_f^≥(f(x)) ⊆ S_g^≥(g(x)) and S_f^≤(f(x)) ⊆ S_g^≤(g(x)) for all x ∈ S. By Proposition 3.1 we get

    argmin_{x∈S} f(x) = {x₀ ∈ S | S ⊆ S_f^≥(f(x₀))}  and  argmax_{x∈S} f(x) = {x₀ ∈ S | S ⊆ S_f^≤(f(x₀))},

together with the analogous representations for g. Hence, for every x₀ ∈ argmin_{x∈S} f(x), we have S ⊆ S_f^≥(f(x₀)) ⊆ S_g^≥(g(x₀)), so x₀ ∈ argmin_{x∈S} g(x); the inclusion of the argmax sets follows in the same way from the inclusion of the sublevel sets.
(b) Assume that for all x₁, x₂ ∈ S with f(x₁) < f(x₂) we have g(x₁) < g(x₂). Then, for all x₁, x₂ ∈ S such that g(x₁) ≤ g(x₂), we actually have f(x₁) ≤ f(x₂). Thus, by interchanging the roles of f and g in (a), we get the desired conclusion.

(c) This assertion directly follows from (a) and (b).

(d) By considering g̃ := −g, and recalling the property (3), the conclusion easily follows from (c).
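The effect described in the solution can be seen concretely in a discrete setting: composing f with a strictly increasing function satisfies the hypotheses of (a) and (b) simultaneously, so argmin and argmax are preserved. The sketch below is ours; the helper names and sample data are illustrative choices.

```python
import math

# Composing f with a strictly increasing function phi (here exp) preserves
# argmin and argmax over a finite feasible set, matching parts (a)-(c).

def argmin_set(f, S):
    m = min(f(x) for x in S)
    return {x for x in S if f(x) == m}

def argmax_set(f, S):
    M = max(f(x) for x in S)
    return {x for x in S if f(x) == M}

S = {-2, -1, 0, 1, 3}
f = lambda x: abs(x - 1)
g = lambda x: math.exp(f(x))  # g = phi ∘ f with phi strictly increasing

assert argmin_set(g, S) == argmin_set(f, S)
assert argmax_set(g, S) == argmax_set(f, S)
```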
argmin_{x∈S} f(x) = {x₀ ∈ S | S ⊆ D_f^≥(f(x₀))}     (4)

and

argmax_{x∈S} f(x) = {x₀ ∈ S | S ⊆ D_f^≤(f(x₀))}.     (5)
Proof: Let x₀ ∈ argmin_{x∈S} f(x). Then x₀ ∈ S and f(x₀) ≤ f(x) for all x ∈ S, which means that x ∈ S_f^≥(f(x₀)) ⊆ D_f^≥(f(x₀)) for all x ∈ S. Thus we have S ⊆ D_f^≥(f(x₀)). Hence the inclusion ⊆ in (4) is true. In order to prove the converse inclusion, let x₀ ∈ S be such that S ⊆ D_f^≥(f(x₀)). Then, for every x ∈ S, we have x ∈ D_f^≥(f(x₀)), i.e., f(x) ≥ f(x₀), hence x₀ ∈ argmin_{x∈S} f(x). Thus the inclusion ⊇ in (4) is also true.
In view of (3), the equality (5) can be easily deduced from (4), taking into account that D_f^≤(f(x₀)) = D_{−f}^≥(−f(x₀)).
x₀ ∈ argmax_{x∈S} f(x) if and only if the feasible set S lies in the halfplane D_f^≤(f(x₀)) (bounded by the straight line D_f^=(f(x₀)) and oriented by the vector c).
Consider now the particular case of the optimal location problem, where ∅ ≠ S ⊆ D := R² and

    f(x) := ‖x − x̄‖,  x ∈ R²,

the point x̄ ∈ R² \ S being a priori given.
In this case, for every point x ∈ R², f(x) represents the Euclidean distance between x and x̄. Consider a point x₀ ∈ S. Since x̄ ∉ S, we have ‖x₀ − x̄‖ > 0, hence the level set D_f^≤(f(x₀)) = {x ∈ R² | ‖x − x̄‖ ≤ ‖x₀ − x̄‖} = B̄(x̄, ‖x₀ − x̄‖) actually is the closed Euclidean ball (i.e., a closed disk) centered at x̄ with radius ‖x₀ − x̄‖, and D_f^≥(f(x₀)) = {x ∈ R² | ‖x − x̄‖ ≥ ‖x₀ − x̄‖} = R² \ B(x̄, ‖x₀ − x̄‖) represents the complement of the open Euclidean ball (i.e., the complement in R² of an open disk) centered at x̄ with radius ‖x₀ − x̄‖. Thus (4) shows that x₀ ∈ argmin_{x∈S} f(x) if and only if the feasible set S lies outside the open disk centered at x̄ with radius ‖x₀ − x̄‖. Similarly, (5) shows that x₀ ∈ argmax_{x∈S} f(x) if and only if the feasible set S is contained in the closed disk centered at x̄ with radius ‖x₀ − x̄‖.
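On a finite feasible set, both characterizations can be checked directly: the nearest point puts S outside the corresponding open disk, and the farthest point puts S inside the corresponding closed disk. The instance below is a toy sketch; S and `x_bar` are our own sample data.

```python
import math

# Toy instance of the location problem: f(x) = ||x - x_bar|| over a finite
# feasible set S (S and x_bar are illustrative sample data).

x_bar = (0.0, 0.0)
S = [(1.0, 0.0), (0.0, 2.0), (3.0, 4.0), (1.0, 1.0)]

def f(x):
    return math.hypot(x[0] - x_bar[0], x[1] - x_bar[1])

nearest = min(S, key=f)
farthest = max(S, key=f)

# (4): S lies outside the open disk of radius f(nearest) around x_bar
assert all(f(x) >= f(nearest) for x in S)
# (5): S lies inside the closed disk of radius f(farthest) around x_bar
assert all(f(x) <= f(farthest) for x in S)
```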
In particular, if f is continuous on the nonempty compact set S, then both level sets S_f^≤(λ) and S_f^≥(λ) are closed for every λ ∈ R. In this case we recover the classical Weierstrass Theorem.
A function f : S → R defined on a nonempty set S ⊆ Rⁿ is said to be coercive if for any sequence (x_k)_{k∈N} of points in S with lim_{k→∞} ‖x_k‖ = ∞ we have lim_{k→∞} f(x_k) = ∞.
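The definition can be illustrated along one divergent sequence. In the sketch below (our own example functions), f₁(x) = ‖x‖² blows up along x_k = (0, k), while f₂(x) = x₁ stays constant, so f₂ is not coercive.

```python
# Coercivity along the divergent sequence x_k = (0, k): f1(x) = ||x||^2
# grows without bound, while f2(x) = x_1 stays at 0 (so f2 is not coercive).
# The function names are our own.

def f1(x):
    return x[0] ** 2 + x[1] ** 2

def f2(x):
    return x[0]

seq = [(0.0, float(k)) for k in range(1, 200)]  # ||x_k|| -> infinity

assert f1(seq[-1]) == 199.0 ** 2        # f1 values grow without bound
assert all(f2(x) == 0.0 for x in seq)   # f2 values do not tend to infinity
```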
Convex sets
Proposition 5.1 Let F be a nonempty family of convex subsets of Rⁿ. Then the following hold:

(a) ∩_{S∈F} S is convex.

(b) If the family F is directed (i.e., for all X, Y ∈ F there exists Z ∈ F with X ∪ Y ⊆ Z), then ∪_{S∈F} S is convex.

Proof: (a) Let t ∈ [0, 1]. For every M ∈ F, we have (1 − t) ∩_{S∈F} S + t ∩_{S∈F} S ⊆ (1 − t)M + tM ⊆ M, since M is convex. Hence (1 − t) ∩_{S∈F} S + t ∩_{S∈F} S ⊆ ∩_{S∈F} S, i.e., ∩_{S∈F} S is convex.

(b) Let x, y ∈ ∪_{S∈F} S and t ∈ [0, 1]. Then there exist X, Y ∈ F such that x ∈ X and y ∈ Y. The family F being directed, we can choose Z ∈ F such that X ∪ Y ⊆ Z. Since Z is convex and x, y ∈ Z, it follows that (1 − t)x + ty ∈ Z ⊆ ∪_{S∈F} S. Thus ∪_{S∈F} S is convex.
Corollary 5.1 Let (M_i)_{i∈N} be a sequence of convex sets in Rⁿ. Then the following hold:

(a) ∪_{i=1}^∞ ∩_{j=i}^∞ M_j is convex.

(b) If the sequence (M_i)_{i∈N} is ascending, i.e., M_i ⊆ M_{i+1} for all i ∈ N, then ∪_{i=1}^∞ M_i is convex.

Solution: (a) For each i ∈ N, consider the set S_i := ∩_{k=i}^∞ M_k. According to part (a) of the preceding proposition, every S_i is convex. Moreover, the sequence (S_i)_{i∈N} is ascending, hence the family {S_i | i ∈ N} is directed, and part (b) of the preceding proposition shows that ∪_{i=1}^∞ S_i = ∪_{i=1}^∞ ∩_{j=i}^∞ M_j is convex.

(b) If (M_i)_{i∈N} is ascending, then the family {M_i | i ∈ N} is directed, so ∪_{i=1}^∞ M_i is convex, again by part (b) of the preceding proposition.
Note that conv M is a convex set (as an intersection of a family of convex sets). It
is clear that M is convex if and only if M = conv M .
Definition 5.3 Given an arbitrary nonempty set M ⊆ Rⁿ, a point x ∈ Rⁿ is said to be a convex combination of elements of M, if there exist k ∈ N, x¹, . . . , x^k ∈ M, and (t₁, . . . , t_k) ∈ Δ_k := {(s₁, . . . , s_k) ∈ R₊^k | s₁ + . . . + s_k = 1}, such that x = t₁x¹ + . . . + t_k x^k.
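A convex combination is easy to generate and test numerically. The sketch below (sample set M and normalization trick are our own choices) draws weights from the simplex Δ_k and checks that the result stays in the convex hull.

```python
import random

# Builds a random convex combination of the points of a sample M and checks
# the defining properties: weights in the simplex, result inside conv M.
# M and all helper names are illustrative.

random.seed(0)
M = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]

raw = [random.random() for _ in M]
t = [r / sum(raw) for r in raw]  # (t_1, ..., t_k) in the simplex Delta_k
x = tuple(sum(ti * p[j] for ti, p in zip(t, M)) for j in range(2))

assert all(ti >= 0 for ti in t) and abs(sum(t) - 1.0) < 1e-9
# For this M, conv M is the triangle {(u, v) | u >= 0, v >= 0, u + v <= 4}:
assert x[0] >= 0 and x[1] >= 0 and x[0] + x[1] <= 4.0 + 1e-9
```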
For every nonempty set M ⊆ Rⁿ, the convex hull of M coincides with the set of all convex combinations of elements of M, i.e.,

    conv M = { Σ_{i=1}^k t_i x^i | k ∈ N, x¹, . . . , x^k ∈ M, (t₁, . . . , t_k) ∈ Δ_k }.

Proof: Denote by

    C(M) := { Σ_{i=1}^k t_i x^i | k ∈ N, x¹, . . . , x^k ∈ M, (t₁, . . . , t_k) ∈ Δ_k }.     (6)
For the equality conv M = C(M) it suffices to show that the following conditions are fulfilled:

(i) M ⊆ C(M);

(ii) C(M) is convex;

(iii) C(M) ⊆ S for every convex set S ⊆ Rⁿ with M ⊆ S.

Condition (i) holds, since one obtains in C(M) the elements of M considering k = 1. To show (ii) pick x, y ∈ C(M) and λ ∈ [0, 1]. Then there exist k, ℓ ∈ N, x¹, . . . , x^k, y¹, . . . , y^ℓ ∈ M, (t₁, . . . , t_k) ∈ Δ_k and (s₁, . . . , s_ℓ) ∈ Δ_ℓ such that x = Σ_{i=1}^k t_i x^i and y = Σ_{i=1}^ℓ s_i y^i. Thus

    (1 − λ)x + λy = Σ_{i=1}^k (1 − λ)t_i x^i + Σ_{i=1}^ℓ λs_i y^i.

Since Σ_{i=1}^k (1 − λ)t_i + Σ_{i=1}^ℓ λs_i = (1 − λ) Σ_{i=1}^k t_i + λ Σ_{i=1}^ℓ s_i = (1 − λ) + λ = 1, it follows that (1 − λ)x + λy ∈ C(M), hence C(M) is convex.
Exercise: Give an example of a sequence (S_k)_{k∈N} of closed convex sets in Rⁿ with ∩_{k∈N} S_k = ∅ and ∩_{i=1}^m S_i ≠ ∅ for all m ∈ N.

Solution: For every k ∈ N consider the closed convex set

    S_k := {(x₁, . . . , x_n) ∈ Rⁿ | x₁ ≥ k}.

Obviously, ∩_{i=1}^m S_i = S_m ≠ ∅ for every m ∈ N, while ∩_{k∈N} S_k = ∅.
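The mechanism behind this example can be tested pointwise: every finite intersection contains a witness, while any fixed point eventually drops out. The membership encoding below is our own.

```python
# Pointwise illustration: with S_k = {x | x_1 >= k}, finitely many S_k always
# intersect, but no point lies in every S_k (membership encoding is ours).

def in_S(k, x):
    return x[0] >= k

m = 10
witness = (float(m), 0.0)  # belongs to S_1 ∩ ... ∩ S_m
assert all(in_S(k, witness) for k in range(1, m + 1))

# any candidate x drops out of S_k as soon as k exceeds x_1:
x = (1234.5, 0.0)
assert not in_S(int(x[0]) + 1, x)
```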
Convex cones
(1/t)C ⊆ R₊C = C, thus C = t((1/t)C) ⊆ tC. This yields 3.

3 ⟹ 1. Using 3, we obtain that R₊C = (0C) ∪ (∪_{t∈R₊\{0}} tC) = {0_n} ∪ C ⊆ C.
Proposition 6.1 If (C_i)_{i∈I} is a nonempty family of cones in Rⁿ, then the following hold:

(a) Both ∩_{i∈I} C_i and ∪_{i∈I} C_i are cones.

(b) If C_i is convex (closed) for all i ∈ I, then ∩_{i∈I} C_i is convex (closed).

(c) If C_j is pointed for some j ∈ I, then ∩_{i∈I} C_i is pointed.

Proof: (a) Since 0_n ∈ C_i = R₊C_i for every i ∈ I, it follows that 0_n ∈ ∩_{i∈I} C_i = R₊(∩_{i∈I} C_i) and 0_n ∈ ∪_{i∈I} C_i = R₊(∪_{i∈I} C_i), hence the sets ∩_{i∈I} C_i and ∪_{i∈I} C_i are cones.

(b) Since the intersection of any family of convex (closed) subsets of Rⁿ is convex (closed), this assertion follows from (a).

(c) Let j ∈ I be so that C_j ∩ (−C_j) = {0_n}. Since 0_n ∈ C_i for every i ∈ I, we get that {0_n} ⊆ (∩_{i∈I} C_i) ∩ (−∩_{i∈I} C_i) = ∩_{i∈I} (C_i ∩ (−C_i)) ⊆ C_j ∩ (−C_j) = {0_n}, hence (∩_{i∈I} C_i) ∩ (−∩_{i∈I} C_i) = {0_n}. By (a), the set ∩_{i∈I} C_i is a pointed cone.
Theorem 6.1 (Characterization of convex cones) For any cone C in Rⁿ the following assertions are equivalent:

1. C is convex.

2. C + C ⊆ C.

3. Σ_{i=1}^m C (:= {Σ_{i=1}^m x^i | x¹, . . . , x^m ∈ C}) = C for every m ∈ N.
Proof: 1 ⟹ 2. If C is convex, then ½C + ½C ⊆ C. Since C is a cone, we get that C + C = 2(½C + ½C) ⊆ 2C ⊆ R₊C = C.

2 ⟹ 3. Under the hypothesis 2, we are going to prove by induction that the proposition

    P(m) : Σ_{i=1}^m C = C

holds for every m ∈ N. It is obvious that P(1) is true. Assume now that P(h) holds for a natural number h ∈ N. Then Σ_{i=1}^h C = C, hence

    Σ_{i=1}^{h+1} C = Σ_{i=1}^h C + C = C + C ⊆ C ⊆ Σ_{i=1}^{h+1} C,

by the hypothesis assumed in 2 and by the fact that every cone contains the null vector. It follows that Σ_{i=1}^{h+1} C = C, thus proposition P(h + 1) holds. We conclude that P(m) is true for every m ∈ N, thus 3 is proved.

3 ⟹ 1. Let x, y ∈ C and t ∈ [0, 1]. Since the set C is a cone, it follows that {(1 − t)x, ty} ⊆ R₊C ⊆ C, hence (1 − t)x + ty ∈ C + C. Applying 3 for m = 2, we obtain that (1 − t)x + ty ∈ C. Thus C is convex.
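Condition 2 of the theorem is easy to probe numerically for a concrete cone. The sketch below checks C + C ⊆ C, and the resulting convexity, for the closed quadrant cone C = R₊² on sample points; the membership predicate and the data are our own.

```python
# Sampling check of Theorem 6.1: C + C ⊆ C (condition 2) and closedness
# under convex combinations (condition 1) for the quadrant cone C = R_+^2.
# The membership predicate and sample points are illustrative.

def in_C(x):
    return x[0] >= 0 and x[1] >= 0

pts = [(0.0, 0.0), (1.0, 2.0), (3.5, 0.0), (0.2, 7.0)]

for x in pts:
    for y in pts:
        # condition 2: sums of elements of C stay in C
        assert in_C((x[0] + y[0], x[1] + y[1]))
        # condition 1: convex combinations stay in C
        t = 0.3
        assert in_C(((1 - t) * x[0] + t * y[0], (1 - t) * x[1] + t * y[1]))
```

A sampling check like this cannot prove the inclusion, of course; it only exercises the statement on finitely many points.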
Definition 6.2 The conic hull of an arbitrary subset M of Rⁿ is defined as the set

    con M := {Σ_{i=1}^m t_i x^i | m ∈ N, x¹, . . . , x^m ∈ M, t₁, . . . , t_m ∈ R₊}
           = {Σ_{i=1}^n t_i x^i | x¹, . . . , x^n ∈ M, t₁, . . . , t_n ∈ R₊}
           = R₊ conv M
           = conv(R₊M).
For every cone C in Rⁿ, we have conv C = Σ_{i=1}^n C.

Proof: Since C is a cone, the equality C = R₊C holds. Thus, on the one hand, conv C = conv(R₊C) and, on the other hand,

    Σ_{i=1}^n C = {Σ_{i=1}^n c^i | c¹, . . . , c^n ∈ C} = {Σ_{i=1}^n t_i x^i | x¹, . . . , x^n ∈ C, t₁, . . . , t_n ∈ R₊}.

The relation

    {Σ_{i=1}^n t_i x^i | x¹, . . . , x^n ∈ C, t₁, . . . , t_n ∈ R₊} = conv(R₊C),

which follows from Theorem 6.2, finally yields the asserted equality conv C = Σ_{i=1}^n C.
The following result concerns an important notion of convex analysis, namely
the recession cone of a set. As we shall see further, it plays a key role in characterizing both the unboundedness of closed convex sets and the unboundedness of the
objective functions in linear optimization.
Theorem 6.3 Let M be a nonempty subset of Rⁿ.

(a) For each point x ∈ M the set

    M(x) := {d ∈ Rⁿ | x + R₊d ⊆ M}

is a cone.

(b) The set

    rec M := ∩_{x∈M} M(x),

called the recession cone of M, is a cone.
Exercise 6.1 Give an example of a cone C in R² for each of the following cases:

(1) C is convex, but C ≠ rec C.

(2) C = rec C, but C is neither convex nor closed.

Solution: (1) Consider the following convex cone:

    C := {0₂} ∪ (R₊ \ {0})²     (7)
For every a ∈ Rⁿ, the set

    {a}⁺ = H^≥(a, 0) if a ≠ 0_n,  and  {a}⁺ = Rⁿ if a = 0_n,

is a cone (indeed, ⟨0_n, a⟩ ≥ 0, and ⟨tx, a⟩ = t⟨x, a⟩ ≥ 0 for every x ∈ {a}⁺ and every t ∈ R₊, hence 0_n ∈ {a}⁺ and R₊{a}⁺ ⊆ {a}⁺). Furthermore, {a}⁺ is convex and closed. Since

    A⁺ = ∩_{a∈A} {a}⁺,

it follows from Proposition 6.1 that A⁺ is a closed convex cone.
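For a finite A, membership in A⁺ reduces to finitely many inner-product inequalities, and the cone and convexity properties can be probed on sample points. The encoding below (set A, helper names) is our own illustration.

```python
# Pointwise check that A+ = ∩_{a∈A} {a}+ = {x | <x, a> >= 0 for all a in A}
# is closed under nonnegative scaling and addition, for a small finite
# A ⊆ R^2 (the set A and all names are illustrative).

A = [(1.0, 0.0), (1.0, 1.0)]

def in_dual(x):
    return all(x[0] * a[0] + x[1] * a[1] >= 0 for a in A)

x, y = (2.0, 1.0), (0.0, 3.0)
assert in_dual(x) and in_dual(y)
assert in_dual((0.0, 0.0))                  # contains the origin
assert in_dual((5.0 * x[0], 5.0 * x[1]))    # cone: R_+ A+ ⊆ A+
assert in_dual((x[0] + y[0], x[1] + y[1]))  # convex: A+ + A+ ⊆ A+
```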
Proposition 6.3 Consider the set of all vectors in Rⁿ whose first nonzero coordinate (if any) is positive, i.e.,

    C^lex := {0_n} ∪ {x = (x₁, . . . , x_n) ∈ Rⁿ | ∃ i ∈ {1, . . . , n} : x_i > 0, ∄ j ∈ {1, . . . , n}, j < i : x_j ≠ 0}.

The set C^lex is a pointed convex cone (the so-called lexicographic cone).
Proof: Consider the function φ : Rⁿ \ {0_n} → {1, . . . , n}, defined for all x = (x₁, . . . , x_n) ∈ Rⁿ \ {0_n} by

    φ(x) := min{i ∈ {1, . . . , n} | x_i ≠ 0}.

Observe that

    C^lex = {0_n} ∪ {x = (x₁, . . . , x_n) ∈ Rⁿ \ {0_n} | x_{φ(x)} > 0}.

It is easily seen that, for every x ∈ C^lex and every t ≥ 0, we have tx ∈ C^lex. Thus C^lex is a cone.

Suppose to the contrary that the cone C^lex is not pointed. Then we can choose a point x = (x₁, . . . , x_n) ∈ C^lex ∩ (−C^lex) \ {0_n}. It follows that

    {x, −x} ⊆ {v = (v₁, . . . , v_n) ∈ Rⁿ \ {0_n} | v_{φ(v)} > 0},

hence x_{φ(x)} > 0 and (−x)_{φ(−x)} > 0. Since φ(−x) = φ(x), we infer that 0 < (−x)_{φ(−x)} = −x_{φ(x)} < 0, a contradiction. Thus C^lex is a pointed cone.
According to Theorem 6.1, in order to prove the convexity of C^lex it suffices to show that C^lex + C^lex ⊆ C^lex. To this end, consider two arbitrary points x = (x₁, . . . , x_n), y = (y₁, . . . , y_n) ∈ C^lex. If x = 0_n or y = 0_n, then we have x + y ∈ {x, y} ⊆ C^lex. Otherwise, if x ≠ 0_n ≠ y, then we have x_{φ(x)} > 0 and y_{φ(y)} > 0, hence x + y ≠ 0_n and φ(x + y) = min{φ(x), φ(y)}. Without loss of generality we can assume that φ(x) ≤ φ(y). Then, we have x_{φ(x)} > 0 and y_{φ(x)} ≥ 0, hence (x + y)_{φ(x+y)} = x_{φ(x)} + y_{φ(x)} > 0. In both cases we infer x + y ∈ C^lex. Thus we have C^lex + C^lex ⊆ C^lex.
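The membership test for C^lex translates directly into code via φ. The sketch below (function names are ours) mirrors the proof: membership via the first nonzero coordinate, closedness under addition, and pointedness on sample vectors.

```python
# Membership test for the lexicographic cone C_lex via phi(x) = index of the
# first nonzero coordinate, mirroring the proof above (names are ours).

def phi(x):
    return next(i for i, v in enumerate(x) if v != 0)

def in_C_lex(x):
    if all(v == 0 for v in x):
        return True           # 0_n belongs to C_lex
    return x[phi(x)] > 0      # first nonzero coordinate must be positive

assert in_C_lex((0, 0, 0))
assert in_C_lex((0, 2, -5))
assert not in_C_lex((0, -1, 3))

# closed under addition, as in the convexity argument:
x, y = (0, 2, -5), (0, 0, 7)
assert in_C_lex(tuple(a + b for a, b in zip(x, y)))
# pointed: x and -x are never both in C_lex unless x = 0_n
assert not (in_C_lex((0, 1, 0)) and in_C_lex((0, -1, 0)))
```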
Linear programming