
Mathematical Foundations of the Decision-Making Process

Exam date: February 4, 2011

Partial orders and preference relations

See the course notes!

Optimization problems in general setting

Definition 2.1 Given a function f : D → R, defined on some nonempty set D, and given a nonempty subset S of D, we will use the notations:

    argmin_{x∈S} f(x) := {x0 ∈ S | f(x0) ≤ f(x), ∀x ∈ S}

and

    argmax_{x∈S} f(x) := {x0 ∈ S | f(x0) ≥ f(x), ∀x ∈ S}.

An element of argmin_{x∈S} f(x) is called a minimum point of f on S, or an optimal solution of the optimization (i.e., minimization) problem

    f(x) → min,  x ∈ S.    (1)

Similarly, an element of argmax_{x∈S} f(x) is called a maximum point of f on S, or an optimal solution of the optimization (i.e., maximization) problem

    f(x) → max,  x ∈ S.    (2)

The function f and the set S involved in a general optimization problem of type (1) or (2) are called the objective function and the feasible set, respectively. Thus, by a feasible point of these problems we mean any element of S.

Remark 2.1 It is easily seen that maximization problems can be converted into minimization problems, and vice versa, since

    argmax_{x∈S} f(x) = argmin_{x∈S} (−f)(x)  and  argmin_{x∈S} f(x) = argmax_{x∈S} (−f)(x).    (3)
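The identities in (3) can be checked mechanically on a small finite instance. The following Python sketch does so; the set S and the function f are purely illustrative choices, not taken from the text.

```python
# Finite check of Remark 2.1: argmax f = argmin(-f) and argmin f = argmax(-f).

def argmin(f, S):
    m = min(f(x) for x in S)
    return {x for x in S if f(x) == m}

def argmax(f, S):
    M = max(f(x) for x in S)
    return {x for x in S if f(x) == M}

S = {-3, -1, 0, 2, 5}
f = lambda x: x * x

assert argmax(f, S) == argmin(lambda x: -f(x), S)   # first identity in (3)
assert argmin(f, S) == argmax(lambda x: -f(x), S)   # second identity in (3)
```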

Exercise 2.1 Let f, g : S → R be two functions defined on a nonempty subset S of R^n. Prove that:

(a) If g(x1) ≤ g(x2) for all x1, x2 ∈ S with f(x1) ≤ f(x2), then

    argmin_{x∈S} f(x) ⊆ argmin_{x∈S} g(x)  and  argmax_{x∈S} f(x) ⊆ argmax_{x∈S} g(x).

(b) If g(x1) < g(x2) for all x1, x2 ∈ S with f(x1) < f(x2), then

    argmin_{x∈S} g(x) ⊆ argmin_{x∈S} f(x)  and  argmax_{x∈S} g(x) ⊆ argmax_{x∈S} f(x).

(c) If there exists an increasing function φ : D → R, defined on a nonempty set D ⊆ R with f(S) ⊆ D, such that g = φ ∘ f, then

    argmin_{x∈S} f(x) = argmin_{x∈S} g(x)  and  argmax_{x∈S} f(x) = argmax_{x∈S} g(x).

(d) If there exists a decreasing function φ : D → R, defined on a nonempty set D ⊆ R with f(S) ⊆ D, such that g = φ ∘ f, then

    argmin_{x∈S} f(x) = argmax_{x∈S} g(x)  and  argmax_{x∈S} f(x) = argmin_{x∈S} g(x).

Solution: (a) Assume that for all x1, x2 ∈ S with f(x1) ≤ f(x2) we have g(x1) ≤ g(x2). Then it is easily seen that S_f^≥(f(x)) ⊆ S_g^≥(g(x)) and S_f^≤(f(x)) ⊆ S_g^≤(g(x)) for all x ∈ S. By Proposition 3.1 we get

    argmin_{x∈S} f(x) = {x0 ∈ S | S ⊆ S_f^≥(f(x0))} ⊆ {x0 ∈ S | S ⊆ S_g^≥(g(x0))} = argmin_{x∈S} g(x);
    argmax_{x∈S} f(x) = {x0 ∈ S | S ⊆ S_f^≤(f(x0))} ⊆ {x0 ∈ S | S ⊆ S_g^≤(g(x0))} = argmax_{x∈S} g(x).

(b) Assume that for all x1, x2 ∈ S with f(x1) < f(x2) we have g(x1) < g(x2). Then, for all x1, x2 ∈ S such that g(x1) ≤ g(x2), we actually have f(x1) ≤ f(x2). Then, by interchanging the roles of f and g in (a), we get the desired conclusion.

(c) Since φ is increasing, for all x1, x2 ∈ S the inequality f(x1) ≤ f(x2) implies g(x1) ≤ g(x2), and f(x1) < f(x2) implies g(x1) < g(x2). Thus the assertion directly follows from (a) and (b).

(d) By considering ψ := −φ, which is increasing, and recalling the property (3), the conclusion easily follows from (c).
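Part (c) is the invariance of optimal solutions under increasing transformations of the objective. A finite Python illustration, with S, f and the increasing map phi chosen purely for the example (note that f deliberately has a tie, so the argmin is a two-element set):

```python
# Exercise 2.1(c): composing f with an increasing map leaves argmin/argmax unchanged.
import math

def argmin(f, S):
    m = min(f(x) for x in S)
    return {x for x in S if f(x) == m}

def argmax(f, S):
    M = max(f(x) for x in S)
    return {x for x in S if f(x) == M}

S = {0.5, 1.0, 2.0, 4.0}
f = lambda x: (x - 1.5) ** 2        # minimized by both 1.0 and 2.0 on S
phi = math.exp                      # increasing on R
g = lambda x: phi(f(x))             # g = phi o f

assert argmin(f, S) == argmin(g, S)
assert argmax(f, S) == argmax(g, S)
```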



Geometric interpretation of optimal solutions

Definition 3.1 Given a function f : D → R, defined on a nonempty set D, and a nonempty subset S ⊆ D, we shall use in the sequel the following level sets of f associated with a number α ∈ R:

    S_f^=(α) := {x ∈ S | f(x) = α},
    S_f^≤(α) := {x ∈ S | f(x) ≤ α},
    S_f^<(α) := S_f^≤(α) ∖ S_f^=(α),
    S_f^>(α) := S ∖ S_f^≤(α),
    S_f^≥(α) := S ∖ S_f^<(α).
Proposition 3.1 Let f : D → R be a function defined on a nonempty set D. Then, for any nonempty subset S of D, the following hold:

    argmin_{x∈S} f(x) = {x0 ∈ S | S ⊆ D_f^≥(f(x0))},    (4)
    argmax_{x∈S} f(x) = {x0 ∈ S | S ⊆ D_f^≤(f(x0))}.    (5)

Proof: Let x0 ∈ argmin_{x∈S} f(x). Then x0 ∈ S and f(x0) ≤ f(x) for all x ∈ S, which means that x ∈ S_f^≥(f(x0)) ⊆ D_f^≥(f(x0)) for all x ∈ S. Thus we have S ⊆ D_f^≥(f(x0)). Hence the inclusion ⊆ in (4) is true. In order to prove the converse inclusion, let x0 ∈ S be such that S ⊆ D_f^≥(f(x0)). Then, for every x ∈ S, we have x ∈ D_f^≥(f(x0)), i.e., f(x) ≥ f(x0), hence x0 ∈ argmin_{x∈S} f(x). Thus the inclusion ⊇ in (4) is also true.

In view of (3), the equality (5) can be easily deduced from (4), taking into account that D_f^≤(f(x0)) = D_{−f}^≥((−f)(x0)). ∎
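Equality (4) can be verified numerically on a finite instance: a feasible point minimizes f exactly when the whole feasible set sits inside the corresponding upper level set. D, S and f below are illustrative choices.

```python
# Numerical check of Proposition 3.1, equality (4), on a finite set.

D = set(range(-5, 6))
S = {-2, 0, 1, 3}
f = lambda x: abs(x - 1)

def upper_level(alpha):                 # D_f^>=(alpha) = {x in D | f(x) >= alpha}
    return {x for x in D if f(x) >= alpha}

argmin_S = {x for x in S if f(x) == min(map(f, S))}
via_levels = {x0 for x0 in S if S <= upper_level(f(x0))}   # right-hand side of (4)

assert argmin_S == via_levels
```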

Consider now the particular case of a linear optimization problem, where ∅ ≠ S ⊆ D := R^2 and

    f(x) := ⟨x, c⟩,  x ∈ R^2,

the point c ∈ R^2 ∖ {0_2} being a priori given. In this case f is a nonconstant linear objective function. For every point x0 ∈ S, both level sets D_f^≥(f(x0)) = {x ∈ R^2 | ⟨x, c⟩ ≥ ⟨x0, c⟩} and D_f^≤(f(x0)) = {x ∈ R^2 | ⟨x, c⟩ ≤ ⟨x0, c⟩} actually are closed halfplanes, c being a normal vector of the straight line D_f^=(f(x0)) = {x ∈ R^2 | ⟨x, c⟩ = ⟨x0, c⟩}, which indicates the increasing direction of the objective function. Thus, (4) shows that x0 ∈ argmin_{x∈S} f(x) if and only if the feasible set S is contained in the halfplane D_f^≥(f(x0)) (which is bounded by the straight line D_f^=(f(x0)) and is oriented by the vector c). Similarly, (5) shows that x0 ∈ argmax_{x∈S} f(x) if and only if the feasible set S lies in the halfplane D_f^≤(f(x0)) (bounded by the straight line D_f^=(f(x0)) and oriented by the vector −c).
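For a finite feasible set this halfplane criterion is easy to test: the maximizer of a linear objective is exactly the point whose level line leaves all of S on the ≤ side. The set S and the vector c below are illustrative.

```python
# Halfplane criterion for a linear objective f(x) = <x, c> on a finite S in R^2.

c = (2.0, 1.0)
S = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 0.9)]

dot = lambda x, y: x[0] * y[0] + x[1] * y[1]
f = lambda x: dot(x, c)

best = max(S, key=f)
# S lies in the halfplane {x : <x, c> <= <best, c>}, as equality (5) requires
assert all(f(x) <= f(best) for x in S)
assert best == (1, 1)
```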
Consider now the particular case of the optimal location problem, where ∅ ≠ S ⊆ D := R^2 and

    f(x) := ‖x − x̄‖,  x ∈ R^2,

the point x̄ ∈ R^2 ∖ S being a priori given. In this case, for every point x ∈ R^2, f(x) represents the Euclidean distance between x and x̄. Consider a point x0 ∈ S. Since x̄ ∉ S, we have ‖x0 − x̄‖ > 0, hence the level set D_f^≤(f(x0)) = {x ∈ R^2 | ‖x − x̄‖ ≤ ‖x0 − x̄‖} = B̄(x̄, ‖x0 − x̄‖) actually is the closed Euclidean ball (i.e., a closed disk) centered at x̄ with radius ‖x0 − x̄‖, and D_f^≥(f(x0)) = {x ∈ R^2 | ‖x − x̄‖ ≥ ‖x0 − x̄‖} = R^2 ∖ B(x̄, ‖x0 − x̄‖) represents the complement of the open Euclidean ball (i.e., the complement in R^2 of an open disk) centered at x̄ with radius ‖x0 − x̄‖. Thus (4) shows that x0 ∈ argmin_{x∈S} f(x) if and only if the feasible set S lies outside the open disk centered at x̄ with radius ‖x0 − x̄‖. Similarly, (5) shows that x0 ∈ argmax_{x∈S} f(x) if and only if the feasible set S is contained in the closed disk centered at x̄ with radius ‖x0 − x̄‖.
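On a finite feasible set the disk criterion reads: the closest feasible point to x̄ is a minimizer, and no feasible point enters the open disk of that radius. S and x̄ below are illustrative.

```python
# Optimal location on a finite S in R^2: minimizer = nearest point to x_bar.
import math

x_bar = (0.0, 0.0)
S = [(3, 4), (1, 1), (2, -2), (-5, 0)]

dist = lambda p: math.hypot(p[0] - x_bar[0], p[1] - x_bar[1])

x0 = min(S, key=dist)
r = dist(x0)
assert x0 == (1, 1)
# S avoids the open disk B(x_bar, r), as the criterion from (4) requires
assert all(dist(x) >= r for x in S)
```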

Existence and unicity of optimal solutions

Theorem 4.1 (Existence of optimal solutions) Let f : S → R be a function, defined on a nonempty set S ⊆ R^n. The optimization problem (1) has at least one optimal solution if one of the following conditions is fulfilled:

(C1) There exists α ∈ R such that S_f^≤(α) is nonempty and bounded, and S_f^≤(β) is closed for every β ∈ ]−∞, α].

(C2) S is closed, f is continuous, and there exists α ∈ R such that S_f^≤(α) is nonempty and bounded.

(C3) S is compact and S_f^≤(α) is closed¹ for each α ∈ R.

(C4) S_f^≤(α) is closed for each number α ∈ R, and² for every sequence (x_k)_{k∈N} of points in S with lim_{k→∞} ‖x_k‖ = ∞ we have lim_{k→∞} f(x_k) = ∞.

¹ In particular, if f is continuous on the nonempty compact set S, then both level sets S_f^≤(α) and S_f^≥(α) are closed for every α ∈ R. In this case we recover the classical Weierstrass Theorem.

² A function f : S → R defined on a nonempty set S ⊆ R^n is said to be coercive if for any sequence (x_k)_{k∈N} of points in S with lim_{k→∞} ‖x_k‖ = ∞ we have lim_{k→∞} f(x_k) = ∞.
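The role of coercivity in (C4) can be illustrated computationally: minimizing a coercive function over an unbounded set reduces to searching a bounded sublevel set. The function and the search window below are illustrative, with S taken to be the integers.

```python
# Why coercivity (C4) yields existence: the search collapses to a bounded
# sublevel set {x in S : f(x) <= f(0)}.

f = lambda x: (x - 3) ** 2          # coercive on the integers: f(x) -> inf as |x| -> inf

alpha = f(0)                        # = 9; S_f^<=(9) is bounded
candidates = [x for x in range(-100, 101) if f(x) <= alpha]
assert candidates == list(range(0, 7))   # the sublevel set is {0, ..., 6}

x_star = min(candidates, key=f)     # a minimum is attained inside it
assert x_star == 3 and f(x_star) == 0
```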

Proposition 4.1 (Unicity of optimal solutions) Let f : S → R be a function defined on a nonempty set S ⊆ R^n. The following assertions are equivalent:

1° The optimization problem (1) has at most one optimal solution.

2° For all x1, x2 ∈ S, x1 ≠ x2, there exists x′ ∈ S such that

    f(x′) < max{f(x1), f(x2)}.

Proof: 1° ⇒ 2°. Assume that card(argmin_{x∈S} f(x)) ≤ 1 and suppose to the contrary that there exist two distinct points x1, x2 ∈ S satisfying the inequality f(x) ≥ max{f(x1), f(x2)} for every x ∈ S. We infer that f(x1) ≤ f(x) and f(x2) ≤ f(x) for all x ∈ S, i.e., x1, x2 ∈ argmin_{x∈S} f(x), contradicting the hypothesis.

2° ⇒ 1°. Assume that for every pair of distinct points x1, x2 ∈ S there exists some point x′ ∈ S such that f(x′) < max{f(x1), f(x2)}, and suppose to the contrary that card(argmin_{x∈S} f(x)) > 1. Then we can choose x1, x2 ∈ argmin_{x∈S} f(x), x1 ≠ x2. By hypothesis, we can find x′ ∈ S such that f(x′) < max{f(x1), f(x2)}. We infer that f(x′) < f(x1) = f(x2) = inf f(S) ≤ f(x′), a contradiction. ∎
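On a finite set both sides of the equivalence in Proposition 4.1 can be checked by brute force. The two objective functions below are illustrative: one has a unique minimizer, the other a tie.

```python
# Finite check of Proposition 4.1: condition 2° holds iff the minimizer is unique.

S = [-2, -1, 0, 1, 2]

def unique_min(f):
    vals = [f(x) for x in S]
    return vals.count(min(vals)) <= 1

def condition_2(f):
    # for all x1 != x2 there is some x with f(x) < max{f(x1), f(x2)}
    return all(any(f(x) < max(f(x1), f(x2)) for x in S)
               for x1 in S for x2 in S if x1 != x2)

f_unique = lambda x: x * x          # sole minimizer 0
f_tie = lambda x: abs(abs(x) - 1)   # minimizers -1 and 1

assert unique_min(f_unique) and condition_2(f_unique)
assert not unique_min(f_tie) and not condition_2(f_tie)
```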

Convex sets

For any points x, y ∈ R^n let

    [x, y] := {(1 − t)x + ty | t ∈ [0, 1]},
    [x, y[ := {(1 − t)x + ty | t ∈ [0, 1[},
    ]x, y] := {(1 − t)x + ty | t ∈ ]0, 1]},
    ]x, y[ := {(1 − t)x + ty | t ∈ ]0, 1[}.

Note that if x = y, then [x, y] = ]x, y[ = [x, y[ = ]x, y] = {x}; otherwise, if x ≠ y, then ]x, y[ = [x, y] ∖ {x, y} = [x, y[ ∖ {x} = ]x, y] ∖ {y}.

Definition 5.1 A subset S of R^n is said to be convex if [x, y] ⊆ S for all x, y ∈ S. In other words, S is convex if and only if

    (1 − t)S + tS ⊆ S for all t ∈ [0, 1].
Proposition 5.1 If F is a family of convex sets in R^n, then the following hold:

(a) ⋂_{S∈F} S is convex.

(b) If the family F is directed, i.e.,

    ∀A, B ∈ F, ∃C ∈ F : A ∪ B ⊆ C,

then ⋃_{S∈F} S is convex.

Proof: (a) Let t ∈ [0, 1]. For every M ∈ F, we have (1 − t) ⋂_{S∈F} S + t ⋂_{S∈F} S ⊆ (1 − t)M + tM ⊆ M, since M is convex. Hence (1 − t) ⋂_{S∈F} S + t ⋂_{S∈F} S ⊆ ⋂_{S∈F} S, i.e., ⋂_{S∈F} S is convex.

(b) Let x, y ∈ ⋃_{S∈F} S and t ∈ [0, 1]. Then there exist X, Y ∈ F such that x ∈ X and y ∈ Y. The family F being directed, we can choose Z ∈ F such that X ∪ Y ⊆ Z. Since Z is convex and x, y ∈ Z, it follows that (1 − t)x + ty ∈ Z ⊆ ⋃_{S∈F} S. Thus ⋃_{S∈F} S is convex. ∎
Corollary 5.1 Let (M_i)_{i∈N*} be a sequence of convex sets in R^n. Then the following hold:

(a) ⋃_{i=1}^∞ ⋂_{j=i}^∞ M_j is convex.

(b) If the sequence (M_i)_{i∈N*} is ascending, i.e., M_i ⊆ M_{i+1} for all i ∈ N*, then ⋃_{i=1}^∞ M_i is convex.

Solution: (a) For each i ∈ N*, consider the set S_i := ⋂_{k=i}^∞ M_k. According to Proposition 5.1(a), S_i is convex for every i ∈ N*. Moreover, the family F := {S_i | i ∈ N*} is directed, since for all i, j ∈ N* we have S_i ∪ S_j ⊆ S_{max{i,j}}. Thus, by Proposition 5.1(b) we can conclude that ⋃_{i=1}^∞ S_i, i.e., ⋃_{i=1}^∞ ⋂_{j=i}^∞ M_j, is convex.

(b) Since (M_i)_{i∈N*} is ascending, we have M_i = ⋂_{j=i}^∞ M_j for every i ∈ N*. The conclusion directly follows from (a).

Definition 5.2 The convex hull of an arbitrary set M ⊆ R^n is defined by

    conv M := ⋂ {S ⊆ R^n | S is convex and M ⊆ S}.

Note that conv M is a convex set (as an intersection of a family of convex sets). It is clear that M is convex if and only if M = conv M.

Definition 5.3 Given an arbitrary nonempty set M ⊆ R^n, a point x ∈ R^n is said to be a convex combination of elements of M if there exist k ∈ N*, x^1, ..., x^k ∈ M, and (t_1, ..., t_k) ∈ Δ_k := {(s_1, ..., s_k) ∈ R^k_+ | s_1 + ... + s_k = 1}, such that x = t_1 x^1 + ... + t_k x^k.

Theorem 5.1 (Characterization of the convex hull by means of convex combinations) The convex hull of a nonempty set M ⊆ R^n admits the following representation:

    conv M = { ∑_{i=1}^k t_i x^i | k ∈ N*, x^1, ..., x^k ∈ M, (t_1, ..., t_k) ∈ Δ_k }.

Proof: Denote by

    C(M) := { ∑_{i=1}^k t_i x^i | k ∈ N*, x^1, ..., x^k ∈ M, (t_1, ..., t_k) ∈ Δ_k }.    (6)

For the equality conv M = C(M) it suffices to show that the following conditions are fulfilled:

(i) M ⊆ C(M);
(ii) C(M) is convex;
(iii) C(M) ⊆ S for every convex set S ⊆ R^n with M ⊆ S.

Condition (i) holds, since one obtains in C(M) the elements of M considering k = 1. To show (ii), pick x, y ∈ C(M) and λ ∈ [0, 1]. Then there exist k, ℓ ∈ N*, x^1, ..., x^k, y^1, ..., y^ℓ ∈ M, (t_1, ..., t_k) ∈ Δ_k and (s_1, ..., s_ℓ) ∈ Δ_ℓ such that x = ∑_{i=1}^k t_i x^i and y = ∑_{i=1}^ℓ s_i y^i. Thus

    (1 − λ)x + λy = ∑_{i=1}^k (1 − λ) t_i x^i + ∑_{i=1}^ℓ λ s_i y^i.

Since ∑_{i=1}^k (1 − λ) t_i + ∑_{i=1}^ℓ λ s_i = (1 − λ) ∑_{i=1}^k t_i + λ ∑_{i=1}^ℓ s_i = 1 − λ + λ = 1, it follows that (1 − λ)x + λy is also a convex combination of elements of M, that is, it belongs to C(M). Thus (ii) holds.

To show (iii), consider a convex subset S ⊆ R^n such that M ⊆ S. We get the inclusion C(M) ⊆ S by performing an induction argument. We prove that the proposition

    P(k) : ∑_{i=1}^k t_i x^i ∈ S, ∀x^1, ..., x^k ∈ M, ∀(t_1, ..., t_k) ∈ Δ_k

is true for every k ∈ N*. Obviously P(1) is true (since M ⊆ S). Assume now that P(h) is true for a natural number h ∈ N*. We are going to prove that P(h + 1) is also true. Let x^1, ..., x^h, x^{h+1} ∈ M and (t_1, ..., t_h, t_{h+1}) ∈ Δ_{h+1}. Without loss of generality we may assume that t := ∑_{i=1}^h t_i > 0 (otherwise P(h + 1) obviously would be true). Then t + t_{h+1} = 1 and (1/t)(t_1 + ... + t_h) = 1. By the induction hypothesis and using the convexity of S, we get

    ∑_{i=1}^{h+1} t_i x^i = t ∑_{i=1}^h (t_i / t) x^i + (1 − t) x^{h+1} ∈ S.

Hence P(h + 1) is true. It follows that P(k) is true for every k ∈ N*. Thus C(M) ⊆ S. ∎
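The key computation in step (ii) of the proof can be replayed numerically: mixing two convex combinations of points of M with weights (1 − λ, λ) again gives a convex combination of the same points. M, the weight vectors and λ below are illustrative.

```python
# Step (ii) of Theorem 5.1: a convex mix of convex combinations is a convex combination.

M = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
t = [0.2, 0.3, 0.5]          # weights of x, an element of Delta_3
s = [0.6, 0.4, 0.0]          # weights of y
lam = 0.25

combo = lambda w: tuple(sum(wi * p[d] for wi, p in zip(w, M)) for d in range(2))
x, y = combo(t), combo(s)

# merged weight vector of (1 - lam) x + lam y over the same points of M
w = [(1 - lam) * ti + lam * si for ti, si in zip(t, s)]

assert abs(sum(w) - 1.0) < 1e-12 and all(wi >= 0 for wi in w)   # w lies in Delta_3
z = combo(w)
expect = tuple((1 - lam) * x[d] + lam * y[d] for d in range(2))
assert all(abs(z[d] - expect[d]) < 1e-12 for d in range(2))
```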



Exercise 5.1 Prove that Δ_n = conv{e^1, ..., e^n}, where e^1, ..., e^n denote the standard unit vectors of R^n.

Solution: Observe first that Δ_n is convex. Indeed, for every (t_1, ..., t_n), (s_1, ..., s_n) ∈ Δ_n and λ ∈ [0, 1], we have that (1 − λ)(t_1, ..., t_n) + λ(s_1, ..., s_n) = ((1 − λ)t_1 + λs_1, ..., (1 − λ)t_n + λs_n) ∈ R^n_+ and [(1 − λ)t_1 + λs_1] + ... + [(1 − λ)t_n + λs_n] = (1 − λ)·1 + λ·1 = 1, thus (1 − λ)(t_1, ..., t_n) + λ(s_1, ..., s_n) ∈ Δ_n.

Since {e^1, ..., e^n} ⊆ Δ_n, the inclusion conv{e^1, ..., e^n} ⊆ conv Δ_n = Δ_n holds. Thus conv{e^1, ..., e^n} ⊆ Δ_n.

Let M := {e^1, ..., e^n} and consider the set C(M) defined in (6). Obviously Δ_n = { ∑_{i=1}^n t_i e^i | (t_1, ..., t_n) ∈ Δ_n } ⊆ C(M). On the other hand, Theorem 5.1 yields conv M = C(M). It follows that Δ_n ⊆ conv{e^1, ..., e^n}. This implies the asserted equality conv{e^1, ..., e^n} = Δ_n.
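In coordinates the identity is transparent: a convex combination of the standard basis vectors is exactly its own weight vector, hence a point of the simplex. A quick check in R^3, with illustrative sample weights:

```python
# Exercise 5.1 in R^3: sum t_i e^i = (t_1, t_2, t_3), a point of Delta_3.

e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def combo(t):
    return tuple(sum(ti * ei[d] for ti, ei in zip(t, e)) for d in range(3))

for t in [(0.2, 0.3, 0.5), (1.0, 0.0, 0.0), (1 / 3, 1 / 3, 1 / 3)]:
    x = combo(t)
    assert x == t                                   # the combination reproduces its weights
    assert all(c >= 0 for c in x) and abs(sum(x) - 1) < 1e-12   # so x lies in Delta_3
```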

Theorem 5.2 (Carathéodory) If S is a nonempty subset of R^n, then every point x ∈ conv S can be expressed as a convex combination of at most n + 1 points of S.

Theorem 5.3 (Helly) Let I be a set having at least n + 1 elements, and let (S_i)_{i∈I} be a family of convex subsets of R^n, satisfying the property that ⋂_{j∈J} S_j ≠ ∅ for each J ⊆ I with card J = n + 1. Then ⋂_{i∈I} S_i ≠ ∅ provided that one of the following two assumptions is fulfilled:

(a) I is finite.

(b) S_i is closed for each i ∈ I, and there exists j ∈ I such that S_j is bounded.
Exercise 5.2 Give an example of a sequence (S_k)_{k∈N*} of closed convex subsets of R^n with ⋂_{k∈N*} S_k = ∅ and ⋂_{i=1}^m S_i ≠ ∅ for all m ∈ N*.

Solution: For every k ∈ N* consider the closed convex set

    S_k := {(x_1, ..., x_n) ∈ R^n | x_1 ≥ k}.

Obviously, ⋂_{i=1}^m S_i = S_m ≠ ∅ for all m ∈ N*, and ⋂_{k∈N*} S_k = ∅.

Convex cones

Definition 6.1 A subset C ⊆ R^n is called a cone if C is nonempty and R_+ C ⊆ C. If furthermore C is a (closed) convex set, then it is a (closed) convex cone. The cone C is called pointed if C ∩ (−C) = {0_n}.

Lemma 6.1 Let C be a nonempty subset of R^n. The following assertions are equivalent:

1° C is a cone.

2° 0_n ∈ C = R_+ C.

3° 0_n ∈ C = tC for all t > 0.

Proof: 1° ⇒ 2°. Since C ≠ ∅, there exists an element c ∈ C. Using the fact that C is a cone, it follows that 0_n = 0·c ∈ R_+ C ⊆ C. On the other hand, C = 1·C ⊆ R_+ C. Thus 2° holds.

2° ⇒ 3°. Under the hypothesis 2°, the element 0_n lies in C. Choose now an arbitrary t > 0. Then tC ⊆ R_+ C = C. Also, (1/t)C ⊆ R_+ C = C, thus C = t((1/t)C) ⊆ tC. This yields 3°.

3° ⇒ 1°. Using 3°, we obtain that R_+ C = (0·C) ∪ (⋃_{t>0} tC) = {0_n} ∪ C ⊆ C. Since C ≠ ∅, we conclude that C is a cone. ∎

Proposition 6.1 If (C_i)_{i∈I} is a nonempty family of cones in R^n, then the following hold:

(a) Both ⋂_{i∈I} C_i and ⋃_{i∈I} C_i are cones.

(b) If C_i is convex (closed) for all i ∈ I, then ⋂_{i∈I} C_i is convex (closed).

(c) If C_j is pointed for some j ∈ I, then ⋂_{i∈I} C_i is pointed.

Proof: (a) Since 0_n ∈ C_i = R_+ C_i for every i ∈ I, it follows that 0_n ∈ ⋂_{i∈I} C_i = R_+ (⋂_{i∈I} C_i) and 0_n ∈ ⋃_{i∈I} C_i = R_+ (⋃_{i∈I} C_i), hence the sets ⋂_{i∈I} C_i and ⋃_{i∈I} C_i are cones.

(b) Since the intersection of any family of convex (closed) subsets of R^n is convex (closed), this assertion follows from (a).

(c) Let j ∈ I be so that C_j ∩ (−C_j) = {0_n}. Since 0_n ∈ C_i for every i ∈ I, we get that {0_n} ⊆ (⋂_{i∈I} C_i) ∩ (−⋂_{i∈I} C_i) = ⋂_{i∈I} (C_i ∩ (−C_i)) ⊆ C_j ∩ (−C_j) = {0_n}, hence (⋂_{i∈I} C_i) ∩ (−⋂_{i∈I} C_i) = {0_n}. By (a), the set ⋂_{i∈I} C_i is a pointed cone. ∎
Theorem 6.1 (Characterization of convex cones) For any cone C in R^n the following assertions are equivalent:

1° C is convex.

2° C + C ⊆ C.

3° ∑_{i=1}^m C (:= { ∑_{i=1}^m x^i | x^1, ..., x^m ∈ C }) = C for every m ∈ N*.

Proof: 1° ⇒ 2°. If C is convex, then (1/2)C + (1/2)C ⊆ C. Since C is a cone, we get that C + C = 2((1/2)C + (1/2)C) ⊆ 2C ⊆ R_+ C ⊆ C.

2° ⇒ 3°. Under the hypothesis 2°, we are going to prove by induction that the proposition

    P(m) : ∑_{i=1}^m C = C

holds for every m ∈ N*. It is obvious that P(1) is true. Assume now that P(h) holds for a natural number h ∈ N*. Then ∑_{i=1}^h C = C, hence

    ∑_{i=1}^{h+1} C = ∑_{i=1}^h C + C = C + C ⊆ C ⊆ ∑_{i=1}^{h+1} C,

by the hypothesis assumed in 2° and by the fact that every cone contains the null vector. It follows that ∑_{i=1}^{h+1} C = C, thus proposition P(h + 1) holds. We conclude that P(m) is true for every m ∈ N*, thus 3° is proved.

3° ⇒ 1°. Let x, y ∈ C and t ∈ [0, 1]. Since the set C is a cone, it follows that {(1 − t)x, ty} ⊆ R_+ C ⊆ C, hence (1 − t)x + ty ∈ C + C. Applying 3° for m = 2, we obtain that (1 − t)x + ty ∈ C. Thus C is convex. ∎
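Both sides of the equivalence 1° ⇔ 2° can be sampled numerically for a concrete convex cone, here C = R_+^2 restricted to a grid. The grid and sample parameters are illustrative.

```python
# Sampled check of Theorem 6.1 (1° <=> 2°) for the convex cone C = R_+^2.

in_C = lambda p: p[0] >= 0 and p[1] >= 0      # membership in C = R_+^2

pts = [(x, y) for x in range(0, 4) for y in range(0, 4)]

# 2°: C + C is contained in C, on the sample
assert all(in_C((p[0] + q[0], p[1] + q[1])) for p in pts for q in pts)

# 1°: convexity on the sample, (1 - t) p + t q stays in C
for t in (0.0, 0.25, 0.5, 1.0):
    assert all(in_C(((1 - t) * p[0] + t * q[0], (1 - t) * p[1] + t * q[1]))
               for p in pts for q in pts)
```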

Definition 6.2 The conic hull of an arbitrary subset M of R^n is defined as the set

    con M := ⋂ {C ⊆ R^n | C is a convex cone and M ⊆ C}.

By Proposition 6.1, con M is a convex cone, being the intersection of a nonempty family of convex cones (note that R^n is a convex cone which contains M).

Definition 6.3 A point x ∈ R^n is said to be a conic combination of elements of M ⊆ R^n if there exist m ∈ N*, x^1, ..., x^m ∈ M, and t_1, ..., t_m ∈ R_+ such that x = t_1 x^1 + ... + t_m x^m.

Theorem 6.2 (Characterization of the conic hull of a set) The conic hull of any nonempty set M ⊆ R^n admits the following representations:

    con M = { ∑_{i=1}^m t_i x^i | m ∈ N*, x^1, ..., x^m ∈ M, t_1, ..., t_m ∈ R_+ }
          = { ∑_{i=1}^n t_i x^i | x^1, ..., x^n ∈ M, t_1, ..., t_n ∈ R_+ }
          = R_+ conv M
          = conv(R_+ M).

Corollary 6.1 If C ⊆ R^n is a cone, then

    conv C = ∑_{i=1}^n C.

Proof: Since C is a cone, the equality C = R_+ C holds. Thus, on the one hand, conv C = conv(R_+ C) and, on the other hand,

    ∑_{i=1}^n C = { ∑_{i=1}^n c^i | c^1, ..., c^n ∈ C }
                = { ∑_{i=1}^n t_i x^i | x^1, ..., x^n ∈ C, t_1, ..., t_n ∈ R_+ }.

The relation

    { ∑_{i=1}^n t_i x^i | x^1, ..., x^n ∈ C, t_1, ..., t_n ∈ R_+ } = conv(R_+ C),

which follows from Theorem 6.2, finally yields the asserted equality conv C = ∑_{i=1}^n C. ∎
The following result concerns an important notion of convex analysis, namely the recession cone of a set. As we shall see further, it plays a key role in characterizing both the unboundedness of closed convex sets and the unboundedness of the objective functions in linear optimization.

Theorem 6.3 Let M be a nonempty subset of R^n.

(a) For each point x ∈ M the set

    M(x) := {d ∈ R^n | x + R_+ d ⊆ M}

is a cone.

(b) The set

    rec M := ⋃_{x∈M} M(x)

is a cone (the so-called recession cone of M).

(c) If M is closed and convex, then so is rec M, and

    rec M = {d ∈ R^n | M + R_+ d ⊆ M} = M(x), ∀x ∈ M.

Proof: (a) Let x ∈ M. Clearly 0_n ∈ M(x), and for every d ∈ M(x) and t ∈ R_+ we have that x + R_+ (td) ⊆ x + R_+ d ⊆ M, so td ∈ M(x). Thus M(x) is nonempty and R_+ M(x) ⊆ M(x), i.e., M(x) is a cone.

(b) This assertion follows from (a) and from Proposition 6.1(a).

(c) Without proof!

Exercise 6.1 Give an example of a cone C in R^2 for each of the following cases:

(1) C is convex, but C ≠ rec C.

(2) C = rec C, but C is neither convex nor closed.

Solution: (1) Consider the following convex cone:

    C := {0_2} ∪ (R*_+)^2.    (7)

Obviously rec C = (R_+)^2 ⊈ C.

(2) Consider the following cone:

    C := R_+ {(1/n, 1 − 1/n) | n ∈ N*}.

It is easily seen that rec C = C is not convex. Moreover, C is not closed, since {0} × R*_+ ⊆ (cl C) ∖ C.

Theorem 6.4 (A characterization of unbounded closed convex sets) The following assertions are equivalent for a nonempty closed convex subset M ⊆ R^n:

1° M is unbounded.

2° rec M ≠ {0_n}.
Proposition 6.2 Let A be a subset of R^n. Then the set

    A^+ := {x ∈ R^n | ⟨x, a⟩ ≥ 0, ∀a ∈ A}

is a closed convex cone (the so-called polar cone of A).

Proof: If A = ∅, then obviously A^+ = R^n is a closed convex cone. It is straightforward to verify that for every a ∈ R^n the set

    {a}^+ := {x ∈ R^n | ⟨x, a⟩ ≥ 0} = H^≥(a, 0) if a ≠ 0_n, and {a}^+ = R^n if a = 0_n,

is a cone (indeed, ⟨0_n, a⟩ ≥ 0, and ⟨tx, a⟩ = t⟨x, a⟩ ≥ 0 for every x ∈ {a}^+ and every t ∈ R_+, hence 0_n ∈ {a}^+ and R_+ {a}^+ ⊆ {a}^+). Furthermore, {a}^+ is convex and closed. Since

    A^+ = ⋂_{a∈A} {a}^+,

Proposition 6.1 implies that A^+ is a closed convex cone. ∎
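For a finite set A the polar cone is a finite intersection of halfplanes and can be sampled on a grid. The set A and the grid below are illustrative; for these two generators the polar is the first quadrant.

```python
# Sampling the polar cone A+ = {x : <x, a> >= 0 for all a in A} of a finite A in R^2.

A = [(1, 0), (0, 1)]

dot = lambda x, a: x[0] * a[0] + x[1] * a[1]
in_polar = lambda x: all(dot(x, a) >= 0 for a in A)

grid = [(i, j) for i in range(-2, 3) for j in range(-2, 3)]
polar_pts = [x for x in grid if in_polar(x)]

# here A+ restricted to the grid is the first quadrant ...
assert polar_pts == [(i, j) for i in range(0, 3) for j in range(0, 3)]
# ... and it is closed under R_+ scaling, as a cone must be
assert all(in_polar((2 * x[0], 2 * x[1])) for x in polar_pts)
```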


Proposition 6.3 Consider the set of all vectors in R^n whose first nonzero coordinate (if any) is positive, i.e.,

    C^lex := {0_n} ∪ {x = (x_1, ..., x_n) ∈ R^n | ∃i ∈ {1, ..., n} : x_i > 0 and ∄j ∈ {1, ..., n}, j < i : x_j ≠ 0}.

The set C^lex is a pointed convex cone (the so-called lexicographic cone).

Proof: Consider the function ι : R^n ∖ {0_n} → {1, ..., n}, defined for all x = (x_1, ..., x_n) ∈ R^n ∖ {0_n} by

    ι(x) := min{i ∈ {1, ..., n} | x_i ≠ 0}.

Observe that

    C^lex = {0_n} ∪ {x = (x_1, ..., x_n) ∈ R^n ∖ {0_n} | x_{ι(x)} > 0}.

It is easily seen that, for every x ∈ C^lex and every t ≥ 0, we have tx ∈ C^lex. Thus C^lex is a cone.

Suppose to the contrary that the cone C^lex is not pointed. Then we can choose a point x = (x_1, ..., x_n) ∈ (C^lex ∩ (−C^lex)) ∖ {0_n}. It follows that

    {x, −x} ⊆ {v = (v_1, ..., v_n) ∈ R^n ∖ {0_n} | v_{ι(v)} > 0},

hence x_{ι(x)} > 0 and (−x)_{ι(−x)} > 0. Since ι(−x) = ι(x), we infer that 0 < x_{ι(x)} = −((−x)_{ι(−x)}) < 0, a contradiction. Thus C^lex is a pointed cone.

According to Theorem 6.1, in order to prove the convexity of C^lex it suffices to show that C^lex + C^lex ⊆ C^lex. To this end, consider two arbitrary points x = (x_1, ..., x_n), y = (y_1, ..., y_n) ∈ C^lex. If x = 0_n or y = 0_n, then we have x + y ∈ {x, y} ⊆ C^lex. Otherwise, if x ≠ 0_n ≠ y, then we have x_{ι(x)} > 0 and y_{ι(y)} > 0, hence x + y ≠ 0_n and ι(x + y) = min{ι(x), ι(y)}. Without loss of generality we can assume that ι(x) ≤ ι(y). Then we have x_{ι(x)} > 0 and y_{ι(x)} ≥ 0, hence (x + y)_{ι(x+y)} = x_{ι(x)} + y_{ι(x)} > 0. In both cases we infer x + y ∈ C^lex. Thus we have C^lex + C^lex ⊆ C^lex. ∎
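The membership rule for C^lex, "the first nonzero coordinate is positive", is directly executable; the short Python sketch below mirrors Proposition 6.3, with purely illustrative sample vectors.

```python
# Membership test for the lexicographic cone C_lex in R^n.

def in_C_lex(x):
    for xi in x:
        if xi != 0:
            return xi > 0        # sign of the first nonzero coordinate decides
    return True                  # the zero vector belongs to C_lex

assert in_C_lex((0, 0, 3, -5))
assert not in_C_lex((0, -1, 7))

# pointedness on a sample: x and -x cannot both be in C_lex unless x = 0
x = (0, 2, -1)
assert in_C_lex(x) and not in_C_lex(tuple(-c for c in x))

# closedness under addition, the key step toward convexity via Theorem 6.1
y = (1, -9, 0)
s = tuple(a + b for a, b in zip(x, y))   # = (1, -7, -1)
assert in_C_lex(s)
```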

Linear programming

See the course notes!

