
JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS: Vol. 100, No. 1, pp. 233-240, JANUARY 1999

TECHNICAL NOTE

On Subdifferentials of Set-Valued Maps


J. BAIER¹ AND J. JAHN²

Communicated by F. Giannessi

¹Graduate Student, Angewandte Mathematik II, Universität Erlangen-Nürnberg, Erlangen, Germany.
²Associate Professor, Angewandte Mathematik II, Universität Erlangen-Nürnberg, Erlangen, Germany.

Abstract. Using the concept of contingent epiderivative, we generalize the notion of subdifferential to a cone-convex set-valued map. Properties of the subdifferential are presented and an optimality condition is discussed.

Key Words. Convex analysis, set-valued analysis, subdifferentials, vector optimization.

1. Introduction

It is well known from convex analysis that a subgradient of a convex functional f: X → ℝ [defined on a real normed space (X, ‖·‖_X)] at some x̄ ∈ X is a continuous linear functional l on X with

$$ l(x - \bar{x}) \le f(x) - f(\bar{x}), \quad \text{for all } x \in X. $$
The set of all subgradients is commonly called the subdifferential. Using the notion of the directional derivative f'(x̄)(·), this subdifferential can also be characterized by the inequality

$$ l(h) \le f'(\bar{x})(h), \quad \text{for all } h \in X; \tag{1} $$
e.g., see Lemma 3.25 in Ref. 1. This inequality is the key for a generalization to cone-convex set-valued maps. Instead of the directional derivative, we use the concept of contingent epiderivative presented in Ref. 2, and the inequality sign has to be understood as a partial ordering. This is the actual approach of this short paper.
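As a quick illustration of inequality (1) (our worked example, not part of the original note), take f(x) = |x| on X = ℝ with x̄ = 0. Then

$$ f'(0)(h) = \lim_{t \downarrow 0} \frac{|th| - |0|}{t} = |h|, \qquad \partial f(0) = \{\, s \in \mathbb{R} : sh \le |h| \ \text{for all } h \in \mathbb{R} \,\} = [-1, 1]. $$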
Although there are many papers generalizing the concept of subdifferential to the vector-valued (single-valued) case, there are only a few papers investigating subdifferentials of set-valued maps. For instance, Yang (Ref. 3) introduced a weak subgradient of a convex relation, and Chen and Jahn (Ref. 4) considered a weak subgradient for a general set-valued map. Here, we present an approach different from that in Ref. 4.
For the investigation of set-valued maps, we use the following standard assumption:

(A1) Let (X, ‖·‖_X) and (Y, ‖·‖_Y) be real normed spaces, let (Y, ‖·‖_Y) be partially ordered by a convex cone C ⊂ Y, let S be a nonempty subset of X, let F: S → 2^Y be a set-valued map, and let x̄ ∈ S and ȳ ∈ F(x̄) be given elements.

It is well known that the convex cone C induces a partial ordering ≤_C (i.e., a reflexive and transitive binary relation compatible with addition and scalar multiplication) in the space Y; for instance, see Ref. 5.
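For instance (our added illustration), with Y = ℝ² and the ordering cone C = ℝ²₊, the induced ordering ≤_C is the componentwise one:

$$ y^1 \le_C y^2 \iff y^2 - y^1 \in C, \qquad \text{e.g.,} \quad (1, 0) \le_C (2, 3), \quad (1, 0) \not\le_C (2, -1). $$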
First, we recall the concept of contingent epiderivative given in Ref. 2,
which is based on the definition of contingent derivative introduced by Aubin
(Ref. 6).

Definition 1.1. Let Assumption (A1) be satisfied. Then:

(a) The set

$$ \operatorname{epi}(F) := \{ (x, y) \in X \times Y : x \in S, \ y \in F(x) + C \} $$

is called the epigraph of F.

(b) A single-valued map DF(x̄, ȳ): X → Y whose epigraph equals the contingent cone to the epigraph of F at (x̄, ȳ), i.e.,

$$ \operatorname{epi}(DF(\bar{x}, \bar{y})) = T(\operatorname{epi}(F), (\bar{x}, \bar{y})), $$

is called the contingent epiderivative of F at (x̄, ȳ).

Recall that the contingent cone T(epi(F), (x̄, ȳ)) consists of all tangent vectors

$$ (x, y) = \lim_{n \to \infty} \lambda_n \bigl( (x_n, y_n) - (\bar{x}, \bar{y}) \bigr), $$

with

$$ (x_n, y_n) \in \operatorname{epi}(F) \quad \text{and} \quad \lim_{n \to \infty} (x_n, y_n) = (\bar{x}, \bar{y}), $$

and λ_n > 0, n ∈ ℕ. Properties of the contingent epiderivative can be found in Ref. 2.
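To make Definition 1.1 concrete, the following Python sketch is our addition; it uses the hypothetical example map F(x) = {y ∈ ℝ : y ≥ x²} with C = ℝ₊ (so epi(F) = {(x, y) : y ≥ x²}) and approximates DF(0, 0) numerically by scaling points of the epigraph, as in the definition of the contingent cone.

def in_epigraph(x, y):
    """Membership test for epi(F) = {(x, y) : y >= x**2}."""
    return y >= x**2

def approx_epiderivative(u, x_bar=0.0, y_bar=0.0):
    """Approximate DF(x_bar, y_bar)(u): tangent vectors of epi(F) at (0, 0) are
    limits of lambda_n * ((x_n, y_n) - (x_bar, y_bar)) with (x_n, y_n) in epi(F)."""
    value = None
    for t in [10.0 ** (-k) for k in range(1, 8)]:
        x_n, y_n = x_bar + t * u, (x_bar + t * u) ** 2  # boundary point of epi(F)
        assert in_epigraph(x_n, y_n)
        lam = 1.0 / t                                   # lambda_n -> infinity as t -> 0+
        value = lam * (y_n - y_bar)                     # y-component of the scaled difference
    return value

for u in [-2.0, -1.0, 0.5, 3.0]:
    print(f"DF(0,0)({u:+.1f}) ~ {approx_epiderivative(u):.7f}")

Every printed value is close to 0, consistent with DF(0, 0) being the zero map for this example.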
Since convexity plays an important role in the following investigations,
recall the definition of cone-convex maps.

Definition 1.2. Let Assumption (A1) be satisfied, and in addition let S be a convex set. F: S → 2^Y is called C-convex if, for all x₁, x₂ ∈ S and λ ∈ [0, 1],

$$ \lambda F(x_1) + (1 - \lambda) F(x_2) \subset F(\lambda x_1 + (1 - \lambda) x_2) + C. $$
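For the hypothetical map F(x) = x² + ℝ₊ from the sketch above, C-convexity with C = ℝ₊ reduces to the convexity of x ↦ x² (our verification):

$$ \lambda F(x_1) + (1-\lambda) F(x_2) = \lambda x_1^2 + (1-\lambda) x_2^2 + \mathbb{R}_+ \subset (\lambda x_1 + (1-\lambda) x_2)^2 + \mathbb{R}_+ = F(\lambda x_1 + (1-\lambda) x_2) + C, $$

since λx₁² + (1−λ)x₂² ≥ (λx₁ + (1−λ)x₂)² for all λ ∈ [0, 1].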

On the basis of the concept of contingent epiderivatives, we introduce in Section 2 a subdifferential for cone-convex set-valued maps and discuss the properties of this concept in Section 3. In Section 4, we present a generalization of a well-known optimality condition to a convex set-valued optimization problem. The results are based on the presentation in Ref. 7.

2. Concept of Subdifferential

In this section, we generalize the concept of subdifferential of a convex functional to the case of a cone-convex set-valued map. For these investigations, we extend Assumption (A1) as follows:

(A2) Let the set S be convex, let the set-valued map F: S → 2^Y be C-convex, and let the contingent epiderivative DF(x̄, ȳ) of F at (x̄, ȳ) exist.

Definition 2.1. Let Assumptions (A1) and (A2) be satisfied. Then:

(a) A linear map L: X → Y with

$$ L(x) \le_C DF(\bar{x}, \bar{y})(x), \quad \text{for all } x \in X, \tag{2} $$

is called a subgradient of F at (x̄, ȳ); see Fig. 1.

(b) The set

$$ \partial F(\bar{x}, \bar{y}) := \{ L : X \to Y \mid L \ \text{linear}, \ L(x) \le_C DF(\bar{x}, \bar{y})(x) \ \text{for all } x \in X \} $$

of all subgradients L of F at (x̄, ȳ) is called the subdifferential of F at (x̄, ȳ).

The inequality (2) is a natural extension of the inequality (1) to the set-valued case. Here, the directional derivative is replaced by the contingent epiderivative, and the usual ≤ ordering is replaced by the partial ordering ≤_C induced by the convex cone C.

Fig. 1. Subgradients of F at (x̄, ȳ).
Obviously, the subdifferential is not defined if the contingent epiderivative does not exist. Conditions ensuring the existence of the contingent epiderivative can be found in Theorem 1 in Ref. 2 and Theorem 3 in Ref. 4. Notice also that the assumption of cone-convexity of F is actually not needed in Definition 2.1.
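Continuing the hypothetical parabola example (our addition), inequality (2) can be sampled numerically: with DF(0, 0) the zero map and C = ℝ₊, a linear map L(x) = sx is a subgradient at (0, 0) exactly when s = 0.

def DF(u):
    """Contingent epiderivative of F(x) = x**2 + R_+ at (0, 0): the zero map."""
    return 0.0

def is_subgradient(s):
    """Sample inequality (2): L(u) = s*u <=_C DF(0,0)(u) with C = R_+,
    i.e., DF(0,0)(u) - s*u >= 0 for all sampled directions u."""
    grid = [k / 10.0 for k in range(-50, 51)]
    return all(DF(u) - s * u >= 0.0 for u in grid)

print(is_subgradient(0.0))   # True: the null map satisfies (2)
print(is_subgradient(0.3))   # False: (2) fails for directions u > 0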

3. Properties of the Subdifferential

In this section, we present basic properties of subdifferentials known from the convex single-valued case. First, we state under which assumptions the subdifferential is nonempty.

Theorem 3.1. Let Assumptions (A1) and (A2) be satisfied, and in addition let S = X, let C be pointed [i.e., C ∩ (−C) = {0_Y}], and let Y be order complete. Then, the subdifferential ∂F(x̄, ȳ) is nonempty.

Proof. By Theorem 4 in Ref. 2, the contingent epiderivative DF(x̄, ȳ) is sublinear. Then, by the Hahn-Banach theorem generalized by Zowe (Ref. 8), there is a linear map L: X → Y with

$$ L(x) \le_C DF(\bar{x}, \bar{y})(x), \quad \text{for all } x \in X. $$

Hence, the subdifferential ∂F(x̄, ȳ) is nonempty. □

Next, we show the convexity of the subdifferential.



Theorem 3.2. Let Assumptions (A1) and (A2) be satisfied. Then, the
subdifferential is convex.

Proof. For an empty subdifferential, the assertion is trivial. Take two arbitrary subgradients L₁, L₂ ∈ ∂F(x̄, ȳ) and an arbitrary λ ∈ [0, 1]. Then, we obtain for all x ∈ X

$$ (\lambda L_1 + (1-\lambda) L_2)(x) = \lambda L_1(x) + (1-\lambda) L_2(x) \le_C \lambda\, DF(\bar{x}, \bar{y})(x) + (1-\lambda)\, DF(\bar{x}, \bar{y})(x) = DF(\bar{x}, \bar{y})(x). $$

Hence,

$$ \lambda L_1 + (1-\lambda) L_2 \in \partial F(\bar{x}, \bar{y}). \qquad \square $$
The next result shows that the subdifferential is closed under appropriate assumptions.

Theorem 3.3. Let Assumptions (A1) and (A2) be satisfied, and in addition let C be closed. If all subgradients are bounded, then the subdifferential is closed in the linear space of all bounded linear maps.

Proof. Choose an arbitrary sequence (L_n)_{n∈ℕ} of subgradients converging to some bounded linear map L. Next, fix an arbitrary x ∈ X. Then, we obtain

$$ \| L_n(x) - L(x) \|_Y \le ||| L_n - L ||| \, \| x \|_X, \quad \text{for all } n \in \mathbb{N}, \tag{3} $$

where ||| · ||| denotes the operator norm. Since

$$ \lim_{n \to \infty} ||| L_n - L ||| = 0, $$

the inequality (3) implies

$$ \lim_{n \to \infty} L_n(x) = L(x). \tag{4} $$

By the definition of the subgradients L_n, we have

$$ L_n(x) \le_C DF(\bar{x}, \bar{y})(x), \quad \text{for all } n \in \mathbb{N}, $$

or

$$ DF(\bar{x}, \bar{y})(x) - L_n(x) \in C, \quad \text{for all } n \in \mathbb{N}, $$

and with (4) and the assumption that C is closed, we conclude that

$$ DF(\bar{x}, \bar{y})(x) - L(x) \in C, \quad \text{i.e.,} \quad L(x) \le_C DF(\bar{x}, \bar{y})(x). $$

Hence, L is a subgradient, and therefore the subdifferential is closed. □



Notice that, for X = ℝⁿ and Y = ℝᵐ, linear maps are bounded, and in this special case the subdifferential is closed whenever C is closed.

The following result presents a condition under which the subdifferential is a singleton.

Theorem 3.4. Let Assumptions (A1) and (A2) be satisfied, and in addition let C be pointed. If the contingent epiderivative DF(x̄, ȳ) of F at (x̄, ȳ) is linear, then ∂F(x̄, ȳ) = {DF(x̄, ȳ)}.

Proof. Since DF(x̄, ȳ) is linear, DF(x̄, ȳ) is a subgradient. Assume that there is another subgradient L ≠ DF(x̄, ȳ). Then, we obtain

$$ L(-x) \le_C DF(\bar{x}, \bar{y})(-x), \quad \text{for all } x \in X, $$

or

$$ -L(x) \le_C -DF(\bar{x}, \bar{y})(x), \quad \text{for all } x \in X. $$

This inequality implies, by addition of L(x) + DF(x̄, ȳ)(x),

$$ DF(\bar{x}, \bar{y})(x) \le_C L(x), \quad \text{for all } x \in X. $$

Since C is pointed, we get with (2)

$$ L(x) = DF(\bar{x}, \bar{y})(x), \quad \text{for all } x \in X, $$

a contradiction to our assumption. Hence, we conclude that

$$ \partial F(\bar{x}, \bar{y}) = \{ DF(\bar{x}, \bar{y}) \}. \qquad \square $$
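In the hypothetical parabola example, DF(0, 0) is the zero map, hence linear, and C = ℝ₊ is pointed; Theorem 3.4 thus confirms what the numerical test in Section 2 suggested (our illustration):

$$ \partial F(0, 0) = \{ DF(0, 0) \} = \{ 0 \}. $$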

Finally, we discuss the relationship of the presented definition of subdifferential to the standard definition used in convex analysis.

Theorem 3.5. Let Assumptions (A1) and (A2) be satisfied. Then, every subgradient L of F at (x̄, ȳ) fulfills the inequality

$$ L(x - \bar{x}) \le_C y - \bar{y}, \quad \text{for all } x \in S \text{ and all } y \in F(x). $$

Proof. By Lemma 3 in Ref. 2, we obtain

$$ DF(\bar{x}, \bar{y})(x - \bar{x}) \le_C y - \bar{y}, \quad \text{for all } x \in S \text{ and all } y \in F(x), $$

and with Inequality (2) we conclude that

$$ L(x - \bar{x}) \le_C DF(\bar{x}, \bar{y})(x - \bar{x}) \le_C y - \bar{y}, \quad \text{for all } x \in S \text{ and all } y \in F(x). \qquad \square $$


4. Optimality Conditions

Under Assumption (A1), we now investigate the set-valued optimization problem

$$ \min_{x \in S} F(x). \tag{5} $$

In this context, minimization means that we determine strong minimizers.

Definition 4.1. Let Assumption (A1) be satisfied, and let the problem (5) be given. Let F(S) := ⋃_{x∈S} F(x) denote the image set of F. The pair (x̄, ȳ) is called a strong minimizer of the set-valued optimization problem (5) if ȳ is a strongly minimal element of the set F(S), i.e.,

$$ \bar{y} \le_C y, \quad \text{for all } y \in F(S). $$

For strong minimizers, an optimality condition based on the subdifferential can be given. This result extends the well-known result of convex analysis that a point is a minimal point of a convex functional if and only if the null functional is a subgradient (e.g., see Theorem 3.27 in Ref. 1).

Theorem 4.1. Let Assumptions (A1) and (A2) be satisfied. Then:

(a) If the null map is a subgradient of F at (x̄, ȳ), then the pair (x̄, ȳ) is a strong minimizer of the set-valued optimization problem (5).

(b) In addition, let S equal X and let C be closed. If the pair (x̄, ȳ) is a strong minimizer of the set-valued optimization problem (5), then the null map is a subgradient of F at (x̄, ȳ).

Proof. (a) By Theorem 3.5, we conclude that

$$ 0_Y \le_C y - \bar{y}, \quad \text{for all } x \in S \text{ and all } y \in F(x), $$

or

$$ \bar{y} \le_C y, \quad \text{for all } y \in F(S), $$

i.e., (x̄, ȳ) is a strong minimizer of the set-valued optimization problem (5).

(b) By Theorem 9 in Ref. 2, we obtain for the strong minimizer (x̄, ȳ)

$$ DF(\bar{x}, \bar{y})(x) \in C, \quad \text{for all } x \in X, $$

or

$$ 0_Y \le_C DF(\bar{x}, \bar{y})(x), \quad \text{for all } x \in X. $$

Hence, the null map is a subgradient of F at (x̄, ȳ). □

The preceding theorem immediately implies the following corollary.



Corollary 4.1. Let Assumptions (A1) and (A2) be satisfied, and in addition let S equal X and let C be closed. The pair (x̄, ȳ) is a strong minimizer of the set-valued optimization problem (5) if and only if the null map is a subgradient of F at (x̄, ȳ).
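As a final illustration (our addition, again for the hypothetical map F(x) = x² + ℝ₊ with C = ℝ₊), Corollary 4.1 can be checked by brute force on a grid: the null map is a subgradient at (0, 0), and (0, 0) is indeed a strong minimizer.

def F_min(x):
    """Minimal element of F(x) = x**2 + R_+; every y in F(x) satisfies y >= x**2."""
    return x ** 2

def DF(u):
    """Contingent epiderivative of F at (x_bar, y_bar) = (0, 0): the zero map."""
    return 0.0

x_bar, y_bar = 0.0, 0.0
grid = [k / 100.0 for k in range(-300, 301)]

# Strong minimizer test: y_bar <=_C y for all y in F(S); with C = R_+ it
# suffices to compare y_bar against the minimal elements F_min(x).
is_strong_min = all(y_bar <= F_min(x) for x in grid)

# Null-map subgradient test from inequality (2): 0 = 0(u) <=_C DF(0,0)(u).
is_null_subgradient = all(0.0 <= DF(u) for u in grid)

print(is_strong_min, is_null_subgradient)   # True True, as Corollary 4.1 predicts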

5. Conclusions

This short paper shows that it makes sense to introduce subdifferentials in a set-valued setting because standard results known from convex analysis can be generalized easily. The concept of so-called weak subgradients of set-valued maps (investigated in Ref. 4) seems to be suitable for the investigation of weak minimizers of a set-valued optimization problem, whereas the concept presented in this paper is appropriate for the characterization of strong minimizers. Both approaches have advantages, but further investigations are necessary.

References

1. JAHN, J., Introduction to the Theory of Nonlinear Optimization, Springer, Berlin, Germany, 1996.
2. JAHN, J., and RAUH, R., Contingent Epiderivatives and Set-Valued Optimization, Mathematical Methods of Operations Research, Vol. 46, pp. 193-211, 1997.
3. YANG, Q. X., A Hahn-Banach Theorem in Ordered Linear Spaces and Its Applications, Optimization, Vol. 25, pp. 1-9, 1992.
4. CHEN, G. Y., and JAHN, J., Optimality Conditions for Set-Valued Optimization Problems, Mathematical Methods of Operations Research, Vol. 48, 1998.
5. JAHN, J., Mathematical Vector Optimization in Partially Ordered Linear Spaces, Peter Lang, Frankfurt, Germany, 1986.
6. AUBIN, J. P., Contingent Derivatives of Set-Valued Maps and Existence of Solutions to Nonlinear Inclusions and Differential Inclusions, Mathematical Analysis and Applications, Part A, Edited by L. Nachbin, Academic Press, New York, New York, pp. 160-229, 1981.
7. BAIER, J., Subdifferentiale für mengenwertige Abbildungen, Diploma Thesis, University of Erlangen-Nürnberg, 1997.
8. ZOWE, J., Konvexe Funktionen und konvexe Dualitätstheorie in geordneten Vektorräumen, Habilitation Thesis, University of Würzburg, 1976.
