
This article was downloaded by: [27.72.189.100]
On: 28 March 2015, At: 07:31
Publisher: Taylor & Francis
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered
office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Optimization: A Journal of
Mathematical Programming and
Operations Research
Publication details, including instructions for authors and
subscription information:
http://www.tandfonline.com/loi/gopt20

On the proximal point method for equilibrium problems in Hilbert spaces
Alfredo N. Iusem (a) & Wilfredo Sosa (b)
(a) Instituto de Matemática Pura e Aplicada, Estrada Dona Castorina 110, Jardim Botânico, CEP 22460-320 RJ, Rio de Janeiro, Brazil
(b) Instituto de Matemática y Ciencias Afines, Universidad Nacional de Ingeniería, Calle de los Biólogos 245, Lima 12, Lima, Perú
Published online: 15 Oct 2010.

To cite this article: Alfredo N. Iusem & Wilfredo Sosa (2010) On the proximal point method for equilibrium problems in Hilbert spaces, Optimization: A Journal of Mathematical Programming and Operations Research, 59:8, 1259-1274, DOI: 10.1080/02331931003603133
To link to this article: http://dx.doi.org/10.1080/02331931003603133



Taylor & Francis makes every effort to ensure the accuracy of all the information (the
Content) contained in the publications on our platform. However, Taylor & Francis,
our agents, and our licensors make no representations or warranties whatsoever as to
the accuracy, completeness, or suitability for any purpose of the Content. Any opinions
and views expressed in this publication are the opinions and views of the authors,
and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content
should not be relied upon and should be independently verified with primary sources
of information. Taylor and Francis shall not be liable for any losses, actions, claims,
proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or
howsoever caused arising directly or indirectly in connection with, in relation to or arising
out of the use of the Content.
This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions

Optimization
Vol. 59, No. 8, November 2010, 1259–1274

On the proximal point method for equilibrium problems in Hilbert spaces
Alfredo N. Iusem (a) and Wilfredo Sosa (b)*
(a) Instituto de Matemática Pura e Aplicada, Estrada Dona Castorina 110, Jardim Botânico, CEP 22460-320 RJ, Rio de Janeiro, Brazil; (b) Instituto de Matemática y Ciencias Afines, Universidad Nacional de Ingeniería, Calle de los Biólogos 245, Lima 12, Lima, Perú

(Received 20 February 2008; final version received 1 October 2009)


We analyse a proximal point method for equilibrium problems in Hilbert
spaces, improving upon previously known convergence results. We prove
global weak convergence of the generated sequence to a solution of the
problem, assuming existence of solutions and rather mild monotonicity
properties of the bifunction which defines the equilibrium problem, and we
establish existence of solutions of the proximal subproblems. We also
present a new reformulation of equilibrium problems as variational inequality ones.
Keywords: equilibrium problems; convex feasibility problems; variational
inequalities; convex optimization
AMS Subject Classifications: 90C47; 49J35

1. Introduction
Let H be a Hilbert space. Take a closed and convex set $K \subseteq H$ and $f : K \times K \to \mathbb{R}$ such that
P1: $f(x, x) = 0$ for all $x \in K$,
P2: $f(\cdot, y) : K \to \mathbb{R}$ is upper semicontinuous for all $y \in K$,
P3: $f(x, \cdot) : K \to \mathbb{R}$ is convex and lower semicontinuous for all $x \in K$.
The equilibrium problem EP(f, K) consists of finding $x^* \in K$ such that $f(x^*, y) \ge 0$ for all $y \in K$. The set of solutions of EP(f, K) will be denoted as S(f, K). A problem related to EP(f, K) is that of finding $y^* \in K$ such that $f(x, y^*) \le 0$ for all $x \in K$ (this problem is called the dual of EP(f, K) in [11]). The solution set of this dual problem will be denoted as $S^d(f, K)$.
The equilibrium problem encompasses, among its particular cases, convex
optimization problems, variational inequalities (monotone or otherwise), Nash
equilibrium problems and other problems of interest in many applications.
The prototypical example of an equilibrium problem is a variational inequality problem. Since it plays an important role in the sequel, we describe it now in more detail.

*Corresponding author. Email: sosa@uni.edu.pe
ISSN 0233-1934 print/ISSN 1029-4945 online
© 2010 Taylor & Francis
DOI: 10.1080/02331931003603133
http://www.informaworld.com

Consider a continuous $T : H \to H$, and define $f(x, y) = \langle T(x), y - x\rangle$. Then f satisfies P1–P3, and EP(f, K) is equivalent to the variational inequality problem VIP(T, K), consisting of finding a point $x^* \in K$ such that $\langle T(x^*), x - x^*\rangle \ge 0$ for all $x \in K$. We can also consider the case of a point-to-set operator $T : H \to \mathcal{P}(H)$, if it is maximal monotone. In this case VIP(T, K) consists of finding $x^* \in K$ such that $\langle v, x - x^*\rangle \ge 0$ for some $v \in T(x^*)$ and all $x \in K$. In this situation, we define $f(x, y) = \sup_{u \in T(x)}\langle u, y - x\rangle$. Though it is less immediate, this f is well defined and still satisfies P1–P3. Finiteness of f follows from monotonicity of T, and upper semicontinuity of $f(\cdot, y)$ from maximality (via demiclosedness of the graph of maximal monotone operators).
EP has been extensively studied in recent years, with emphasis on existence
results. Recently, a new necessary (and in some cases also sufficient) condition for
existence of solutions was proposed in [9], and later on simplified and further
analysed in [6]. This condition plays a significant role in our analysis, and appears as
condition P5 in Section 2. Its proof is based upon an important theorem by Fan,
presented in [3].
The proximal point algorithm, whose origins can be traced back to [12] and [14],
attained its basic formulation in the work of Rockafellar [23] as a procedure for
finding zeroes of a maximal monotone operator $T : H \to \mathcal{P}(H)$. The algorithm generates a sequence $\{x^k\} \subseteq H$, starting from some $x^0 \in H$, and defines $x^{k+1}$ as the unique zero of the operator $T^k$ given by
$$T^k(x) = T(x) + \lambda_k(x - x^k), \qquad (1)$$
where $\{\lambda_k\}$ is a bounded sequence of positive real numbers, called regularization coefficients. It has been proved in [23] that for a maximal monotone T, the sequence $\{x^k\}$ is weakly convergent to a zero of T when T has zeroes, and is unbounded otherwise. Such weak convergence is global, i.e. the result just announced holds in fact for any $x^0 \in H$.
It is convenient to introduce a local notation, where the current iterate $x^k$ of the classical proximal point method is called e.g. $\bar x$, and the regularized operator $T^k$ of (1) is called $\bar T$, so that (1) becomes $\bar T(x) = T(x) + \lambda(x - \bar x)$ with $\lambda > 0$. In this framework, we define our regularization procedure for equilibrium problems. Fix $\lambda \in \mathbb{R}$ and $\bar x \in H$. To any f satisfying P1–P3, we will associate another bifunction $\tilde f : K \times K \to \mathbb{R}$, which will be called a regularization of f. It is defined as
$$\tilde f(x, y) = f(x, y) + \lambda\langle x - \bar x, y - x\rangle. \qquad (2)$$
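The algebraic identity behind this regularization, $\tilde f(x, y) + \tilde f(y, x) = f(x, y) + f(y, x) - \lambda\|x - y\|^2$, is what improves the monotonicity properties of f and is used repeatedly below. It is easy to confirm numerically; the following sketch is our illustration (not code from the paper), with an arbitrary bifunction on $\mathbb{R}$ and arbitrary choices of $\lambda$ and $\bar x$.

```python
import random

# Illustrative bifunction on R (any f works for this identity)
f = lambda x, y: x * (x - y)

def regularize(f, lam, x_bar):
    """Regularization (2): f~(x, y) = f(x, y) + lam * <x - x_bar, y - x>."""
    return lambda x, y: f(x, y) + lam * (x - x_bar) * (y - x)

lam, x_bar = 2.0, 0.7  # arbitrary illustrative values
ft = regularize(f, lam, x_bar)

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    lhs = ft(x, y) + ft(y, x)
    rhs = f(x, y) + f(y, x) - lam * (x - y) ** 2
    assert abs(lhs - rhs) < 1e-9
print("regularization identity holds on random samples")
```

The cross terms $\langle x - \bar x, y - x\rangle + \langle y - \bar x, x - y\rangle$ collapse to $-\|x - y\|^2$, which is why the choice of $\bar x$ does not affect the identity.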

Several approaches have already been considered for extending the proximal
point method to the realm of equilibrium problems. In most cases, at each iteration
some regularized problem is solved instead of EP( f, K). We mention first some
proposals where the regularized subproblems are rather different from our approach,
and then two more, which use the same regularization scheme as in (2).
The regularized problem in [19] (see also [18]) can be rewritten as
$$\bar f(x, y) = f(\bar x, y) + \lambda\langle x - \bar x, y - x\rangle. \qquad (3)$$
Clearly $\bar f \ne \tilde f$. Note that, at variance with $\tilde f$ as in (2), this $\bar f$ does not satisfy P1, and we do not get $\bar f = f$ for $\lambda = 0$.


In [4], the regularized function is of the form
$$\bar f(x, y) = f(x, y) + \lambda D(y, x), \qquad (4)$$
where D is a Bregman distance (see e.g. [7] for the definition of Bregman distance), and the regularized problem consists of minimizing $\bar f(\bar x, y)$ over $y \in K$. Again, the resulting regularization is quite different from ours.
In [1] the feasible set K is assumed to be defined by a finite number of convex
functions, with associated multipliers, and an Augmented Lagrangian method,
generating both a primal and a dual sequence, is proposed and analysed, while the
method considered here, for a rather general feasible set K, is of a purely primal
nature.
Both [16] and [11] consider the regularized bifunction $\tilde f$ given by (2) and develop
basically the same iterative procedure as we do here (see also [15] and [17]).
The difference between these two references and the results in this article lies in the
underlying space and in the obtained results, as we describe next.
Both these references work only in finite-dimensional spaces, while we deal with
Hilbert spaces. This introduces a nontrivial complication, because, as in the case of
the classical proximal point method, only weak convergence of the generated
sequence can be obtained, but then the convergence analysis requires some form
of weak continuity of f(, y), instead of P2. Weak continuity is a rather strong
hypothesis, and in order to avoid it we develop in Section 4 a reformulation of
EP(f, K) as a variational inequality problem where the operator is the subdifferential of f in its second argument, evaluated on the diagonal of $H \times H$, and the proximal point method for equilibrium problems turns out to be identical to the classical proximal point method applied to a variational inequality problem with this
operator. This is, to our knowledge, a new approach, and allows us to omit any weak
continuity assumption on f, after proving maximality of the associated monotone
operator. This technique works only for the case in which f is itself monotone, i.e. it
satisfies P4 in Section 2. We emphasize that this reformulation is also of interest in
the finite-dimensional case.
Finally, we remark that the main progress obtained over the results in [11] and [16] lies arguably in the fact that we establish an important property of the regularized problem. A basic result on the regularized operator $T^k$ of the classical method, defined in (1), is that it always has zeroes when T is maximal monotone (even when T itself lacks them). This is a consequence of the so-called Minty's theorem [13], and is a rather relevant fact for the analysis of the method, because it guarantees that the iterates indeed exist, and that the regularized subproblems are better conditioned than the original one, which is indeed the whole raison d'être of the method. Both in [16] and in [11] existence of the iterates is assumed as a hypothesis. We prove instead that under mild monotonicity-like assumptions on f, the problem EP($\tilde f$, K) always has solutions, as a consequence of the existence result for equilibrium problems introduced in [9,6]. This fact entails significant progress in the theory of the proximal point method for equilibrium problems.

2. Preliminary results
We will need in the sequel certain monotonicity properties of f. We consider the
following alternatives:
P4: $f(x, y) + f(y, x) \le 0$ for all $x, y \in K$.
P4γ: There exists $\gamma \ge 0$ such that $f(x, y) + f(y, x) \le \gamma\|x - y\|^2$ for all $x, y \in K$.
P4′: Whenever $f(x, y) \ge 0$ with $x, y \in K$, it holds that $f(y, x) \le 0$.
P4♯: For all $x_1, \ldots, x_n \in K$ and all $\alpha_1, \ldots, \alpha_n \ge 0$ such that $\sum_{i=1}^n \alpha_i = 1$, it holds that
$$\min_{1 \le i \le n} f\Bigl(x_i, \sum_{j=1}^n \alpha_j x_j\Bigr) \le 0.$$
P4″: For all $x_1, \ldots, x_n \in K$ and $\alpha_1, \ldots, \alpha_n \ge 0$ such that $\sum_{i=1}^n \alpha_i = 1$, it holds that
$$\sum_{i=1}^n \alpha_i f\Bigl(x_i, \sum_{j=1}^n \alpha_j x_j\Bigr) \le 0.$$
P4♯ and P4″ were introduced in [6] (where P4♯ appears under a different label).
If $f(x, y) = \sup_{u \in T(x)}\langle u, y - x\rangle$ for some $T : H \to \mathcal{P}(H)$, it is easy to check that P4 is equivalent to monotonicity of T. Thus, a function f satisfying P4 will be said to be monotone.
We remind that an operator $T : H \to \mathcal{P}(H)$ is said to be pseudomonotone when $\langle u, y - x\rangle \ge 0$ for some $x, y \in H$ and some $u \in T(x)$ implies that $\langle v, y - x\rangle \ge 0$ for all $v \in T(y)$. It is easy to check that if T is pseudomonotone and single-valued then f, defined as $f(x, y) = \sup_{u \in T(x)}\langle u, y - x\rangle$, satisfies P4′, and the converse statement holds also for the point-to-set case. For this reason, a function f satisfying P4′ will be said to be pseudomonotone. Along the same line, a function f satisfying P4γ will be said to be γ-undermonotone (some rationale behind this notation is presented in Section 4).
We discuss now the relations among these monotonicity-like properties, and also the connection between the solution sets of EP(f, K) and its dual.
PROPOSITION 1  Under P1–P3,
(i) P4 implies any one among P4γ, P4′, P4♯ and P4″.
(ii) Both P4′ and P4″ imply P4♯, but no additional implication among these three properties holds.
(iii) $S^d(f, K) \subseteq S(f, K)$.
(iv) Under any one among P4, P4′, P4″ or P4♯, it holds that $S(f, K) = S^d(f, K)$.
Proof
(i) Elementary.
(ii) See Section 2 of [6].
(iii) See Lemma 3.2 in [6].
(iv) In view of (i), (ii) and (iii), it suffices to check that under P4♯ it holds that $S(f, K) \subseteq S^d(f, K)$, which can be established in an elementary way. ∎
We also mention that none of the converse implications in Proposition 1(i) holds.
The following illustrative example will be relevant in our analysis.
Example 1  Let $K = [1/2, 1] \subseteq \mathbb{R}$ and define $f : K \times K \to \mathbb{R}$ as
$$f(x, y) = x(x - y).$$
Note that $f(x, y) + f(y, x) = (x - y)^2$, so that f is not monotone, but it is immediate that it is 1-undermonotone. The fact that it satisfies P1, P2 and P3 is also immediate. For P4′, note that $f(x, y) \ge 0$ with $x, y \in K$ implies, since $x \ge 1/2$, that $x - y \ge 0$, in which case, using now that $y \ge 1/2$, one has $f(y, x) = y(y - x) \le 0$, and so f is pseudomonotone.
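These properties are easy to spot-check numerically. The sketch below is our own illustration (not part of the paper); it verifies the identity $f(x, y) + f(y, x) = (x - y)^2$, 1-undermonotonicity and pseudomonotonicity on a grid over $K = [1/2, 1]$.

```python
f = lambda x, y: x * (x - y)  # Example 1

n = 101
K = [0.5 + 0.5 * i / (n - 1) for i in range(n)]  # grid on [1/2, 1]

for x in K:
    for y in K:
        s = f(x, y) + f(y, x)
        assert abs(s - (x - y) ** 2) < 1e-12     # f(x,y) + f(y,x) = (x-y)^2
        assert s <= 1.0 * (x - y) ** 2 + 1e-12   # 1-undermonotone (P4_gamma with gamma = 1)
        if f(x, y) >= 0:                         # pseudomonotone (P4')
            assert f(y, x) <= 1e-12
print("Example 1: 1-undermonotone and pseudomonotone on the grid")
```

A grid check is of course only a sanity test, not a proof; the proof is the two-line argument above.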
The following existence result is essential for establishing that EP($\tilde f$, K) has solutions.
PROPOSITION 2  Assume that f satisfies P1, P2, P3, P4♯, and additionally the following condition, P5: for any sequence $\{x^n\} \subseteq K$ satisfying $\lim_{n\to\infty}\|x^n\| = \infty$, there exist $u \in K$ and $n_0 \in \mathbb{N}$ such that $f(x^n, u) \le 0$ for all $n \ge n_0$. Then EP(f, K) has solutions.
Proof  See Theorem 4.3 in [6]. ∎
In view of Proposition 1(i) and (ii), the result of Proposition 2 also holds with P4, P4′ or P4″ substituting for P4♯. The following two propositions establish some regularizing properties of $\tilde f$, as compared to f. For a convex set $C \subseteq H$, ri(C) will denote the relative interior of C.
PROPOSITION 3  Take f satisfying P1, P2, P3 and P4γ. Assume that $\lambda > \gamma$. Then EP($\tilde f$, K) has a unique solution.
Proof  First we prove existence of solutions. We claim that $\tilde f$ satisfies the assumptions of Proposition 2. It follows easily from (2) that $\tilde f$ inherits P1, P2 and P3 from f. We claim now that $\tilde f$ satisfies P4. Note that
$$\tilde f(x, y) + \tilde f(y, x) = f(x, y) + f(y, x) - \lambda\|x - y\|^2 \le (\gamma - \lambda)\|x - y\|^2 \le 0,$$
using (2) in the equality, the fact that f satisfies P4γ in the first inequality and the assumption that $\lambda > \gamma$ in the second one. In view of Proposition 1(i), $\tilde f$ satisfies P4♯.
In order to apply Proposition 2, it suffices to establish that $\tilde f$ satisfies P5. Take a sequence $\{x^n\}$ such that $\lim_{n\to\infty}\|x^n\| = \infty$, and let $u = P_K(\bar x)$, where $P_K : H \to K$ denotes the orthogonal projection onto K. Note that
$$\begin{aligned}
\tilde f(x^n, u) &= f(x^n, P_K(\bar x)) + \lambda\langle x^n - \bar x,\ P_K(\bar x) - x^n\rangle\\
&= f(x^n, P_K(\bar x)) + \lambda\langle x^n - P_K(\bar x),\ P_K(\bar x) - x^n\rangle + \lambda\langle P_K(\bar x) - \bar x,\ P_K(\bar x) - x^n\rangle\\
&\le f(x^n, P_K(\bar x)) - \lambda\|P_K(\bar x) - x^n\|^2\\
&\le -f(P_K(\bar x), x^n) + \gamma\|P_K(\bar x) - x^n\|^2 - \lambda\|P_K(\bar x) - x^n\|^2\\
&= -f(u, x^n) + (\gamma - \lambda)\|u - x^n\|^2, \qquad (5)
\end{aligned}$$
using (2) in the first equality, the fact that $\{x^n\} \subseteq K$, together with the well-known obtuse angle property of orthogonal projections, in the first inequality, and P4γ in the second inequality. We introduce now some notation for the marginals of f. For each $x \in K$, define $g_x : K \to \mathbb{R}$ as
$$g_x(y) = f(x, y). \qquad (6)$$
Take $\hat x \in \mathrm{ri}(K)$, so that $\hat x$ belongs to the relative interior of the effective domain of $g_u$. Since $g_u$ is convex by P3, its subdifferential at $\hat x$, namely $\partial g_u(\hat x)$, is nonempty. Take $\hat v \in \partial g_u(\hat x)$. By the definition of subdifferential,
$$\langle \hat v, x^n - \hat x\rangle \le g_u(x^n) - g_u(\hat x) = f(u, x^n) - f(u, \hat x). \qquad (7)$$
In view of (7),
$$-f(u, x^n) \le \langle \hat v, \hat x - x^n\rangle - f(u, \hat x) \le \|\hat v\|\|\hat x - x^n\| - f(u, \hat x) \le \|\hat v\|\|\hat x - u\| + \|\hat v\|\|u - x^n\| - f(u, \hat x). \qquad (8)$$
Replacing (8) in (5),
$$\tilde f(x^n, u) \le \|x^n - u\|\bigl(\|\hat v\| + (\gamma - \lambda)\|x^n - u\|\bigr) + \|\hat v\|\|\hat x - u\| - f(u, \hat x). \qquad (9)$$
Since $\gamma - \lambda < 0$ and $\lim_{n\to\infty}\|x^n\| = \infty$, so that $\lim_{n\to\infty}\|x^n - u\| = \infty$, it follows easily from (9) that $\lim_{n\to\infty}\tilde f(x^n, u) = -\infty$, so that $\tilde f(x^n, u) \le 0$ for large enough n. We have verified that $\tilde f$ satisfies all the assumptions of Proposition 2, and hence EP($\tilde f$, K) has solutions.
Now we prove uniqueness of the solution. Assume that both $\tilde x$ and $\tilde x'$ solve EP($\tilde f$, K). In view of (2),
$$0 \le \tilde f(\tilde x, \tilde x') = f(\tilde x, \tilde x') + \lambda\langle \tilde x - \bar x, \tilde x' - \tilde x\rangle, \qquad (10)$$
$$0 \le \tilde f(\tilde x', \tilde x) = f(\tilde x', \tilde x) + \lambda\langle \tilde x' - \bar x, \tilde x - \tilde x'\rangle. \qquad (11)$$
Adding (10) and (11),
$$0 \le f(\tilde x, \tilde x') + f(\tilde x', \tilde x) - \lambda\|\tilde x - \tilde x'\|^2 \le (\gamma - \lambda)\|\tilde x - \tilde x'\|^2 \le 0, \qquad (12)$$
using P4γ in the second inequality and the fact that $\lambda > \gamma$ in the third one. It follows from (12) that $(\gamma - \lambda)\|\tilde x - \tilde x'\|^2 = 0$, and hence $\tilde x = \tilde x'$, because $\gamma \ne \lambda$. ∎
PROPOSITION 4  Assume that f satisfies P1, P2 and P3. If $\tilde x \in S(\tilde f, K)$ and $x^* \in S^d(f, K)$, then $\|\tilde x - x^*\|^2 + \|\bar x - \tilde x\|^2 \le \|\bar x - x^*\|^2$.
Proof  Take $\tilde x \in S(\tilde f, K)$ and $x^* \in S^d(f, K)$. Since $\tilde x \in S(\tilde f, K)$, we have
$$0 \le \tilde f(\tilde x, x^*) = f(\tilde x, x^*) + \lambda\langle \tilde x - \bar x, x^* - \tilde x\rangle,$$
and therefore
$$-f(\tilde x, x^*) \le \lambda\langle \tilde x - \bar x, x^* - \tilde x\rangle. \qquad (13)$$
Since $x^* \in S^d(f, K)$, we have that $f(y, x^*) \le 0$ for all $y \in K$, and hence
$$f(\tilde x, x^*) \le 0. \qquad (14)$$
Combining (13) and (14),
$$0 \le \lambda\langle \tilde x - \bar x, x^* - \tilde x\rangle = \frac{\lambda}{2}\bigl(\|\bar x - x^*\|^2 - \|\bar x - \tilde x\|^2 - \|\tilde x - x^*\|^2\bigr),$$
from which the result follows immediately. ∎

3. A proximal point method for equilibrium problems
Next we state the following proximal point method, to be denoted as PPEP, for solving EP(f, K). Assume that f is γ-undermonotone, and take a sequence of regularization parameters $\{\lambda_k\} \subseteq [\tilde\lambda, \bar\lambda]$ for some $\bar\lambda \ge \tilde\lambda > \gamma$. Choose $x^0 \in K$ and construct the sequence $\{x^k\} \subseteq K$ as follows:
Given $x^k$, $x^{k+1}$ is the unique solution of the problem EP($f_k$, K), where $f_k : K \times K \to \mathbb{R}$ is defined as
$$f_k(x, y) = f(x, y) + \lambda_k\langle x - x^k, y - x\rangle. \qquad (15)$$
We mention that if $f(x, y) = \sup_{u \in T(x)}\langle u, y - x\rangle$ for a maximal monotone point-to-set operator $T : H \to \mathcal{P}(H)$, then the sequence defined by (15) is precisely the one generated by the proximal point method for finding solutions of the variational inequality problem VIP(T, K), studied e.g. in [23]. We also remark that up to the assumptions on the problem data (including the dimension of H), PPEP is essentially the algorithm analysed in [16] and [11].
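As a concrete illustration of this remark (our own sketch with arbitrary data, not code from the paper): in the single-valued affine case $T(x) = Ax$ with A symmetric positive definite and $K = H = \mathbb{R}^2$, so that $f(x, y) = \langle Ax, y - x\rangle$, the subproblem (15) forces $Ax + \lambda_k(x - x^k) = 0$, i.e. each PPEP step is exactly the classical proximal step $x^{k+1} = (A + \lambda_k I)^{-1}\lambda_k x^k$.

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])  # symmetric positive definite: T(x) = Ax is monotone
lam = 1.5                                # illustrative regularization parameter
x = np.array([1.0, -2.0])                # x^0

for k in range(200):
    # Classical proximal step: the unique zero of T + lam * (. - x^k)
    x_next = np.linalg.solve(A + lam * np.eye(2), lam * x)
    # x_next solves EP(f_k, R^2): the residual A x + lam (x - x^k) vanishes
    residual = A @ x_next + lam * (x_next - x)
    assert np.linalg.norm(residual) < 1e-10
    x = x_next

# T has the unique zero 0, and the iterates converge to it
assert np.linalg.norm(x) < 1e-6
print("PPEP step = classical proximal step for the VI case; x^k -> 0")
```

The matrix, starting point and parameter here are illustrative choices; the point is only the coincidence of the two iterations in this special case.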
We present next the convergence result for PPEP. We need first a notion of asymptotic solutions for EP(f, K). We say that $\{z^k\} \subseteq K$ is an asymptotically solving sequence for EP(f, K) if $\liminf_{k\to\infty} f(z^k, y) \ge 0$ for all $y \in K$.
THEOREM 1  Consider EP(f, K), where f satisfies P1, P2 and P3. For all $x^0 \in K$,
(i) if f satisfies P4γ then the sequence $\{x^k\}$ generated by PPEP is well defined;
(ii) if $S^d(f, K) \ne \emptyset$ then the sequence $\{x^k\}$ is bounded and $\lim_{k\to\infty}\|x^{k+1} - x^k\| = 0$;
(iii) under the assumptions of items (i) and (ii), the sequence $\{x^k\}$ is an asymptotically solving sequence for EP(f, K);
(iv) if additionally $f(\cdot, y)$ is weakly upper semicontinuous for all $y \in K$, then all weak cluster points of $\{x^k\}$ solve EP(f, K);
(v) if additionally $S(f, K) = S^d(f, K)$, then the sequence $\{x^k\}$ is weakly convergent to some solution $\hat x$ of EP(f, K).
Proof  (i) Since $f_k$, as defined by (15), is a regularization of f, we obtain, using the fact that f satisfies P4γ and invoking recursively Proposition 3 with $\lambda = \lambda_k > \gamma$, $\bar x = x^k$ and $\tilde x = x^{k+1}$, that the sequence $\{x^k\}$ is well defined.
(ii) The proof of this item is standard, and coincides basically with the one given in [11] for the finite-dimensional case. We include it just for the sake of self-containment. Take any $x^* \in S^d(f, K)$, which is nonempty by assumption. We invoke Proposition 4 for concluding that
$$\|x^{k+1} - x^*\|^2 + \|x^k - x^{k+1}\|^2 \le \|x^k - x^*\|^2. \qquad (16)$$
It follows that the sequence $\{\|x^k - x^*\|\}$ is nonnegative and nonincreasing, hence convergent, say to $\eta \ge 0$. By (16),
$$0 \le \|x^k - x^{k+1}\|^2 \le \|x^k - x^*\|^2 - \|x^{k+1} - x^*\|^2. \qquad (17)$$
Since the rightmost expression in (17) converges to $\eta^2 - \eta^2 = 0$ as $k \to \infty$, we get that
$$\lim_{k\to\infty}\|x^k - x^{k+1}\| = 0. \qquad (18)$$
It is also a consequence of (16) that $\|x^k - x^*\| \le \|x^0 - x^*\|$, so that $\{x^k\} \subseteq B(x^*, \|x^0 - x^*\|)$, i.e. $\{x^k\}$ is bounded.

(iii) Fix any $y \in K$. The sequence $\{x^k\}$ is well defined by (i). Since $x^{k+1}$ solves EP($f_k$, K), we have, in view of (15),
$$0 \le f(x^{k+1}, y) + \lambda_k\langle x^{k+1} - x^k, y - x^{k+1}\rangle \le f(x^{k+1}, y) + \lambda_k\|x^{k+1} - x^k\|\,\|y - x^{k+1}\|, \qquad (19)$$
using the Cauchy–Schwarz inequality. We take limits as $k \to \infty$ in (19). Note that $\{\lambda_k\}$ is bounded by $\bar\lambda$, $\|y - x^{k+1}\|$ is bounded by (ii), and $\lim_{k\to\infty}\|x^{k+1} - x^k\| = 0$, also by (ii), so that
$$0 \le \liminf_{k\to\infty} f(x^k, y) \qquad \forall y \in K, \qquad (20)$$
and hence $\{x^k\}$ is an asymptotically solving sequence for EP(f, K).
(iv) In view of (ii), $\{x^k\}$ has weak cluster points, all of which belong to K, which, being closed and convex, is weakly closed. Let $\hat x$ be one of them, and let $\{x^{j_k}\}$ be a subsequence of $\{x^k\}$ weakly convergent to $\hat x$. Under weak upper semicontinuity of $f(\cdot, y)$, we have, in view of (20), $f(\hat x, y) \ge \limsup_{k\to\infty} f(x^{j_k}, y) \ge 0$ for all $y \in K$, so that $\hat x \in S(f, K)$.
(v) It suffices to check that there exists only one weak cluster point of $\{x^k\}$. Let $\hat x$ and $\tilde x$ be two weak cluster points of $\{x^k\}$, so that there exist subsequences $\{x^{j_k}\}$ and $\{x^{i_k}\}$ of $\{x^k\}$ whose weak limits are $\hat x$ and $\tilde x$, respectively. By (iv) and the assumption of this item, both $\hat x$ and $\tilde x$ belong to $S(f, K) = S^d(f, K)$. It follows from (16) that $\{\|\hat x - x^k\|\}$ and $\{\|\tilde x - x^k\|\}$ both converge, say to $\eta \ge 0$ and $\nu \ge 0$, respectively. Thus
$$2\langle x^{i_k} - x^{j_k}, \tilde x - \hat x\rangle = \|x^{i_k} - \hat x\|^2 - \|x^{i_k} - \tilde x\|^2 + \|x^{j_k} - \tilde x\|^2 - \|x^{j_k} - \hat x\|^2. \qquad (21)$$
Taking limits as k goes to $\infty$ on both sides of (21), we get that
$$2\|\tilde x - \hat x\|^2 = \eta^2 - \nu^2 + \nu^2 - \eta^2 = 0,$$
and hence $\tilde x = \hat x$, establishing the uniqueness of the weak accumulation points of $\{x^k\}$. ∎
At this point, two remarks are in order. Firstly, we comment that nonemptiness of $S^d(f, K)$ is a rather cumbersome hypothesis, and hard to check. On the other hand, existence of solutions of EP(f, K), i.e. nonemptiness of S(f, K), though also noncheckable a priori, is a natural assumption, since we do not expect the algorithm to converge when the problem is unsolvable. With the help of Proposition 1, we can reformulate Theorem 1 in terms of checkable monotonicity-like properties of f, obtaining as a bonus the validity of the hypothesis of item (v). We proceed to do so.
COROLLARY 1  Assume that f satisfies P1, P2, P3, P4γ and any one among P4′, P4♯ and P4″. If EP(f, K) has solutions and $f(\cdot, y)$ is weakly upper semicontinuous for all $y \in K$, then the sequence $\{x^k\}$ generated by PPEP converges weakly to a solution of EP(f, K).
Proof  By Proposition 1, any one among P4′, P4♯ and P4″ implies that $S^d(f, K) = S(f, K)$. Since $S(f, K) \ne \emptyset$ by assumption, the specific hypotheses of items (ii) and (v) in Theorem 1 hold, and so do the corresponding results. ∎

Also, weak upper semicontinuity of $f(\cdot, y)$, as requested in Theorem 1(iv), is quite restrictive, but it holds at least in two significant cases, dealt with in the following corollary.
COROLLARY 2  Under the assumptions of Corollary 1 (excluding the weak upper semicontinuity of $f(\cdot, y)$),
(i) if H is finite dimensional, then the sequence $\{x^k\}$ generated by PPEP converges to a solution of EP(f, K);
(ii) if for all $y \in K$ $f(\cdot, y)$ is concave and can be extended, preserving concavity, to an open set $W \supseteq K$, then the sequence $\{x^k\}$ generated by PPEP is weakly convergent to a solution of EP(f, K).
Proof  Both results follow from Theorem 1 and Corollary 1: in the finite-dimensional case, weak upper semicontinuity of $f(\cdot, y)$ is just upper semicontinuity, which holds by P2; for (ii), note that concave functions are weakly upper semicontinuous in the relative interior of their effective domain, which turns out to contain K, under the hypothesis of this item. ∎
In the following section, we will manage to remove the weak upper semicontinuity assumption, replacing it with a rather weak technical assumption, but only for the monotone (rather than undermonotone) case.

4. A reformulation of the equilibrium problem
First we recall our notation for the marginal functions of f given in (6), namely $g_x : K \to \mathbb{R}$ defined, for each $x \in K$, as $g_x(y) = f(x, y)$.
Throughout this section we assume that $\partial g_x(y) \ne \emptyset$ for all $x, y \in K$. This is the case, for instance, if f can be extended, preserving P3, to some open subset V of $H \times H$ containing $K \times K$. We associate with f the operator $T^f : H \to \mathcal{P}(H)$ defined as
$$T^f(x) = \partial g_x(x) + N_K(x), \qquad (22)$$
where $N_K$ is the normal operator of K, i.e. the subdifferential of the indicator function $I_K$, which vanishes at points of K and takes the value $+\infty$ outside K. The fact that $\partial g_x(x)$ is defined only for $x \in K$ is irrelevant, because $N_K(x) = \emptyset$ when $x \notin K$, and hence the same holds for $T^f$ (one can also think that $g_x$ has been extended to the whole of H, taking the value $+\infty$ outside K).
We have the following relation between EP(f, K) and $T^f$.
PROPOSITION 5
(i) S(f, K) is the set of zeroes of $T^f$.
(ii) Starting from the same $x^0$, the sequence generated by PPEP, using $f_k$ as defined by (15), and the sequence generated by the proximal point method for finding zeroes of $T^f$ coincide (the latter being the sequence $\{x^k\}$, where $x^{k+1}$ is the unique zero of $T^f_k$, defined as $T^f_k(x) = T^f(x) + \lambda_k(x - x^k)$).
Proof  (i) $x^* \in S(f, K)$ iff $g_{x^*}(x^*) = f(x^*, x^*) = 0 \le f(x^*, y) = g_{x^*}(y)$ for all $y \in K$, i.e. iff $x^*$ solves the problem of minimizing $g_{x^*}(y)$ subject to $y \in K$. The first-order condition for this problem, necessary and sufficient for optimality in view of the convexity of $g_{x^*}$ and K, is the existence of $v \in \partial g_{x^*}(x^*)$ such that $\langle v, y - x^*\rangle \ge 0$ for all $y \in K$. In view of the definition of $N_K$, this is precisely equivalent to saying that $0 \in \partial g_{x^*}(x^*) + N_K(x^*)$, i.e., looking at (22), that $x^*$ is a zero of $T^f$.
(ii) Let $\{x^k\}$ be the sequence generated by the proximal point method for finding zeroes of $T^f$. Assume inductively that $x^k$ is equal to the k-th iterate of PPEP applied to EP(f, K). We must prove that $x^{k+1}$ is the next iterate of the PPEP sequence. We know that
$$0 \in T^f(x^{k+1}) + \lambda_k(x^{k+1} - x^k) = \partial g_{x^{k+1}}(x^{k+1}) + \lambda_k(x^{k+1} - x^k) + N_K(x^{k+1}). \qquad (23)$$
For $x \in K$, define $g^k_x : K \to \mathbb{R}$ as
$$g^k_x(y) = g_x(y) + \lambda_k\langle x - x^k, y - x\rangle.$$
It is immediate that $\partial g^k_x(y) = \partial g_x(y) + \lambda_k(x - x^k)$. Define $\tilde U^f(y) = \partial g^k_y(y)$. It follows from (23) that $x^{k+1}$ is a zero of $\tilde U^f + N_K$, which implies, using now the convexity of $g^k_{x^{k+1}}$ and of K, that $x^{k+1}$ minimizes $g^k_{x^{k+1}}$ over K, meaning that, for all $y \in K$,
$$0 = g^k_{x^{k+1}}(x^{k+1}) \le g^k_{x^{k+1}}(y) = g_{x^{k+1}}(y) + \lambda_k\langle x^{k+1} - x^k, y - x^{k+1}\rangle = f(x^{k+1}, y) + \lambda_k\langle x^{k+1} - x^k, y - x^{k+1}\rangle = f_k(x^{k+1}, y).$$
Since $0 \le f_k(x^{k+1}, y)$ for all $y \in K$, $x^{k+1}$ solves EP($f_k$, K), and hence it is the (k+1)-th iterate of the PPEP sequence for problem EP(f, K), completing the inductive step. ∎
The result of Proposition 5 can be seen as a converse of our comment at the
beginning of Section 2, where we saw that variational inequality problems are
particular cases of equilibrium problems: here we have shown that, generally
speaking, each equilibrium problem can be reformulated as a variational inequality
problem, and, furthermore, that the proximal point method for the equilibrium
problem coincides with the classical proximal point method applied to its
reformulation as a variational inequality problem. This fact could convey the
impression that this whole article (with the exception, perhaps, of Proposition 5), is
rather superfluous, because the proximal point method for variational inequalities,
or equivalently for finding zeroes of point-to-set operators, has been extensively
analysed. We argue, however, that such an impression is misleading.
Firstly, convergence results for the classical proximal point method, as presented e.g. in [23], demand monotonicity of the operator, in this case of $T^f$. Since $N_K$ is always maximal monotone, monotonicity of $T^f$ will occur when the operator $U^f(x) = \partial g_x(x)$ is itself monotone. At this point, it is essential to note that $U^f$ is not the subdifferential of a convex function; rather, at each point x it is the subdifferential of a certain convex function, namely $g_x$, but this function changes with the argument of the operator. Thus, the monotonicity of $U^f$ is not granted a priori, but we have the following elementary result.
PROPOSITION 6
(i) If f is γ-undermonotone (i.e. it satisfies P4γ) then $U^f + \gamma I$ is monotone.
(ii) If f is monotone (i.e. it satisfies P4) then $U^f$ is monotone.
Proof  (i) Take $x, y \in K$, $v \in (U^f + \gamma I)(x)$, $w \in (U^f + \gamma I)(y)$, so that $v - \gamma x \in U^f(x)$, $w - \gamma y \in U^f(y)$. Then, using the definition of $\partial g_x$, P1 and P4γ,
$$-\langle (v - \gamma x) - (w - \gamma y), x - y\rangle = \langle v - \gamma x, y - x\rangle + \langle w - \gamma y, x - y\rangle \le g_x(y) - g_x(x) + g_y(x) - g_y(y) = f(x, y) - f(x, x) + f(y, x) - f(y, y) = f(x, y) + f(y, x) \le \gamma\|x - y\|^2. \qquad (24)$$
It follows easily from (24) that $\langle v - w, x - y\rangle \ge 0$, establishing the monotonicity of $U^f + \gamma I$.
(ii) Follows from (i) with $\gamma = 0$, noting that 0-undermonotonicity is just monotonicity. ∎
At this point we emphasize that our convergence analysis in Section 3 holds for instances in which f is not monotone, e.g. the function given in Example 1, and thus the reformulated problem does not fall within the range of the classical proximal point method. We reckon that there are convergence results for the proximal point method applied to nonmonotone operators (see e.g. [10]), but still they do not encompass our results here. The closest results seem to be those dealing with hypomonotone operators (e.g. [5,8,21]). An operator T is ρ-hypomonotone when $T^{-1} + \rho I$ is monotone (I being the identity operator). Now, our assumption of γ-undermonotonicity of f (namely P4γ) implies that $U^f + \gamma I$ is monotone, but this is different from γ-hypomonotonicity, which means monotonicity of $(U^f)^{-1} + \gamma I$. Parenthetically, this could look suspicious, because it is known that monotonicity of $T + \gamma I$ is not sufficient for getting convergence of the proximal point method for finding zeroes of T with the regularization parameters $\lambda_k$ chosen as $\lambda_k > \gamma$, because in general there will be no Fejér convergence of $\{x^k\}$ to the solution set, and (16) will fail. For example, consider EP(f, K) with f as in Example 1, i.e. $f(x, y) = x(x - y)$, but taking now $K = \mathbb{R}$ instead of $K = [1/2, 1]$. Since $f(x, y) + f(y, x) = (x - y)^2$, we have that f is 1-undermonotone also for this choice of K. The only solution of EP(f, K) is now $x^* = 0$, but it is easy to check that if we choose $\lambda_k = \lambda > 1$, the generated sequence $\{x^k\} \subseteq \mathbb{R}$ is given by
$$x^k = \left(\frac{\lambda}{\lambda - 1}\right)^k x^0, \qquad (25)$$
which diverges for any $x^0 \ne 0$. The point here is that for this choice of K we have $S^d(f, K) = \emptyset$, and hence the result in Theorem 1(ii) fails (note that for this choice of f and K neither P4′, nor P4♯, nor P4″ holds).
In the algorithm considered in [8,21] for finding zeroes of a ρ-hypomonotone operator T, $\lambda_k$ is required to satisfy $0 < \lambda_k < \frac{1}{2\rho}$. For f as in Example 1, one has $U^f(x) = -x$, which is 1-hypomonotone, but the analysis in [8,21] guarantees Fejér convergence of $\{x^k\}$ to $\{0\}$ when $\lambda_k \in (0, 1/2)$, which is inconsistent with the choice prescribed in our algorithm, namely $\lambda_k > \gamma = 1$ (note that $\{x^k\}$ as given by (25) indeed converges to 0 with $\lambda \in (0, 1/2)$).
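Both regimes follow directly from the ratio $\lambda/(\lambda - 1)$ in (25): it exceeds 1 for $\lambda > 1$, while for $\lambda \in (0, 1/2)$ it lies in $(-1, 0)$, giving convergence to 0. A quick numerical sketch (ours; the specific values $\lambda = 2$ and $\lambda = 1/4$ are arbitrary representatives of the two regimes):

```python
def prox_step_unconstrained(x, lam):
    # Proximal step for U^f(x) = -x on K = R: solve -x_new + lam * (x_new - x) = 0
    return lam / (lam - 1.0) * x

x = 1.0
for _ in range(50):
    x = prox_step_unconstrained(x, lam=2.0)  # lam > gamma = 1, as PPEP prescribes
assert abs(x) > 1e10   # (25) diverges: ratio lam/(lam - 1) = 2 > 1

x = 1.0
for _ in range(50):
    x = prox_step_unconstrained(x, lam=0.25)  # lam in (0, 1/2), as in [8,21]
assert abs(x) < 1e-20  # ratio 0.25 / (-0.75) = -1/3, so x^k -> 0
print("lam = 2 diverges on K = R; lam = 1/4 converges to 0")
```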
On the other hand, if we take K = [1/2, 1] and λ_k = λ > 1, i.e. satisfying the
prescription of the algorithm analysed in this article, the divergence effect noted
above does not occur, because the sequence is forced to remain in [1/2, 1]. Indeed, in
this case, due to the presence of the constraint x^k ∈ [1/2, 1], the iteration formula is
not the one given by (25); rather, we have x^{k+1} = min{1, [λ/(λ − 1)]x^k}, and it is
easy to check that {x^k} converges to the unique solution x̄ = 1 after a finite number
of iterations, for any x^0 ∈ K = [1/2, 1]. The fact that the sequence {x^k} does converge
to the solution of the problem is consistent with our convergence analysis, since f
satisfies P4' for this choice of K.
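The constrained case can be sketched the same way (again our own illustration, with arbitrary choices of λ > 1 and of the starting point in [1/2, 1]):

```python
# Sketch: proximal iterates for Example 1 on K = [1/2, 1], using the
# iteration formula x^{k+1} = min{1, [lam/(lam - 1)] * x^k} from the text.
lam = 4.0                      # any lam > 1 satisfies the prescription
x, iters = 0.6, 0              # arbitrary starting point x^0 in [1/2, 1]
while x < 1.0:
    x = min(1.0, lam / (lam - 1.0) * x)
    iters += 1
print(x, iters)                # hits the solution x = 1 after finitely many steps
```

Since the multiplier λ/(λ − 1) is strictly greater than 1, the iterates increase geometrically until the constraint caps them at the solution x̄ = 1, in finitely many steps.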
We mention that we have chosen a one-dimensional problem (namely Example 1)
for a detailed analysis of the breadth of our results vis-à-vis others only because it is
possible to obtain closed iteration formulae, allowing an easy visualization of the
behaviour of the sequence. On the other hand, one-dimensional variational inequalities
always represent first-order conditions of optimization problems. Indeed, Example 1
is equivalent to the first-order conditions for the nonconvex problem of minimizing
−x² subject to 1/2 ≤ x ≤ 1. Convergence results for the proximal point method
for nonconvex optimization, appearing in [10], apply to this problem. We give next a
two-dimensional example of a pseudomonotone and γ-undermonotone equilibrium
problem which is not monotone, and does not represent the first-order conditions of
an optimization problem.
Example 2  Consider EP(f, K) with f : R² × R² → R defined as

f(x, y) = (2x₂ − x₁)(y₁ − x₁),

and K = {x ∈ R² : 0 ≤ x₁, max{x₁, 1/2} ≤ x₂ ≤ 1}, so that K is the quadrangle whose
vertices are (0, 1/2), (0, 1), (1, 1) and (1/2, 1/2). It can be easily checked that f is
2-undermonotone and pseudomonotone, but not monotone. Finally, it does not fall
within the optimization case, because its associated operator U^f is given by
U^f(x) = Ax with

A = ( −1  2
       0  0 ),

which is not symmetric. In such a case, the variational inequality problem
VIP(U^f, K), equivalent to the problem of finding zeroes of T^f = U^f + N_K, cannot
represent the first-order optimality conditions of any optimization problem.
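The claims about Example 2 are easy to confirm numerically. The sketch below (our own check, not from the paper) samples points of K, tests 2-undermonotonicity and the pseudomonotonicity implication f(x, y) ≥ 0 ⇒ f(y, x) ≤ 0, and exhibits a specific pair of vertices violating monotonicity:

```python
import random

def f(x, y):                       # f(x, y) = (2*x2 - x1) * (y1 - x1)
    return (2 * x[1] - x[0]) * (y[0] - x[0])

def in_K(p):                       # quadrangle: 0 <= x1, max{x1, 1/2} <= x2 <= 1
    return 0.0 <= p[0] and max(p[0], 0.5) <= p[1] <= 1.0

random.seed(0)
pts = [p for p in ((random.random(), random.random()) for _ in range(4000))
       if in_K(p)]

for x in pts[:150]:
    for y in pts[:150]:
        d2 = (x[0] - y[0]) ** 2 + (x[1] - y[1]) ** 2
        assert f(x, y) + f(y, x) <= 2 * d2 + 1e-9    # 2-undermonotone
        if f(x, y) >= 0:
            assert f(y, x) <= 0                      # pseudomonotone (sampled)

x0, y0 = (0.0, 1.0), (0.5, 0.5)    # two vertices of K
print(f(x0, y0) + f(y0, x0))       # 0.75 > 0, so f is not monotone
```

The sampled checks agree with the algebra: f(x, y) + f(y, x) = (y₁ − x₁)² + 2(y₁ − x₁)(x₂ − y₂) ≤ 2‖x − y‖², while the pair ((0, 1), (1/2, 1/2)) makes this sum positive.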
In summary, our analysis works for λ_k > γ, where γ is the undermonotonicity
constant, because we assume that S^d(f, K) ≠ ∅ (or alternatively one of the
monotonicity-like properties P4, P4' and P4''), which allows us to prove
Proposition 4, for which P4• is not enough. To our knowledge, the convergence
of the proximal point method for a pseudomonotone operator T which is also
γ-undermonotone (in the sense that T + γI is maximal monotone), with λ_k > γ, has not
been studied up to now. In fact, such a convergence analysis follows, up to certain
technicalities, from the results in this article, but we will not pursue this issue further.
We just mention that a one-dimensional example of a pseudomonotone and
γ-undermonotone operator which is not monotone is given by T : R → P(R) defined
as T(x) = −x + N_{[α,β]}(x) with 0 < α < β, and a two-dimensional one, with γ = 2, is
given by the operator T^f associated with Example 2. The end of the story is that the
current literature on the proximal point method does not cover a case like the one
given in Example 2.
We also give at this point a reason for denoting a function f satisfying P4• as
γ-undermonotone, and not γ-hypomonotone: this property translates into
monotonicity of the operator U^f + γI, which, as explained above, does not coincide
with γ-hypomonotonicity of U^f, as defined above, following e.g. [21].


In addition, not only monotonicity of the operator (or some variant thereof) is
needed in the analysis of the proximal point method, but also maximality. Even in
the case in which f is monotone and defined on the whole space H, it is not at all
obvious that the operator U^f will be maximal monotone. We recall that while
monotonicity of the subdifferential of a convex function is immediate, its maximality
is rather nontrivial [22]. We will prove below that the needed maximality indeed
holds when f is monotone, but it happens that the proof requires the EP techniques:
it uses in an essential way the existence result in Proposition 2. Since we cannot avoid
the equilibrium problem approach even when dealing directly with the reformulation, we considered it advisable to present a clean analysis of the PPEP, just in terms
of the equilibrium problem, as done in Section 3, going as far as possible without
introducing the more complicated machinery of the reformulation.
However, we proceed now to explore the reformulation in order to remove the
weak upper semicontinuity hypothesis, but only for the case of a monotone f.
We recall now a result on maximality of monotone operators. A similar result,
in reflexive Banach spaces but with λ = 1, was established in Remark 10.8 of [24].
We present here a simplified version of the proof of Theorem 4.5.7 in [2], which also
deals with Banach spaces.
PROPOSITION 7  Let T : H → P(H) be a monotone operator. If T + λI is onto for some
λ > 0, then T is maximal monotone.

Proof  Take a monotone operator T̂ such that T ⊆ T̂, and a pair (v, z) such that
v ∈ T̂z. We must prove that v ∈ T(z). Define b = v + λz. Since T + λI is onto, there
exists x ∈ H such that

b ∈ Tx + λx ⊆ T̂x + λx.    (26)

On the other hand, since v ∈ T̂z, we have that b = v + λz ∈ T̂z + λz. Since T̂ + λI
is strictly monotone, we conclude that x = z, and thus, taking x = z in the first
inclusion of (26), we have v + λz ∈ T(z) + λz, which implies that v ∈ T(z). It follows
that T̂ ⊆ T, i.e. T̂ = T, and hence T is maximal.    ∎
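For intuition, here is a one-dimensional instance of the surjectivity hypothesis (our own illustration, not part of the paper): for T = ∂|·|, i.e. the sign mapping with T(0) = [−1, 1], the inclusion b ∈ T(x) + λx can be solved in closed form by soft-thresholding, so T + λI is onto for every λ > 0 and Proposition 7 yields the maximality of ∂|·|.

```python
def solve_inclusion(b, lam):
    """Return the unique x with b in T(x) + lam*x, where T = sign is the
    subdifferential of |.| (so T(0) = [-1, 1]).  Soft-thresholding."""
    if b > 1.0:
        return (b - 1.0) / lam
    if b < -1.0:
        return (b + 1.0) / lam
    return 0.0                     # here b itself lies in T(0) = [-1, 1]

lam = 0.5
for b in (-3.0, -1.0, 0.2, 1.0, 2.5):
    x = solve_inclusion(b, lam)
    r = b - lam * x                # residual: must belong to T(x)
    if x > 0:
        assert r == 1.0
    elif x < 0:
        assert r == -1.0
    else:
        assert -1.0 <= r <= 1.0
print("T + lam*I is onto at the sampled points")
```

The verification is exactly the pattern of the proof: every b has a preimage under T + λI, and the residual b − λx lands in T(x).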
Now we use Propositions 2 and 7 to establish the maximal monotonicity of T^f under
adequate assumptions on f.
PROPOSITION 8  If f satisfies P1–P4, then T^f, as defined by (22), is maximal monotone.

Proof  We intend to apply Proposition 7, for which we need to show that T^f is
monotone and that T^f + λI is onto for some λ > 0. Note that U^f, defined as
U^f(y) = ∂g_y(y), is monotone by Proposition 6(i) with γ = 0 (recall that
0-undermonotonicity is just monotonicity). Since N_K is certainly monotone, it
follows that T^f = U^f + N_K is monotone. Now we address the surjectivity issue. Take
any λ > 0 and b ∈ H. We want to show that there exists x ∈ K such that b ∈ (T^f + λI)x.
Consider f̃ as in (2), with x̄ = λ⁻¹b and γ = λ. By Proposition 3, EP(f̃, K) has a solution,
say x̂. Define

g̃_x(y) = f̃(x, y) = f(x, y) + λ⟨x − λ⁻¹b, y − x⟩.

Note that, since x̂ solves EP(f̃, K),

g̃_{x̂}(x̂) = f̃(x̂, x̂) = 0 ≤ f̃(x̂, y) = g̃_{x̂}(y)    (27)

for all y ∈ K. Thus, x̂ minimizes g̃_{x̂} over K, which is the same as saying that x̂ is an
unrestricted minimizer of g̃_{x̂} + I_K, where I_K is the indicator function of K. By
assumption, we have that ∂g_{x̂}(z), and hence ∂g̃_{x̂}(z), are nonempty for all z ∈ K.
In view of (27) and the fact that ∂I_K = N_K, we have

0 ∈ ∂(g̃_{x̂} + I_K)(x̂) = ∂g_{x̂}(x̂) + λ(x̂ − λ⁻¹b) + N_K(x̂) = ∂g_{x̂}(x̂) + λx̂ − b + N_K(x̂).    (28)

Rewriting (28) as

b ∈ ∂g_{x̂}(x̂) + N_K(x̂) + λx̂ = (U^f + N_K)(x̂) + λx̂ = (T^f + λI)(x̂),

we complete the proof of the surjectivity of T^f + λI. We can now use Proposition 7 to
conclude that T^f is maximal monotone.    ∎
We remark that in the case of K = H and f(x, y) = h(y) − h(x), where h :
H → R ∪ {+∞} is a convex function, we get ∂g_x = ∂h and N_K(x) = {0} for all x ∈ H, so
that T^f = U^f = ∂h. Since this f satisfies P1–P4, Proposition 8 provides an alternative
(and rather short) proof of the maximality of the subdifferential of a convex function
(cf. [22]). Noting that the proof of Proposition 7 is also quite short, we conclude that
the heavy artillery behind this approach is hidden in the proof of Proposition 2,
given in [6].
We recall now a well-known property of maximal monotone operators, namely
demi-closedness.

Definition 1  Given T : H → P(H), the graph of T is said to be demi-closed when the
following property holds: if {x^k} ⊂ H is weakly convergent to x ∈ H, {v^k} ⊂ H is
strongly convergent to v ∈ H, and v^k ∈ T(x^k) for all k, then v ∈ T(x).

PROPOSITION 9  If T : H → P(H) is maximal monotone, then its graph is demi-closed.

Proof  See, e.g. [20, p. 105].    ∎

Now we can get rid of the weak upper semicontinuity assumption in
Theorem 1(iv).

THEOREM 2  If f satisfies P1–P4, EP(f, K) has solutions, and f(x, ·) can be extended,
for all x ∈ K, to an open set W ⊃ K while preserving its convexity, then the sequence
{x^k} generated by PPEP is weakly convergent to a solution of EP(f, K) for all x^0 ∈ K.
Proof  Note that we are within the assumptions of items (i)–(iv) of Theorem 1,
recalling that P4 implies both P4• and P4' by Proposition 1(i), and we also have
S(f, K) = S^d(f, K) by Proposition 1(iv). Define g^k_x(y) = f_k(x, y), with f_k as in (15).
As we have already seen several times, x^{k+1} is a solution of the problem
min g^k_{x^{k+1}}(y) subject to y ∈ K, and thus it satisfies the first-order optimality
condition, namely

0 ∈ ∂g^k_{x^{k+1}}(x^{k+1}) + N_K(x^{k+1}) = ∂g_{x^{k+1}}(x^{k+1}) + λ_k(x^{k+1} − x^k) + N_K(x^{k+1}),

which can be rewritten as

v^{k+1} := λ_k(x^k − x^{k+1}) ∈ ∂g_{x^{k+1}}(x^{k+1}) + N_K(x^{k+1}) = T^f(x^{k+1}).    (29)

Note that T^f is maximal monotone by Proposition 8, so that its graph is demi-closed
by Proposition 9. Also, {x^k} is bounded by Theorem 1(ii). Observe also that {v^k} is
strongly convergent to 0 by Theorem 1(ii) and the boundedness of {λ_k}. Let x̂ be a weak
cluster point of {x^k}. Taking limits along the corresponding subsequence in (29),
we are exactly in the situation of Definition 1, so that Proposition 9 entails that
0 ∈ T^f(x̂). By Proposition 5(i), x̂ solves EP(f, K). Uniqueness of the cluster points of
{x^k}, and consequently weak convergence of {x^k} to a point in S(f, K), follow with
the argument used in the proof of Theorem 1(v).    ∎
g
We remark that the difference between Theorems 1 and 2, besides the fact that
the proof of the latter requires the reformulation of the equilibrium problem as a
variational inequality one, lies in the assumptions on f (monotonicity and extension
of f(x, ·) to an open set containing K), which replace weak upper semicontinuity of
f(·, y) as the tool for establishing optimality of the weak cluster points of the
generated sequence. On the other hand, the extension of f(x, ·) to W ⊃ K looks rather
harmless, since usually f and K are independent of each other (in most cases f is
indeed naturally defined on H × H).
For strengthening Theorem 2 beyond the monotone case, it would suffice to
prove that the graph of an operator of the form T − γI, with T maximal monotone
and γ > 0, is demi-closed. Unfortunately, we have no proof of this fact (and in fact we
conjecture that it is false).
At this point it would be reasonable to discuss the convergence of the method
under inexact solution of the subproblems, which is the standard situation in actual
implementations. Different error criteria which preserve the convergence result for
the proximal point method for finding zeroes of maximal monotone operators have
been proposed since Rockafellar's 1976 paper [23]. Recently, less stringent error
criteria, allowing for constant relative errors along the iterations, were proposed
by Solodov and Svaiter in [25,26] for monotone operators, and extended to
hypomonotone operators in [8] (in the case of Hilbert spaces) and [5] (in the case of
Banach spaces). Through the reformulation of equilibrium problems as variational
inequality problems proposed in this section, all these convergence results hold for
equilibrium problems, assuming, of course, that the monotonicity properties of f are
such that the associated operator T^f satisfies the assumptions required for the
convergence of each of these inexact procedures. Nevertheless, it is possible to
introduce error criteria which are specific to equilibrium problems.
These criteria will be the subject of a forthcoming paper.

References
[1] A.S. Antipin, Equilibrium programming: Proximal methods, Comput. Math. Math. Phys.
37 (1997), pp. 1285–1296.
[2] R.S. Burachik and A.N. Iusem, Set-Valued Mappings and Enlargements of Monotone
Operators, Springer, Berlin, 2007.
[3] K. Fan, A generalization of Tychonoff's fixed point theorem, Math. Ann. 142 (1961),
pp. 305–310.
[4] S.D. Flåm and A.S. Antipin, Equilibrium programming using proximal-like algorithms,
Math. Program. 78 (1997), pp. 29–41.
[5] R. Garciga Otero and A.N. Iusem, Proximal methods in Banach spaces without
monotonicity, J. Math. Anal. Appl. 330 (2007), pp. 433–450.
[6] A.N. Iusem, G. Kassay, and W. Sosa, On certain conditions for the existence of solutions of
equilibrium problems, Math. Program. Ser. B 116 (2009), pp. 259–273.
[7] A.N. Iusem and R.D.C. Monteiro, On dual convergence of the generalized proximal point
method with Bregman distances, Math. Oper. Res. 25 (2000), pp. 606–624.
[8] A.N. Iusem, T. Pennanen, and B.F. Svaiter, Inexact variants of the proximal point method
without monotonicity, SIAM J. Optim. 13 (2003), pp. 1080–1097.
[9] A.N. Iusem and W. Sosa, New existence results for equilibrium problems, Nonlinear Anal.
52 (2003), pp. 621–635.
[10] A. Kaplan and R. Tichatschke, Proximal point methods and nonconvex optimization,
J. Global Optim. 13 (1998), pp. 389–406.
[11] I.V. Konnov, Application of the proximal point method to nonmonotone equilibrium
problems, J. Optim. Theory Appl. 119 (2003), pp. 317–333.
[12] M.A. Krasnoselskii, Two observations about the method of successive approximations,
Usp. Mat. Nauk 10 (1955), pp. 123–127.
[13] G. Minty, A theorem on monotone sets in Hilbert spaces, J. Math. Anal. Appl. 11 (1967),
pp. 434–439.
[14] J. Moreau, Proximité et dualité dans un espace hilbertien, Bull. Soc. Math. France 93
(1965), pp. 273–299.
[15] A. Moudafi, Proximal point methods extended to equilibrium problems, J. Natur. Geom. 15
(1999), pp. 91–100.
[16] A. Moudafi, Second-order differential proximal methods for equilibrium problems,
J. Inequal. Pure Appl. Math. 4 (2003), Article no. 18.
[17] A. Moudafi and M. Théra, Proximal and dynamical approaches to equilibrium problems,
in Ill-posed Variational Problems and Regularization Techniques, Lecture Notes in
Economics and Mathematical Systems, Vol. 477, Springer, Berlin, 1999, pp. 187–201.
[18] M.A. Noor, Auxiliary principle technique for equilibrium problems, J. Optim. Theory Appl.
122 (2004), pp. 371–386.
[19] M.A. Noor and T.M. Rassias, On nonconvex equilibrium problems, J. Math. Anal. Appl.
212 (2005), pp. 289–299.
[20] D. Pascali and S. Sburlan, Nonlinear Mappings of Monotone Type, Editura Academiei,
Bucharest, 1978.
[21] T. Pennanen, Local convergence of the proximal point method and multiplier methods
without monotonicity, Math. Oper. Res. 27 (2002), pp. 170–191.
[22] R.T. Rockafellar, On the maximal monotonicity of subdifferential mappings, Pac. J. Math.
33 (1970), pp. 209–216.
[23] R.T. Rockafellar, Monotone operators and the proximal point algorithm, SIAM J. Control
Optim. 14 (1976), pp. 877–898.
[24] S. Simons, Minimax and Monotonicity, Lecture Notes in Mathematics, Vol. 1693,
Springer, Berlin, 1998.
[25] M.V. Solodov and B.F. Svaiter, A hybrid projection–proximal point algorithm, J. Convex
Anal. 6 (1999), pp. 59–70.
[26] M.V. Solodov and B.F. Svaiter, An inexact hybrid extragradient–proximal point algorithm
using the enlargement of a maximal monotone operator, Set-Valued Anal. 7 (1999),
pp. 323–345.
