The ELQP has been widely investigated in [4] and [5] and applied to optimal control.
In general, as the number of decision variables increases, traditional numerical algorithms designed for digital computers become inefficient for real-time online solution. Since the 1980s, artificial neural networks with parallel structure, based on circuit implementation, have been employed for the online solution of problems in science and engineering applications [6]–[10]. Tank and Hopfield [6] spearheaded the neurodynamic optimization approach for linear programming, which motivated the development of neural networks for solving optimization problems. Soon after that, Kennedy and Chua [7] proposed a recurrent neural network for constrained nonlinear optimization by utilizing the finite penalty parameter method. Since then, many optimization approaches have been investigated for the design of neural networks for constrained optimization, such as the primal–dual neural network [11], primal and dual neural networks [12], projection neural networks [13], [14], one-layer neural networks [15], [16], and generalized neural networks [17]–[19].
In particular, the projection approach has recently played a significant role in the design of neural networks for constrained optimization, such as the models in [20]–[24]. Specifically, Xia and Wang [25] developed a projection neural network for solving monotone variational inequalities and related optimization problems. Gao et al. [26] proposed a projection-based recurrent neural network for a class of convex quadratic minimax problems with constraints. Hu and Wang [27] developed a projection neural network for pseudomonotone variational inequalities and pseudoconvex optimization problems. Cheng et al. [28] investigated neural networks for applications to the identification of genetic regulatory networks. Moreover, the general [14], [20], [29] and extended general [22], [30] projection neural networks
I. INTRODUCTION
In this paper, we consider the following constrained quadratic minimax optimization problem:
$$\min_x \max_y \; f(x, y) \quad \text{s.t. } Ax + Cy = d,\; x \in X,\; y \in Y \quad (1)$$
where
$$f(x, y) = \frac{1}{2} x^T Q x + q^T x - x^T H y - \frac{1}{2} y^T S y - s^T y \quad (2)$$
in which $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$, $y = (y_1, y_2, \ldots, y_m)^T \in \mathbb{R}^m$, $Q \in \mathbb{R}^{n \times n}$ and $S \in \mathbb{R}^{m \times m}$ are symmetric, $H \in \mathbb{R}^{n \times m}$, $q \in \mathbb{R}^n$, $s \in \mathbb{R}^m$, $A \in \mathbb{R}^{p \times n}$, $C \in \mathbb{R}^{p \times m}$, $(A\; C)$ is of full row rank, $d \in \mathbb{R}^p$, and $X$ and $Y$ are nonempty closed convex sets in $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively.
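For intuition (an illustrative sketch, not from the paper), consider the special case $X = \mathbb{R}^n$, $Y = \mathbb{R}^m$ with no equality constraint: the saddle point of (2) then solves the linear system obtained from $\nabla_x f = 0$ and $\nabla_y f = 0$. All matrices and vectors below are arbitrary toy data.

```python
import numpy as np

# Toy instance of the unconstrained saddle-point problem (assumed data,
# not from the paper): Q, S symmetric positive definite.
rng = np.random.default_rng(0)
n, m = 4, 3
M = rng.standard_normal((n, n)); Q = M @ M.T + n * np.eye(n)
N = rng.standard_normal((m, m)); S = N @ N.T + m * np.eye(m)
H = rng.standard_normal((n, m))
q = rng.standard_normal(n); s = rng.standard_normal(m)

def f(x, y):
    # The objective f(x, y) of equation (2)
    return (0.5 * x @ Q @ x + q @ x - x @ H @ y
            - 0.5 * y @ S @ y - s @ y)

# First-order conditions: Q x + q - H y = 0 and H^T x + S y + s = 0
K = np.block([[Q, -H], [H.T, S]])
sol = np.linalg.solve(K, np.concatenate([-q, -s]))
x_bar, y_bar = sol[:n], sol[n:]

# Saddle property: f(x_bar, y) <= f(x_bar, y_bar) <= f(x, y_bar)
x_pert = x_bar + 0.1 * rng.standard_normal(n)
y_pert = y_bar + 0.1 * rng.standard_normal(m)
assert f(x_bar, y_pert) <= f(x_bar, y_bar) <= f(x_pert, y_bar)
```

With $Q$ and $S$ positive definite, the block system is nonsingular, so the saddle point is unique; the perturbation check confirms the min–max structure numerically.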
Minimax optimization has arisen in a variety of applications in science and engineering, including interactive decision making.
Manuscript received August 15, 2014; revised December 16, 2014; accepted
April 16, 2015. The work of Q. Liu was supported in part by the National
Natural Science Foundation of China under Grant 61473333, in part by the
Program for New Century Excellent Talents in University of China under
Grant NCET-12-0114, and in part by the Fundamental Research Funds for the
Central Universities of China under Grant 2015QN035. The work of J. Wang
was supported by the Research Grants Council through the Hong Kong Special
Administrative Region, China, under Grant CUHK416812E.
Q. Liu is with the School of Automation, Huazhong University of Science and Technology, and the Key Laboratory of Image Processing and
Intelligent Control, Ministry of Education, Wuhan 430074, China (e-mail:
qsliu@hust.edu.cn).
J. Wang is with the Department of Mechanical and Automation
Engineering, The Chinese University of Hong Kong, Hong Kong (e-mail:
jwang@mae.cuhk.edu.hk).
Color versions of one or more of the figures in this paper are available
online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TNNLS.2015.2425301
Problem (1) can be rewritten as the minimization problem
$$\min_x \; f(x) \quad \text{s.t. } Ax + Cy = d,\; x \in X,\; y \in Y$$
where
$$f(x) = \frac{1}{2} x^T Q x + q^T x + \varphi(-s - H^T x)$$
in which
$$\varphi(\xi) = \max_y \left\{ \xi^T y - \frac{1}{2} y^T S y \right\}.$$
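As a quick sanity check (an assumed special case, not from the paper): when $y$ is unconstrained and $S$ is positive definite, $\varphi$ has the closed form $\varphi(\xi) = \frac{1}{2}\xi^T S^{-1} \xi$, attained at $y = S^{-1}\xi$. A numerical verification with toy data:

```python
import numpy as np

# Sketch (assumed data): for unconstrained y and S positive definite,
# phi(xi) = max_y { xi^T y - 0.5 y^T S y } = 0.5 xi^T S^{-1} xi,
# attained at y = S^{-1} xi.
rng = np.random.default_rng(1)
m = 3
N = rng.standard_normal((m, m)); S = N @ N.T + m * np.eye(m)
xi = rng.standard_normal(m)

phi_closed = 0.5 * xi @ np.linalg.solve(S, xi)
y_star = np.linalg.solve(S, xi)
obj = lambda y: xi @ y - 0.5 * y @ S @ y

# The analytic maximizer attains the closed-form value, and no random
# perturbation of it does better (concavity of the inner problem).
assert np.isclose(obj(y_star), phi_closed)
assert all(obj(y_star + 0.1 * rng.standard_normal(m)) <= phi_closed + 1e-12
           for _ in range(100))
```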
2162-237X © 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for inclusion in a future issue of this journal. Content is final as presented, with the exception of pagination.
LIU AND WANG: PROJECTION NEURAL NETWORK FOR CONSTRAINED QUADRATIC MINIMAX OPTIMIZATION
B. Model Description
Based on the equations in (10), the dynamic equations of
the proposed neural network model for solving problem (1)
are described as follows:
State Equations:
$$\varepsilon \frac{d\sigma}{dt} = -Pg(\sigma) - (I - P)\big(\sigma - g(\sigma) + W((I - P)g(\sigma) + w) + c\big) + w \quad (11)$$
Output Equations:
$$z = g(\sigma) \quad (12)$$
For a box set, the projection operator is computed componentwise as
$$g(u_i) = \begin{cases} h_i, & u_i > h_i \\ u_i, & l_i \le u_i \le h_i \\ l_i, & u_i < l_i. \end{cases}$$
If $X = \{u \in \mathbb{R}^n : \|u - v\| \le r,\; v \in \mathbb{R}^n,\; r > 0\}$, then
$$g(u) = \begin{cases} u, & \|u - v\| \le r \\ v + \dfrac{r(u - v)}{\|u - v\|}, & \|u - v\| > r. \end{cases} \quad (14)$$
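The two projection formulas above translate directly into code; the test vectors are arbitrary.

```python
import numpy as np

def project_box(u, lo, hi):
    # g(u_i) = h_i if u_i > h_i; u_i if l_i <= u_i <= h_i; l_i if u_i < l_i
    return np.clip(u, lo, hi)

def project_ball(u, v, r):
    # g(u) = u if ||u - v|| <= r, else v + r (u - v) / ||u - v||
    d = np.linalg.norm(u - v)
    return u if d <= r else v + r * (u - v) / d

u = np.array([2.0, -3.0, 0.5])
assert np.allclose(project_box(u, -1.0, 1.0), [1.0, -1.0, 0.5])
# A point outside the ball lands exactly on its boundary.
p = project_ball(u, v=np.zeros(3), r=1.0)
assert np.isclose(np.linalg.norm(p), 1.0)
# A point already inside the ball is unchanged.
assert np.allclose(project_ball(0.1 * u, np.zeros(3), 1.0), 0.1 * u)
```

Both operators are nonexpansive, which is what the convergence analysis later exploits.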
$$(x - \bar{x})^T (Q\bar{x} - H\bar{y} + q) \ge 0$$
and
$$(y - \bar{y})^T (H^T \bar{x} + S\bar{y} + s) \ge 0.$$
According to the convexity of $f(x, y)$ on the feasible set with respect to $x$, it results in $f(x, \bar{y}) - f(\bar{x}, \bar{y}) \ge (x - \bar{x})^T \nabla_x f(\bar{x}, \bar{y}) = (x - \bar{x})^T (Q\bar{x} - H\bar{y} + q) \ge 0$. Similarly, according to the concavity of $f(x, y)$ on the feasible set with respect to $y$, it follows that $f(\bar{x}, y) - f(\bar{x}, \bar{y}) \le (y - \bar{y})^T \nabla_y f(\bar{x}, \bar{y}) = (y - \bar{y})^T (-H^T \bar{x} - S\bar{y} - s) \le 0$. Then $(\bar{x}^T, \bar{y}^T)^T$ is an optimal solution to problem (1).
III. THEORETICAL ANALYSIS
In this section, the validity of the proposed neural network for solving quadratic minimax optimization problems is analyzed. The stability and global convergence of the neural network are proved, which guarantees that it reaches the solutions of the corresponding problems.
A. Convergence Analysis
In this subsection, the stability and convergence of the dynamic system in (11) and (12) are analyzed using the Lyapunov method [17], [37].
Let $\bar{\sigma}$ denote an equilibrium point of system (11), and let $z = g(\sigma)$ and $\bar{z} = g(\bar{\sigma})$. Define
$$V_0(\sigma) = \|g(\sigma) - g(\bar{\sigma})\|^2 \quad (18)$$
and consider the Lyapunov function
$$V(\sigma) = \varepsilon [V_1(\sigma) + V_2(\sigma)]$$
where $V_1(\sigma) = V_0(\sigma) + (\sigma - \bar{\sigma})^T P (\sigma - \bar{\sigma})$, with $V_0(\sigma)$ defined in (18), and $V_2(\sigma) = \|\sigma - \bar{\sigma}\|^2$.

Next, we calculate the derivative of $V_i(\sigma(t))$ ($i = 1, 2$) along the solution of system (21), respectively. We have
$$\begin{aligned}
\dot{V}_1(\sigma(t)) &= (\nabla V_1(\sigma))^T \dot{\sigma}(t)\\
&= \frac{2}{\varepsilon} [z - \bar{z} + P(\sigma - \bar{\sigma})]^T \big[-(I - P)(\sigma - \bar{\sigma}) + (I - 2P)(z - \bar{z}) - (I - P)W(I - P)(z - \bar{z})\big]\\
&= \frac{2}{\varepsilon} \big[-(z - \bar{z})^T (I - P)(\sigma - \bar{\sigma}) + (z - \bar{z})^T (I - 2P)(z - \bar{z}) - \cdots\big].
\end{aligned}$$
Then
$$\dot{V}_1(\sigma(t)) \le \frac{2}{\varepsilon} \big[-2(z - \bar{z})^T P(z - \bar{z}) + 2(\sigma - \bar{\sigma})^T (I - 2P)(z - \bar{z}) - 2(\sigma - \bar{\sigma})^T (I - P)W(I - P)(z - \bar{z})\big].$$
Next, since $P$ is symmetric and $P^2 = P$, we have
$$\begin{aligned}
\varepsilon^2 \|\dot{\sigma}\|^2 &= \big\| -(I - P)(\sigma - \bar{\sigma}) + (I - 2P)(z - \bar{z}) - (I - P)W(I - P)(z - \bar{z}) \big\|^2\\
&= (\sigma - \bar{\sigma})^T (I - P)(\sigma - \bar{\sigma}) + (z - \bar{z})^T (z - \bar{z}) - \cdots
\end{aligned}$$
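The expansion above relies on $P$ being a symmetric projection matrix: $P^2 = P$, both $P$ and $I - P$ positive semidefinite, and $P(I - P) = 0$. Assuming, as is standard in this construction, that $P = B^T (BB^T)^{-1} B$ for a full-row-rank matrix $B = (A\; C)$ (an assumption here, since the definition falls outside this excerpt), these properties can be checked numerically on toy data:

```python
import numpy as np

# Properties of P used in the proof, checked numerically. Assumption
# (for illustration): P = B^T (B B^T)^{-1} B, the orthogonal projection
# onto the row space of a full-row-rank matrix B = (A C).
rng = np.random.default_rng(2)
p, nm = 2, 6                      # toy sizes, p < n + m
B = rng.standard_normal((p, nm))  # full row rank with probability 1
P = B.T @ np.linalg.solve(B @ B.T, B)

I = np.eye(nm)
assert np.allclose(P, P.T)        # P is symmetric
assert np.allclose(P @ P, P)      # P^2 = P (idempotent)
# Eigenvalues of a symmetric projection are 0 or 1, so P and I - P
# are both positive semidefinite.
assert np.min(np.linalg.eigvalsh(P)) >= -1e-10
assert np.min(np.linalg.eigvalsh(I - P)) >= -1e-10
# P (I - P) = 0: cross terms dropped in the expansion vanish.
assert np.allclose(P @ (I - P), 0)
```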
Then
$$J_1 = -(\sigma - \bar{\sigma})^T (I - P)(\sigma - \bar{\sigma}) - (z - \bar{z})^T (I - P)(z - \bar{z}) - (z - \bar{z})^T P(z - \bar{z}) \le -(z - \bar{z})^T P(z - \bar{z})$$
where the last inequality holds since $I - P$ is positive semidefinite.
It further results that
$$\dot{V}_2(\sigma(t)) + \varepsilon^2 \|\dot{\sigma}(t)\|^2 \le J_1 + J_2 \le -(z - \bar{z})^T P(z - \bar{z}) - 2(z - \bar{z})^T (I - P)W(I - P)(z - \bar{z}) + (z - \bar{z})^T (I - P)W^T (I - P)W(I - P)(z - \bar{z}). \quad (24)$$
From the inequalities in (22) and (24), we have
$$\begin{aligned}
\dot{V}(\sigma(t)) + \varepsilon^2 \|\dot{\sigma}(t)\|^2 &= \varepsilon \dot{V}_1(\sigma(t)) + \varepsilon \dot{V}_2(\sigma(t)) + \varepsilon^2 \|\dot{\sigma}(t)\|^2\\
&\le 2 \big[-2(z - \bar{z})^T P(z - \bar{z}) - (z - \bar{z})^T (I - P)W(I - P)(z - \bar{z})\big]\\
&\quad - (z - \bar{z})^T P(z - \bar{z}) - 2(z - \bar{z})^T (I - P)W(I - P)(z - \bar{z})\\
&\quad + (z - \bar{z})^T (I - P)W^T (I - P)W(I - P)(z - \bar{z}) \le 0. \quad (25)
\end{aligned}$$
For any initial point $\sigma(t_0) \in \mathbb{R}^{n+m}$, $V(\sigma(t))$ is nonincreasing as $t \to \infty$. The inequality in (25) indicates that $\{\sigma(t) : 0 \le t < T\} \subseteq L(\sigma(t_0)) = \{\sigma \in \mathbb{R}^{n+m} : V(\sigma) \le V(\sigma(t_0))\}$. Thus, $T = +\infty$, and $\sigma(t)$ is bounded from the boundedness of $L(\sigma(t_0))$. From (25), $V(\sigma)$ is a Lyapunov function of system (11), and system (11) is Lyapunov stable.

From the boundedness of $\sigma(t)$, there exist an increasing sequence $\{t_l\}$ and a limit point $\hat{\sigma}$ such that $\lim_{l \to \infty} \sigma(t_l) = \hat{\sigma}$. Thus, $\hat{\sigma}$ is an $\omega$-limit point of $\sigma(t)$.

According to LaSalle's invariance principle [38], $\sigma(t)$ will converge to $M$ as $t \to \infty$, where $M$ is the largest invariant subset of the following set:
$$\Theta = \{\sigma \in L(\sigma(t_0)) : \dot{V}(\sigma(t)) = 0\}. \quad (26)$$
Note that, from (25), if $\dot{V}(\sigma(t)) = 0$, we have $\dot{\sigma}(t) = 0$. Then $\sigma(t)$ will converge to the equilibrium point set.
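The convergence result can be illustrated empirically (a sketch under assumptions, not the paper's experiment): take the special case of (11) with no equality constraints, so that $P = 0$ and $w = 0$, use a box set and toy data, and integrate with forward Euler. The trajectory should settle at an equilibrium whose output satisfies the projection fixed-point equation characterizing a solution.

```python
import numpy as np

# Toy data (assumptions for illustration): Q, S positive definite,
# box set [-1, 1]^(n+m), arbitrary step size and horizon.
rng = np.random.default_rng(3)
n, m = 3, 2
M = rng.standard_normal((n, n)); Q = M @ M.T + n * np.eye(n)
N = rng.standard_normal((m, m)); S = N @ N.T + m * np.eye(m)
H = rng.standard_normal((n, m))
W = np.block([[Q, -H], [H.T, S]])      # saddle operator of f(x, y)
c = rng.standard_normal(n + m)

g = lambda u: np.clip(u, -1.0, 1.0)    # projection onto the box

# Forward Euler for eps * dsigma/dt = -(sigma - g(sigma) + W g(sigma) + c),
# i.e., system (11) with P = 0 and w = 0; h plays the role of dt / eps.
h, steps = 5e-3, 40_000
sigma = rng.standard_normal(n + m)
for _ in range(steps):
    sigma = sigma + h * (-(sigma - g(sigma) + W @ sigma * 0 + W @ g(sigma) + c))

z = g(sigma)                           # output equation (12)
# At an equilibrium, the output solves the fixed-point equation
# z = g(z - (W z + c)), i.e., the variational inequality over the box.
residual = np.linalg.norm(g(z - (W @ z + c)) - z)
assert residual < 1e-6
```

The residual check is exactly the projection characterization of the equilibrium set the trajectory is claimed to converge to.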
The proposed approach can be extended to quadratic minimax optimization with inequality constraints:
$$\min_x \max_y \; f(x, y) \quad \text{s.t. } Ax + Cy \le d,\; x \in X,\; y \in Y. \quad (27)$$
By introducing a vector $v \in V = \{v \in \mathbb{R}^p : v \le d\}$, the problem can be rewritten as
$$\min_x \max_{y, v} \; f(x, y, v) \quad \text{s.t. } Ax + Cy - v = 0,\; x \in X,\; y \in Y,\; v \in V \quad (28)$$
where
$$f(x, y, v) = \frac{1}{2} x^T Q x + q^T x - x^T H y - \frac{1}{2} y^T S y - s^T y + \frac{1}{2} v^T (Ax + Cy - v). \quad (29)$$
Thus, the neural network in (11) and (12) can be used for solving this problem by replacing $\sigma \in \mathbb{R}^{n+m+p}$, $B = (A\; C\; {-I_p})$ with $I_p$ being the $p$-dimensional identity matrix, $g$ becoming a projection operator from $\mathbb{R}^{n+m+p}$ to $X \times Y \times V$, $z = (x^T, y^T, v^T)^T$, $c = (q^T, s^T, 0^T)^T$, $w = 0$, and
$$W = \begin{pmatrix} Q & -H & A^T/2 \\ H^T & S & -C^T/2 \\ -A/2 & -C/2 & I_p \end{pmatrix}.$$
State Equations:
$$\varepsilon \frac{d\sigma}{dt} = -Pg(\sigma) - (I_n - P)\big(\sigma - g(\sigma) + Q((I_n - P)g(\sigma) + w) + q\big) + w \quad (30)$$
Output Equations:
$$x = g(\sigma) \quad (31)$$
where $\sigma \in \mathbb{R}^n$.
5 5
2
5 0
1 2
2 1 1
0
Q=
, A=
.
5
1
0
2
0 1 1
2
2
2 2
0
It is obvious that $Q$ is not positive definite, and some of the existing neural networks, such as those in [25], [31], and [32], are not capable of solving this problem. Using the MATLAB LMI toolbox, we can obtain an estimate of $k$ by solving the optimization problem in (26). Here, if $k \ge 3.3334$, the matrix in (32) is positive semidefinite. Then the proposed neural network in (30) and (31) can be used for solving this problem.
Let $\varepsilon = 10^{-5}$ in the neural network model. Figs. 1 and 2, respectively, show the transient behaviors of the state vector $\sigma(t)$ and the output vector $x(t)$ with 10 random initial values. The simulation results show that the output variables are globally convergent to the unique solution $x^* = (0.5, 2, 2, 1.5)^T$ within the bound constraints.
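The experimental protocol can be mimicked on a toy problem (a hedged sketch; the example's actual $Q$, $A$, and bounds are not reproduced here): run the network in (30) and (31) without equality constraints ($P = 0$, $w = 0$) from several random initial states and check that every run returns the same output.

```python
import numpy as np

# Assumed toy data: strictly convex quadratic objective, box set
# X = [-1, 1]^n; the step size and horizon are arbitrary choices.
rng = np.random.default_rng(4)
n = 4
M = rng.standard_normal((n, n)); Q = M @ M.T + np.eye(n)
q = rng.standard_normal(n)
g = lambda u: np.clip(u, -1.0, 1.0)

def run(sigma, h=5e-3, steps=20_000):
    # Forward Euler for (30) with P = 0, w = 0:
    # eps * dsigma/dt = -(sigma - g(sigma) + Q g(sigma) + q)
    for _ in range(steps):
        sigma = sigma + h * (-(sigma - g(sigma) + Q @ g(sigma) + q))
    return g(sigma)                    # output equation (31)

# 10 random initial states, as in the experiment's protocol.
outputs = [run(5 * rng.standard_normal(n)) for _ in range(10)]
assert all(np.linalg.norm(x - outputs[0]) < 1e-5 for x in outputs)

# The common limit satisfies the fixed-point equation of the
# box-constrained quadratic program.
x_star = outputs[0]
assert np.linalg.norm(g(x_star - (Q @ x_star + q)) - x_star) < 1e-6
```

Strict convexity makes the solution unique, which is why all trajectories agree on the same output, mirroring the global convergence reported for the example.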
Fig. 5. Transient behaviors of the neural network in [31] for solving the
problem in Example 2.
Fig. 6. Transient behaviors of the neural network in [34] for solving the
problem in Example 2.
Qingshan Liu (S'07–M'08) received the
B.S. degree in mathematics from Anhui Normal
University, Wuhu, China, in 2001, the M.S. degree
in applied mathematics from Southeast University,
Nanjing, China, in 2005, and the Ph.D. degree
in automation and computer-aided engineering from
The Chinese University of Hong Kong, Hong Kong,
in 2008.
He was a Research Associate with the
City University of Hong Kong, Hong Kong,
in 2009 and 2011. In 2010 and 2012, he was
a Post-Doctoral Fellow with The Chinese University of Hong Kong.
In 2012 and 2013, he was a Visiting Scholar with Texas A&M University
at Qatar, Doha, Qatar. From 2008 to 2014, he was an Associate Professor
with the School of Automation, Southeast University. He is currently
a Professor with the School of Automation, Huazhong University of
Science and Technology, Wuhan, China. His current research interests
include optimization theory and applications, artificial neural networks,
computational intelligence, and multiagent systems.