
Configurations of steady states for Hopfield-type neural networks

E. Kaslik a,b,*, St. Balint a

a Faculty of Mathematics and Computer Science, West University of Timişoara, Bd. V. Parvan nr. 4, 300223, Timişoara, Romania
b L.A.G.A., UMR 7539, Institut Galilée, Université Paris 13, 99 Avenue J.B. Clément, 93430, Villetaneuse, France

* Corresponding author. E-mail addresses: kaslik@math.univ-paris13.fr (E. Kaslik), balint@balint.math.uvt.ro (St. Balint).
Abstract

The dependence of the steady states on the external input vector $I$ for continuous-time and discrete-time Hopfield-type neural networks of $n$ neurons is discussed. Conditions for the existence of one or several paths of steady states are derived. It is shown that, under some conditions, for an external input $I$ there may exist at least $2^n$ exponentially stable steady states (called a configuration of steady states), and their regions of attraction are estimated. This means that there exist $2^n$ paths of exponentially stable steady states defined on a certain set of input values. Conditions assuring the transfer of a configuration of exponentially stable steady states to another configuration of exponentially stable steady states by successive changes of the external input are obtained. These results may be important for the design and maneuvering of Hopfield-type neural networks used to analyze associative memories.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Hopfield neural network; Associative memory; Steady states; Control
1. Introduction
In this paper, we consider the continuous-time Hopfield-type neural network defined by the following system of nonlinear differential equations:
$$\dot{x}_i = -a_i x_i + \sum_{j=1}^{n} T_{ij} g_j(x_j) + I_i, \quad i = \overline{1,n}. \tag{1}$$
In [1] a semi-discretization technique has been presented for (1), leading to discrete-time neural networks
which faithfully preserve the characteristics of (1), i.e. the steady states and their stability properties. Despite
this fact, in this paper we will consider a more general class of discrete-time Hopfield-type neural networks, defined by the following discrete system:
$$x_i^{p+1} = x_i^p - a_i x_i^p + \sum_{j=1}^{n} T_{ij} g_j(x_j^p) + I_i \quad \forall i = \overline{1,n},\ p \in \mathbb{N}. \tag{2}$$
In Eqs. (1) and (2), $a_i > 0$, the $I_i$ denote the external inputs, $T = (T_{ij})_{n \times n}$ is the interconnection matrix, and $g_i : \mathbb{R} \to \mathbb{R}$, $i = \overline{1,n}$, represent the neuron input–output activations. For simplicity, we will suppose that the activation functions $g_i$ are of class $C^1$ on $\mathbb{R}$ and that $g_i(0) = 0$ for $i = \overline{1,n}$.
The systems (1) and (2) can be written in the matrix forms:
$$\dot{x} = -Ax + Tg(x) + I, \tag{3}$$
$$x^{p+1} = x^p - Ax^p + Tg(x^p) + I, \tag{4}$$
where $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb{R}^n$, $A = \mathrm{diag}(a_1, \ldots, a_n) \in M_{n \times n}$, $I = (I_1, \ldots, I_n)^T \in \mathbb{R}^n$ and $g : \mathbb{R}^n \to \mathbb{R}^n$ is given by $g(x) = (g_1(x_1), g_2(x_2), \ldots, g_n(x_n))^T$.
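To make the dynamics concrete, the following minimal sketch simulates (3) by forward Euler and iterates (4) directly; the tanh activations and the particular values of $A$, $T$ and $I$ are illustrative assumptions, not data from the paper.

```python
import numpy as np

def continuous_step(x, A, T, g, I, dt):
    """One forward-Euler step of (3): dx/dt = -Ax + T g(x) + I."""
    return x + dt * (-A @ x + T @ g(x) + I)

def discrete_step(x, A, T, g, I):
    """One iteration of (4): x^{p+1} = x^p - A x^p + T g(x^p) + I."""
    return x - A @ x + T @ g(x) + I

# Illustrative data: n = 2, tanh activations (g_i(0) = 0, |g_i| <= 1).
A = np.diag([1.0, 1.0])                  # a_i > 0
T = np.array([[0.5, 0.2], [0.1, 0.4]])   # interconnection matrix
I = np.array([0.3, -0.1])                # external input
g = np.tanh                              # applied componentwise

x = np.zeros(2)
for _ in range(10000):                   # relax toward a steady state
    x = continuous_step(x, A, T, g, I, dt=1e-2)
print(x, -A @ x + T @ g(x) + I)          # steady-state residual ~ 0
```

For these weakly coupled values both (3) and (4) settle on the same unique steady state, consistent with the fact that the two systems share their steady states.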
Since neural networks like (1) were first considered in [2,3], they have received much attention because of their applicability to problems of optimization, signal processing, image processing, solving nonlinear algebraic equations, pattern recognition, associative memories and so on. The qualitative analysis of neural dynamics plays an important role in the design of practical neural networks.
To solve problems of optimization, neural control and signal processing, neural networks have to be designed in such a way that, for a given external input, they exhibit only one globally asymptotically stable steady state. This matter has been treated in [4–9,1] and the references therein.
On the other hand, if neural networks are used to analyze associative memories, the existence of several locally asymptotically stable steady states is required, as they store information and constitute distributed and parallel neural memory networks. In this case, the purpose of the qualitative analysis is the study of the locally exponentially stable steady states (existence, number, regions of attraction) so as to ensure the recall capability of the models. Conditions for the local asymptotic stability of the steady states (and estimations of their regions of attraction) for Hopfield-type neural networks have been derived in [10–15] using single Lyapunov functions and in [16,17] using vector Lyapunov functions.
The aim of this paper is to show that, under some conditions, for some values of the external input $I$ there exists a configuration of at least $2^n$ asymptotically stable steady states for (1) and (2). Therefore, there exist $2^n$ paths of steady states defined on a set of values of the external input. Estimations of the regions of attraction of these steady states are given. Finally, we address the problem of controllability of these configurations of steady states.
2. Paths of steady states
A steady state $x = (x_1, x_2, \ldots, x_n)^T$ of (1) and (2) corresponding to the external input $I = (I_1, I_2, \ldots, I_n)^T$ is a solution of the equation:
$$-Ax + Tg(x) + I = 0. \tag{5}$$
For a given external input vector $I \in \mathbb{R}^n$ the system (5) may have one solution, several solutions, or it may happen that it has no solutions. On the other hand, for any state $x \in \mathbb{R}^n$ there exists a unique external input $I(x) \in \mathbb{R}^n$ such that $x$ is a steady state of (1) and (2) corresponding to the input $I(x)$. Clearly, the external input function $I : \mathbb{R}^n \to \mathbb{R}^n$ is of class $C^1$ on $\mathbb{R}^n$ and is defined by
$$I(x) = Ax - Tg(x). \tag{6}$$
The set $\mathcal{I}$ defined by
$$\mathcal{I} = \{ I \in \mathbb{R}^n \,/\, \exists x \in \mathbb{R}^n : I = Ax - Tg(x) \} \tag{7}$$
is the collection of those inputs $I$ for which the system (5) has at least one solution. If $\mathcal{I} = \mathbb{R}^n$, then for any input $I \in \mathbb{R}^n$ the system (5) has at least one solution. If $\mathcal{I}$ is strictly included in $\mathbb{R}^n$, then there exist input vectors $I$ for which system (5) has no solution.
In the following, before defining the concept of a path of steady states, a few preliminary results will be presented concerning the existence of steady states of (1) and (2) in rectangles included in $\mathbb{R}^n$.
Proposition 1. If $D$ is a rectangle in $\mathbb{R}^n$, i.e. for $i = \overline{1,n}$ there exist $a_i, b_i \in \overline{\mathbb{R}}$, $a_i < b_i$, such that $D = (a_1, b_1) \times (a_2, b_2) \times \cdots \times (a_n, b_n)$, and $\det(A - TDg(x)) \neq 0$ for any $x \in D$, then the function $I|_D$ (the restriction of the external input function to $D$) is injective.
Proof. Let $x', x'' \in D$, $x' \neq x''$. For every $i = \overline{1,n}$ there exists $c_i \in [x'_i, x''_i]$ such that $g_i(x'_i) - g_i(x''_i) = g'_i(c_i)(x'_i - x''_i)$. Therefore, we have
$$I(x') - I(x'') = A(x' - x'') - T(g(x') - g(x'')) = (A - TDg(c))(x' - x''), \tag{8}$$
where $c = (c_1, c_2, \ldots, c_n)^T \in [x'_1, x''_1] \times [x'_2, x''_2] \times \cdots \times [x'_n, x''_n] \subset D$ and $Dg(c) = \mathrm{diag}(g'_1(c_1), \ldots, g'_n(c_n))$. Hence, the matrix $A - TDg(c)$ is non-singular, which provides that $I(x') \neq I(x'')$. Thus, $I|_D$ is injective. □
Corollary 2. Let $D \subset \mathbb{R}^n$ be a rectangle such that $\det(A - TDg(x)) \neq 0$ for any $x \in D$. Then for any $I \in I(D)$ the system (1) and (2) has a unique steady state in $D$.
In [9], it has been shown that under certain conditions (similar to those of the following corollary), for a given input vector $I \in \mathbb{R}^n$, the neural network defined by (1) and (2) has a unique steady state in $\mathbb{R}^n$.
Corollary 3. Let $D \subset \mathbb{R}^n$ be a rectangle. If $g'_i(s) > 0$ for any $s \in \mathbb{R}$ and $i = \overline{1,n}$, and
$$T_{ii} - \frac{a_i}{g'_i(x_i)} + \sum_{j \neq i} |T_{ji}| < 0 \quad \forall i = \overline{1,n},\ \forall x \in D, \tag{9}$$
then for any $I \in I(D)$ the system (1) and (2) has a unique steady state in $D$. If $I_i > 0$ for any $i = \overline{1,n}$, then the coordinates of the steady state are positive.
Next, we define the concept of a path of steady states and give some sufficient conditions for the existence of one or several paths of steady states for (1) and (2).
Definition 4. We call a path of steady states for (1) and (2) a continuous function $u : U \subset \mathcal{I} \to \mathbb{R}^n$ such that
$$-Au(I) + Tg(u(I)) + I = 0 \quad \forall I \in U, \tag{10}$$
i.e. $u(I)$ is a steady state of (1) and (2) for any input $I \in U$. If $u$ is of class $C^1$ on $U$, then we say that $u$ is a $C^1$-path.
Let $I^* \in \mathcal{I}$ be an external input and $x^* \in \mathbb{R}^n$ a steady state corresponding to this input, i.e. $-Ax^* + Tg(x^*) + I^* = 0$. The following theorem holds for the existence of a $C^1$-path of steady states of (1) and (2) which contains $x^*$:
Theorem 5. If the matrix $A - TDg(x^*)$ is non-singular, then there exist a unique maximal domain $V^* \subset \mathbb{R}^n$ containing $x^*$, a unique maximal domain $U^* \subset \mathbb{R}^n$ containing $I^*$ and a uniquely defined function $u : U^* \to V^*$ of class $C^1$ having the following properties:

(i) $u(I^*) = x^*$;
(ii) $-Au(I) + Tg(u(I)) + I = 0$ for any $I \in U^*$;
(iii) the matrix $A - TDg(x)$ is non-singular on $V^*$.
Proof. Direct consequence of the implicit function theorem and of the continuous dependence of $\det(A - TDg(x))$ on $x$. □
Let $C$ be the set defined by
$$C = \{ x \in \mathbb{R}^n \,/\, \det(A - TDg(x)) = 0 \} \tag{11}$$
and let $G = \mathbb{R}^n \setminus C$. The set $G$ is open and for any $x \in G$ we have $\det(A - TDg(x)) \neq 0$.
Let $\{G_\alpha\}_\alpha$ be the set of the open connected components of $G$, i.e. for any $\alpha$ the set $G_\alpha \neq \emptyset$ is open and connected, $\bigcup_\alpha G_\alpha = G$ and $G_{\alpha'} \cap G_{\alpha''} = \emptyset$ if $\alpha' \neq \alpha''$.
Theorem 6. If $G_\alpha$ is a rectangle in $\mathbb{R}^n$, then there exists a unique function $u_\alpha : H_\alpha \to G_\alpha$ of class $C^1$ having the following properties:

(i) $H_\alpha = I(G_\alpha)$, where $I(x) = Ax - Tg(x)$ for any $x \in \mathbb{R}^n$;
(ii) $Au_\alpha(I) - Tg(u_\alpha(I)) = I$ for any $I \in H_\alpha$;
(iii) the matrix $Du_\alpha(I)$ is non-singular on $H_\alpha$.
Proof. According to Proposition 1, $I|_{G_\alpha}$ is injective. Consider $H_\alpha = I(G_\alpha)$ and remark that $I|_{G_\alpha} : G_\alpha \to H_\alpha$ is a bijection of class $C^1$. Now we can consider $u_\alpha = (I|_{G_\alpha})^{-1}$, which satisfies (ii) and (iii). □
It is clear that for every $\alpha$, the function $u_\alpha$ given by Theorem 6 is a $C^1$-path of steady states of (1) and (2).
Remark 7. If the set $G_\alpha$ is not a rectangle in $\mathbb{R}^n$, consider $D_\alpha$ the largest rectangle included in $G_\alpha$. For the rectangle $D_\alpha$ the statements of Theorem 6 are fulfilled, i.e. there exists a unique $C^1$-path of steady states $u_\alpha : I(D_\alpha) \to D_\alpha$ of (1) and (2).
In this way, it has been shown that $\mathbb{R}^n$ can be decomposed as $\mathbb{R}^n = \left( \bigcup_\alpha G_\alpha \right) \cup C$, and sufficient conditions have been found assuring that for an input vector $I$ in a certain set $H_\alpha \subset \mathbb{R}^n$ there exists a unique steady state in the largest rectangle $D_\alpha$ included in $G_\alpha$. This is the case in general when several paths of steady states exist. This kind of result can be important in the design of Hopfield-type neural networks used to analyze associative memories.
3. Controllability
Definition 8. A change at a certain moment of the external input from $I'$ to $I''$ is called a maneuver and is denoted by $I' \mapsto I''$.

We say that the maneuver $I' \mapsto I''$ made at $t = t_0$ is successful on the path $u_\alpha : H_\alpha \to D_\alpha$ of (1) if $I', I'' \in H_\alpha$ and if the solution of the initial value problem
$$\dot{x} = -Ax + Tg(x) + I'', \qquad x(t_0) = u_\alpha(I') \tag{12}$$
tends to $u_\alpha(I'')$ as $t \to \infty$.
We say that the maneuver $I' \mapsto I''$ made at $p = p_0$ is successful on the path $u_\alpha : H_\alpha \to D_\alpha$ of (2) if $I', I'' \in H_\alpha$ and if the solution of the initial value problem
$$x^{p+1} = x^p - Ax^p + Tg(x^p) + I'', \qquad x^{p_0} = u_\alpha(I') \tag{13}$$
tends to $u_\alpha(I'')$ as $p \to \infty$.
The system (1) and (2) is said to be controllable along a path of steady states if any two steady states belonging to the path can be transferred one into the other by a finite number of successive maneuvers.
If there exists a unique path of globally exponentially stable steady states $u : \mathbb{R}^n \to \mathbb{R}^n$ of (1) and (2), then any maneuver $I' \mapsto I''$ is successful on the path $u$; therefore, the system (1) and (2) is controllable along the path $u$.
If the steady states $u_\alpha(I)$ of the path $u_\alpha$ are only locally exponentially stable, then it may happen that some maneuvers are not successful along this path. In such cases, it is appropriate to use the following result [18–20]:
Theorem 9. For two steady states $u_\alpha(I^*)$ and $u_\alpha(I^{**})$ belonging to the path $u_\alpha : H_\alpha \to D_\alpha$ of locally exponentially stable steady states of (1) and (2), there exists a finite number of values of the external inputs $I^1, I^2, \ldots, I^p \in I(D_\alpha)$ such that all the maneuvers
$$I^* \mapsto I^1 \mapsto I^2 \mapsto \cdots \mapsto I^p \mapsto I^{**} \tag{14}$$
are successful on the path $u_\alpha$.
Theorem 9 states that the system (1) and (2) is controllable along the path $u_\alpha$ of locally exponentially stable steady states. In fact, the transfer from a steady state $u_\alpha(I^*)$ to a steady state $u_\alpha(I^{**})$ is made through the regions of attraction of the states $u_\alpha(I^1), u_\alpha(I^2), \ldots, u_\alpha(I^p), u_\alpha(I^{**})$.
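The transfer mechanism behind Theorem 9 admits a simple computational sketch: relax the network under each intermediate input for long enough to enter the region of attraction of the next steady state. The helper names below are ours, the relaxation horizon is a crude stand-in for $t \to \infty$, the choice of the intermediate inputs $I^1, \ldots, I^p$ is problem-specific and not computed here, and the toy one-neuron network is an assumption for illustration only.

```python
import numpy as np

def relax(x, I, rhs, dt=1e-2, steps=20000):
    """Crude stand-in for letting t -> infinity under a fixed input I."""
    for _ in range(steps):
        x = x + dt * rhs(x, I)
    return x

def transfer(x, inputs, rhs):
    """Chain of maneuvers: each relaxation must end inside the region
    of attraction of the steady state selected by the next input."""
    for I in inputs:
        x = relax(x, I, rhs)
    return x

# Toy bistable network (n = 1): x' = -x + 4 tanh(x) + I.
rhs = lambda x, I: -x + 4 * np.tanh(x) + I
x = relax(-4.0, 0.0, rhs)              # negative steady state (~ -3.997)
print(transfer(x, [5.0, 0.0], rhs))    # ends near +3.997
```

Keeping $I = 0$ would leave the state on the negative branch; the intermediate input $I = 5$ makes the system momentarily monostable, which is exactly the bridging role played by the inputs $I^1, \ldots, I^p$ in (14).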
Corollary 10. If $\bigcap_{\alpha \in C} H_\alpha \neq \emptyset$ for a certain set $C$ of indexes $\alpha$ and the paths $u_\alpha : \bigcap_{\alpha \in C} H_\alpha \to D_\alpha$ ($\alpha \in C$) are locally exponentially stable, then for two configurations of steady states $\{u_\alpha(I^*)\}_{\alpha \in C}$ and $\{u_\alpha(I^{**})\}_{\alpha \in C}$, where $I^*, I^{**} \in \bigcap_{\alpha \in C} H_\alpha$, there exists a finite number of external input vectors $I^1, I^2, \ldots, I^p \in \bigcap_{\alpha \in C} H_\alpha$ such that the maneuvers
$$I^* \mapsto I^1 \mapsto I^2 \mapsto \cdots \mapsto I^p \mapsto I^{**} \tag{15}$$
transfer the configuration $\{u_\alpha(I^*)\}_{\alpha \in C}$ into the configuration $\{u_\alpha(I^{**})\}_{\alpha \in C}$.
Example 11. Consider the following Hopfield-type neural network [12]:
$$\begin{cases} \dot{x}_1 = -x_1 + \dfrac{17 \ln 4}{15} \tanh x_2 + I_1, \\[1mm] \dot{x}_2 = -x_2 + \dfrac{17 \ln 4}{15} \tanh x_1 + I_2. \end{cases} \tag{16}$$
It is easy to see that for $(I_1, I_2) = (0, 0)$ the system (16) has three steady states: $(0, 0)^T$, which is unstable, and $(\ln 4, \ln 4)^T$ and $(-\ln 4, -\ln 4)^T$, which are locally exponentially stable. The external input function is $I : \mathbb{R}^2 \to \mathbb{R}^2$ defined by
$$I(x_1, x_2) = \left( x_1 - \frac{17 \ln 4}{15} \tanh x_2,\ x_2 - \frac{17 \ln 4}{15} \tanh x_1 \right)^T. \tag{17}$$
The Jacobian matrix of the system is
$$\begin{pmatrix} -1 & \dfrac{17 \ln 4}{15 \cosh^2 x_2} \\[2mm] \dfrac{17 \ln 4}{15 \cosh^2 x_1} & -1 \end{pmatrix}, \tag{18}$$
which is non-singular if and only if $\cosh x_1 \cosh x_2 \neq \frac{17 \ln 4}{15}$. It follows that the set $G$ has two open connected components:
$$G_- = \left\{ x = (x_1, x_2)^T \in \mathbb{R}^2 : \cosh x_1 \cosh x_2 < \frac{17 \ln 4}{15} \right\}, \tag{19}$$
$$G_+ = \left\{ x = (x_1, x_2)^T \in \mathbb{R}^2 : \cosh x_1 \cosh x_2 > \frac{17 \ln 4}{15} \right\}. \tag{20}$$
All the steady states belonging to the set $G_-$ are unstable, as the eigenvalues of the Jacobian matrix at a point $x = (x_1, x_2)^T \in G_-$ are $\pm\frac{17 \ln 4}{15 \cosh x_1 \cosh x_2} - 1$, one of them being positive and the other negative.
In the other connected component, $G_+$, there exist at least two paths of steady states, one of them containing $(\ln 4, \ln 4)^T$ and the other containing $(-\ln 4, -\ln 4)^T$. All the steady states belonging to $G_+$ are locally exponentially stable.
We denote by $u$ the path included in $G_+$ which contains the steady state $(\ln 4, \ln 4)^T$. Let us analyze some characteristics of the steady states belonging to the path $u$ which correspond to external inputs of the form $I = (I_1, I_2)^T$ with $I_1 = I_2$. One can prove that the steady states which correspond to a given input $(I_1, I_1)^T$ are of the form $(x_1, x_1)^T$. It is obvious that to any steady state $(x_1, x_1)^T$ from the first bisector there corresponds an input $(I_1, I_1)^T$, where $I_1 = x_1 - \frac{17 \ln 4}{15} \tanh x_1$.
In Fig. 1 we have represented four steady states which belong to the first bisector: $x^1 = (1, 1)^T$, for which $I_1^1 \simeq -0.196566$; $x^2 = (\ln 4, \ln 4)^T$, which corresponds to $I_1^2 = 0$; $x^3 = (3, 3)^T$, for which $I_1^3 \simeq 1.43664$; and $x^4 = (4, 4)^T$, which corresponds to $I_1^4 \simeq 2.42992$. All these steady states belong to $G_+$; therefore, they are locally exponentially stable. In Fig. 1, the estimates of the regions of attraction of each steady state $x^i$, $i = \overline{1,4}$, are also presented. These estimates have been found using the method proposed in [21].

Fig. 1. Estimates of the regions of attraction of $x^i$, $i = \overline{1,4}$.
One can see that $x^1$ is in the estimate of the region of attraction of $x^4$; therefore, the maneuver $I : (I_1^1, I_1^1)^T \mapsto (I_1^4, I_1^4)^T$ is successful and transfers the neural network from the steady state $x^1$ to the steady state $x^4$ directly. On the other hand, $x^4$ does not belong to the estimate of the region of attraction of $x^1$; therefore, we are not sure that the direct maneuver $I : (I_1^4, I_1^4)^T \mapsto (I_1^1, I_1^1)^T$ is successful. However, we observe that $x^4 \in D_a(x^3)$, $x^3 \in D_a(x^2)$ and $x^2 \in D_a(x^1)$ (where $D_a$ denotes the region of attraction); hence, the neural network can be transferred from $x^4$ to $x^1$ by the following successive maneuvers:
$$I : (I_1^4, I_1^4)^T \mapsto (I_1^3, I_1^3)^T \mapsto (I_1^2, I_1^2)^T \mapsto (I_1^1, I_1^1)^T. \tag{21}$$
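The maneuvers (21) can be reproduced numerically. The sketch below assumes forward-Euler integration with a step size of our choosing; it first recomputes the bisector inputs $I_1^i$ and then chains the three maneuvers from $x^4$ down to $x^1$.

```python
import numpy as np

c = 17 * np.log(4) / 15

def rhs(x, I):
    """Right-hand side of system (16)."""
    return np.array([-x[0] + c * np.tanh(x[1]) + I[0],
                     -x[1] + c * np.tanh(x[0]) + I[1]])

def relax(x, I, dt=1e-2, steps=5000):
    for _ in range(steps):
        x = x + dt * rhs(x, I)
    return x

# Inputs on the first bisector: I_1 = x_1 - c * tanh(x_1).
I1 = lambda s: s - c * np.tanh(s)
print([I1(s) for s in (1.0, np.log(4), 3.0, 4.0)])
# ~ [-0.196566, 0.0, 1.43664, 2.42992]

# Successive maneuvers (21): x^4 -> x^3 -> x^2 -> x^1.
x = np.array([4.0, 4.0])
for s in (3.0, np.log(4), 1.0):
    x = relax(x, np.full(2, I1(s)))
    print(x)   # settles on (3,3), then (ln 4, ln 4), then (1,1)
```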
4. Configurations of $2^n$ steady states
In this section, we will consider the following hypotheses on the activation functions:

(H1) The activation functions $g_i$, $i = \overline{1,n}$, are bounded:
$$|g_i(s)| \leq 1 \quad \text{for any } s \in \mathbb{R},\ i = \overline{1,n}. \tag{22}$$

(H2) There exists $a \in (0, 1)$ such that the functions $g_i$, $i = \overline{1,n}$, satisfy:
$$g_i(s) \geq a \ \text{ if } s \geq 1 \quad \text{and} \quad g_i(s) \leq -a \ \text{ if } s \leq -1. \tag{23}$$

(H3) The derivatives of the activation functions $g_i$, $i = \overline{1,n}$, satisfy:
$$|g'_i(s)| < \frac{a_i}{\sum_{j=1}^{n} |T_{ji}|} \quad \forall |s| \geq 1. \tag{24}$$
These hypotheses are not too restrictive. Indeed, if the activation functions are bounded, but not by 1, one can consider the new activation functions $g_i / \sup_{s \in \mathbb{R}} |g_i(s)|$ and replace the matrix $T$ by the matrix $\left( T_{ij} \sup_{s \in \mathbb{R}} |g_j(s)| \right)_{n \times n}$. These new activation functions satisfy hypothesis (H1).
Moreover, the activation functions $g_i$ are usually chosen to satisfy the following conditions: $g_i(s) \to 1$ as $s \to \infty$, $g_i(s) \to -1$ as $s \to -\infty$ and $g'_i(s) \to 0$ as $s \to \pm\infty$. Hence, for $a \in (0, 1)$ there exists $M > 0$ such that for any $i = \overline{1,n}$ one has:
- $g_i(s) \geq a$ if $s \geq M$ and $g_i(s) \leq -a$ if $s \leq -M$;
- $|g'_i(s)| < \frac{a_i}{\sum_{j=1}^{n} |T_{ji}|}$ for $|s| \geq M$.

If $M \leq 1$, then hypotheses (H2) and (H3) hold for the activation functions $g_i$.
If $M > 1$, consider the rescaling $y = \frac{1}{M} x$ in system (3). System (3) becomes
$$\dot{y} = -Ay + \frac{1}{M} Tg(My) + \frac{1}{M} I, \tag{25}$$
which describes a neural network having the activation functions $\tilde{g}(y) = \frac{1}{M} g(My)$ and the external input $\tilde{I} = \frac{1}{M} I$. The functions $\tilde{g}_i$ satisfy hypotheses (H2) and (H3):
- $\tilde{g}_i(s) \geq \tilde{a}$ if $s \geq 1$ and $\tilde{g}_i(s) \leq -\tilde{a}$ if $s \leq -1$, where $\tilde{a} = \frac{a}{M} \in (0, 1)$;
- $|\tilde{g}'_i(s)| < \frac{a_i}{\sum_{j=1}^{n} |T_{ji}|}$ for $|s| \geq 1$.
The following theorem provides a bound for the set of steady states of (1) and (2) corresponding to an input $I$:
Theorem 12. If hypothesis (H1) is fulfilled, then for any input vector $I \in \mathbb{R}^n$ the following statements hold:

(i) There exists at least one steady state of (1) and (2) (corresponding to $I$) in the rectangle $D_I = [-M_1, M_1] \times [-M_2, M_2] \times \cdots \times [-M_n, M_n]$ of $\mathbb{R}^n$, where
$$M_i = \frac{1}{a_i} \left( |I_i| + \sum_{j=1}^{n} |T_{ij}| \right) \quad \text{for any } i = \overline{1,n}. \tag{26}$$
(ii) Every steady state of (1) and (2) corresponding to $I$ belongs to the rectangle $D_I$ defined above.
(iii) If in addition $\det(A - TDg(x)) \neq 0$ for any $x \in D_I$, then the system (1) and (2) has a unique steady state corresponding to $I$, and it belongs to $D_I$.
Proof. The set of steady states of (1) and (2) corresponding to $I$ is given by Eq. (5), which is equivalent to
$$x = A^{-1}(I + Tg(x)). \tag{27}$$
Let $h : \mathbb{R}^n \to \mathbb{R}^n$ be the function defined by $h(x) = A^{-1}(I + Tg(x))$. For any $x \in \mathbb{R}^n$ and $i = \overline{1,n}$, one has
$$|h_i(x)| = \frac{1}{a_i} \left| I_i + \sum_{j=1}^{n} T_{ij} g_j(x_j) \right| \leq \frac{1}{a_i} \left( |I_i| + \sum_{j=1}^{n} |T_{ij}| \right) = M_i. \tag{28}$$
Therefore, $h(\mathbb{R}^n) \subset D_I$, which proves (ii). Moreover, one gets that $h(D_I) \subset D_I$, and as $h$ is a continuous function, Brouwer's fixed point theorem guarantees the existence of at least one steady state of (1) and (2) corresponding to $I$ in $D_I$, so statement (i) holds. Statement (iii) follows directly from Corollary 2. □
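A numerical illustration of Theorem 12, with made-up values of $A$, $T$ and $I$: it computes the bound (26) and then iterates $h$. Brouwer's theorem only guarantees that $h$ has a fixed point in $D_I$; plain fixed-point iteration need not converge in general, but it does for the contractive values chosen below.

```python
import numpy as np

A = np.diag([2.0, 3.0])                    # illustrative data only
T = np.array([[1.0, -0.5], [0.8, 1.2]])
I = np.array([0.7, -1.1])
g = np.tanh                                # bounded: |g_i(s)| <= 1

# Bound (26): M_i = (|I_i| + sum_j |T_ij|) / a_i.
M = (np.abs(I) + np.abs(T).sum(axis=1)) / np.diag(A)
print("M =", M)                            # D_I = [-M_1,M_1] x [-M_2,M_2]

# Iterate h(x) = A^{-1}(I + T g(x)); here h is a contraction on D_I.
x = np.zeros(2)
for _ in range(1000):
    x = np.linalg.solve(A, I + T @ g(x))
print("steady state:", x, "in D_I:", bool(np.all(np.abs(x) <= M)))
```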
For every $\varepsilon \in \{\pm 1\}^n$ we define the rectangle $D_\varepsilon = J(\varepsilon_1) \times J(\varepsilon_2) \times \cdots \times J(\varepsilon_n)$, where $J(1) = (1, \infty)$ and $J(-1) = (-\infty, -1)$.
For continuous-time Hopfield-type neural networks, the following theorem holds:
Theorem 13 (the continuous case). Suppose that hypotheses (H1) and (H2) hold. If the external input $I \in \mathbb{R}^n$ satisfies
$$|I_i| < T_{ii} a - a_i - \sum_{j \neq i} |T_{ij}| \quad \forall i = \overline{1,n}, \tag{29}$$
then the following statements hold:

(i) In every rectangle $D_\varepsilon$, $\varepsilon \in \{\pm 1\}^n$, there exists at least one steady state of (1) corresponding to the input $I$.
(ii) Every $D_\varepsilon$, $\varepsilon \in \{\pm 1\}^n$, is invariant to the flow of system (1).
(iii) Moreover, if hypothesis (H3) holds as well, then the steady state of (1) corresponding to the input $I$ which lies in the rectangle $D_\varepsilon$, $\varepsilon \in \{\pm 1\}^n$, is unique, it is exponentially stable and its region of attraction includes $D_\varepsilon$.
Proof. Let $I$ be an input satisfying (29) and let $\varepsilon \in \{\pm 1\}^n$.

(i) Consider the function $h : \mathbb{R}^n \to D_I$ defined by $h(x) = A^{-1}(I + Tg(x))$ and the rectangle $D_I$ given in Theorem 12. For $x \in D_\varepsilon$ we have that $\varepsilon_i x_i \geq 1$ for any $i = \overline{1,n}$ and therefore
$$\varepsilon_i h_i(x) = \frac{\varepsilon_i}{a_i} \left( T_{ii} g_i(x_i) + \sum_{j \neq i} T_{ij} g_j(x_j) + I_i \right) \geq \frac{1}{a_i} \left( T_{ii} a - \sum_{j \neq i} |T_{ij}| - |I_i| \right) > 1. \tag{30}$$
This means that $h_i(x) \in J(\varepsilon_i)$ for any $i = \overline{1,n}$ and therefore $h(x) \in D_\varepsilon$. We have just proved that $h(D_\varepsilon) \subset D_\varepsilon \cap D_I$, and Brouwer's fixed point theorem guarantees the existence of at least one steady state of (1) corresponding to the input $I$ in $D_\varepsilon \cap D_I$.
(ii) Let $x^0 \in D_\varepsilon$. Suppose that there exist $t_0 \geq 0$ and $i \in \overline{1,n}$ such that $x_i(t_0) = \varepsilon_i$, where $x_i(t) = x_i(t; x^0, I)$. Consider $y_i(t) = \varepsilon_i \left( -a_i x_i(t) + \sum_{j=1}^{n} T_{ij} g_j(x_j(t)) + I_i \right)$. Based on (29) and hypotheses (H1) and (H2), we have that
$$y_i(t_0) = \varepsilon_i \left( -a_i \varepsilon_i + T_{ii} g_i(\varepsilon_i) + \sum_{j \neq i} T_{ij} g_j(x_j(t_0)) + I_i \right) \geq -a_i + T_{ii} a - \sum_{j \neq i} |T_{ij}| - |I_i| > 0. \tag{31}$$
Therefore, there exists $t_1 > t_0$ such that $y_i(t) > 0$ for any $t \in [t_0, t_1]$. This implies that $\varepsilon_i \dot{x}_i(t) = y_i(t) > 0$ for any $t \in [t_0, t_1]$, and therefore the function $\varepsilon_i x_i$ is strictly increasing on $[t_0, t_1]$. Hence $\varepsilon_i x_i(t) > \varepsilon_i x_i(t_0) = \varepsilon_i^2 = 1$ for any $t \in (t_0, t_1]$. This means that $x_i(t) \in J(\varepsilon_i)$ for any $t \in (t_0, t_1]$. It follows that the solution $x(t; x^0, I)$, $x^0 \in D_\varepsilon$, will remain in $D_\varepsilon$ for any $t \geq 0$.
(iii) Consider $b_i$ such that $|g'_i(s)| \leq b_i < \frac{a_i}{\sum_{j=1}^{n} |T_{ji}|}$ for any $|s| \geq 1$ and $i = \overline{1,n}$. We will first show that the steady state of (1) corresponding to the input $I$ which lies in $D_\varepsilon$ is unique. Suppose the contrary, i.e. that there exist two steady states $x, y \in D_\varepsilon$, $x \neq y$, of (1). For every $i = \overline{1,n}$, one has:
$$a_i |x_i - y_i| = \left| \sum_{j=1}^{n} T_{ij} (g_j(x_j) - g_j(y_j)) \right| \leq \sum_{j=1}^{n} |T_{ij}| b_j |x_j - y_j|. \tag{32}$$
Therefore,
$$\sum_{i=1}^{n} a_i |x_i - y_i| \leq \sum_{i=1}^{n} \sum_{j=1}^{n} |T_{ij}| b_j |x_j - y_j| < \sum_{j=1}^{n} a_j |x_j - y_j|, \tag{33}$$
which is absurd. Therefore, there exists a unique steady state of (1) corresponding to the input $I$ which lies in the rectangle $D_\varepsilon$. It will be denoted by $x^{I,\varepsilon}$.
Let us prove that $x^{I,\varepsilon}$ is exponentially stable and that its region of attraction includes $D_\varepsilon$. Let $x^0 \in D_\varepsilon$. From (ii) we get that $x(t; x^0, I) \in D_\varepsilon$ for any $t \geq 0$. Consider the function $V : \mathbb{R}_+ \to \mathbb{R}_+$ defined by
$$V(t) = \sum_{i=1}^{n} |x_i(t; x^0, I) - x_i^{I,\varepsilon}| \quad \forall t \geq 0. \tag{34}$$
The function $V$ is differentiable on $(0, \infty)$ and its derivative satisfies
$$\begin{aligned}
V'(t) &= \sum_{i=1}^{n} \operatorname{sgn}(x_i(t) - x_i^{I,\varepsilon})\, \dot{x}_i(t) = \sum_{i=1}^{n} \operatorname{sgn}(x_i(t) - x_i^{I,\varepsilon}) \left( -a_i x_i(t) + \sum_{j=1}^{n} T_{ij} g_j(x_j(t)) + I_i \right) \\
&= \sum_{i=1}^{n} \operatorname{sgn}(x_i(t) - x_i^{I,\varepsilon}) \left( -a_i (x_i(t) - x_i^{I,\varepsilon}) + \sum_{j=1}^{n} T_{ij} \left( g_j(x_j(t)) - g_j(x_j^{I,\varepsilon}) \right) \right) \\
&= -\sum_{i=1}^{n} a_i |x_i(t) - x_i^{I,\varepsilon}| + \sum_{i=1}^{n} \operatorname{sgn}(x_i(t) - x_i^{I,\varepsilon}) \sum_{j=1}^{n} T_{ij} \left( g_j(x_j(t)) - g_j(x_j^{I,\varepsilon}) \right) \\
&\leq -\sum_{i=1}^{n} a_i |x_i(t) - x_i^{I,\varepsilon}| + \sum_{i=1}^{n} \sum_{j=1}^{n} |T_{ij}| b_j |x_j(t) - x_j^{I,\varepsilon}| \\
&= -\sum_{i=1}^{n} \left( a_i - b_i \sum_{j=1}^{n} |T_{ji}| \right) |x_i(t) - x_i^{I,\varepsilon}| \leq -kV(t),
\end{aligned} \tag{35}$$
where $k = \min_{i=\overline{1,n}} \left( a_i - b_i \sum_{j=1}^{n} |T_{ji}| \right) > 0$. Therefore, we have that $V(t) \leq e^{-kt} V(0)$. Hence $V(t) \to 0$ exponentially as $t \to \infty$. This means that $x(t; x^0, I) \to x^{I,\varepsilon}$ as $t \to \infty$. Thus $x^{I,\varepsilon}$ is exponentially stable and its region of attraction includes $D_\varepsilon$. □
For discrete-time Hopfield-type neural networks, the following theorem holds:
Theorem 14 (the discrete case). Suppose that hypotheses (H1) and (H2) hold and that $a_i \in (0, 1)$, $i = \overline{1,n}$. If the external input $I \in \mathbb{R}^n$ satisfies
$$|I_i| < T_{ii} a - a_i - \sum_{j \neq i} |T_{ij}| \quad \forall i = \overline{1,n}, \tag{36}$$
then the following statements hold:

(i) In every rectangle $D_\varepsilon$, $\varepsilon \in \{\pm 1\}^n$, there exists at least one steady state of (2) corresponding to the input $I$.
(ii) Every $D_\varepsilon$, $\varepsilon \in \{\pm 1\}^n$, is invariant to the map $x \mapsto f(x, I)$, where $f(x, I) = x - Ax + Tg(x) + I$ denotes the right-hand side of (4).
(iii) Moreover, if hypothesis (H3) holds as well, then the steady state of (2) corresponding to the input $I$ which lies in the rectangle $D_\varepsilon$, $\varepsilon \in \{\pm 1\}^n$, is unique, it is exponentially stable and its region of attraction includes $D_\varepsilon$.
Proof. Let $I$ be an input satisfying (36) and let $\varepsilon \in \{\pm 1\}^n$.

(i) Similar to the proof of Theorem 13(i).
(ii) Let $x \in D_\varepsilon$. One has to prove that $f(x, I) \in D_\varepsilon$. Using (36) and hypotheses (H1) and (H2), for any $i = \overline{1,n}$ it results that:
$$\varepsilon_i f_i(x, I) = \varepsilon_i \left( (1 - a_i) x_i + T_{ii} g_i(x_i) + \sum_{j \neq i} T_{ij} g_j(x_j) + I_i \right) \geq 1 - a_i + T_{ii} a - \sum_{j \neq i} |T_{ij}| - |I_i| > 1. \tag{37}$$
Therefore, $f(x, I) \in D_\varepsilon$.
(iii) Suppose that $|g'_i(s)| \leq b_i < \frac{a_i}{\sum_{j=1}^{n} |T_{ji}|}$ for any $|s| \geq 1$ and $i = \overline{1,n}$. The uniqueness of the steady state of (2) corresponding to the input $I$ which lies in the rectangle $D_\varepsilon$ is shown in the same way as in Theorem 13(iii). This unique steady state will be denoted by $x^{I,\varepsilon}$.
Let us prove that $x^{I,\varepsilon}$ is exponentially stable and that its region of attraction includes $D_\varepsilon$. Consider the function $V : \mathbb{R}^n \to \mathbb{R}_+$ defined by
$$V(x) = \sum_{i=1}^{n} |x_i - x_i^{I,\varepsilon}| \quad \forall x \in \mathbb{R}^n. \tag{38}$$
On $D_\varepsilon$, the function $V$ satisfies:
$$\begin{aligned}
V(f(x, I)) &= \sum_{i=1}^{n} |f_i(x, I) - x_i^{I,\varepsilon}| = \sum_{i=1}^{n} \left| (1 - a_i)(x_i - x_i^{I,\varepsilon}) + \sum_{j=1}^{n} T_{ij} \left( g_j(x_j) - g_j(x_j^{I,\varepsilon}) \right) \right| \\
&\leq \sum_{i=1}^{n} (1 - a_i) |x_i - x_i^{I,\varepsilon}| + \sum_{i=1}^{n} \sum_{j=1}^{n} |T_{ij}| \left| g_j(x_j) - g_j(x_j^{I,\varepsilon}) \right| \\
&\leq \sum_{i=1}^{n} (1 - a_i) |x_i - x_i^{I,\varepsilon}| + \sum_{i=1}^{n} \sum_{j=1}^{n} |T_{ij}| b_j |x_j - x_j^{I,\varepsilon}| \\
&= \sum_{i=1}^{n} \left( 1 - a_i + b_i \sum_{j=1}^{n} |T_{ji}| \right) |x_i - x_i^{I,\varepsilon}| \leq kV(x),
\end{aligned} \tag{39}$$
where $k = \max_{i=\overline{1,n}} \left( 1 - a_i + b_i \sum_{j=1}^{n} |T_{ji}| \right) \in (0, 1)$. From (ii) we have that $V(f^p(x, I)) \leq k^p V(x)$ for any $p \in \mathbb{N}$. Hence $V(f^p(x, I)) \to 0$ as $p \to \infty$. This means that $f^p(x, I) \to x^{I,\varepsilon}$ as $p \to \infty$. Thus $x^{I,\varepsilon}$ is exponentially stable and its region of attraction includes $D_\varepsilon$. □
Remark 15. From Theorem 12, it follows that if there exists an input $I$ satisfying
$$|I_i| \leq a_i - \sum_{j=1}^{n} |T_{ij}| \quad \forall i = \overline{1,n}, \tag{40}$$
then there exists at least one steady state of (1) and (2) corresponding to $I$ belonging to the rectangle $[-1, 1]^n$, and there are no other steady states corresponding to $I$ outside this rectangle. The existence of such an input implies that $a_i > |T_{ii}|$ for any $i = \overline{1,n}$.
On the other hand, Theorems 13 and 14 guarantee that if there exist $a \in (0, 1)$ and an input $I$ satisfying
$$|I_i| < T_{ii} a - a_i - \sum_{j \neq i} |T_{ij}| \quad \forall i = \overline{1,n}, \tag{41}$$
then there exist $2^n$ steady states corresponding to $I$ outside the rectangle $[-1, 1]^n$ (one in every rectangle $D_\varepsilon$). The existence of such an input implies that $a_i < T_{ii} a$ for any $i = \overline{1,n}$.
It is easy to see that the two conditions above exclude each other.
We will denote by $U$ the set defined by
$$U = \left\{ I \in \mathbb{R}^n \,/\, |I_i| < T_{ii} a - a_i - \sum_{j \neq i} |T_{ij}|,\ \forall i = \overline{1,n} \right\}. \tag{42}$$
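Membership in $U$ (together with condition (43) below, which makes the right-hand side positive) is easy to check numerically. The sketch anticipates the data of Example 18; the function name is ours.

```python
import numpy as np

def in_U(I, T, a_vec, a):
    """Check (42): |I_i| < T_ii * a - a_i - sum_{j != i} |T_ij| for all i;
    `a` is the saturation level from hypothesis (H2)."""
    offdiag = np.abs(T).sum(axis=1) - np.abs(np.diag(T))
    bound = np.diag(T) * a - a_vec - offdiag
    return bool(np.all(bound > 0) and np.all(np.abs(I) < bound))

T = np.array([[20.0, 1.0], [1.0, 20.0]])      # data of Example 18 below
a_vec = np.array([0.5, 0.5])
a = np.tanh(5.0) * np.tanh(9.0)               # = g(1) there, ~0.99991
print(in_U(np.array([0.0, 0.0]), T, a_vec, a))    # True
print(in_U(np.array([19.0, 0.0]), T, a_vec, a))   # False: bound ~18.4982
```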
The consequences of Theorems 13 and 14 can be summarized as follows:
Theorem 16. Suppose that hypotheses (H1), (H2) and (H3) hold for the activation functions $g_i$, $i = \overline{1,n}$, and that
$$T_{ii} a - a_i - \sum_{j \neq i} |T_{ij}| > 0 \quad \forall i = \overline{1,n}. \tag{43}$$
Moreover, in the discrete case, suppose that $a_i \in (0, 1)$, $i = \overline{1,n}$. The following statements hold:

(i) $U \neq \emptyset$ and in each rectangle $D_\varepsilon$, $\varepsilon \in \{\pm 1\}^n$, there exists a unique path of exponentially stable steady states $u_\varepsilon : U \to D_\varepsilon$ for (1) and (2).
(ii) For every $I \in U$ and $\varepsilon \in \{\pm 1\}^n$, the region of attraction of $u_\varepsilon(I)$ includes $D_\varepsilon$.
(iii) The maneuver $I' \mapsto I''$, with $I', I'' \in U$, is successful along every path of steady states $u_\varepsilon : U \to D_\varepsilon$, $\varepsilon \in \{\pm 1\}^n$. It transfers the configuration of steady states $\{u_\varepsilon(I')\}$ to the configuration of steady states $\{u_\varepsilon(I'')\}$.
(iv) The system (1) and (2) is controllable along every path $u_\varepsilon : U \to D_\varepsilon$, $\varepsilon \in \{\pm 1\}^n$.
Example 17. Consider the following neural network:
$$\begin{cases} \dot{x}_1 = -a_1 x_1 + b_1 g(x_1) + b_2 g(x_2) + I_1, \\ \dot{x}_2 = -a_2 x_2 + b_2 g(x_1) + b_1 g(x_2) + I_2, \end{cases} \tag{44}$$
where $g : \mathbb{R} \to \mathbb{R}$, $g(s) = \frac{2}{\pi} \arctan\left( \frac{\pi}{2} s \right)$.
Let $a = g(1) \simeq 0.63$. One can easily check that the activation function $g$ satisfies hypotheses (H1) and (H2). Let $b = g'(1) \simeq 0.28$. We have that $0 < g'(s) \leq b$ for any $|s| \geq 1$. If
$$b(|b_1| + |b_2|) < a_i < ab_1 - |b_2|, \quad i = 1, 2, \tag{45}$$
then the activation function $g$ satisfies hypothesis (H3) and condition (43) of Theorem 16 is also fulfilled.
The set $U$ is defined in this case by:
$$U = \{ I \in \mathbb{R}^2 \,/\, |I_i| < ab_1 - |b_2| - a_i,\ i = 1, 2 \}. \tag{46}$$
We will consider $b_1 = 1000$, $b_2 = 0.5$ and $a_1 = a_2 = ab_1 - |b_2| - 300 \simeq 338.5$. In this case, we have $U = (-300, 300) \times (-300, 300)$. Therefore, we have four paths of exponentially stable steady states of (44), which we denote by $u_\varepsilon : U \to D_\varepsilon$, $\varepsilon \in \{\pm 1\}^2$. In Fig. 2, the gray rectangles represent the four sets of steady states $u_\varepsilon(U)$.
The four spirals in Fig. 2 represent the steady states corresponding to the inputs $I_u = (20u \cos u, 20u \sin u)^T$ with $u \in [0, 4\pi]$.

Fig. 2. The sets of steady states $u_\varepsilon(U)$ for (44).
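A simulation sketch for (44) with the parameter values above: starting from one point inside each rectangle $D_\varepsilon$ and relaxing under the same input exhibits the four exponentially stable steady states. The forward-Euler discretization, step size and starting points are our choices.

```python
import numpy as np

g = lambda s: (2 / np.pi) * np.arctan(np.pi * s / 2)
a = g(1.0)                                  # ~0.639
b1, b2 = 1000.0, 0.5
a1 = a2 = a * b1 - abs(b2) - 300.0          # so that U = (-300, 300)^2
avec = np.array([a1, a2])
T = np.array([[b1, b2], [b2, b1]])

def relax(x, I, dt=1e-3, steps=20000):
    """Forward-Euler relaxation of (44) under a fixed input I."""
    for _ in range(steps):
        x = x + dt * (-avec * x + T @ g(x) + I)
    return x

I = np.array([0.0, 0.0])                    # one input, four steady states
for eps in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
    x0 = 2.0 * np.array(eps, dtype=float)   # a point inside D_eps
    print(eps, relax(x0, I))                # stays in its own D_eps
```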
Example 18. We consider a discrete-time Hopfield-type neural network with the non-monotone activation function $g(x) = \tanh(5x) \tanh(10x^2 - 1)$:
$$\begin{cases} x_1^{p+1} = 0.5 x_1^p + 20 g(x_1^p) + g(x_2^p) + I_1, \\ x_2^{p+1} = 0.5 x_2^p + g(x_1^p) + 20 g(x_2^p) + I_2. \end{cases} \tag{47}$$
It has been shown [22] that in some cases the absolute capacity of an associative neural network can be improved by using non-monotone activation functions instead of the usual sigmoid ones.
Consider $a = g(1) \in (0, 1)$. It can be verified that the activation function $g$ satisfies hypotheses (H1), (H2) and (H3) and that condition (43) of Theorem 16 is also fulfilled. The set $U$ is in this case the rectangle
$$U = \{ I \in \mathbb{R}^2 \,/\, |I_i| < 18.4982,\ i = 1, 2 \}. \tag{48}$$
Therefore, we have four paths of exponentially stable steady states of (47), which we denote by $u_\varepsilon : U \to D_\varepsilon$, $\varepsilon \in \{\pm 1\}^2$. In Fig. 3, the gray rectangles represent the four sets of steady states $u_\varepsilon(U)$.
In Fig. 3, we have also represented the four steady states corresponding to the input $I = (0, 0)^T$, namely $(38, -38)^T$, $(-42, -42)^T$, $(42, 42)^T$ and $(-38, 38)^T$, and the four steady states corresponding to the input $I = (10, -10)^T$, namely $(58, -58)^T$, $(-22, -62)^T$, $(62, 22)^T$ and $(-18, 18)^T$.
The maneuver $I : (0, 0)^T \mapsto (10, -10)^T$ transfers the configuration of steady states $\{(38, -38)^T, (-42, -42)^T, (42, 42)^T, (-38, 38)^T\}$ to the configuration of steady states $\{(58, -58)^T, (-22, -62)^T, (62, 22)^T, (-18, 18)^T\}$.

Fig. 3. The sets of steady states $u_\varepsilon(U)$ for (47) and the maneuver $I : (0, 0)^T \mapsto (10, -10)^T$.
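This configuration transfer can be reproduced by iterating the map (47) directly; the starting points inside each $D_\varepsilon$ and the iteration count are our choices.

```python
import numpy as np

g = lambda x: np.tanh(5 * x) * np.tanh(10 * x**2 - 1)   # non-monotone
T = np.array([[20.0, 1.0], [1.0, 20.0]])

def relax(x, I, steps=200):
    """Iterate (47); by Theorem 14 it contracts toward x^{I,eps} in D_eps."""
    for _ in range(steps):
        x = 0.5 * x + T @ g(x) + I
    return x

I0, I1 = np.array([0.0, 0.0]), np.array([10.0, -10.0])
config = [relax(2.0 * np.array(eps, dtype=float), I0)
          for eps in [(1, 1), (1, -1), (-1, 1), (-1, -1)]]
print(config)                     # ~ (42,42), (38,-38), (-38,38), (-42,-42)

# Maneuver I0 -> I1: each steady state moves within its own D_eps.
print([relax(x, I1) for x in config])
# ~ (62,22), (58,-58), (-18,18), (-22,-62)
```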
5. Conclusions
For continuous- and discrete-time Hopfield-type neural networks, conditions ensuring the existence of $2^n$ paths of exponentially stable steady states defined on a certain set of external input values have been derived. Moreover, it has been shown that the system is controllable along each of these paths of steady states. Finding similar conditions for Cohen–Grossberg neural networks may constitute a direction for future research.
References
[1] S. Mohamad, K. Gopalsamy, Dynamics of a class of discrete-time neural networks and their continuous-time counterparts, Mathematics and Computers in Simulation 53 (1–2) (2000) 1–39.
[2] J. Hopfield, Neural networks and physical systems with emergent collective computational abilities, Proceedings of the National Academy of Sciences 79 (1982) 2554–2558.
[3] D. Tank, J. Hopfield, Simple neural optimization networks: an A/D converter, signal decision circuit and a linear programming circuit, IEEE Transactions on Circuits and Systems 33 (1986) 533–541.
[4] M. Forti, A. Tesi, New conditions for global stability of neural networks with applications to linear and quadratic programming problems, IEEE Transactions on Circuits and Systems 42 (7) (1995) 345–366.
[5] M. Cohen, S. Grossberg, Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Transactions on Systems, Man, and Cybernetics 13 (5) (1983) 815–826.
[6] N. Dinopoulos, A study of the asymptotic behavior of neural networks, IEEE Transactions on Circuits and Systems 36 (1989) 863–867.
[7] A. Dembo, O. Farotimi, T. Kailath, Higher order absolutely stable neural networks, IEEE Transactions on Circuits and Systems 38 (1991) 57–65.
[8] K. Urakama, Global stability of some class of neural networks, Transactions of the IEICE E72 (1989) 863–867.
[9] S. Chen, Q. Zhang, C. Wang, Existence and stability of equilibria of continuous-time Hopfield neural network, Journal of Computational and Applied Mathematics 169 (1) (2004) 117–125.
[10] J. Cao, An estimation on domain of attraction and convergence rate of Hopfield continuous feedback neural networks, Physics Letters A 325 (5–6) (2004) 370–374.
[11] J. Cao, Q. Tao, Estimation on domain of attraction and convergence rate of Hopfield continuous feedback neural networks, Journal of Computer and System Sciences 62 (2001) 528–534.
[12] Z. Yi, P. Heng, A.W. Fu, Estimate of exponential convergence rate and exponential stability for neural networks, IEEE Transactions on Neural Networks 10 (6) (1999) 1487–1493.
[13] S. Guo, L. Huang, L. Wang, Exponential stability of discrete-time Hopfield neural networks, Computers and Mathematics with Applications 47 (2004) 1249–1256.
[14] Z. Yuan, D. Hu, L. Huang, Stability and bifurcation analysis on a discrete-time system of two neurons, Applied Mathematics Letters 17 (2004) 1239–1245.
[15] Z. Yuan, D. Hu, L. Huang, Stability and bifurcation analysis on a discrete-time neural network, Journal of Computational and Applied Mathematics 177 (2005) 89–100.
[16] S. Koksal, S. Sivasundaram, Stability properties of the Hopfield-type neural networks, Dynamics and Stability of Systems 8 (3) (1993) 181–187.
[17] V. Lakshmikantham, V. Matrosov, S. Sivasundaram, Vector Lyapunov Functions and Stability Analysis of Nonlinear Systems, Mathematics and its Applications, vol. 63, Kluwer Academic Publishers Group, Dordrecht, The Netherlands, 1991.
[18] S. Balint, Considerations concerning the maneuvering of some physical systems, An. Univ. Timisoara, Seria St. Mat. XXIII (1985) 8–16.
[19] E. Kaslik, A. Balint, A. Grigis, S. Balint, Control procedures using domains of attraction, Nonlinear Analysis: Theory, Methods and Applications 63 (5–7) (2005) e2397–e2407.
[20] E. Kaslik, L. Braescu, S. Balint, On the controllability of the continuous-time Hopfield-type neural networks, in: Proceedings of the 7th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC 2005), Workshop: Natural Computing and Applications, IEEE Computer Society Press, 2006.
[21] E. Kaslik, A. Balint, S. Balint, Methods of determination and approximation of domains of attraction, Nonlinear Analysis: Theory, Methods and Applications 60 (4) (2005) 703–717.
[22] M. Morita, Memory and learning of sequential patterns by nonmonotone neural networks, Neural Networks 9 (8) (1996) 1477–1489.