
Chapter 3

Solving Linear Equations using the Hopfield Neural Network

3.1 Introduction


As discussed in Chapter 2, the Hopfield network is known to be one of the most


influential and popular models in neural systems research [180]. Many other
models that appeared in the technical literature after the advent of Hopfield
networks, like the bidirectional associative memories [93], Boltzmann machines
[2], Q-state attractor neural networks [92], etc., are either direct variants or
generalizations of the Hopfield network [180]. The Hopfield neural network has
been put to use in a variety of applications, including, but not limited to, Content-Addressable Memory (CAM), solution of the Travelling Salesman Problem (TSP), analog-to-digital conversion, signal decision and linear programming.
To embed and solve problems related to optimization, neural control and signal processing, the Hopfield neural network has to be designed to possess a single, globally stable equilibrium point, so as to avoid the risk of spurious
responses arising due to the common problem of local minima [61]. In order
to achieve these goals, it is customary to impose constraint conditions on the
interconnection matrix of the system [11]. For instance, most of the study and
application of the Hopfield-type neural networks has been carried out with the
assumption that the interconnection weight matrix (W in (2.2)) is symmetric
[180].
This chapter explores the possibility of solving a system of simultaneous
linear equations using Hopfield neural network based approaches. Section 3.2
presents an overview of Hopfield's original network along with a discussion
on why the standard Hopfield network is not suitable for the task of solving linear equations. Section 3.3 presents a modified Hopfield neural network
applied to solve linear equations for the case when the coefficient matrix, A (= (aij)n×n in (2.46)), corresponding to the system of linear equations is symmetric. PSPICE simulation results for various systems of simultaneous linear
equations with symmetric coefficient matrices are also presented. Section 3.4
deals with the application of the modified Hopfield network for the solution of
linear equations in the case of the coefficient matrix A being asymmetric. Issues related to the stability and convergence of the asymmetric Hopfield network are also discussed. Section 3.5 contains the results of PSPICE simulations of the proposed asymmetric neural network applied to solve various sets of linear equations of varying sizes. Section 3.6 presents hardware test results obtained from a breadboard implementation of the circuit. Section 3.7 contains remarks about the practicability and limitations of the proposed linear equation solver based on the asymmetric Hopfield neural network.

3.2 The Hopfield Network

Figure 3.1: i-th neuron of Hopfield neural network

A brief overview of the Hopfield network has already been presented in Chapter 2, wherein the differential equation governing the behaviour of the i-th neuron in the network (reproduced in Figure 3.1) was given as
$$C_i \frac{du_i}{dt} = \sum_{j=1}^{n} W_{ij} v_j - \frac{u_i}{R_i} + i_i$$

$$v_i = g_i(u_i), \quad i = 1, 2, \ldots, n \qquad (3.1)$$

where Ci > 0, Ri > 0, and ii are the capacitance, resistance, and bias current, respectively, and ui and vi are the input and output of the i-th neuron, respectively, and


gi (.) is the characteristic of the i-th neuron [73, 74, 75]. Wij are the elements
of the weight matrix W and these weights are implemented using resistances
Rij (= 1/Wij ) in the electronic realization of the Hopfield network as shown in
Figure 3.1. Hopfield studied such networks under the assumptions that
$$W_{ii} = 0 \quad \text{for all } i \qquad (3.2)$$

and

$$W_{ij} = W_{ji} \quad \text{for all } i, j \qquad (3.3)$$

While the assumption of (3.2) implies that no self-interactions (self-feedbacks)


are allowed for the neurons in the original Hopfield network, the assumption
of (3.3) puts a more stringent condition on the interactions between different
neurons wherein only symmetric interactions are allowed. This means that the weight matrix W (= (Wij)n×n) is assumed to be symmetric. For such cases, where there are no self-interactions and the interconnection matrix is symmetric, the Hopfield network is totally stable [159]. Under such conditions, the energy function corresponding to the Hopfield network, the i-th neuron of which is shown in Figure 3.1, has already been derived in (2.11) and is reproduced here as (3.4) for reference.
$$E = -\frac{1}{2}\sum_{i}\sum_{j} W_{ij} v_i v_j - \sum_{i} i_i v_i + \sum_{i} \frac{1}{R_i}\int_{0}^{v_i} g_i^{-1}(v)\,dv \qquad (3.4)$$

The last term in (3.4) is only significant near the saturating values of the opamp and is usually neglected [155]. A plot showing a typical energy function for a 2-neuron system with a symmetric weight matrix having zero diagonal elements (fulfilling the requirements of (3.2) and (3.3)), and no external bias ii, is shown in Figure 3.2. It is evident that the stable states for the plot shown in Figure 3.2 lie at the corners of the hypercube [−Vm, Vm], where Vm are the biasing voltages of the operational amplifiers used to implement the activation function of the neuron.

Figure 3.2: Typical energy function plot for the Hopfield network for a 2-neuron system for the case of a symmetric weight matrix W with zero diagonal elements

Therefore, the standard Hopfield network is not suited for the task of solving linear equations, because of the following reasons (a numerical sketch illustrating the last point is given after this list):

- Equation (3.2) demands that the weight matrix must have zero diagonal elements.
- Equation (3.3) demands that the weight matrix must be symmetric.
- The minima of the energy function lie at the corners of the hypercube and cannot be made to occur at a point of interest, for example, the solution point of the system of linear equations.
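To make the last point concrete, the (truncated) energy of (3.4) can be tabulated numerically. The following is a minimal sketch of ours, assuming an illustrative 2-neuron weight matrix with zero diagonal, zero bias and unit Ri, and neglecting the integral term; the minimum of the resulting surface sits at corners of the square [−1, 1] × [−1, 1], mirroring the landscape of Figure 3.2.

```python
import numpy as np

# Illustrative 2-neuron parameters (assumed values, not from the text):
# symmetric weight matrix with zero diagonal, no external bias, Ri = 1,
# and the integral term of (3.4) neglected as discussed above.
W = np.array([[0.0, 1.0],
              [1.0, 0.0]])

def energy(v):
    """Truncated energy of (3.4): E = -1/2 * v^T W v (zero bias)."""
    return -0.5 * v @ W @ v

# Evaluate E on a grid covering the hypercube [-1, 1] x [-1, 1].
grid = np.linspace(-1.0, 1.0, 201)
E = np.array([[energy(np.array([v1, v2])) for v2 in grid] for v1 in grid])

# The minimum is attained only at corners where v1 and v2 saturate with
# the same sign, mirroring Figure 3.2.
idx = np.unravel_index(np.argmin(E), E.shape)
print("minimum E =", E[idx], "at v =", (grid[idx[0]], grid[idx[1]]))
```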
Hence, it is clear that suitable modifications need to be incorporated in the
standard Hopfield network to make it amenable for the task of solving systems
of simultaneous linear equations. For instance, interchanging the inverting and
non-inverting inputs of the operational amplifiers in the standard Hopfield
network would cause the energy function of (3.4) to become
$$E = \frac{1}{2}\sum_{i}\sum_{j} W_{ij} v_i v_j + \sum_{i} i_i v_i - \sum_{i} \frac{1}{R_i}\int_{0}^{v_i} g_i^{-1}(v)\,dv \qquad (3.5)$$

The above modification, coupled with the allowance of self-feedback in the neurons, i.e. a relaxation of the condition in (3.2), causes the minimum of the energy function to occur at the center of the hypercube. For a 2-neuron system, the typical energy function plot is presented in Figure 3.3.

Figure 3.3: Typical energy function plot for the Hopfield network for a 2-neuron system
Although the suitably modified Hopfield network, as discussed above, can
be made to attain a unique minimum in the energy function, the minimum
exists at the center of the hypercube i.e. at vi = 0, for all i. Therefore, the
network is still not suitable for application in the task of solution of linear
equations since for a network to be able to solve a system of linear equations,
the minimum in the energy function must correspond to the solution point
of the system of linear equations, which may not necessarily be at the origin.
The next section presents further modifications to make the network suitable
for solving linear equations.

3.3 Modified Hopfield Network for Solving Linear Equations

This section deals with the description of a modified Hopfield neural network
that is suitable for the solution of a system of linear equations. Let the simultaneous linear equations to be solved be


$$\mathbf{A}\mathbf{V} = \mathbf{B} \qquad (3.6)$$

where

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix} \qquad (3.7)$$

$$\mathbf{B} = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix} \qquad (3.8)$$

$$\mathbf{V} = \begin{bmatrix} V_1 \\ V_2 \\ \vdots \\ V_n \end{bmatrix} \qquad (3.9)$$
where V1 , V2 , . . . , Vn are the variables and aij and bi are constants. Since
a voltage-mode linear equation solver is presented, the decision variables are
designated as voltages V1 , V2 , . . . , Vn to correspond to the output states of
the neurons. It will be assumed that the coefficient matrix A is invertible,
and hence, the system of linear equations (3.6) is consistent and not underdetermined. In other words, the linear system (3.6) has a uniquely determined
solution. Moreover, the solution should lie within the operating region of the
neural circuit, i.e. inside the hypercube defined by |Vi| ≤ Vm (i = 1, 2, . . . , n).
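As a quick numerical counterpart to these assumptions, the invertibility of A and the location of the unique solution can be checked with a direct solver before any circuit is built. The snippet below is a small sketch; the matrix, the right-hand side and the value of Vm are hypothetical placeholders, not taken from the text.

```python
import numpy as np

# Hypothetical 2-variable system A V = B used only for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([5.0, 10.0])

# A must be invertible for (3.6) to have a unique solution.
if np.linalg.matrix_rank(A) < A.shape[0]:
    raise ValueError("A is singular: the system is not uniquely solvable")

V_ref = np.linalg.solve(A, B)          # reference algebraic solution
Vm = 10.0                              # assumed opamp saturation voltage
assert np.all(np.abs(V_ref) <= Vm), "solution lies outside the hypercube"
print("reference solution:", V_ref)    # [1. 3.]
```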
The i-th neuron of the voltage-mode modified Hopfield neural network
based circuit for solving the system of linear equations (3.6) is presented in
Figure 3.4. It is to be noted that the output of the neuron is now designated
as Vi (instead of vi in Figure 3.1) to conform to the notation used for decision
variables in (3.9). Rpi and Cpi are the parasitic resistance and capacitance of
the opamp corresponding to the i -th neuron. These parasitic components are
included to model the dynamic nature of the opamp.

Figure 3.4: i-th neuron of the modified Hopfield neural network for solving simultaneous linear equations in n variables

As can be seen from Figure 3.4, individual equations from the set of equations (3.6) are scaled by a factor si before application to the neuron amplifiers. This scaling is done to ensure that all aij/si coefficients are less than unity, thereby facilitating their implementation by passive voltage dividers. As is explained later in this section, the scaling factors may be chosen independently for all the equations. In that case, the scaling factor for the i-th equation, si, would be such that
$$s_i \geq \sum_{j=1}^{n} a_{ij} \qquad (3.10)$$

All equations may also be scaled by the same factor s, which must then be the greatest of all scaling factors chosen for the individual equations, i.e.

$$s = \max_i(s_i) \qquad (3.11)$$
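The choice of scaling factors can be automated. The following sketch (ours, with a hypothetical coefficient matrix) picks each si slightly above the corresponding row sum, which satisfies (3.10), and then confirms that every scaled coefficient aij/si is below unity so that a passive voltage divider can realize it.

```python
import numpy as np

# Hypothetical coefficient matrix (for illustration only).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# One simple admissible choice: take s_i a little above the i-th row sum,
# which satisfies (3.10) and leaves a finite, positive R_i in (3.26).
row_sums = A.sum(axis=1)
s = np.ceil(row_sums) + 2.0            # here s = [5, 6]

assert np.all(s >= row_sums), "scaling factors violate (3.10)"
scaled = A / s[:, None]                # entries a_ij / s_i
assert np.all(np.abs(scaled) < 1.0), "some a_ij/s_i is not below unity"
print("s =", s)
print("scaled coefficients:\n", scaled)
```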

Node equation for node A gives the equation of motion of the i-th neuron as

$$C_{pi}\frac{du_i}{dt} = \frac{V_1}{R_{i1}} + \frac{V_2}{R_{i2}} + \ldots + \frac{V_n}{R_{in}} - \frac{u_i}{R_{eqv,i}} \qquad (3.12)$$

where ui is the internal state of the i-th neuron, and

$$R_{eqv,i} = R_{i1} \parallel R_{i2} \parallel \ldots \parallel R_{in} \parallel R_i \parallel R_{pi} \qquad (3.13)$$

In order to arrive at a valid energy function for the modified Hopfield


neural network for solving linear equations presented in Figure 3.4, we proceed
as follows. The relationship between ui and Vi for the operational amplifier in
Figure 3.4 is given by

$$V_i = g_i\!\left(\frac{b_i}{s_i} - u_i\right) \qquad (3.14)$$

which can be rewritten as

$$u_i = \frac{b_i}{s_i} - g_i^{-1}(V_i) \qquad (3.15)$$

Also, for such dynamical systems like the one shown in Figure 3.4, the
gradient of the energy function E is related to the time evolution of the network
as given by (3.16) [190].
$$\frac{\partial E}{\partial V_i} = C_{pi}\frac{du_i}{dt}; \quad \text{for all } i \qquad (3.16)$$

Using (3.12) and (3.16), along with (3.3), the energy function corresponding to the network of Figure 3.4 can be written as
$$E = \frac{1}{2}\sum_{i}\sum_{j} W_{ij} V_i V_j - \sum_{i} \frac{b_i/s_i}{R_{eqv,i}} V_i + \sum_{i} \frac{1}{R_{eqv,i}}\int_{0}^{V_i} g_i^{-1}(V)\,dV \qquad (3.17)$$

The last term in (3.17) is negligible in comparison to the first two terms
and is usually neglected [155, 190]. Therefore, the energy function expression
can be simplified to
$$E = \frac{1}{2}\sum_{i}\sum_{j} W_{ij} V_i V_j - \sum_{i} \frac{b_i/s_i}{R_{eqv,i}} V_i \qquad (3.18)$$

From (3.18), it is evident that the minimum of the energy function will
also be governed by the values of the elements of the vector B in (3.8), and
therefore, the minimum will now not be at the center of the hypercube as was
the case with the energy function in (3.5).
The stationary point of the energy function of (3.18) can be found by
setting
$$\frac{\partial E}{\partial V_i} = 0; \quad i = 1, 2, \ldots, n \qquad (3.19)$$

from where, we get


$$\frac{V_1}{R_{i1}} + \frac{V_2}{R_{i2}} + \ldots + \frac{V_n}{R_{in}} - \frac{b_i/s_i}{R_{eqv,i}} = 0; \quad i = 1, 2, \ldots, n \qquad (3.20)$$

The system of linear equations of (3.6) can be written as


$$\frac{a_{i1}V_1}{s_i} + \frac{a_{i2}V_2}{s_i} + \ldots + \frac{a_{in}V_n}{s_i} - \frac{b_i}{s_i} = 0; \quad i = 1, 2, \ldots, n \qquad (3.21)$$

In order for the stationary point of the energy function of (3.18), as found
in (3.20) to coincide with the solution point of the system of linear equations
given in (3.21), the values of the resistances in the network should be
$$R_{ij} = \frac{s_i}{a_{ij}}; \quad i = 1, 2, \ldots, n, \; j = 1, 2, \ldots, n \qquad (3.22)$$

The value of the resistance Ri can be found by equating the last terms in (3.20) and (3.21), thereby yielding

$$\frac{1}{R_{eqv,i}} = 1 \qquad (3.23)$$

which can be written as

$$\frac{1}{R_{i1}} + \frac{1}{R_{i2}} + \ldots + \frac{1}{R_{in}} + \frac{1}{R_i} + \frac{1}{R_{pi}} = 1 \qquad (3.24)$$

Also, since Rpi is much larger than all the other resistances in (3.24), it
can be neglected while computing the parallel equivalent of all resistances
connected at the i-th node, and therefore
$$\frac{1}{R_{i1}} + \frac{1}{R_{i2}} + \ldots + \frac{1}{R_{in}} + \frac{1}{R_i} = 1 \qquad (3.25)$$

Substituting the values of resistances Rij from (3.22) into (3.25), we get
$$R_i = \frac{s_i}{s_i - \sum_{j=1}^{n} a_{ij}} \qquad (3.26)$$

From (3.26), the constraint enforced on the scaling factor as given in (3.10)
can also be obtained since choosing a scaling factor in violation of (3.10) would
result in a negative resistance according to (3.26).


Using (3.22), the values of all the weight resistances in the modified Hopfield network applied to solve linear equations can be given by

$$\begin{bmatrix} R_{11} & R_{12} & \cdots & R_{1n} \\ R_{21} & R_{22} & \cdots & R_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ R_{n1} & R_{n2} & \cdots & R_{nn} \end{bmatrix} = \begin{bmatrix} \dfrac{1}{a_{11}/s_1} & \dfrac{1}{a_{12}/s_1} & \cdots & \dfrac{1}{a_{1n}/s_1} \\ \dfrac{1}{a_{21}/s_2} & \dfrac{1}{a_{22}/s_2} & \cdots & \dfrac{1}{a_{2n}/s_2} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{1}{a_{n1}/s_n} & \dfrac{1}{a_{n2}/s_n} & \cdots & \dfrac{1}{a_{nn}/s_n} \end{bmatrix} \qquad (3.27)$$
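Equations (3.22), (3.26) and (3.27) translate directly into a small design routine. The following sketch (assuming NumPy; the helper name is ours) computes the weight resistances Rij = si/aij and the resistances Ri for a given coefficient matrix and set of scaling factors; applied to the coefficients of the 2-variable example introduced next in (3.28), with s1 = s2 = 10, it reproduces the resistance values of (3.29) and (3.30), with resistances expressed in kΩ.

```python
import numpy as np

def design_resistances(A, s):
    """Resistor values for the modified Hopfield solver.

    R_ij = s_i / a_ij                        (3.22)
    R_i  = s_i / (s_i - sum_j a_ij)          (3.26); np.inf when s_i equals
           the row sum (no resistor R_i needed); a value below the row sum
           would violate (3.10).
    Resistances are returned in the unit for which the parallel combination
    at each input node equals 1 (kilo-ohms in the chapter's examples).
    """
    A = np.asarray(A, dtype=float)
    s = np.asarray(s, dtype=float)
    Rij = s[:, None] / A
    slack = s - A.sum(axis=1)
    Ri = np.where(slack > 0, s / np.where(slack > 0, slack, 1.0), np.inf)
    return Rij, Ri

# Coefficients of the 2-variable example (3.28) with s1 = s2 = 10.
A = [[4.0, 3.0],
     [3.0, 5.0]]
Rij, Ri = design_resistances(A, [10.0, 10.0])
print(np.round(Rij, 2))   # [[2.5  3.33] [3.33 2.  ]]  -> matches (3.29)
print(np.round(Ri, 2))    # [3.33 5.  ]                -> matches (3.30)
```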

The complete symmetric Hopfield neural network for solving a system of


n simultaneous linear equations in n variables obtained by employing neurons
as presented in Figure 3.4, is shown in Figure 3.5.
In order to ascertain that the modified Hopfield neural network based linear equation solver of Figure 3.5 is able to provide solutions for systems of
simultaneous linear equations for which the coefficient matrix A is symmetric,
the network was tested using PSPICE simulations. A system of linear equations in 2 variables (3.28) was first solved using the network of Figure 3.5.
The values of the resistances as obtained from (3.27) and (3.26) for the chosen
system of linear equations in 2 variables (3.28) are given below. The scaling
factors for the equations were taken as s1 = 10 and s2 = 10.


$$\begin{bmatrix} 4 & 3 \\ 3 & 5 \end{bmatrix}\begin{bmatrix} V_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 9 \end{bmatrix} \qquad (3.28)$$

$$\begin{bmatrix} R_{11} & R_{12} \\ R_{21} & R_{22} \end{bmatrix} = \begin{bmatrix} 2.5\ \text{k}\Omega & 3.33\ \text{k}\Omega \\ 3.33\ \text{k}\Omega & 2\ \text{k}\Omega \end{bmatrix} \qquad (3.29)$$

and

$$\begin{bmatrix} R_1 \\ R_2 \end{bmatrix} = \begin{bmatrix} 3.33\ \text{k}\Omega \\ 5\ \text{k}\Omega \end{bmatrix} \qquad (3.30)$$

The circuit to solve a system of 2 linear equations, as obtained from Figure 3.5, is presented in Figure 3.6. Results of PSPICE simulation for the
circuit of Figure 3.6 used to solve (3.28) using the resistance values given in
(3.29) and (3.30) are presented in Figure 3.7 from where it can be seen that
the obtained node voltages are V(1) = 2.00 V and V(2) = 3.00 V which
correspond exactly with the algebraic solution, V1 = 2 and V2 = 3.
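The settling behaviour observed in Figure 3.7 can also be imitated in software by integrating the neuron dynamics (3.12) together with the amplifier relation (3.14). The sketch below is ours and not part of the original work: it uses a small hypothetical symmetric system (not one of the chapter's examples), equal scaling factors, and an assumed saturating high-gain opamp characteristic; under these assumptions the network output settles close to the algebraic solution returned by a direct solver.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical symmetric, positive-definite system (illustration only).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])
s = np.array([6.0, 6.0])            # equal scaling factors satisfying (3.10)

Vm, gain = 10.0, 1e3                # assumed opamp saturation and gain
Cp = 1.0                            # assumed (normalized) parasitic capacitance
W = A / s[:, None]                  # conductances 1/R_ij = a_ij / s_i
Reqv = 1.0                          # enforced by the choice of R_i, see (3.23)

def g(x):
    """Saturating high-gain amplifier characteristic (assumed tanh shape)."""
    return Vm * np.tanh(gain * x / Vm)

def du_dt(t, u):
    V = g(b / s - u)                     # amplifier relation (3.14)
    return (W @ V - u / Reqv) / Cp       # node equation (3.12)

sol = solve_ivp(du_dt, (0.0, 50.0), y0=np.zeros(2), method="BDF")
V_final = g(b / s - sol.y[:, -1])
print("network output :", np.round(V_final, 2))
print("direct solution:", np.linalg.solve(A, b))   # [1. 3.]
```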


Figure 3.5: Complete circuit of the modified Hopfield neural network applied for solving n simultaneous linear equations in n variables


Figure 3.6: The modified Hopfield neural network applied for solving simultaneous linear equations in 2 variables, with symmetric interconnection matrix

Figure 3.7: Results of PSPICE simulation for the network of Figure 3.6 applied
to solve (3.28)

3.4 Asymmetric Hopfield Network for Solving Linear Equations

In the previous section, it was shown how the Hopfield network could be suitably modified for the solution of simultaneous linear equations albeit with the
restriction that the coefficient matrix corresponding to the system of equations
is symmetric. In fact, the theory of stability of the Hopfield network and the
existence of a valid energy function, as put forward by Hopfield, was based on the assumption that the weight matrix W is symmetric. Typically, since the


weights are implemented using resistors in the standard Hopfield network, the
condition of symmetry of the weight matrix demands that precisely controlled
resistance values be employed in order to preserve the symmetry. However,
when the Hopfield network is applied to practical applications in the form of
hardware implementation, it is unrealistic to assume that the interactions are
symmetric, since this requires guaranteeing that two physical quantities (such
as resistances or the gains of the operational amplifiers) are exactly equal and
therefore a physically realized network actually turns out to be asymmetric.
Furthermore, since, as Vidyasagar has pointed out, "the consequences of even slight asymmetries in the interactions are disastrous to the theory of stability as put forward by Hopfield" [159], a lot of research effort has been put into studying the qualitative properties of stability, oscillation, and convergence for asymmetric Hopfield neural networks [180, 61, 159, 27, 11].
The sufficient conditions guaranteeing that the asymmetric Hopfield neural network has a unique (exponentially) stable equilibrium state have been
presented in various forms:
- boundedness of the activation functions [101]
- restrictions on the interconnection matrix [181]
- negative semidefiniteness of a matrix derived from the interconnection matrix [50]
- M-matrix characteristics exhibited by a matrix derived from the weight matrix [118]
- diagonal stability [86]
- diagonal row or column dominance property in the weight matrix [11]
More recently, a new set of simple sufficient conditions has been presented for the existence of a unique equilibrium and the stability of an asymmetric Hopfield neural network [61]. The condition under which a normalized asymmetric Hopfield network is stable, as derived in [61], is reproduced here for ease of reference. As a prerequisite, we first define the M-matrix.
Let T (= (tij)n×n) be a square matrix with nonpositive off-diagonal elements. The matrix T is called an M-matrix if the leading principal minors of T are all positive [61].
To ascertain whether an asymmetric Hopfield network corresponding to a given weight matrix W is stable, a square matrix T (= (tij)n×n) is first computed from the elements of the interconnection matrix W using (3.31).

$$t_{ij} = \begin{cases} 1 - W_{ij}; & i = j \\ -|W_{ij}|; & i \neq j \end{cases} \qquad (3.31)$$

If the elements of T, as calculated using (3.31), are such that T is a


valid M -matrix, then the asymmetric Hopfield network corresponding to the
interconnection matrix W is globally asymptotically stable [61].
Equation (3.31) therefore puts restrictions on the interconnection matrix
W in the sense that the asymmetric Hopfield network will not yield a unique
and stable equilibrium point for all W. Instead, only those W for which the corresponding T matrix, calculated using (3.31), is an M-matrix are permissible. Therefore, any application employing an asymmetric Hopfield network
will have limited applicability due to the restriction being imposed on W.
The same is true for the linear equation solver presented in the present chapter wherein it is seen that the network is able to solve only those sets of
simultaneous linear equations for which the coefficient matrix is in accordance
with (3.31).
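The test embodied in (3.31) together with the M-matrix definition is straightforward to automate. Below is a small sketch (the function name and the example weight matrix are ours) that forms T from a given weight matrix and checks whether all leading principal minors are positive.

```python
import numpy as np

def is_stable_by_m_matrix(W, tol=1e-12):
    """Sufficient stability test of (3.31): build T from W and check that
    every leading principal minor of T is positive (i.e. T is an M-matrix)."""
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    T = -np.abs(W)                             # off-diagonal entries: -|W_ij|
    T[np.diag_indices(n)] = 1.0 - np.diag(W)   # diagonal entries: 1 - W_ii
    minors = [np.linalg.det(T[:k, :k]) for k in range(1, n + 1)]
    return all(m > tol for m in minors), T, minors

# Hypothetical normalized weight matrix (illustration only).
W = np.array([[0.2, 0.4],
              [0.3, 0.1]])
stable, T, minors = is_stable_by_m_matrix(W)
print(T)            # [[ 0.8 -0.4] [-0.3  0.9]]
print(minors)       # [0.8, 0.6] -> all positive, so the test is passed
print("sufficient condition satisfied:", stable)
```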
In the next section, the asymmetric Hopfield neural network is applied
to solve sets of simultaneous linear equations of varying sizes. It is to be
noted that although the possibility of using the asymmetric Hopfield network to solve linear equations has existed since the advent of the Hopfield network,
the technical literature does not contain any such work. To the best of our
knowledge, the work embodied in this chapter is the first attempt to apply
asymmetric Hopfield neural networks for the task of solving linear equations.


3.5 Circuit Simulation Results

Extensive computer simulations using PSPICE software were performed to ascertain the proper working of the asymmetric Hopfield neural network applied
for the solution of linear equations. Various problem sets of 2 to 20 variables
were solved using the network and the results were found to be in accordance
with the mathematical solutions.
The circuit of Figure 3.6 was employed to solve the following system of
equations in 2 variables:


$$\begin{bmatrix} 3 & 2 \\ 7 & 8 \end{bmatrix}\begin{bmatrix} V_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 9 \end{bmatrix} \qquad (3.32)$$

The values of the resistances as obtained from (3.22) and (3.26) for the
chosen system of linear equations in 2 variables (3.32) are given below. The
scaling factors for the equations were taken as s1 = 6 and s2 = 21.

$$\begin{bmatrix} R_{11} & R_{12} \\ R_{21} & R_{22} \end{bmatrix} = \begin{bmatrix} 2\ \text{k}\Omega & 3\ \text{k}\Omega \\ 3\ \text{k}\Omega & 2.625\ \text{k}\Omega \end{bmatrix} \qquad (3.33)$$

and

$$\begin{bmatrix} R_1 \\ R_2 \end{bmatrix} = \begin{bmatrix} 6\ \text{k}\Omega \\ 3.5\ \text{k}\Omega \end{bmatrix} \qquad (3.34)$$

Figure 3.8: Results of PSPICE simulation for the asymmetric Hopfield neural
network applied to solve (3.32)


Results of PSPICE simulation for the circuit of Figure 3.6 used to solve
(3.32) using the resistance values given in (3.33) and (3.34) are presented in
Figure 3.8 from where it can be seen that the obtained node voltages are
V(1) = 1.01 V and V(2) = 2.01 V which correspond well with the algebraic
solution, that being V1 = 1 and V2 = 2. Thereafter, various sets of linear equations in 2 variables were solved using the asymmetric Hopfield neural
network. The results, as obtained after PSPICE simulations, are presented in
Table 3.1, from where it can be seen that the network is able to provide correct solutions for all the cases for which the stability criterion discussed in the previous section is satisfied. It is to be noted that the T matrix in Table 3.1 has been calculated using -A instead of A, since the stability criterion presented in Section 3.4 was derived for the case where the neuronal amplifier in the asymmetric Hopfield network has a monotonically increasing activation function, whereas for the asymmetric neural network applied to solve linear equations, the amplifier has a monotonically decreasing characteristic.
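To connect the tabulated results with the criterion of Section 3.4, the check of (3.31) can be applied programmatically to -A. The sketch below is ours; it ignores the normalization of the weights (an assumption on our part) and runs the test on the coefficient matrix of (3.32), for which the sufficient condition turns out to be met, consistent with the network converging to the correct solution.

```python
import numpy as np

# Coefficient matrix of the 2-variable example (3.32). Because the neuron
# amplifiers of the solver have a decreasing characteristic, the test of
# (3.31) is applied to -A rather than to A; the normalization of the
# weights is ignored here for simplicity (an assumption on our part).
A = np.array([[3.0, 2.0],
              [7.0, 8.0]])
W = -A

n = W.shape[0]
T = -np.abs(W)
T[np.diag_indices(n)] = 1.0 - np.diag(W)     # 1 - (-a_ii) = 1 + a_ii
minors = [np.linalg.det(T[:k, :k]) for k in range(1, n + 1)]
print(T)        # [[ 4. -2.] [-7.  9.]]
print(minors)   # [4.0, 22.0] (up to float round-off) -> T is an M-matrix,
                # so the sufficient stability condition of Section 3.4 holds
```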
Next, a 3-variable system of equations (3.35) was solved using the proposed
network. The values of the resistances as obtained from (3.22) and (3.26) for
the chosen system of linear equations in 3 variables (3.35) are given below.
The scaling factors for the equations were taken as s1 = s2 = s3 = 10.

$$\begin{bmatrix} 4 & 2 & 1 \\ 2 & 7 & 1 \\ 3 & 1 & 6 \end{bmatrix}\begin{bmatrix} V_1 \\ V_2 \\ V_3 \end{bmatrix} = \begin{bmatrix} 2 \\ 10 \\ 13 \end{bmatrix} \qquad (3.35)$$

$$\begin{bmatrix} R_{11} & R_{12} & R_{13} \\ R_{21} & R_{22} & R_{23} \\ R_{31} & R_{32} & R_{33} \end{bmatrix} = \begin{bmatrix} 2.5\ \text{k}\Omega & 5\ \text{k}\Omega & 10\ \text{k}\Omega \\ 5\ \text{k}\Omega & 1.428\ \text{k}\Omega & 10\ \text{k}\Omega \\ 3.33\ \text{k}\Omega & 10\ \text{k}\Omega & 1.666\ \text{k}\Omega \end{bmatrix} \qquad (3.36)$$

and

$$\begin{bmatrix} R_1 \\ R_2 \\ R_3 \end{bmatrix} = \begin{bmatrix} 3.33\ \text{k}\Omega \\ \infty \\ \infty \end{bmatrix} \qquad (3.37)$$

For the second and third equations the scaling factor equals the corresponding row sum of coefficients, so (3.26) yields an infinite Ri, i.e. no resistor Ri is required for those neurons.

Figure 3.9: Results of PSPICE simulation for the asymmetric Hopfield neural
network applied to solve (3.35)

Figure 3.10: Results of PSPICE simulation for the asymmetric Hopfield neural
network applied to solve the first set of linear equations in Table 3.3

The proposed circuit was applied to solve (3.35) using the resistance values
given in (3.36) and (3.37). Results obtained after PSPICE simulation are presented in Figure 3.9 from where it can be seen that the obtained node voltages
are V(1) = 1.00 V, V(2) = 1.99 V and V(3) = 1.99 V which correspond well
with the exact mathematical solution of (3.35) which is V1 = 1, V2 = 2 and
V3 = 2.
Next, various sets of linear equations in 3 variables were solved using the
proposed network. The results, as obtained after PSPICE simulations, are
presented in Table 3.2 from where it can be seen that the network is able


Figure 3.11: Results of PSPICE simulation for the asymmetric Hopfield neural
network applied to solve the 10-variable system of linear equations in Table 3.4

Figure 3.12: Results of PSPICE simulation for the asymmetric Hopfield neural
network applied to solve the 20-variable system of linear equations in Table 3.4


Table 3.1: Results of PSPICE simulations for the asymmetric Hopfield neural
network applied to solve various sets of linear equations in 2 variables

condition of stability as discussed in Section 3.4


Table 3.2: Results of PSPICE simulations for the asymmetric Hopfield neural
network applied to solve various sets of linear equations in 3 variables

condition of stability as discussed in Section 3.4


Table 3.3: Results of PSPICE simulations for the asymmetric Hopfield neural
network applied to solve various sets of linear equations in 5 variables

condition of stability as discussed in Section 3.4


Table 3.4: Results of PSPICE simulations for the asymmetric Hopfield neural
network applied to solve sets of linear equations in 10 and 20 variables

to provide correct solutions for all the cases for which the stability criterion discussed in the previous section is satisfied.
Different systems of 5 linear equations in 5 variables were also solved using the proposed asymmetric neural network. The results, as obtained after
PSPICE simulations, are presented in Table 3.3 from where it can be seen
that the network is able to provide correct solutions for all the cases for which
the stability criterion discussed in the previous section is satisfied.


Figure 3.13: Obtained result of experimental verification for the asymmetric Hopfield neural network applied to solve the 2-variable problem (3.38)

The proposed asymmetric Hopfield neural circuit for solving linear equations was then applied for solving larger systems of linear equations in 10 and
20 variables. The results, as obtained after PSPICE simulations, are presented
in Table 3.4 from where it can be observed that the network is able to provide
correct solutions for both the sets of equations. The PSPICE output plots for
linear equation solvers for the 10 and 20 variable problems are presented in
Figure 3.11 and Figure 3.12 respectively.

3.6 Hardware Test Results

Table 3.5: Results of experimental verification for the asymmetric Hopfield neural network applied to solve various sets of linear equations

Although the PSPICE simulations ascertain the validity of the proposed approach, further confirmation of the circuit operation was obtained by performing a breadboard implementation for small-sized problems. Apart from


verification of the working of the proposed circuit, the actual hardware implementation using discrete components also served the purpose of testing
the convergence of the circuit to the solution starting from different initial
conditions. The noise present in any electronic circuit acts as a random initial condition for the convergence of the neural circuit. Standard laboratory
components, viz. the µA741 opamp and resistors, were used for the purpose.
The proposed network was tested by solving a 2-variable system of linear
equations (3.38). A snapshot of the results as obtained, both on a CRO and
a multimeter, is presented in Figure 3.13. As can be seen, the obtained values
of the neuron outputs are V(1) = 0.99 V and V(2) = 2.96 V which are in
close agreement with the mathematical solution i.e. V1 = 1 and V2 = 3.


$$\begin{bmatrix} 3 & 4 \\ 2 & 3 \end{bmatrix}\begin{bmatrix} V_1 \\ V_2 \end{bmatrix} = \begin{bmatrix} 9 \\ 7 \end{bmatrix} \qquad (3.38)$$

Further, various sets of equations in 2, 3 and 4 variables were solved using


breadboard implementation with each circuit tested for 10 runs. The results
of hardware implementation are presented in Table 3.5 from where it can be
seen that the circuit is able to converge to the correct solutions in all cases
listed and for all runs. The percentage errors for all solutions are also listed
in Table 3.5 and it is readily observable that the maximum error is 2%.

3.7 Conclusion

A Hopfield neural network based circuit for the solution of a system of simultaneous linear equations is presented. The weight matrix corresponding to the
neural network for the solution of linear equations is governed by the coefficient matrix of the system of linear equations, and is therefore not necessarily
symmetric. The network thus belongs to the category of asymmetric Hopfield
networks [180, 27]. Since the asymmetry of the weight matrix imposes restrictions on the stability of the asymmetric Hopfield neural network, the linear
equation solver discussed in this chapter is applicable to a restricted class of
problems, i.e. for only those sets of linear equations for which the coefficient
matrix satisfies the stability criterion as given in (3.31).
Since the stability and convergence of the asymmetric Hopfield neural network is not guaranteed for all W, any application of such networks, including
the linear equation solver discussed in this chapter, is bound to have limited
applicability. Alternative neural network topologies are therefore needed in
order to obtain a linear equation solving network that is capable of providing
valid solutions for all sets of linear equations provided a unique solution exists.
The NOSYNN-based non-linear feedback neural circuit presented in the next
chapter is one such network solution.

