
Genetic Learning Algorithms for Fuzzy Neural Nets

P. V. Krishnamraju*   J. J. Buckley†   K. D. Reilly*   Y. Hayashi‡


* Dept. of Computer and Information Sciences, University of Alabama at Birmingham, Birmingham, AL 35294.
† Mathematics Department, University of Alabama at Birmingham, Birmingham, AL 35294.
‡ Department of Computer and Information Sciences, Ibaraki University, Hitachi-shi, Ibaraki 316, Japan.

Abstract
In this paper we present a genetic learning algorithm for fuzzy neural nets. Illustrations are provided.

1 Introduction
This paper is concerned with learning algorithms for fuzzy neural nets. In this section we first introduce
the basic notation to be employed and then discuss what we mean by a fuzzy neural net. Then we briefly
survey the literature on learning algorithms for fuzzy neural nets. The second section contains a description
of our genetic learning algorithm. The third section contains experimental results. The last section has a
brief summary and conclusions.
All our fuzzy sets are fuzzy subsets of the real numbers. We place a bar over a symbol if it represents a
fuzzy set, so $\bar{A}$, $\bar{B}$, ..., $\bar{W}$, $\bar{V}$ are all fuzzy subsets of the real numbers. The membership function of a fuzzy
set $\bar{A}$ evaluated at $x$ is written $\bar{A}(x)$. The $\alpha$-cut of a fuzzy set $\bar{B}$ is

$$\bar{B}[\alpha] = \{x \mid \bar{B}(x) \ge \alpha\}, \qquad (1)$$

for $0 < \alpha \le 1$. We separately specify $\bar{B}[0]$, the support of $\bar{B}$, as the closure of the union of the $\bar{B}[\alpha]$, $0 < \alpha \le 1$. A triangular fuzzy number $\bar{N}$ is defined by three numbers $a < b < c$ where: (1) $\bar{N}(x) = 0$ for $x \le a$ and
$x \ge c$, and $\bar{N}(x) = 1$ if and only if $x = b$; and (2) the graph of $y = \bar{N}(x)$ is a straight line segment from $(a, 0)$
to $(b, 1)$ on $[a, b]$ (from $(b, 1)$ to $(c, 0)$ on $[b, c]$). If the graph of $y = \bar{N}(x)$ is a continuous, monotonically
increasing (decreasing) curve on $[a, b]$ (on $[b, c]$) from $(a, 0)$ to $(b, 1)$ (from $(b, 1)$ to $(c, 0)$), then we say
$\bar{N}$ is a triangular shaped fuzzy number. All our fuzzy sets are either triangular fuzzy numbers or triangular
shaped fuzzy numbers. We abbreviate a triangular fuzzy number, or a triangular shaped fuzzy number, as
$\bar{N} = (a/b/c)$. We say $\bar{N} \ge 0$ if $a \ge 0$. A measure of fuzziness of $\bar{N}$ is the width $c - a$ of its support; we write fuzz$(\bar{N}) = c - a$.
We say $\bar{M}$ is more fuzzy than $\bar{N}$ if fuzz$(\bar{M}) \ge$ fuzz$(\bar{N})$. We employ standard fuzzy arithmetic, based on
the extension principle, throughout the paper. If $\bar{M}(x) \le \bar{N}(x)$ for all $x$, then we write $\bar{M} \le \bar{N}$.
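To make the notation concrete, the following minimal Python sketch (an illustration of ours, not part of the original paper) implements a triangular fuzzy number $\bar{N} = (a/b/c)$ together with its $\alpha$-cuts and the fuzziness measure just defined:

```python
# Illustrative sketch (not from the paper): a triangular fuzzy number
# N = (a/b/c) with its alpha-cuts and the fuzziness measure fuzz(N) = c - a.
from dataclasses import dataclass

@dataclass
class TriangularFuzzyNumber:
    a: float  # left endpoint of the support
    b: float  # vertex, where the membership value is 1
    c: float  # right endpoint of the support

    def alpha_cut(self, alpha: float) -> tuple[float, float]:
        """Closed interval N[alpha], 0 <= alpha <= 1 (alpha = 0 gives the support)."""
        return (self.a + alpha * (self.b - self.a),
                self.c - alpha * (self.c - self.b))

    def fuzz(self) -> float:
        """Width of the support."""
        return self.c - self.a

X = TriangularFuzzyNumber(-0.25, 0.0, 0.25)
print(X.alpha_cut(0.5))  # (-0.125, 0.125)
print(X.fuzz())          # 0.5
```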
All our fuzzy neural nets are layered (three-layer), feedforward networks. In this paper a fuzzy neural
net has to have fuzzy signals and/or fuzzy weights. The basic fuzzy neural net used in this paper is shown
in Figure 1. The input $\bar{X}$ and the weights $\bar{W}_i$ and $\bar{V}_i$ are all triangular fuzzy numbers. Due to fuzzy arithmetic,
the output $\bar{Y}$ (and target $\bar{T}$) can be a triangular shaped fuzzy number.

Figure 1: Fuzzy Neural Net (input layer, hidden layer, output layer)

All neurons, except the input neuron, have a transfer function $y = f(x)$. This $f$ is assumed to be a
continuous, non-decreasing mapping from $\Re$ into $[-\tau, \tau]$ for $\tau$ some positive integer. The input node acts
like the identity mapping, so the input to node $i$ in the hidden layer is

$$\bar{I}_i = \bar{X} \cdot \bar{W}_i, \quad 1 \le i \le 4. \qquad (2)$$

Node $i$'s output is

$$\bar{Z}_i = f(\bar{I}_i), \quad 1 \le i \le 4, \qquad (3)$$

evaluated using the extension principle. Therefore, the input to the output node is

$$\bar{I}_0 = \bar{Z}_1 \cdot \bar{V}_1 + \cdots + \bar{Z}_4 \cdot \bar{V}_4, \qquad (4)$$

with final output

$$\bar{Y} = f(\bar{I}_0). \qquad (5)$$
Standard fuzzy arithmetic is used to evaluate equations (2)-(5).
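Because $f$ is non-decreasing, it maps an interval $[p, q]$ to $[f(p), f(q)]$, so the whole forward pass can be carried out one $\alpha$-cut at a time with interval arithmetic. The following sketch (illustrative; the four-hidden-node layout follows Figure 1) shows this evaluation:

```python
# Illustrative sketch of one forward pass of the net in Figure 1, evaluated
# on a single alpha-cut.  On each alpha-cut, extension-principle arithmetic
# reduces to interval arithmetic, and a non-decreasing f maps an interval
# [p, q] to [f(p), f(q)].
Interval = tuple[float, float]

def imul(u: Interval, v: Interval) -> Interval:
    """Interval product: take the extreme endpoint products."""
    p = (u[0] * v[0], u[0] * v[1], u[1] * v[0], u[1] * v[1])
    return (min(p), max(p))

def iadd(u: Interval, v: Interval) -> Interval:
    return (u[0] + v[0], u[1] + v[1])

def forward_alpha(x: Interval, w: list, v: list, f) -> Interval:
    """Equations (2)-(5) on one alpha-cut; w and v hold the four hidden-layer
    and output-layer weight intervals."""
    I = [imul(x, wi) for wi in w]            # (2): I_i = X * W_i
    Z = [(f(lo), f(hi)) for (lo, hi) in I]   # (3): Z_i = f(I_i)
    I0 = (0.0, 0.0)
    for zi, vi in zip(Z, v):                 # (4): I0 = Z_1*V_1 + ... + Z_4*V_4
        I0 = iadd(I0, imul(zi, vi))
    return (f(I0[0]), f(I0[1]))              # (5): Y = f(I0)
```

Running `forward_alpha` for each $\alpha$ in $\{0.0, 0.1, \ldots, 1.0\}$ produces the $\alpha$-cuts of $\bar{Y}$ used by the error measure of the next section.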
The training data for the fuzzy neural net is $(\bar{X}_l, \bar{T}_l)$, $1 \le l \le L$, where $\bar{T}_l$ is the desired output when
$\bar{X}_l$ is the input. If $\bar{X}_l$ is the input, then let $\bar{Y}_l$ be the actual output, $1 \le l \le L$. The learning problem for
fuzzy neural nets is to find the weights $(\bar{W}_i, \bar{V}_i)$ so that $\bar{Y}_l$ is close to $\bar{T}_l$, $1 \le l \le L$.
In [2, 4, 5, 6, 11, 12, 13] the authors developed a fuzzy backpropagation algorithm for a fuzzy neural net.
What they did was directly fuzzify the standard delta rule in backpropagation to update the
values of the weights. It is interesting to note that this procedure can fail to converge to the correct weights.
What they found [4] was that there are values for the weights that make their error measure sufficiently
small but do not make $\bar{Y}_l$ close to $\bar{T}_l$, $1 \le l \le L$. The algorithm has been corrected, but no new results have
been reported.
In a series of papers [14, 15, 16, 17, 18, 19, 20] these authors have also developed a backpropagation-based
learning algorithm for fuzzy neural nets. They assumed that the inputs ($\bar{X}_l$), the weights, and the bias
terms (they employed a sigmoidal $f$) are all symmetric triangular fuzzy numbers with $\bar{X}_l \ge 0$, all $l$. Let $E$
denote their error measure, which is based on $\alpha$-cuts of $\bar{Y}_l$ and $\bar{T}_l$, $1 \le l \le L$. Also, let $\bar{W}_i[0] = [w_{i1}, w_{i2}]$ and
$\bar{V}_i[0] = [v_{i1}, v_{i2}]$. Their algorithm is based on $\partial E/\partial w_{11}, \ldots, \partial E/\partial v_{42}$. These derivatives are complicated
and become more complicated if we allow more general fuzzy sets for $\bar{X}_l$, $\bar{W}_i$, and $\bar{V}_i$. This method does not
seem to generalize to more complicated fuzzy inputs and/or weights.
Genetic algorithms [9, 10, 25] are finding more and more applications in fuzzy systems [3, 8]. One of the
authors of this paper (with Y. Hayashi) has proposed a genetic algorithm, and a fuzzy genetic algorithm, to
train a fuzzy neural network. Until now, no computer experiments have been presented for genetic learning
algorithms for a fuzzy neural net. We discuss our method in the next section.
In [23, 24] the authors have: (1) real number signals; (2) monotone increasing membership functions
for the fuzzy weights; and (3) a special fuzzy error measure. They employ a learning algorithm, inspired
by standard backpropagation, so that the fuzzy neural net can learn the weights. Yamakawa's new fuzzy
neuron [28] has a learning algorithm for the weights. The learning algorithm for the real weights is similar
to standard backpropagation. The fuzzy neural net in [22, 27] is similar to Yamakawa's fuzzy neuron. These
papers have a learning algorithm for both the real weights and the trapezoidal fuzzy numbers.

2 Genetic Learning Algorithms


Genetic algorithms are a method of directed random search. We do not present the fundamentals of genetic
algorithms in this paper but instead refer the reader to the popular text [10] on genetic algorithms.
The first thing to discuss is the error measure to be minimized. Let $\bar{Y}_l[\alpha] = [y_{l1}(\alpha), y_{l2}(\alpha)]$ and $\bar{T}_l[\alpha] =
[t_{l1}(\alpha), t_{l2}(\alpha)]$, for $\alpha$ in the set $\{0.0, 0.1, \ldots, 0.9, 1.0\}$. Define


$$E_1 = \sum_{l=1}^{L} \sum_{\alpha} \left[ y_{l1}(\alpha) - t_{l1}(\alpha) \right]^2 \qquad (6)$$

and

$$E_2 = \sum_{l=1}^{L} \sum_{\alpha} \left[ y_{l2}(\alpha) - t_{l2}(\alpha) \right]^2, \qquad (7)$$

with

$$E = E_1 + E_2. \qquad (8)$$
The genetic algorithm searches for the fuzzy weights that drive $E$ to zero.
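A sketch of the fitness evaluation, assuming (as in our reconstruction of equations (6)-(8) above) that $E_1$ and $E_2$ are the summed squared differences of the left and right $\alpha$-cut endpoints, respectively:

```python
# Sketch of the fitness evaluation, assuming E1 and E2 in equations (6)-(8)
# are summed squared differences of the left and right alpha-cut endpoints,
# respectively (this exact form is our assumption).
ALPHAS = [k / 10 for k in range(11)]  # alpha in {0.0, 0.1, ..., 1.0}

def error(y_cuts, t_cuts):
    """y_cuts[l][k] and t_cuts[l][k] are the alpha-cut intervals of the
    actual output Y_l and the target T_l at ALPHAS[k]; returns E = E1 + E2."""
    E1 = sum((y[k][0] - t[k][0]) ** 2
             for y, t in zip(y_cuts, t_cuts) for k in range(len(ALPHAS)))
    E2 = sum((y[k][1] - t[k][1]) ** 2
             for y, t in zip(y_cuts, t_cuts) for k in range(len(ALPHAS)))
    return E1 + E2
```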
The transfer function $f$, in each hidden neuron and in the output neuron, is

$$f(x) = \begin{cases} -\tau, & x \le -\tau, \\ x, & -\tau < x < \tau, \\ \tau, & x \ge \tau, \end{cases} \qquad (9)$$

where $\tau$ is a positive integer. We choose the value of $\tau$ for the application. The value of $\tau$ is always one in
the output neuron because all our target fuzzy sets $\bar{T}$ are in the interval $[-1, 1]$.
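Equation (9) transcribed directly into code, with $\tau$ as a parameter:

```python
# Equation (9): the 3-segment piecewise linear squashing function.
def squash(x: float, tau: float = 1.0) -> float:
    if x <= -tau:
        return -tau
    if x >= tau:
        return tau
    return x  # identity between -tau and tau
```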
We employed tournament selection [21, 26] instead of the more familiar roulette wheel selection [10] to
choose population members for mating. The values of the parameters (probability of crossover, etc.) in the
genetic algorithm can vary slightly from experiment to experiment, but their approximate values are: (1)
population size = 2000; (2) probability of crossover = 0.80; and (3) probability of mutation = 0.0003.
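A minimal sketch of tournament selection; the tournament size $k$ is not stated in the paper, so $k = 2$ below is an assumption:

```python
# Sketch of binary tournament selection; the tournament size k is not stated
# in the paper, so k = 2 here is an assumption.
import random

def tournament_select(population, fitness, k=2):
    """Draw k members at random and return the fittest (lowest error E)."""
    contestants = random.sample(population, k)
    return min(contestants, key=fitness)
```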
In this paper the fuzzy weights are assumed to be symmetric triangular fuzzy numbers. Let $\bar{W}_i =
(w_{i1}/w_{i2}/w_{i3})$ and $\bar{V}_i = (v_{i1}/v_{i2}/v_{i3})$, $1 \le i \le 4$. Then $w_{i2} = (w_{i1} + w_{i3})/2$ and $v_{i2} = (v_{i1} + v_{i3})/2$, all $i$, so $\bar{W}_i$
($\bar{V}_i$) is completely known if you know $w_{i1}$, $w_{i3}$ ($v_{i1}$, $v_{i3}$), $1 \le i \le 4$. So, the genetic algorithm just needs to
keep track of the supports of the fuzzy weights. A member of the population is

$$(w_{11}, w_{13}, \ldots, v_{41}, v_{43}), \qquad (10)$$

coded in binary notation (zeros and ones).
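One way to realize this coding is sketched below; the bit length per gene and the search range are illustrative assumptions rather than values from the paper:

```python
# Sketch of one way to realize the binary coding of equation (10).  The bit
# length per gene and the search range [LO, HI] are illustrative assumptions;
# sorting the two decoded endpoints guarantees a valid support.
BITS, LO, HI = 12, -2.0, 2.0  # assumed, not specified in the paper

def decode_gene(bits: str) -> float:
    """Map a BITS-long bit string linearly into [LO, HI]."""
    return LO + int(bits, 2) * (HI - LO) / (2 ** len(bits) - 1)

def decode(chromosome: str):
    """Bit string -> four W_i and four V_i, each a symmetric triangular
    fuzzy number (a / (a+c)/2 / c) recovered from its support [a, c]."""
    genes = [decode_gene(chromosome[i * BITS:(i + 1) * BITS])
             for i in range(16)]
    weights = []
    for j in range(0, 16, 2):
        a, c = sorted(genes[j:j + 2])       # support endpoints
        weights.append((a, (a + c) / 2, c))
    return weights[:4], weights[4:]         # (W_1..W_4, V_1..V_4)
```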
Now we may discuss the results of our computer experiments on genetic learning algorithms for fuzzy
neural nets.

3 Experiments
The complete experimental design is shown in Table 1. In the Input column, real means $\bar{X}_l$ = real number,
$1 \le l \le L$, and fuzzy means $\bar{X}_l$ = symmetric triangular fuzzy number, all $l$. Recall that the training data
is $(\bar{X}_l, \bar{T}_l)$ with $\bar{X}_l$ the input and $\bar{T}_l$ the desired output. In the Output column, real means the target output $\bar{T}_l$
= real number, $1 \le l \le L$, and fuzzy designates $\bar{T}_l$ = triangular shaped fuzzy number in $[-1, 1]$, all $l$. The
mixed case, case 3, has $\bar{X}_l$ real for some $l$ and $\bar{X}_l$ fuzzy otherwise, and the same for $\bar{T}_l$. In cases 5-7, the
more (less) fuzzy in the Output column stands for: (1) fuzz$(\bar{X}_l) <$ fuzz$(\bar{T}_l)$, $1 \le l \le L$, is Output = more
fuzzy; (2) fuzz$(\bar{X}_l) >$ fuzz$(\bar{T}_l)$, all $l$, is Output = less fuzzy; and (3) fuzz$(\bar{X}_l) <$ fuzz$(\bar{T}_l)$ for some $l$ and
fuzz$(\bar{X}_l) >$ fuzz$(\bar{T}_l)$ otherwise is Output = more and less fuzzy. In case 4 we have fuzz$(\bar{X}_l) =$ fuzz$(\bar{T}_l)$,
$1 \le l \le L$.
In [16] the authors conjectured that: (1) if fuzz(output) $\le$ fuzz(input), then all the weights can be
real numbers; and (2) if fuzz(output) $>$ fuzz(input), then the weights are fuzzy. Our experiments were
designed to test this conjecture. We discuss the outcome in the last section.

In this paper we report results on cases 4-6. Additional results are available; they form subject matter
for the conference presentation and for a more detailed, journal-length study.
Table 1: Experimental Design

Case  Input            Output
1     real             fuzzy
2     fuzzy            real
3     real and fuzzy   real and fuzzy
4     fuzzy            fuzzy
5     fuzzy            more fuzzy
6     fuzzy            less fuzzy
7     fuzzy            more and less fuzzy

3.1 Case 4

Table 2: Training Data For Case 4.

No.  Input (X)            Desired Output (T)
1    (-1.00/-0.75/-0.50)  (1.50/1.75/2.00)
2    (-0.25/0.00/0.25)    (0.75/1.00/1.25)
3    (0.50/0.75/1.00)     (0.00/0.25/0.50)

3.2 Case 5

Table 3: Training Data For Case 5.

No.  Input (X)           fuzz(X)  Output (T)           fuzz(T)
1    (-0.50/-0.25/0.00)  0.50     (-1.00 / -3/8 / 0.00)  1.00
2    (-0.25/0.00/0.25)   0.50     (-0.50 / 0.00 / 0.50)  1.00
3    (0.00/0.25/0.50)    0.50     (0.00 / 3/8 / 1.00)    1.00

The value of $\tau$, in the hidden layer and the output layer, was set equal to one. The fuzzy neural net learned
the training data perfectly (zero error), with test results given at the conference. All weights in the neural
net were fuzzy.
3.3 Case 6
The training set comes from $\bar{T} = F(\bar{X}) = 1/\bar{X}$ for $\bar{X}$ in $(-\infty, -1]$ or $[1, \infty)$, so that $\bar{T}$ is in $[-1, 1]$. This is a
contraction mapping because fuzz$(\bar{T}) <$ fuzz$(\bar{X})$. We restricted $\bar{X}$ to be in $[1, 3]$ for training and, as in
the previous case, $\bar{T}$ is a triangular shaped fuzzy number. The training data is presented in Table 4. The
squashing function in the hidden neurons used $\tau = 3$.
Table 4: Training Data For Case 6.

No.  Input (X)         fuzz(X)  Output (T)           fuzz(T)
1    (1.00/1.25/1.50)  0.50     (2/3 / 4/5 / 1.00)   1/3
2    (1.50/1.75/2.00)  0.50     (1/2 / 4/7 / 2/3)    1/6
3    (2.00/2.25/2.50)  0.50     (2/5 / 4/9 / 1/2)    1/10
4    (2.50/2.75/3.00)  0.50     (1/3 / 4/11 / 2/5)   1/15
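The entries of Table 4 follow directly from the extension principle: on $[1, 3]$ the map $t = 1/x$ is decreasing, so $\bar{X} = (a/b/c)$ is sent to an output with support $[1/c, 1/a]$ and vertex $1/b$. A short check with exact rational arithmetic:

```python
# Check of Table 4: on [1, 3] the map t = 1/x is decreasing, so the extension
# principle sends X = (a/b/c) to an output with support [1/c, 1/a] and vertex
# 1/b, giving fuzz(T) = 1/a - 1/c.
from fractions import Fraction as F

rows = [(F(1), F(5, 4), F(3, 2)), (F(3, 2), F(7, 4), F(2)),
        (F(2), F(9, 4), F(5, 2)), (F(5, 2), F(11, 4), F(3))]
for a, b, c in rows:
    lo, mid, hi = 1 / c, 1 / b, 1 / a
    print(f"({lo} / {mid} / {hi})  fuzz: {hi - lo}")
# Row 1 prints (2/3 / 4/5 / 1)  fuzz: 1/3, matching the table.
```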

The fuzzy neural net was unable to learn the training data. We felt this was mainly because
the piecewise (3-segment) linear squashing function may not be a sufficiently good approximation for
learning this nonlinear function. We changed the squashing function to a nonlinear mapping, with results
given at the conference and available in subsequent publications.
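One plausible nonlinear replacement, offered purely as an assumption since the actual choice is deferred to later publications, is a scaled hyperbolic tangent, which is continuous, non-decreasing, and maps $\Re$ into $(-\tau, \tau)$:

```python
# One plausible nonlinear squashing function (an assumption; the paper defers
# the actual choice): a scaled tanh, continuous and non-decreasing, mapping
# the reals into (-tau, tau).
import math

def squash_nonlinear(x: float, tau: float = 1.0) -> float:
    return tau * math.tanh(x / tau)
```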

4 Summary and Conclusions


In this paper we presented a genetic algorithm for training a fuzzy neural net. We showed that it worked
well for modeling the mapping $\bar{T} = -\bar{X} + 1$ (case 4) and the case 5 mapping, where $\bar{X}$ and $\bar{T}$ are triangular fuzzy numbers. It did
not work well in modeling $\bar{T} = 1/\bar{X}$, putatively because the piecewise linear squashing function we used may
not well approximate a nonlinear function. It should work well using a nonlinear squashing function. It has
been shown [1, 7] that a (regular) fuzzy neural net (like that in this paper) is not a universal approximator
because it is a monotone increasing mapping. What this means is that if $\bar{X} \le \bar{X}'$ are inputs, then $\bar{Y} \le \bar{Y}'$
are the corresponding outputs. All functions that we tried to approximate were monotone increasing, so we
see no theoretical reason why the fuzzy neural net should not be able to model these mappings.
Our results show that you do not necessarily get real weights if fuzz(output) $\le$ fuzz(input) [16].
However, we used a different squashing function than the one employed in [16]. Further research is needed
on this conjecture.

References
[1] J.J. Buckley and Y. Hayashi. Can fuzzy neural nets approximate continuous fuzzy functions? Fuzzy
Sets and Systems. To appear.
[2] J.J. Buckley and Y. Hayashi. Fuzzy backpropagation for fuzzy neural networks. Unpublished
manuscript.
[3] J.J. Buckley and Y. Hayashi. Fuzzy genetic algorithm and applications. Fuzzy Sets and Systems. To
appear.
[4] J.J. Buckley and Y. Hayashi. Fuzzy neural nets: A survey. Fuzzy Sets and Systems. To appear.
[5] J.J. Buckley and Y. Hayashi. Fuzzy neural networks. In R.R. Yager and L.A. Zadeh, editors, Fuzzy
Sets, Neural Networks and Soft Computing. To appear.
[6] J.J. Buckley and Y. Hayashi. Fuzzy neural nets and applications. Fuzzy Systems and AI, 1:11-41, 1992.
[7] J.J. Buckley and Y. Hayashi. Are regular fuzzy neural nets universal approximators? In Proc. of
International Joint Conference on Neural Networks, volume 1, pages 721-724, Nagoya, Japan, October
25-29 1993.
[8] J.J. Buckley and Y. Hayashi. Fuzzy genetic algorithms for optimization. In Proc. of International Joint
Conference on Neural Networks, volume 1, pages 725-728, Nagoya, Japan, October 25-29 1993.
[9] L. Davis. Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York, 1991.
[10] D.E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley,
Reading, MA, 1989.
[11] Y. Hayashi and J.J. Buckley. Direct fuzzification of neural networks. In Proc. of First Asian Fuzzy
Systems Symposium, Singapore, November 23-26 1993. In press.
[12] Y. Hayashi, J.J. Buckley, and E. Czogala. Fuzzy neural network with fuzzy signals and weights. Inter.
J. Intelligent Systems, 8:527-537, 1993.
[13] Y. Hayashi, J.J. Buckley, and E. Czogala. Direct fuzzification of neural network and fuzzified delta rule.
In Proc. of the Second International Conference on Fuzzy Logic and Neural Networks (IIZUKA'92),
pages 73-76, Iizuka, Japan, July 17-22 1992.
[14] H. Ishibuchi, R. Fujioka, and H. Tanaka. An architecture of neural networks for input vectors of fuzzy
numbers. In Proc. of IEEE International Conference on Fuzzy Systems (FUZZ-IEEE'92), pages 1293-
1300, San Diego, September 7-10 1992.
[15] H. Ishibuchi, R. Fujioka, and H. Tanaka. Neural networks that learn from fuzzy if-then rules. IEEE
Transactions on Fuzzy Systems, 1:85-97, 1993.
[16] H. Ishibuchi, K. Kwon, and H. Tanaka. Implementation of fuzzy if-then rules by fuzzy neural networks
with fuzzy weights. In Proc. First European Congress on Fuzzy and Intelligent Technologies, volume I,
pages 209-215, Aachen, Germany, September 7-10 1993.

[17] H. Ishibuchi, K. Kwon, and H. Tanaka. Learning of fuzzy neural networks from fuzzy inputs and fuzzy
targets. In Proc. Fifth IFSA World Congress, volume I, pages 147-150, Seoul, Korea, July 4-9 1993.
[18] H. Ishibuchi, H. Okada, and H. Tanaka. Interpolation of fuzzy if-then rules by neural networks. In
Proc. of the Second International Conference on Fuzzy Logic and Neural Networks (IIZUKA'92), pages
337-340, Iizuka, Japan, July 17-22 1992.
[19] H. Ishibuchi, H. Okada, and H. Tanaka. Learning of neural networks from fuzzy inputs and fuzzy
targets. In Proc. of International Joint Conference on Neural Networks, volume III, pages 447-452,
Beijing, China, November 3-6 1992.
[20] H. Ishibuchi, H. Okada, and H. Tanaka. Fuzzy neural networks with fuzzy weights and fuzzy biases. In
Proc. of IEEE International Conference on Neural Networks, volume III, pages 1650-1655, San Francisco,
March 28-April 1 1993.
[21] J.R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection.
MIT Press, Cambridge, MA, 1993.
[22] K. Nakamura, T. Fujimaki, R. Horikawa, and Y. Ageishi. Fuzzy network production system. In Proc. of
the Second International Conference on Fuzzy Logic and Neural Networks (IIZUKA '92), pages 127-130,
Iizuka, Japan, July 17-22 1992.
[23] D. Nauck and R. Kruse. A neural fuzzy controller learning by fuzzy error backpropagation. In Proc. of
NAFIPS, volume II, pages 388-397, Puerto Vallarta, Mexico, December 15-17 1992.
[24] D. Nauck and R. Kruse. A fuzzy neural network learning fuzzy control rules and membership functions by
fuzzy error backpropagation. In Proc. of IEEE International Conference on Neural Networks, volume II,
pages 1022-1027, San Francisco, March 28-April 1 1993.
[25] R. Serra and G. Zanarini. Complex Systems and Cognitive Processes. Springer-Verlag, 1990.
[26] R.E. Smith, D.E. Goldberg, and J.A. Earickson. SGA-C: A C-language implementation of a simple genetic
algorithm. Technical Report TCGA Report No. 91002, The University of Alabama, The Clearinghouse
for Genetic Algorithms, Department of Engineering Mechanics, Tuscaloosa, AL 35487, 1991.
[27] M. Tokunaga, K. Kohno, K. Hashizume, Y. Hamatani, M. Watanabe, K. Nakamura, and Y. Ageishi.
Learning mechanism and an application of ffs-network reasoning system. In Proc. of the Second Inter-
national Conference on Fuzzy Logic and Neural Networks (IIZUKA '92), pages 123-126, Iizuka, Japan,
July 17-22 1992.
[28] T. Yamakawa, E. Uchino, T. Miki, and H. Kusanagi. A neo fuzzy neuron and its application to
fuzzy system identification and prediction of the system behavior. In Proc. of the Second International
Conference on Fuzzy Logic and Neural Networks (IIZUKA '92), pages 477-483, Iizuka, Japan, July 17-22
1992.

