Abstract
In this paper we present a genetic learning algorithm for fuzzy neural nets. Illustrations are provided.
1 Introduction
This paper is concerned with learning algorithms for fuzzy neural nets. In this section we first introduce
the basic notation to be employed and then discuss what we mean by a fuzzy neural net. Then we briefly
survey the literature on learning algorithms for fuzzy neural nets. The second section contains a description
of our genetic learning algorithm. The third section contains experimental results. The last section has a
brief summary and conclusions.
All our fuzzy sets are fuzzy subsets of the real numbers. We place a bar over a symbol if it represents a fuzzy set, so $\bar{A}, \bar{B}, \ldots$ are all fuzzy subsets of the real numbers. The membership function of a fuzzy set $\bar{A}$ evaluated at $x$ is written $\bar{A}(x)$. The $\alpha$-cut of a fuzzy set $\bar{B}$ is the set $\{x : \bar{B}(x) \ge \alpha\}$, for $0 < \alpha \le 1$.
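To make the notation concrete, here is a small sketch (our illustration, not from the paper) of a triangular fuzzy number's membership function and its $\alpha$-cut, using the standard definitions:

```python
def tri(a, b, c):
    """Membership function of the triangular fuzzy number (a/b/c)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)   # rising left side
        return (c - x) / (c - b)       # falling right side
    return mu

def alpha_cut(a, b, c, alpha):
    """Alpha-cut of (a/b/c): the closed interval where membership >= alpha."""
    return (a + alpha * (b - a), c - alpha * (c - b))
```

For example, the triangular fuzzy number (0/1/2) has membership 1 at its vertex and $\alpha$-cut [0.5, 1.5] at $\alpha = 0.5$.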
[Figure: the fuzzy neural net, with an input layer, a hidden layer, and an output layer.]
All neurons, except the input neuron, have a transfer function $y = f(x)$. This $f$ is assumed to be a continuous, non-decreasing mapping from $\mathbb{R}$ into $[-T, T]$ for $T$ some positive integer. The input node acts as the identity, simply passing its input on to the hidden layer. The total error is

$$E = E_1 + E_2,$$

and the genetic algorithm searches for the fuzzy weights that drive $E$ to zero.
The transfer function $f$, in each hidden neuron and in the output neuron, is

$$f(x) = \begin{cases} -r, & x \le -r, \\ x, & -r < x < r, \\ r, & x \ge r, \end{cases} \qquad (9)$$

where $r$ is a positive integer. We choose the value of $r$ for the application. The value of $r$ is always one in the output neuron because all our target fuzzy sets $\bar{T}$ are in the interval $[-1, 1]$.
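A minimal sketch of the 3-segment squashing function of Eq. (9), assuming the usual clamp-to-$[-r, r]$ form:

```python
def squash(x, r):
    """Eq. (9): identity on (-r, r), clamped to -r below and to r above."""
    return max(-r, min(r, x))
```

With $r = 1$, as in the output neuron, all activations land in $[-1, 1]$.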
We employed tournament selection [21, 26] instead of the more familiar roulette wheel selection [10] to choose population members for mating. The values of the parameters (probability of crossover, etc.) in the genetic algorithm can vary slightly from experiment to experiment, but their approximate values are: (1) population size = 2000; (2) probability of crossover = 0.80; and (3) probability of mutation = 0.0003.
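Tournament selection can be sketched as follows; the function names and the choice of minimizing the error $E$ are our illustration, not the authors' code:

```python
import random

def tournament_select(population, error, k=2, rng=random):
    """Draw k distinct members at random; the one with the smallest
    error E wins the tournament and becomes a parent for mating."""
    contestants = rng.sample(range(len(population)), k)
    winner = min(contestants, key=lambda i: error[i])
    return population[winner]
```

Unlike roulette wheel selection, only the relative ordering of the errors matters, which makes selection pressure insensitive to the scale of $E$.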
In this paper the fuzzy weights are assumed to be symmetric triangular fuzzy numbers. Let $\bar{W}_i = (w_{i1}/w_{i2}/w_{i3})$ and $\bar{V}_i = (v_{i1}/v_{i2}/v_{i3})$, $1 \le i \le 4$. Then $w_{i2} = (w_{i1} + w_{i3})/2$ and $v_{i2} = (v_{i1} + v_{i3})/2$, all $i$, and $\bar{W}_i$ ($\bar{V}_i$) is completely known if you know $w_{i1}, w_{i3}$ ($v_{i1}, v_{i3}$), $1 \le i \le 4$. So the genetic algorithm just needs to keep track of the supports of the fuzzy weights. A member of the population is a string of these support endpoints.
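Under this encoding a population member can be sketched (our illustration; names are hypothetical) as a flat list of support endpoints that decodes back to the symmetric triangular weights:

```python
def decode(chromosome):
    """Decode a population member: a flat list of support endpoints
    [w11, w13, w21, w23, ...]. Each symmetric triangular fuzzy weight
    is (s1 / m / s3) with center m = (s1 + s3) / 2, the midpoint of
    the support, so only the endpoints need to be evolved."""
    weights = []
    for i in range(0, len(chromosome), 2):
        s1, s3 = chromosome[i], chromosome[i + 1]
        weights.append((s1, (s1 + s3) / 2, s3))
    return weights
```

Crossover and mutation then operate directly on the endpoint list, and the centers never need to be stored.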
3 Experiments
The complete experimental design is shown in Table 1. In the Input column, real means $X_l$ = real number, $1 \le l \le L$, and fuzzy means $\bar{X}_l$ = symmetric triangular fuzzy number, all $l$. Recall that the training data is $(\bar{X}_l, \bar{T}_l)$, with $\bar{X}_l$ the input and $\bar{T}_l$ the desired output. In the Output column, real means the target output $T_l$ = real number, $1 \le l \le L$, and fuzzy designates $\bar{T}_l$ = triangular shaped fuzzy number in $[-1, 1]$, all $l$. The mixed case, case 3, has $X_l$ real for some $l$ and $\bar{X}_l$ fuzzy otherwise, and the same for the targets. In cases 5-7, the more (less) fuzzy in the Output column stands for: (1) fuzz$(\bar{X}_l)$ < fuzz$(\bar{T}_l)$, $1 \le l \le L$, is Output = more fuzzy; (2) fuzz$(\bar{X}_l)$ > fuzz$(\bar{T}_l)$, all $l$, is Output = less fuzzy; and (3) fuzz$(\bar{X}_l)$ < fuzz$(\bar{T}_l)$ for some $l$ and fuzz$(\bar{X}_l)$ > fuzz$(\bar{T}_l)$ otherwise is Output = more and less fuzzy. In case 4 we have fuzz$(\bar{X}_l)$ = fuzz$(\bar{T}_l)$, $1 \le l \le L$.
In [16] the authors conjectured that: (1) if fuzz(output) $\le$ fuzz(input), then all the weights can be real numbers; and (2) if fuzz(output) > fuzz(input), then the weights are fuzzy. Our experiments were designed to test this conjecture. We discuss the outcome in the last section.
In this paper we report results on cases 4-6. Additional results are available, presented at the conference and as part of a more detailed study for a journal-length paper.
Table 1: Experimental Design

Case  Input            Output
1     real             real
2     real             fuzzy
3     real and fuzzy   real and fuzzy
4     fuzzy            fuzzy
5     fuzzy            more fuzzy
6     fuzzy            less fuzzy
7     fuzzy            more and less fuzzy
No.  Input ($\bar{X}$)       Desired Output ($\bar{T}$)
1    (-1.00/-0.75/-0.50)   (1.50/1.75/2.00)
2    (-0.25/0.00/0.25)     (0.75/1.00/1.25)
3    (0.50/0.75/1.00)      (0.00/0.25/0.50)
The value of r, in the hidden layer and the output layer, was set equal to one. The fuzzy neural net learned
the training data perfectly (zero error), with test results given at the conference. All weights were fuzzy in
the neural net.
3.3 Case 6
The training set comes from $\bar{T} = F(\bar{X}) = 1/\bar{X}$ for $\bar{X}$ in $(-\infty, -1]$ or $[1, \infty)$, so that $\bar{T}$ is in $[-1, 1]$. This is a contraction mapping because fuzz$(\bar{T})$ < fuzz$(\bar{X})$. We restricted $\bar{X}$ to be in $[1, 3]$ for training and, as in the previous case, $\bar{T}$ is a triangular shaped fuzzy number. The training data is presented in Table 4. The squashing function in the hidden neurons used $r = 3$.
Table 4: Training Data For Case 6.

No.  Input ($\bar{X}$)    fuzz($\bar{X}$)  Output ($\bar{T}$)    fuzz($\bar{T}$)
1    (1.00/1.25/1.50)   0.50           (2/3 / 4/5 / 1.00)  1/3
2    (1.50/1.75/2.00)   0.50           (1/2 / 4/7 / 2/3)   1/6
3    (2.00/2.25/2.50)   0.50           (2/5 / 4/9 / 1/2)   1/10
4    (2.50/2.75/3.00)   0.50           (1/3 / 4/11 / 2/5)  1/15
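The entries of Table 4 can be checked with exact arithmetic. The sketch below (our illustration) maps a triangular input through $F(x) = 1/x$ endpoint-wise, which is valid on $[1, 3]$ since $1/x$ is monotone decreasing there, and measures fuzz as the length of the support:

```python
from fractions import Fraction as F

def reciprocal_tri(a, b, c):
    """Image of the triangular fuzzy number (a/b/c), 1 <= a, under
    F(x) = 1/x. Because 1/x is monotone decreasing on [1, 3], the
    endpoints swap: the result is the triangular *shaped* fuzzy
    number with defining points (1/c / 1/b / 1/a)."""
    return (F(1, 1) / c, F(1, 1) / b, F(1, 1) / a)

def fuzz(t):
    """fuzz(T) = length of the support of T."""
    return t[2] - t[0]
```

For row 2, the input (1.50/1.75/2.00) maps to (1/2 / 4/7 / 2/3) with fuzz 1/6, matching the table.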
The fuzzy neural net was unable to learn the training data. We felt this was mainly due to the fact that the piecewise (3-segment) linear squashing function may not be a sufficiently good approximation for learning this nonlinear function. We changed the squashing function to a nonlinear mapping, with results given at the conference and available in subsequent publications.
4 Summary and Conclusions
As noted above, the 3-segment piecewise linear squashing function may not well approximate a nonlinear function. It should work well using a nonlinear squashing function. It has been shown [1, 7] that a (regular) fuzzy neural net (like that in this paper) is not a universal approximator because it is a monotone increasing mapping. What this means is that if $\bar{X}_1 \le \bar{X}_2$ are inputs, then $\bar{Y}_1 \le \bar{Y}_2$ are the corresponding outputs. All functions that we tried to approximate were monotone increasing, so we see no theoretical reason why the fuzzy neural net should not be able to model these mappings.
Our results show that you do not necessarily get real weights if fuzz(output) $\le$ fuzz(input) [16]. However, we used a different squashing function than the one employed in [16]. Further research is needed on this conjecture.
References
[1] J.J. Buckley and Y. Hayashi. Can fuzzy neural nets approximate continuous fuzzy functions? Fuzzy Sets and Systems. To appear.
[2] J.J. Buckley and Y. Hayashi. Fuzzy backpropagation for fuzzy neural networks. Unpublished manuscript.
[3] J.J. Buckley and Y. Hayashi. Fuzzy genetic algorithm and applications. Fuzzy Sets and Systems. To appear.
[4] J.J. Buckley and Y. Hayashi. Fuzzy neural nets: A survey. Fuzzy Sets and Systems. To appear.
[5] J.J. Buckley and Y. Hayashi. Fuzzy neural networks. In R.R. Yager and L.A. Zadeh, editors, Fuzzy Sets, Neural Networks and Soft Computing. To appear.
[6] J.J. Buckley and Y. Hayashi. Fuzzy neural nets and applications. Fuzzy Systems and AI, 1:11-41, 1992.
[7] J.J. Buckley and Y. Hayashi. Are regular fuzzy neural nets universal approximators? In Proc. of International Joint Conference on Neural Networks, volume 1, pages 721-724, Nagoya, Japan, October 25-29 1993.
[8] J.J. Buckley and Y. Hayashi. Fuzzy genetic algorithms for optimization. In Proc. of International Joint Conference on Neural Networks, volume 1, pages 725-728, Nagoya, Japan, October 25-29 1993.
[9] L. Davis. Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York, 1991.
[10] D.E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA, 1989.
[11] Y. Hayashi and J.J. Buckley. Direct fuzzification of neural networks. In Proc. of First Asian Fuzzy Systems Symposium, Singapore, November 23-26 1993. In press.
[12] Y. Hayashi, J.J. Buckley, and E. Czogala. Fuzzy neural network with fuzzy signals and weights. Inter. J. Intelligent Systems, 8:527-537, 1993.
[13] Y. Hayashi, J.J. Buckley, and E. Czogala. Direct fuzzification of neural network and fuzzified delta rule. In Proc. of the Second International Conference on Fuzzy Logic and Neural Networks (IIZUKA'92), pages 73-76, Iizuka, Japan, July 17-22 1992.
[14] H. Ishibuchi, R. Fujioka, and H. Tanaka. An architecture of neural networks for input vectors of fuzzy numbers. In Proc. of IEEE International Conference on Fuzzy Systems (FUZZ-IEEE'92), pages 1293-1300, San Diego, September 7-10 1992.
[15] H. Ishibuchi, R. Fujioka, and H. Tanaka. Neural networks that learn from fuzzy if-then rules. IEEE Transactions on Fuzzy Systems, 1:85-97, 1993.
[16] H. Ishibuchi, K. Kwon, and H. Tanaka. Implementation of fuzzy if-then rules by fuzzy neural networks with fuzzy weights. In Proc. First European Congress on Fuzzy and Intelligent Technologies, volume I, pages 209-215, Aachen, Germany, September 7-10 1993.
[17] H. Ishibuchi, K. Kwon, and H. Tanaka. Learning of fuzzy neural networks from fuzzy inputs and fuzzy
targets. In Proc. Fifth IFSA World Congress, volume I, pages 147-150, Seoul, Korea, July 4-9 1993.
[18] H. Ishibuchi, H. Okada, and H. Tanaka. Interpolation of fuzzy if-then rules by neural networks. In
Proc. of the Second International Conference on Fuzzy Logic and Neural Networks (IIZUKA'92), pages
337-340, Iizuka, Japan, July 17-22 1992.
[19] H. Ishibuchi, H. Okada, and H. Tanaka. Learning of neural networks from fuzzy inputs and fuzzy
targets. In Proc. of International Joint Conference on Neural Networks, volume III, pages 447-452,
Beijing, China, November 3-6 1992.
[20] H. Ishibuchi, H. Okada, and H. Tanaka. Fuzzy neural networks with fuzzy weights and fuzzy biases. In
Proc. of IEEE International Conference on Neural Networks, volume III, pages 1650-1655, San Francisco,
March 28-April 1 1993.
[21] J.R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection.
MIT Press, Cambridge, MA, 1993.
[22] K. Nakamura, T. Fujimaki, R. Horikawa, and Y. Ageishi. Fuzzy network production system. In Proc. of
the Second International Conference on Fuzzy Logic and Neural Networks (IIZUKA '92), pages 127-130,
Iizuka, Japan, July 17-22 1992.
[23] D. Nauck and R. Kruse. A neural fuzzy controller learning by fuzzy error backpropagation. In Proc. of
NAFIPS, volume II, pages 388-397, Puerto Vallarta, Mexico, December 15-17 1992.
[24] D. Nauck and R. Kruse. A fuzzy neural network learning fuzzy control rules and membership functions by
fuzzy error backpropagation. In Proc. of IEEE International Conference on Neural Networks, volume II,
pages 1022-1027, San Francisco, March 28-April 1 1993.
[25] R. Serra and G. Zanarini. Complex Systems and Cognitive Processes. Springer-Verlag, 1990.
[26] R.E. Smith, D.E. Goldberg, and J.A. Earickson. SGA-C: A C-language implementation of a simple genetic algorithm. Technical Report TCGA Report No. 91002, The University of Alabama, The Clearinghouse for Genetic Algorithms, Department of Engineering Mechanics, Tuscaloosa, AL 35487, 1991.
[27] M. Tokunaga, K. Kohno, K. Hashizume, Y. Hamatani, M. Watanabe, K. Nakamura, and Y. Ageishi.
Learning mechanism and an application of ffs-network reasoning system. In Proc. of the Second Inter-
national Conference on Fuzzy Logic and Neural Networks (IIZUKA '92), pages 123-126, Iizuka, Japan,
July 17-22 1992.
[28] T. Yamakawa, E. Uchino, T. Miki, and H. Kusanagi. A neo fuzzy neuron and its application to
fuzzy system identification and prediction of the system behavior. In Proc. of the Second International
Conference on Fuzzy Logic and Neural Networks (IIZUKA '92), pages 477-483, Iizuka, Japan, July 17-22
1992.