Conference Paper · October 2011
DOI: 10.1109/IWACI.2011.6160022


Fourth International Workshop on Advanced Computational Intelligence
Wuhan, Hubei, China; October 19-21, 2011

Training ANFIS System with DE Algorithm


Allahyar Z. Zangeneh, Mohammad Mansouri, Mohammad Teshnehlab, and Ali K. Sedigh

Abstract— In this study, a new way of training the adaptive network-based fuzzy inference system (ANFIS) is presented, applying different branches of Differential Evolution. The TSK-type consequent part is a linear model of exogenous inputs. The consequent part parameters are learned by a gradient descent algorithm. The antecedent fuzzy sets are learned by basic differential evolution (DE/rand/1/bin) and then by several modifications of it. The method is applied to identification of a nonlinear dynamic system, prediction of a chaotic signal under both noise-free and noisy conditions, and simulation of a two-dimensional function. Instead of DE/rand/1/bin, this paper suggests the combined type (DE/current-to-best/1+1/bin & DE/rand/1/bin) for predicting the Mackey-Glass time series and identifying a nonlinear dynamic system, revealing the efficiency of the proposed structure. Finally, the method is compared with pure ANFIS to show its efficiency.

I. INTRODUCTION

The topologies of recurrent networks which are used to store past information consist of feedback loops. In contrast to pure feed-forward architectures, which exhibit static input-output behavior, recurrent networks are able to memorize information from the past (e.g., last system states), and are thus more appropriate for the analysis of dynamic systems. Some recurrent fuzzy neural networks (FNNs) have already been proposed [1][2] to deal with temporal characteristic problems, and have been shown to outperform feed-forward FNNs and recurrent neural networks. One branch of recurrent FNNs uses feedback loops from the network output(s) as a recurrence structure [2].

The Takagi-Sugeno-Kang (TSK) type [3][4] is a fuzzy system with crisp functions in the consequent, which is perceived as proper for complex applications [5]. Instead of working with linguistic rules of the kind introduced in Mamdani-type fuzzy rule-based systems, Takagi, Sugeno, and Kang proposed a new model based on rules where the antecedent is composed of linguistic variables and the consequent is represented by a function of the input variables. It has been proved that, with a convenient number of rules, a TSK system can approximate every plant [6]. The TSK recurrent fuzzy network (TRFN) [7] uses a global feedback structure, where the firing strengths of each rule are summed and fed back as internal network inputs. TSK systems are widely used in the form of a neural-fuzzy system called the Adaptive Network-based Fuzzy Inference System (ANFIS) [8]. ANFIS is a class of adaptive networks that are functionally equivalent to fuzzy inference systems. The ANFIS architecture stands for adaptive network-based fuzzy inference system or, semantically equivalent, adaptive neural fuzzy inference system [9]. This adaptive network has good ability and performance in system identification, prediction, and control, and has been applied to many different systems. ANFIS has the advantage of good applicability, as it can be interpreted as local linearization modeling, so conventional linear techniques for state estimation and control are directly applicable.

The training of the ANFIS parameters is the main problem. Most training methods are gradient-based, and computing the gradient is not easy at some steps: the chain rule must be used, and the search may get trapped in a local minimum. Here, we try to propose a hybrid method which can update the premise parameters more easily and faster than the gradient method. In the gradient method, convergence of the parameters is very slow and depends on their initial values, so finding the best learning rate is very difficult. In the new method, based on Differential Evolution (DE), a high number of epochs is not needed, so the parameters converge quickly.

DE is a stochastic, population-based search strategy developed by Storn and Price [10] in 1995. While DE shares similarities with other evolutionary algorithms (EAs), it differs significantly in that distance and direction information from the current population is used to guide the search process. Furthermore, the original DE strategies were developed for continuous-valued landscapes.

The rest of the paper is organized as follows. In Section II, we review ANFIS. In Section III, we discuss the hybrid method. An overview of the proposed method and its application to nonlinear identification are presented in Section IV. Finally, Section V presents our conclusions.

Manuscript received July 15, 2011. This work was supported in part by the Faculty of Engineering, Department of Computer Engineering, Science and Research Branch, Islamic Azad University of Tehran.
A. Z. Zangeneh is with the Computer Engineering Department, Science and Research Branch, Islamic Azad University of Tehran, Iran (e-mail: zohooriz@hotmail.com).
M. Mansouri and M. Teshnehlab are with the Intelligent System Laboratory (ISLAB), Control Department, K. N. Toosi University of Technology, Tehran, Iran (e-mail: {mohammad.mansouri, teshnehlab}@ee.kntu.ac.ir).
A. K. Sedigh is with the Advanced Process Automation & Control Laboratory (APAC), Control Department, K. N. Toosi University of Technology, Tehran, Iran (e-mail: sedigh@ee.kntu.ac.ir).

978-1-61284-375-9/11/$26.00 © 2011 IEEE

II. THE CONCEPT OF ANFIS

A. ANFIS Structure

Jang presents an algorithm called ANFIS that defines the composition of the data base. A fuzzy inference system is implemented in a neural network that uses a hybrid method to adjust the parameters in its nodes. Both neural networks and fuzzy inference systems [11] are model-free estimators
and share the common ability to deal with uncertainties and noise. Both encode information in a parallel and distributed architecture in a numerical framework. Hence, it is possible to convert a fuzzy inference system architecture to a neural network and vice versa. The proposed ANFIS can construct an input-output mapping based on both expert knowledge (in linguistic form) and specified input-output data pairs.

An adaptive network is a multilayer feed-forward network in which each node performs a node function on the incoming signals, using a set of parameters pertaining to that node on which its output depends. These parameters can be fixed or variable, and it is through the change of the variable ones that the network is tuned. ANFIS has nodes with variable parameters, called square nodes, which represent the membership functions of the antecedents and the linear functions of the TSK-type consequent. The nodes in the intermediate layers connect the antecedent with the consequent; their parameters are fixed and they are called circular nodes. Moreover, the network obtained this way does not remain a black box, since it retains fuzzy inference system capabilities that can be interpreted in terms of linguistic variables [12].

The ANFIS structure is organized in five layers. It can be described as a multi-layered neural network, as shown in Fig. 1.

Fig. 1. The TSK neural fuzzy network with 2 inputs and 2 MFs for each input.

Layer 1: The first layer executes a fuzzification process. The parameters in this layer are referred to as premise parameters. In fact, any differentiable function, such as bell-shaped and triangular membership functions (MFs), is valid for the nodes in this layer. Every node i in this layer is a square node with a node function. Usually the MFs are Gaussian, with maximum equal to 1 and minimum equal to 0, such as

\mu_{A_i}(x) = \exp\left( -\frac{(x - c_i)^2}{2\sigma_i^2} \right), \quad i = 1, 2    (1)

\mu_{B_{i-2}}(y) = \exp\left( -\frac{(y - c_i)^2}{2\sigma_i^2} \right), \quad i = 3, 4    (2)

where {c_i, σ_i} are the parameters that determine the shape of the MFs.

Layer 2: Each node represents the firing strength of a rule i through a conjunction operator; the function considered is the fuzzy AND. These are circular nodes with the node function

w_i = \mu_{A_j}(x) \cdot \mu_{B_k}(y)    (3)

Layer 3: It calculates the ratio of each rule's firing strength to the sum of all the rules' firing strengths:

\bar{w}_i = \frac{w_i}{\sum_j w_j}    (4)

Layer 4: Every node i in this layer is a square node with the node function

f_i = p_i x + q_i y + r_i    (5)

O_i^4 = \bar{w}_i f_i    (6)

Layer 5: The single node in this layer computes the overall output as the sum of all incoming signals:

O^5 = \sum_i \bar{w}_i f_i    (7)

O^5 = \frac{\sum_i w_i f_i}{\sum_i w_i}    (8)

In order to model complex nonlinear systems, the ANFIS model carries out input space partitioning, splitting the input space into many local regions for which simple local models (linear functions or even adjustable coefficients) are employed. ANFIS uses fuzzy MFs to split each input dimension; the input space is covered by overlapping MFs, which means several local regions can be activated simultaneously by a single input. As simple local models are adopted in the ANFIS model, the ANFIS approximation ability depends on the resolution of the input space partitioning, which is determined by the number of MFs in ANFIS and the number of layers.

B. Learning Algorithms

Subsequent to the development of the ANFIS approach, a number of methods have been proposed for learning the parameters and for obtaining an optimal number of MFs. Four methods to update the parameters of the ANFIS structure are introduced by Jang [8], as listed below:

- All parameters are updated by gradient descent.
- After the network parameters are set to their initial values, the consequent part parameters are adjusted through the LSE, which is applied only once at the very beginning. Then gradient descent updates all parameters.
- Using the extended Kalman filter to update all parameters.
- The hybrid learning combining GD (gradient descent) and LSE.

In this paper we introduce a hybrid method which has less complexity and fast convergence.

III. HYBRID METHOD

The ANFIS network is organized in two parts, like fuzzy systems. The first part is the antecedent part and the second is the consequent part, and they are connected to each other by rules in network form. These two parts can be adapted by different optimization methods, one of which is the hybrid learning procedure combining GD and DE. In a conventional fuzzy inference system, the number of rules is determined by
an expert who is familiar with the target system to be modelled. In our simulation, however, no expert is available, and the number of MFs assigned to each input variable is chosen empirically, that is, by plotting the data sets and examining them visually, or simply by trial and error. For data sets with more than three inputs, visualization techniques are not very effective and most of the time we have to rely on trial and error. This situation is similar to that of neural networks: there is just no simple way to determine in advance the minimal number of hidden units needed to achieve a desired performance level.

A. Gradient Descent

Gradient-based algorithms are the most common and important nonlinear local optimization techniques [13]. Back propagation is a gradient-based technique that applies to neural network systems [17]. It is possible to decrease the difference between the actual output of the ANFIS structure and its desired output using gradient-based methods. Consider an error function as follows:

E = \frac{1}{2}\left( y_d - \hat{y} \right)^2    (9)

where \hat{y} is the output of the ANFIS structure and y_d is the desired output. We can optimize E by using the partial derivatives given by the chain rule [18]. After the partial derivatives are computed, linear update equations move the consequent parameters from the k-th iteration to the (k+1)-th iteration as follows:

p_i(k+1) = p_i(k) - \eta \, \partial E / \partial p_i    (10)

q_i(k+1) = q_i(k) - \eta \, \partial E / \partial q_i    (11)

r_i(k+1) = r_i(k) - \eta \, \partial E / \partial r_i    (12)

B. Basic Differential Evolution

For the other evolutionary algorithms (EAs), variation from one generation to the next is achieved by applying crossover and/or mutation operators. If both operators are used, crossover is usually applied first, after which the generated offspring are mutated. For these algorithms, mutation step sizes are sampled from some probability distribution function. DE differs from these evolutionary algorithms in that

1) mutation is applied first, to generate a trial vector which is then used within the crossover operator to produce one offspring, and
2) mutation step sizes are not sampled from a known prior probability distribution function.

In DE, mutation step sizes are influenced by differences between individuals of the current population [10]. The positions of individuals provide valuable information about the fitness landscape.

The initial values of the premise parameters are set in such a way that the centres of the MFs are equally spaced along the range of each input variable. For example, in the two-dimensional function, the range of each input variable is ( ), and the ANFIS used here contains 4^2 = 16 rules, with four membership functions assigned to each input variable. The total number of fitting parameters is 64, comprising 16 premise (nonlinear) parameters (8 centres of MFs and 8 standard deviations) and 48 consequent (linear) parameters. (We also tried an ANFIS model with 64 rules, because the first model is too simple to describe the highly nonlinear function.) To train the premise parameters we used DE and proposed two methods for constructing the initial population.

Method 1: A good uniform random initialization method is used to construct the population.

Method 2: For each input variable we have four MFs, and hence four MF centres, equally distributed along the range of ( ). Slip denotes the distance between two MF centres belonging to two neighbouring individuals of the population; in this paper, slip is calculated as below:

(13)

Then

(14)

Therefore the initial values of the MF centres in the population are distributed in an interval around them. The initial values for the standard deviations are calculated as below:

(15)

Therefore the initial values of the standard deviations of the MFs in the population are distributed in an interval around a fixed value, and the magnitude of the ( ) determines the magnitude of this interval.

C. Difference Vectors

Distances between individuals are a very good indication of the diversity of the current population, and of the order of magnitude of the step sizes that should be taken in order to contract the population to one point. If there are large distances between individuals, it stands to reason that individuals should make large step sizes, in order to explore as much of the search space as possible. On the other hand, if the distances between individuals are small, step sizes should be small, to exploit local areas. The parameter ( ) indicates the number of difference vectors used.

D. Trial Vector

The DE mutation operator produces a trial vector for each individual of the current population by mutating a target vector with a weighted differential. This trial vector will then be used by the crossover operator to produce offspring:

u_i(t) = x_{i_1}(t) + \beta \left( x_{i_2}(t) - x_{i_3}(t) \right)    (16)

where u_i(t) refers to the trial vector for parent x_i(t), and the way the target vector x_{i_1}(t) is selected depends on the DE strategy.
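To make the layer equations and the DE operators concrete, here is a minimal sketch in Python/NumPy. It is an illustrative reconstruction, not the authors' code: the 2-input, 2-MF network and the values of the weighting factor `beta` and crossover rate `cr` are arbitrary choices. It shows a forward pass through the five ANFIS layers of eqs. (1)-(8) and one DE/rand/1/bin generation built around the mutation of eq. (16).

```python
import numpy as np

def anfis_forward(x, centers, sigmas, p, q, r):
    """Five-layer ANFIS pass for 2 inputs with 2 Gaussian MFs each (4 rules)."""
    # Layer 1 (eqs. 1-2): Gaussian membership degrees, one row per input.
    mu = np.exp(-(x[:, None] - centers) ** 2 / (2.0 * sigmas ** 2))  # shape (2, 2)
    # Layer 2 (eq. 3): rule firing strengths via product (fuzzy AND).
    w = np.array([mu[0, i] * mu[1, j] for i in range(2) for j in range(2)])
    # Layer 3 (eq. 4): normalized firing strengths.
    w_bar = w / w.sum()
    # Layers 4-5 (eqs. 5-8): weighted sum of the local linear TSK models.
    f = p * x[0] + q * x[1] + r
    return float(np.dot(w_bar, f))

def de_rand_1_bin(pop, cost, beta=0.5, cr=0.9, seed=0):
    """One DE/rand/1/bin generation over a population of flat parameter vectors."""
    rng = np.random.default_rng(seed)
    n, d = pop.shape
    out = pop.copy()
    for i in range(n):
        # Mutation (eq. 16): trial vector from three distinct other individuals.
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        trial = pop[r1] + beta * (pop[r2] - pop[r3])
        # Binomial crossover: mix trial and parent genes; force one trial gene.
        mask = rng.random(d) < cr
        mask[rng.integers(d)] = True
        child = np.where(mask, trial, pop[i])
        # Greedy selection: the offspring replaces the parent only if it is better.
        if cost(child) < cost(pop[i]):
            out[i] = child
    return out
```

In the hybrid scheme described above, `cost` would evaluate the ANFIS training error for a candidate set of premise parameters, while the consequent parameters follow the GD updates of eqs. (10)-(12).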

312
E. General Notation

A general notation was adopted in the DE literature, namely DE/x/y/z [10]. In this notation, x refers to the method of selecting the target vector, y indicates the number of difference vectors used, and z indicates the crossover method used.

IV. SIMULATION RESULTS

In this section, the way DE is employed to update the ANFIS antecedent part parameters is shown. The antecedent part of ANFIS has two parameters which need training: the means and the standard deviations (STDEV). The membership functions are assumed Gaussian, as in equations (1), (2). The parameters of the consequent part are trained by gradient descent. Comparisons with ANFIS validate the performance of the DE approach.

A. How to Apply DE for Training ANFIS Parameters

There are two sets of trainable parameters in the antecedent part ({c, σ}), and each of these parameters has NMF genes, where NMF represents the number of MFs. The consequent part parameters (p, q, r) are also trained during the optimization algorithm.

We used a number of variations of the basic DE in our simulation. Different numbers of membership functions with different numbers of epochs are used for the different DE strategies.

The size of the population has a direct influence on the exploration ability of DE algorithms. The more individuals there are in the population, the more differential vectors are available, and the more directions can be explored. However, it should be kept in mind that the computational complexity per generation increases with the size of the population. Empirical studies provide the guideline ( ), where ( ) is the population size and ( ) is the number of genes for each parameter.

Initial mean parameters are distributed sequentially in the domain of identification. Standard deviation parameters are determined according to the number of MFs and the domain intervals.

B. Nonlinear Function Modeling

Example 1: Predicting chaotic dynamics. This example is to predict future values of a chaotic time series, which is generated by

\frac{dx(t)}{dt} = \frac{0.2\, x(t-\tau)}{1 + x^{10}(t-\tau)} - 0.1\, x(t)    (17)

Equation (17) is known as the chaotic Mackey-Glass differential delay equation. The initial conditions for x(0) and τ are 1.2 and 17, respectively. The ANFIS structure has 4 inputs and one output. We use 840/360 data as training/test [14].

The results are illustrated in Table 1. The results of the gradient descent method and the combined (DE & GD) method are also shown in Table 1, so that the results can be compared. Fig. 2 depicts the prediction of the Mackey-Glass time series.

Table 1. Results of simulating Mackey-Glass series prediction.

Antecedent training | Consequent training | MFs (each input) / Epochs | Test Error | Train Error
GD | GD | 2 / 500 | 5.2051e-005 | 7.2074e-005
GD | GD | 4 / 500 | 5.2158e-005 | 7.9852e-005
GD | GD | 4 / 250 | 1.1140e-004 | 1.4670e-004
GD | GD | 6 / 167 | 1.0586e-004 | 1.5202e-004
GD | GD | 8 / 250 | 5.8830e-005 | 8.4107e-005
GD | GD | 8 / 125 | 1.2090e-004 | 1.5564e-004
DE/rand/1/bin | GD | 2 / 30 | 4.4669e-004 | 4.1855e-004
DE/rand/1/bin | GD | 3 / 30 | 8.9599e-005 | 2.0459e-005
DE/rand/1/bin | GD | 4 / 30 | 1.6140e-005 | 4.1669e-006
DE/rand/1/bin | GD | 4 / 20 | 2.4909e-005 | 7.3149e-006
DE/rand/1/bin | GD | 4 / 10 | 5.8407e-005 | 7.8806e-006
DE/rand/1/bin | GD | 4 / 250 | 8.7885e-006 | 1.0085e-006
DE/rand/1/expo | GD | 2 / 30 | 3.8198e-004 | 3.3717e-004
DE/rand/1/expo | GD | 3 / 30 | 6.9059e-005 | 2.2811e-005
DE/rand/1/expo | GD | 4 / 30 | 1.2441e-005 | 1.0180e-005
DE/rand/1/expo | GD | 4 / 20 | 1.8841e-005 | 4.6117e-006
DE/rand/1/expo | GD | 4 / 10 | 1.1910e-005 | 1.0931e-005
DE/rand/1/expo | GD | 4 / 250 | 6.2660e-006 | 2.6258e-006
DE/best/1/bin | GD | 2 / 30 | 7.8053e-004 | 7.2889e-004
DE/best/1/bin | GD | 3 / 30 | 5.7960e-005 | 1.6248e-005
DE/best/1/bin | GD | 4 / 30 | 1.5628e-005 | 3.7171e-006
DE/best/1/bin | GD | 4 / 20 | 3.8646e-005 | 6.2848e-006
DE/best/1/bin | GD | 4 / 10 | 4.9271e-005 | 2.9141e-005
DE/best/1/bin | GD | 4 / 250 | 1.2390e-005 | 2.6935e-006
DE/best/1/expo | GD | 2 / 30 | 3.3349e-004 | 6.5970e-005
DE/best/1/expo | GD | 3 / 30 | 3.9061e-005 | 1.4706e-005
DE/best/1/expo | GD | 4 / 30 | 4.1868e-005 | 1.5389e-005
DE/best/1/expo | GD | 4 / 20 | 1.0251e-005 | 1.4105e-005
DE/best/1/expo | GD | 4 / 10 | 3.3630e-005 | 2.2836e-005
DE/best/1/expo | GD | 4 / 250 | 5.5436e-006 | 5.4491e-006
DE/rand/3/bin | GD | 2 / 30 | 7.2959e-005 | 5.6971e-005
DE/rand/3/bin | GD | 4 / 10 | 2.4233e-005 | 8.7561e-007
DE/rand/3/bin | GD | 4 / 100 | 8.9657e-006 | 1.8663e-006
DE/rand/3/expo | GD | 2 / 30 | 8.6586e-005 | 6.1899e-005
DE/rand/3/expo | GD | 4 / 10 | 1.1240e-005 | 2.7921e-005
DE/rand/3/expo | GD | 4 / 100 | 1.0796e-005 | 3.4517e-006
DE/best/3/bin | GD | 2 / 30 | 1.1379e-004 | 2.6270e-006
DE/best/3/bin | GD | 4 / 3 | 2.9242e-005 | 4.2949e-006
DE/best/3/bin | GD | 4 / 10 | 3.5245e-005 | 3.6919e-006
DE/best/3/expo | GD | 2 / 30 | 3.3180e-004 | 6.3421e-006
DE/best/3/expo | GD | 4 / 3 | 2.5915e-005 | 4.2255e-006
DE/best/3/expo | GD | 4 / 10 | 4.0498e-005 | 4.5050e-006
DE/current-to-best/1+1/bin | GD | 2 / 30 | 6.4933e-004 | 1.2009e-004
DE/current-to-best/1+1/bin | GD | 4 / 3 | 9.8025e-005 | 7.3219e-006
DE/current-to-best/1+1/bin | GD | 4 / 10 | 4.2559e-004 | 3.5356e-004
DE/current-to-best/1+1/bin & DE/rand/1/bin | GD | 2 / 30 | 4.5198e-004 | 2.0101e-004
DE/current-to-best/1+1/bin & DE/rand/1/bin | GD | 4 / 10 | 6.0014e-006 | 2.7051e-006
DE/current-to-best/1+1/bin & DE/rand/1/bin | GD | 4 / 30 | 3.5323e-005 | 1.6468e-006
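The overall hybrid scheme of Section III, with DE searching the premise parameters and GD fitting the consequent parameters inside each cost evaluation, can be sketched on a toy one-basis model. This is an illustrative reconstruction, not the authors' code: the single Gaussian basis standing in for ANFIS, the population size, `beta`, `cr`, and the learning rate are all arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "network": one Gaussian basis with premise (c, s) and linear consequent a.
def model(x, c, s, a):
    return a * np.exp(-(x - c) ** 2 / (2.0 * s ** 2))

# Training data generated from a target with known parameters.
x = np.linspace(-2.0, 2.0, 80)
y = model(x, 0.5, 0.8, 1.5)

def mse(c, s, a):
    return float(np.mean((model(x, c, s, a) - y) ** 2))

def fit_consequent(c, s, a0=0.0, eta=0.5, epochs=50):
    """GD on the linear consequent parameter (cf. eqs. 10-12) for fixed premises."""
    a, phi = a0, np.exp(-(x - c) ** 2 / (2.0 * s ** 2))
    for _ in range(epochs):
        a -= eta * np.mean(2.0 * (a * phi - y) * phi)  # dE/da
    return a

def cost(v):
    """Error of a premise candidate, with its consequent refined by GD."""
    c, s = v
    return mse(c, s, fit_consequent(c, s))

# DE/rand/1/bin over the premise parameters (c, s).
pop = rng.uniform([-2.0, 0.2], [2.0, 2.0], size=(12, 2))
beta, cr = 0.5, 0.9
for _ in range(30):  # generations
    for i in range(len(pop)):
        r1, r2, r3 = rng.choice([j for j in range(len(pop)) if j != i], 3, replace=False)
        trial = pop[r1] + beta * (pop[r2] - pop[r3])
        mask = rng.random(2) < cr
        mask[rng.integers(2)] = True
        child = np.where(mask, trial, pop[i])
        child[1] = max(child[1], 1e-3)  # keep the MF width positive
        if cost(child) < cost(pop[i]):
            pop[i] = child

best = min(pop, key=cost)
```

The same loop applies to the full ANFIS: `cost` would evaluate the training error over all MF centres and widths, with the consequent parameters updated by GD per candidate.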

Example 2: Identification of a nonlinear dynamic system. In this example, the nonlinear system model with multiple time delays is described as [16]

y(k+1) = f\big( y(k),\, y(k-1),\, y(k-2),\, u(k),\, u(k-1) \big)    (18)

where

f(x_1, x_2, x_3, x_4, x_5) = \frac{x_1 x_2 x_3 x_5 (x_3 - 1) + x_4}{1 + x_2^2 + x_3^2}    (19)

Here, the current output of the plant depends on three previous outputs and two previous inputs. An ANFIS structure with five input nodes, fed with the appropriate past values of y and u, was used. The system input signal u(k) is as given in [16]:

(20)

Fig. 2. Mackey-Glass prediction. (a) Using DE to train the antecedent part parameters in the ANFIS structure. (b) Using GD to train the antecedent and consequent part parameters in the ANFIS structure.

Table 2. Results of simulating nonlinear dynamic system prediction.

Antecedent training | Consequent training | MFs (each input) / Epochs | Test Error | Train Error
GD | GD | 2 / 500 | 0.0036 | 8.5324e-005
GD | GD | 4 / 250 | 0.0017 | 7.7851e-005
GD | GD | 4 / 30 | 0.0019 | 2.5545e-004
GD | GD | 8 / 125 | 7.4678e-004 | 2.8731e-005
GD | GD | 8 / 10 | 0.0025 | 2.3893e-004
DE/rand/1/bin | GD | 4 / 10 | 0.0057 | 0.0283
DE/rand/1/bin | GD | 4 / 30 | 0.0050 | 0.0139
DE/rand/1/bin | GD | 8 / 10 | 0.0048 | 6.6945e-004
DE/rand/1/expo | GD | 4 / 10 | 0.0163 | 0.0365
DE/rand/1/expo | GD | 4 / 30 | 0.0224 | 0.0291
DE/rand/1/expo | GD | 8 / 10 | 0.0084 | 2.9706e-004
DE/best/1/bin | GD | 4 / 10 | 0.0125 | 0.0207
DE/best/1/bin | GD | 4 / 30 | 0.0067 | 0.0129
DE/best/1/bin | GD | 8 / 10 | 0.0034 | 0.0036
DE/best/1/expo | GD | 4 / 10 | 0.0010 | 1.8155e-004
DE/best/1/expo | GD | 4 / 30 | 0.0090 | 0.0133
DE/best/1/expo | GD | 8 / 10 | 0.0012 | 0.0071
DE/rand/3/bin | GD | 4 / 10 | 0.005 | 0.0290
DE/rand/3/bin | GD | 4 / 30 | 0.0041 | 0.0119
DE/rand/3/bin | GD | 8 / 10 | 0.0072 | 3.1786e-004
DE/rand/3/expo | GD | 4 / 10 | 0.0066 | 0.0276
DE/rand/3/expo | GD | 4 / 30 | 0.0054 | 0.0144
DE/rand/3/expo | GD | 8 / 10 | 0.0061 | 3.5513e-004
DE/best/3/bin | GD | 4 / 10 | 0.0049 | 0.0196
DE/best/3/bin | GD | 4 / 30 | 0.0061 | 0.0126
DE/best/3/bin | GD | 8 / 10 | 0.0100 | 3.3399e-004
DE/best/3/expo | GD | 4 / 10 | 0.0064 | 0.0213
DE/best/3/expo | GD | 4 / 30 | 0.0050 | 0.0119
DE/best/3/expo | GD | 8 / 10 | 0.0123 | 5.5242e-004
DE/current-to-best/1+1/bin | GD | 4 / 10 | 0.0062 | 0.0307
DE/current-to-best/1+1/bin | GD | 4 / 30 | 0.0053 | 0.0178
DE/current-to-best/1+1/bin | GD | 8 / 10 | 0.0052 | 2.8258e-004
DE/current-to-best/1+1/bin & DE/rand/1/bin | GD | 4 / 10 | 0.0350 | 0.0527
DE/current-to-best/1+1/bin & DE/rand/1/bin | GD | 4 / 30 | 0.0302 | 0.0384
DE/current-to-best/1+1/bin & DE/rand/1/bin | GD | 8 / 10 | 0.0027 | 2.6240e-004

The trigonometric network used here contains two hidden layers with four and two neurons (sine and cosine, with or without frequency and phase) in the hidden layers, five inputs, and one output.

The ANFIS structure applied here contains five inputs and different numbers of membership functions for each input. We use 597/1000 data as training/test.

The results of the different methods are illustrated in Table 2, so that they can be compared. Fig. 3 depicts the identification of the mentioned nonlinear system.

Fig. 3. Nonlinear dynamic system prediction. (a) Using DE to train the antecedent part parameters in the ANFIS structure. (b) Using GD to train the antecedent and consequent part parameters in the ANFIS structure.

As the results suggest, the training error achieved using the ANFIS structure with DE training the antecedent part parameters is better than that obtained by training all parameters with GD.

V. CONCLUSIONS

In this paper, a population-based optimization algorithm, the Differential Evolution algorithm, is proposed in order to train the antecedent part parameters of the ANFIS structure. In our method, we used a number of variations of the basic DE to update the antecedent part parameters. The simulation results indicate that, for complex nonlinear systems, the new approach gives better results than training all parameters with GD alone. Other new algorithms, preferably those that have roots in nature, may also be employed in the ANFIS structure to help it reach the globally optimal solution. Since these algorithms are derivative-free, and the derivatives needed to train the antecedent part parameters are very difficult to calculate, the complexity of these approaches is lower than that of other training algorithms like GD. On the other hand, counting the computations required by each algorithm shows that DE needs fewer of them than back propagation to achieve the same error goal. Also, the local minimum problem of the GD algorithm is avoided by training with the DE algorithm. The effectiveness of the proposed DE method was demonstrated by applying it to the identification of nonlinear systems.

VI. REFERENCES

[1] J. Zhang and A. J. Morris, "Recurrent neuro-fuzzy networks for nonlinear process modelling," IEEE Trans. Neural Networks, vol. 10, no. 2, pp. 313-326, Feb. 1999.
[2] C. H. Lee and C. C. Teng, "Identification and control of dynamic systems using recurrent fuzzy neural networks," IEEE Trans. Fuzzy Systems, vol. 8, no. 4, pp. 349-366, Aug. 2000.
[3] M. Sugeno and G. T. Kang, "Structure identification of fuzzy model," Fuzzy Sets and Systems, pp. 15-33, 1988.
[4] T. Takagi and M. Sugeno, "Fuzzy identification of systems and its application to modelling and control," IEEE Trans. Systems, Man, and Cybernetics, pp. 116-132, 1985.
[5] R. Alcala, J. Casillas, O. Cordon, and F. Herrera, "Learning TSK rule-based system from approximate ones by means of MOGUL methodology," University of Granada, Spain, Oct. 2000.
[6] M. Männle, "FTSM: Fast Takagi-Sugeno fuzzy modelling," University of Karlsruhe, 1999.
[7] C. F. Juang, "A TSK-type recurrent fuzzy network for dynamic systems processing by neural network and genetic algorithms," IEEE Trans. Fuzzy Systems, vol. 10, no. 2, pp. 155-170, Apr. 2002.
[8] J.-S. R. Jang, "ANFIS: Adaptive-network-based fuzzy inference system," IEEE Trans. Systems, Man, and Cybernetics, vol. 23, no. 3, May/June 1993.
[9] J.-S. R. Jang, C.-T. Sun, and E. Mizutani, Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence, Prentice-Hall, 1997.
[10] A. P. Engelbrecht, Computational Intelligence: An Introduction, 2nd ed., John Wiley & Sons, 2007.
[11] R. R. Yager and L. A. Zadeh, Fuzzy Sets, Neural Networks, and Soft Computing, Van Nostrand Reinhold, 1994.
[12] M. Kumar and D. P. Garg, "Intelligent learning of fuzzy logic controllers via neural network and genetic algorithm," Proceedings of the 2004 JUSFA Japan-USA Symposium on Flexible Automation, Denver, Colorado, 2004.
[13] O. Nelles, Nonlinear System Identification: From Classical Approaches to Neural Networks and Fuzzy Models, Springer, 2000.
[14] M. A. Shoorehdeli, M. Teshnehlab, and A. K. Sedigh, "Training ANFIS as an identifier with intelligent hybrid stable learning algorithm based on particle swarm optimization and extended Kalman filter," Fuzzy Sets and Systems, vol. 160, pp. 922-948, 2009.
[15] K. S. Narendra and K. Parthasarathy, "Identification and control of dynamical systems using neural networks," IEEE Trans. Neural Networks, vol. 1, pp. 4-27, Jan. 1990.

[16] C. J. Lin and Y. J. Xu, "A self-adaptive neural fuzzy network with group-based symbiotic evolution and its prediction applications," Fuzzy Sets and Systems, vol. 157, pp. 1036-1056, 2006.
[17] M. M. Gupta, L. Jin, and N. Homma, Static and Dynamic Neural Networks: From Fundamentals to Advanced Theory, John Wiley & Sons, 2003.
[18] R. Alcalá, J. Casillas, O. Cordón, F. Herrera, and S. J. I. Zwir, Techniques for Learning and Tuning Fuzzy Rule-Based Systems for Linguistic Modeling and their Application, E.T.S. de Ingeniería Informática, University of Granada, 18071 Granada, Spain, 1999.


