
Vol. 18 No. 1 Journal of Systems Science and Complexity Jan., 2005

AN INTELLIGENT CONTROL SYSTEM BASED ON RECURRENT NEURAL FUZZY NETWORK AND ITS APPLICATION TO CSTR

JIA Li∗ YU Jinshou


(Research Institute of Automation, East China University of Science and Technology, Shanghai
200237, China. Email: chejial@nus.edu.sg; jshyu@vip.sina.com)

Abstract. In this paper, an intelligent control system based on recurrent neural fuzzy
network is presented for complex, uncertain and nonlinear processes, in which a recurrent
neural fuzzy network is used as controller (RNFNC) to control a process adaptively and a
recurrent neural network based on recursive predictive error algorithm (RNNM) is utilized
to estimate the gradient information ∂ ŷ/∂u for optimizing the parameters of controller.
Compared with many neural fuzzy control systems, it uses a recurrent neural network to
realize the fuzzy controller. Moreover, the recursive predictive error algorithm (RPE) is
implemented to construct the RNNM on line. Lastly, in order to evaluate the performance of the
proposed control system, it is applied to a continuous stirred tank reactor (CSTR).
Simulation comparisons, based on control effect and output error, with a general fuzzy controller
and a feed-forward neural fuzzy network controller (FNFNC), are conducted. In addition, the rates
of convergence of the RNNM using the RPE algorithm and the gradient learning algorithm,
respectively, are also compared. The results show that the proposed control system is better
suited for controlling uncertain and nonlinear processes.
Key words. Recurrent neural network, neural fuzzy system, adaptive control, recursive
prediction error, CSTR.

1 Introduction
A fuzzy logic controller (FLC), based on conditional linguistic statements and approximate
reasoning, has been utilized in many fields where the controlled systems are either complex
or model free. The successful use of fuzzy controllers in these fields is due to the fact that
the FLC applies expert control rules, expressed as conditional linguistic statements on the
relationships of system variables, and has the merit of emulating the behavior of a skilled
operator. But in many situations it is hard to acquire the knowledge and experience of a
skilled operator. FLC techniques therefore suffer from several problems: (i) it is difficult to
determine the fuzzy control rules properly; (ii) it is difficult to translate expert knowledge
into appropriate fuzzy rules; (iii) the proper choice of the structure of the fuzzy controller
is an important issue in designing a fuzzy control system; (iv) a static fuzzy controller has
no mechanism for adapting to real-time plant changes.
To overcome the above-mentioned drawbacks, there is a growing interest in bringing the
learning abilities of neural networks to automate and realize the design of fuzzy logic control
systems.
Received April 19, 2002. Revised July 7, 2003.
* The author is now working as a research fellow in the Department of Chemical & Biomolecular Engineering,
Faculty of Engineering, National University of Singapore, Singapore, 119260.
Many characteristics of the neural fuzzy network contribute to this trend; compared with
general neural networks, it offers faster convergence and a smaller network size.
Moreover, the neural fuzzy network approach automates the design of fuzzy rules
and makes the combinational learning of numerical data as well as expert knowledge expressed
as fuzzy if-then rules possible. In contrast to the pure neural network or fuzzy system, the
neural fuzzy method possesses both of their advantages; it brings the low-level learning and
computation power of neural networks into fuzzy logic systems, and brings the high-level,
human-like thinking and reasoning of fuzzy logic systems into neural networks[1−7].
However, a major drawback of the existing neural fuzzy networks is that their application
domain is limited to static problems due to their inherent feed-forward network structure;
they are inefficient for temporal problems. Hence a recurrent neural fuzzy network capable
of solving temporal problems is needed. A recurrent neural network, which naturally involves
dynamic elements in the form of feedback connections used as internal memories, has been
attracting great interest in the past few years. Unlike the feed-forward neural network, whose
output is a function of its current inputs only and which is limited to static mapping, a
recurrent neural network performs dynamic mapping[8−10].
In this paper, in view of the above consideration, an intelligent control system based on
recurrent neural fuzzy network is presented for complex, uncertain and nonlinear processes.
In the proposed system, a recurrent neural fuzzy network is used as controller (RNFNC) to
control a process adaptively and a recurrent neural network based on recursive predictive error
algorithm (RNNM) is utilized to estimate the gradient information ∂ ŷ/∂u for optimizing the
parameters of the controller. Compared with many neural fuzzy control systems, our method uses
a recurrent neural network to realize the fuzzy controller. Moreover, the recursive predictive
error algorithm (RPE) is implemented to construct the RNNM on line. Lastly, in order to evaluate
the performance of the proposed control system, it is applied to a continuous stirred tank reactor (CSTR).
Simulation comparisons, based on control effect and output error, with general fuzzy controller
and feed-forward neural fuzzy network controller (FNFNC) are conducted. In addition, the rates
of convergence of RNNM respectively using RPE algorithm and gradient learning algorithm
are also compared. The results show that the proposed control system is better for controlling
complex, uncertain and nonlinear processes.
This paper is organized as follows. In Section 2 the proposed intelligent system based on
recurrent neural fuzzy network will be discussed. The structure and learning algorithm of
the controller RNFNC will be summarized in Section 3. In Section 4 the structure and RPE
algorithm of the estimator RNNM will be described. Subsequently, in Section 5 we will present
the simulation results for CSTR. Finally, the conclusions derived in this work are summarized
in Section 6.

2 An Intelligent System Based on Recurrent Neural Fuzzy Network


The presented intelligent system based on the recurrent neural fuzzy network is shown in Fig.1, where
a recurrent neural fuzzy network is used as controller (RNFNC) to control a process adaptively
and a recurrent neural network based on recursive predictive error algorithm (RNNM) is utilized
to estimate the gradient information ∂ŷ/∂u for optimizing the parameters of the controller. In
Fig.1, u(t), yr(t), y(t) and ŷ(t) respectively represent the input of the controlled process, the
desired set-point, the actual process output and the output of the RNNM. Two input variables, the
error e(t) and the change-of-error ce(t), are mapped through X-mapping functions (abbreviated as
X MAP), which are usually chosen as hyperbolic tangent functions, before being fed into the
RNFNC. This yields x1 and x2, which lie in the range [−1, 1]. They will be discussed in detail
in the following section.
Fig.1 The schematic diagram of the intelligent control system based on the recurrent neural fuzzy network

3 The Controller RNFNC


3.1 The RNFNC Structure

Fig.2 The structure of RNFNC

Fig.3 The structure of a recurrent neuron
Fig.4 The structure of RNNM



In this section, we describe the structure and functions of the proposed RNFNC. The
structure of the RNFNC is shown in Fig.2; it consists of five layers. In the RNFNC there are
two kinds of neurons: linear neurons, represented by white circles, and recurrent neurons,
represented by hatched circles, as shown in Fig.3. The RNFNC is organized into two input
variables, seven term nodes for each input variable, 7 × 7 rule nodes and one output node.
Layer one accepts the input variables; its nodes represent input linguistic variables. Layer two
calculates Gaussian membership values. Layer three forms the fuzzy rule base: links before
layer three represent the preconditions of the rule nodes, and links after layer three represent
the consequences of the rule nodes. Layer four is the defuzzification layer. It has two neurons:
one connects with all neurons of the third layer through the weights wj, and the other connects
with all neurons of the third layer through unity weights. Layer five is the output layer,
whose output is u(t).
Layer 1 Input layer

There are two neurons in this layer, which simply transmit the input variables to the next
layer, expressed as follows:
Input
$$I_i^{(1)}(k) = x_i(k), \quad i = 1, 2, \qquad x_1(k) = f(e(k)), \quad x_2(k) = f(ce(k)), \qquad f(x) = \frac{1 - \exp(-\xi x)}{1 + \exp(-\xi x)}, \qquad (1)$$
Output
$$O_i^{(1)}(k) = I_i^{(1)}(k). \qquad (2)$$

Layer 2 Linguistic term layer

Nodes in the second layer are called input term nodes, each of which corresponds to one
linguistic label of an input variable. Each node in this layer calculates the membership value
specifying the degree to which an input value belongs to a fuzzy set. A Gaussian membership
function is used in this layer, because a multidimensional Gaussian membership function can
be easily decomposed into the product of one-dimensional membership functions:
$$\mu_{A_i^j}(x_i) = \exp\Bigl(-\frac{(x_i - c_{ji})^2}{\sigma_{ji}^2}\Bigr), \qquad (3)$$
where A_i^j is the fuzzy set corresponding to the ith antecedent of the jth rule, and c_{ji} and
σ_{ji} are, respectively, the center and width of the membership function, and
Input
$$I_{li}^{(2)}(k) = O_i^{(1)}(k) + \theta_{li}\,O_{li}^{(2)}(k-1), \qquad (4)$$
Output
$$O_{li}^{(2)}(k) = \exp\Bigl(-\frac{(I_{li}^{(2)}(k) - c_{li})^2}{\sigma_{li}^2}\Bigr), \quad i = 1, 2, \quad l = 1, 2, \cdots, N, \qquad (5)$$
where θ_{li} denotes the link weight of the feedback unit. It is clear that the input of this layer
contains the memory terms O_{li}^{(2)}(k-1), which store the past information of the network. This is
the essential difference between the FNN and the RNN. Each node in this layer has three adjustable
parameters: c_{li}, σ_{li} and θ_{li}.
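To make the recurrent term node concrete, the following minimal Python sketch (our own illustration, not part of the original paper; the class name and the device of storing the previous activation in an attribute are ours) evaluates Eqs. (4)–(5) for a single layer-2 node. With θ = 0 it reduces to the static membership node of a feed-forward neural fuzzy network, which is exactly the difference stressed above.

import math

class RecurrentTermNode:
    """One layer-2 node of the RNFNC: a Gaussian membership function whose
    input is the current signal plus a weighted copy of the node's own
    previous output (Eqs. (4)-(5))."""

    def __init__(self, c, sigma, theta=0.0):
        self.c = c            # center of the Gaussian membership function
        self.sigma = sigma    # width of the Gaussian membership function
        self.theta = theta    # weight of the self-feedback (memory) link
        self.prev_out = 0.0   # O^(2)(k-1), the stored past activation

    def forward(self, x):
        net = x + self.theta * self.prev_out                      # Eq. (4)
        out = math.exp(-((net - self.c) ** 2) / self.sigma ** 2)  # Eq. (5)
        self.prev_out = out   # remember the activation for the next step
        return out

node = RecurrentTermNode(c=0.5, sigma=0.25, theta=0.1)
print(node.forward(0.4), node.forward(0.4))  # same input, different outputs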

Layer 3 Rule layer

This layer consists of 49 neurons, each of which computes the fired strength of a rule. The
input and output of every neuron are represented as follows:
Input
$$I_j^{(3)}(k) = O_j^{(2)}(k), \qquad (6)$$
Output
$$O_j^{(3)}(k) = \prod_{i=1}^{M} I_{ji}^{(3)}(k), \qquad (7)$$
where the product is taken over the M membership values feeding rule j (here M = 2).

Layer 4 Defuzzification layer

There are two neurons in the fourth layer. One of them connects with all neurons of the
third layer through the weights w_j and the other connects with all neurons of the third layer
through unity weights, described as follows:

Neuron 1
Input
$$I_1^{(4)}(k) = O_j^{(3)}(k),$$
Output
$$O_1^{(4)}(k) = \sum_{j=1}^{N} w_j O_j^{(3)}(k). \qquad (8)$$

Neuron 2
Input
$$I_2^{(4)}(k) = O_j^{(3)}(k),$$
Output
$$O_2^{(4)}(k) = \sum_{j=1}^{N} O_j^{(3)}(k). \qquad (9)$$

Layer 5 Output layer

The last layer has a single neuron that computes the control output. It is connected with the
two neurons of the fourth layer through unity weights. The integration and activation functions
of the node can be expressed as:
Input
$$I^{(5)}(k) = O_q^{(4)}(k), \quad q = 1, 2, \qquad (10)$$
Output
$$O^{(5)}(k) = \frac{O_1^{(4)}(k)}{O_2^{(4)}(k)}, \qquad (11)$$
$$u(k) = K_u\,O^{(5)}(k). \qquad (12)$$
Finally, the overall representation of the two inputs x_1, x_2 and the output u is
$$u(k) = K_u\,\frac{\sum_{j=1}^{N} w_j \prod_{i=1}^{2} \exp\Bigl[-\frac{(x_i(k) + O_{ji}^{(2)}(k-1)\theta_{ji} - c_{ji})^2}{\sigma_{ji}^2}\Bigr]}{\sum_{j=1}^{N} \prod_{i=1}^{2} \exp\Bigl[-\frac{(x_i(k) + O_{ji}^{(2)}(k-1)\theta_{ji} - c_{ji})^2}{\sigma_{ji}^2}\Bigr]}, \qquad (13)$$

where w_j, θ_{ji}, c_{ji} and σ_{ji} are the tuning parameters. Obviously, using the RNFNC, the same
inputs at different times yield different outputs. If instead the FNFNC is used to realize the
fuzzy controller, the two inputs x_1, x_2 and the output u have the following representation:
$$u(k) = K_u\,\frac{\sum_{j=1}^{N} w_j \prod_{i=1}^{2} \exp\Bigl[-\frac{(x_i(k) - c_{ji})^2}{\sigma_{ji}^2}\Bigr]}{\sum_{j=1}^{N} \prod_{i=1}^{2} \exp\Bigl[-\frac{(x_i(k) - c_{ji})^2}{\sigma_{ji}^2}\Bigr]}. \qquad (14)$$

Clearly, the RNFNC features dynamic mapping with feedback and more tuning parameters
than FNFNC.
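To summarize how the five layers combine into Eq. (13), the sketch below gives a minimal Python forward pass of the controller. It is our own illustration under simplifying assumptions (a full 7 × 7 rule grid, uniformly spaced initial centers, random consequent weights); it is not the authors' implementation.

import numpy as np

class RNFNC:
    """Minimal forward pass of the recurrent neural fuzzy controller, Eq. (13):
    N term nodes per input, N*N rules, one output scaled by Ku."""

    def __init__(self, N=7, Ku=5.0, seed=0):
        rng = np.random.default_rng(seed)
        self.Ku = Ku
        self.c = np.tile(np.linspace(-1.0, 1.0, N), (2, 1))  # centers, shape (2, N)
        self.sigma = np.full((2, N), 0.25)                    # widths
        self.theta = np.zeros((2, N))                         # feedback weights
        self.w = rng.uniform(-1.0, 1.0, N * N)                # rule consequents w_j
        self.prev_mu = np.zeros((2, N))                       # O^(2)(k-1)

    def forward(self, x1, x2):
        x = np.array([[x1], [x2]])
        # layers 1-2: recurrent Gaussian memberships, Eqs. (4)-(5)
        net = x + self.theta * self.prev_mu
        mu = np.exp(-((net - self.c) ** 2) / self.sigma ** 2)
        self.prev_mu = mu
        # layer 3: fired strength of each rule = product of its two memberships
        fired = np.outer(mu[0], mu[1]).ravel()
        # layers 4-5: weighted-average defuzzification, Eqs. (8)-(12)
        return self.Ku * np.dot(self.w, fired) / np.sum(fired)

ctrl = RNFNC()
print(ctrl.forward(0.2, -0.1))

Because prev_mu is refreshed at every call, once the feedback weights θ are adapted away from their zero initialization, repeating the same (x1, x2) generally yields different control values, which is the dynamic-mapping property claimed for the RNFNC.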
3.2 Learning Algorithm
Firstly, instead of using random numbers, we initialize w_j, c_{li} and σ_{li} according to Table 1
of [11], and θ_{li} is set to zero. Simulation results show that these initial settings generally
give a more meaningful and stable starting point than random numbers. Then we use a supervised
algorithm to optimize them. Its goal is to minimize the error function
$$J_c = \frac{1}{2}(y_r - y)^2, \qquad (15)$$
where y is the current output and y_r is the desired output. For a training data pair, starting
at the input nodes, a forward pass is used to compute the activity levels of all the nodes, and
a backward pass is used to compute the derivatives of the error function E with respect to all
the parameters.
That is, in order to train w_j, we use
$$w_j(k+1) = w_j(k) - \alpha\frac{\partial E}{\partial w_j} + \beta\bigl(w_j(k) - w_j(k-1)\bigr), \qquad (16)$$
where α is the learning factor, which affects the speed of learning, and β is the inertia factor.
From (8)–(15), we get
$$\frac{\partial E}{\partial w_j} = -K_u\,(y_r(k) - y(k))\,\frac{\partial y}{\partial u}\,\frac{O_j^{(3)}(k)}{\sum_j O_j^{(3)}(k)}. \qquad (17)$$

Similarly, the update laws of c_{l1}, c_{l2}, σ_{l1}, σ_{l2}, θ_{l1}, θ_{l2} are
$$c_{l1}(k+1) = c_{l1}(k) - \alpha\frac{\partial E}{\partial c_{l1}} + \beta\bigl(c_{l1}(k) - c_{l1}(k-1)\bigr), \qquad (18)$$
$$c_{l2}(k+1) = c_{l2}(k) - \alpha\frac{\partial E}{\partial c_{l2}} + \beta\bigl(c_{l2}(k) - c_{l2}(k-1)\bigr), \qquad (19)$$
$$\sigma_{l1}(k+1) = \sigma_{l1}(k) - \alpha\frac{\partial E}{\partial \sigma_{l1}} + \beta\bigl(\sigma_{l1}(k) - \sigma_{l1}(k-1)\bigr), \qquad (20)$$
$$\sigma_{l2}(k+1) = \sigma_{l2}(k) - \alpha\frac{\partial E}{\partial \sigma_{l2}} + \beta\bigl(\sigma_{l2}(k) - \sigma_{l2}(k-1)\bigr), \qquad (21)$$
$$\theta_{l1}(k+1) = \theta_{l1}(k) - \alpha\frac{\partial E}{\partial \theta_{l1}} + \beta\bigl(\theta_{l1}(k) - \theta_{l1}(k-1)\bigr), \qquad (22)$$
$$\theta_{l2}(k+1) = \theta_{l2}(k) - \alpha\frac{\partial E}{\partial \theta_{l2}} + \beta\bigl(\theta_{l2}(k) - \theta_{l2}(k-1)\bigr), \qquad (23)$$

where
$$\frac{\partial E}{\partial c_{l1}} = -2K_u\,(y_r(k)-y(k))\,\frac{\partial y}{\partial u}\,\frac{\bigl(O_1^{(1)}(k)+\theta_{l1}O_{l1}^{(2)}(k-1)-c_{l1}(k)\bigr)O_{l1}^{(2)}(k)}{\sigma_{l1}^2\bigl(\sum_j O_j^{(3)}(k)\bigr)^2}\cdot\sum_q O_{q2}^{(2)}(k)\Bigl(w_{(l-1)N+q}\sum_j O_j^{(3)}(k)-\sum_j O_j^{(3)}(k)w_j\Bigr), \qquad (24)$$
$$\frac{\partial E}{\partial c_{l2}} = -2K_u\,(y_r(k)-y(k))\,\frac{\partial y}{\partial u}\,\frac{\bigl(O_2^{(1)}(k)+\theta_{l2}O_{l2}^{(2)}(k-1)-c_{l2}(k)\bigr)O_{l2}^{(2)}(k)}{\sigma_{l2}^2\bigl(\sum_j O_j^{(3)}(k)\bigr)^2}\cdot\sum_q O_{q2}^{(2)}(k)\Bigl(w_{(q-1)N+l}\sum_j O_j^{(3)}(k)-\sum_j O_j^{(3)}(k)w_j\Bigr), \qquad (25)$$
$$\frac{\partial E}{\partial \sigma_{l1}} = -2K_u\,(y_r(k)-y(k))\,\frac{\partial y}{\partial u}\,\frac{\bigl(O_1^{(1)}(k)+\theta_{l1}O_{l1}^{(2)}(k-1)-c_{l1}(k)\bigr)^2 O_{l1}^{(2)}(k)}{\sigma_{l1}^3\bigl(\sum_j O_j^{(3)}(k)\bigr)^2}\cdot\sum_q O_{q2}^{(2)}(k)\Bigl(w_{(l-1)N+q}\sum_j O_j^{(3)}(k)-\sum_j O_j^{(3)}(k)w_j\Bigr), \qquad (26)$$
$$\frac{\partial E}{\partial \sigma_{l2}} = -2K_u\,(y_r(k)-y(k))\,\frac{\partial y}{\partial u}\,\frac{\bigl(O_2^{(1)}(k)+\theta_{l2}O_{l2}^{(2)}(k-1)-c_{l2}(k)\bigr)^2 O_{l2}^{(2)}(k)}{\sigma_{l2}^3\bigl(\sum_j O_j^{(3)}(k)\bigr)^2}\cdot\sum_q O_{q2}^{(2)}(k)\Bigl(w_{(q-1)N+l}\sum_j O_j^{(3)}(k)-\sum_j O_j^{(3)}(k)w_j\Bigr), \qquad (27)$$
$$\frac{\partial E}{\partial \theta_{l1}} = -2K_u\,(y_r(k)-y(k))\,O_{l1}^{(2)}(k-1)\,\frac{\partial y}{\partial u}\,\frac{\bigl(O_1^{(1)}(k)+\theta_{l1}O_{l1}^{(2)}(k-1)-c_{l1}(k)\bigr)O_{l1}^{(2)}(k)}{\sigma_{l1}^2\bigl(\sum_j O_j^{(3)}(k)\bigr)^2}\cdot\sum_q O_{q2}^{(2)}(k)\Bigl(w_{(l-1)N+q}\sum_j O_j^{(3)}(k)-\sum_j O_j^{(3)}(k)w_j\Bigr), \qquad (28)$$
$$\frac{\partial E}{\partial \theta_{l2}} = -2K_u\,(y_r(k)-y(k))\,O_{l2}^{(2)}(k-1)\,\frac{\partial y}{\partial u}\,\frac{\bigl(O_2^{(1)}(k)+\theta_{l2}O_{l2}^{(2)}(k-1)-c_{l2}(k)\bigr)O_{l2}^{(2)}(k)}{\sigma_{l2}^2\bigl(\sum_j O_j^{(3)}(k)\bigr)^2}\cdot\sum_q O_{q2}^{(2)}(k)\Bigl(w_{(q-1)N+l}\sum_j O_j^{(3)}(k)-\sum_j O_j^{(3)}(k)w_j\Bigr). \qquad (29)$$

Clearly, the only unknown in the proposed learning algorithm is the gradient of the output with
respect to the control input, ∂y/∂u, which needs to be estimated. In our work, a recurrent
neural network based on the recursive predictive error algorithm (RNNM) is used to estimate this
gradient information; during learning, ∂y/∂u is calculated explicitly with the process model
output ŷ substituted for the process output y, namely,
$$\frac{\partial y(k)}{\partial u(k)} \approx \frac{\partial \hat y(k)}{\partial u(k)}. \qquad (30)$$
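As a concrete illustration of the update law (16)–(17) combined with the approximation (30), the following Python sketch performs one training step for the consequent weights w_j. The function and variable names are ours, and dy_du stands for the estimate ∂ŷ/∂u delivered by the RNNM described in the next section; this is a sketch, not the authors' code. The centers, widths and feedback weights are updated analogously via (18)–(29).

import numpy as np

def update_consequents(w, w_prev, fired, yr, y, dy_du, Ku, alpha=0.04, beta=0.002):
    """One step of Eq. (16) for the rule consequents w_j.

    w, w_prev : current and previous weight vectors (momentum term)
    fired     : O_j^(3)(k), fired strengths of all rules at time k
    yr, y     : desired and measured process outputs
    dy_du     : estimate of dy/du supplied by the RNNM (approximation (30))
    """
    dE_dw = -Ku * (yr - y) * dy_du * fired / np.sum(fired)   # Eq. (17)
    return w - alpha * dE_dw + beta * (w - w_prev)           # Eq. (16)

# hypothetical one-step usage with 49 rules
w = np.zeros(49)
w_new = update_consequents(w, w, np.random.rand(49), yr=2.75, y=0.886,
                           dy_du=0.5, Ku=5.0)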

4 The Estimator RNNM



4.1 The RNNM Structure


In this section a recurrent neural network (RNNM) is used to construct a model of the controlled
process and to estimate the gradient information ∂ŷ/∂u. The structure of the RNNM is shown in Fig.4.
It consists of both feed-forward and feedback connections, and the feedback connections act as
internal memories that introduce dynamic elements. The relationship between the input and the
output of the RNNM is described as follows:
Input layer
$$I_i^{m(1)}(k) = \begin{cases} y(k-i), & 1 \le i \le n, \\ u(k-i+n), & n+1 \le i \le n+m, \end{cases} \qquad (31)$$
$$O_i^{m(1)}(k) = I_i^{m(1)}(k). \qquad (32)$$
Hidden layer
$$I_j^{m(2)}(k) = O_i^{m(1)}(k), \qquad (33)$$
$$O_j^{m(2)}(k) = \exp\Bigl(-\frac{\bigl(\sum_i O_i^{m(1)}(k) + \theta_j^m O_j^{m(2)}(k-1) - c_j^m\bigr)^2}{2\,(s_j^m)^2}\Bigr), \quad j = 1, 2, \cdots, nh. \qquad (34)$$
Output layer
$$I^{m(3)}(k) = \sum_j w_j^m O_j^{m(2)}(k), \qquad (35)$$
$$O^{m(3)}(k) = I^{m(3)}(k), \qquad (36)$$
$$\hat y(k) = O^{m(3)}(k), \qquad (37)$$
where I^{m(j)} and O^{m(j)} are, respectively, the input and output of the jth layer of the RNNM
(j = 1, 2, 3), and n + m and nh are the numbers of input and hidden layer nodes. Notice that all
weights are unity except the hidden-layer weights w_j^m and the recurrent weights θ_j^m. In the
hidden layer the basis function is chosen as the Gaussian radial basis function, whose center and
width are, respectively, c_j^m and s_j^m.
According to formulas (31)–(37), the estimated gradient information is
$$\frac{d\hat y(k)}{du(k)} = \sum_j \Bigl(-w_j^m(k)\,O_j^{m(2)}(k)\,\frac{u(k) + \theta_j^m O_j^{m(2)}(k-1) - c_j^m}{(s_j^m)^2}\Bigr). \qquad (38)$$
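The following Python sketch (again our own illustration under the notation of (31)–(38), not the authors' code) evaluates the RNNM forward pass and the gradient estimate for one time step.

import numpy as np

class RNNM:
    """Recurrent RBF model of the process: the pooled past outputs and inputs,
    plus each hidden node's own previous activation, feed a Gaussian unit
    (Eq. (34)); a linear output layer forms y_hat (Eqs. (35)-(37))."""

    def __init__(self, nh=5, seed=0):
        rng = np.random.default_rng(seed)
        self.c = rng.uniform(-3.0, 3.0, nh)   # centers c_j^m
        self.s = np.full(nh, 6.0)             # widths  s_j^m
        self.w = rng.uniform(0.0, 1.0, nh)    # output weights w_j^m
        self.theta = np.zeros(nh)             # recurrent weights theta_j^m
        self.h = np.zeros(nh)                 # O_j^{m(2)}(k)
        self.h_prev = np.zeros(nh)            # O_j^{m(2)}(k-1)

    def forward(self, past_y, past_u):
        # Eqs. (31)-(34): self.h still holds the previous activation here
        z = np.sum(past_y) + np.sum(past_u) + self.theta * self.h
        self.h_prev = self.h
        self.h = np.exp(-((z - self.c) ** 2) / (2.0 * self.s ** 2))
        return float(np.dot(self.w, self.h))  # Eqs. (35)-(37)

    def gradient_du(self, u_k):
        # Eq. (38): sensitivity of the model output to the control input
        z = u_k + self.theta * self.h_prev - self.c
        return float(np.sum(-self.w * self.h * z / self.s ** 2))

model = RNNM()
y_hat = model.forward(past_y=[0.886], past_u=[0.0])
dyhat_du = model.gradient_du(0.0)   # fed back to the controller updates (17)-(29)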

4.2 Learning Algorithm


The algorithms most commonly used for RNNs are gradient learning algorithms, including BP
through time, recurrent BP, dynamic BP and real-time recurrent learning. These algorithms are
capable of training an RNN, but they may suffer from slow convergence, and the error may have
difficulty in converging to zero[12]. In this paper, we consider
an RNN together with a real-time learning algorithm that is based on the recursive predictive
error algorithm. The algorithm is described as follows:
Step 1. Initialize the matrix P, namely P(0) = 10^4 I, where I is the identity matrix. Randomly
choose values for c_j^m(0), s_j^m(0) and w_j^m(0), and set θ_j^m(0) = 0.
Step 2. Compute ŷ(k) according to formulas (31)–(37).
Step 3. Update the parameters according to formula (39).



$$\left\{\begin{aligned}
& e(k) = y(k) - \hat y(k),\\
& P(k) = \frac{1}{\lambda(k)}\Bigl\{P(k-1) - P(k-1)\varphi(k)\bigl[\lambda(k) + \varphi^{T}(k)P(k-1)\varphi(k)\bigr]^{-1}\varphi^{T}(k)P(k-1)\Bigr\},\\
& W(k) = W(k-1) + P(k)\varphi(k)e(k),\\
& \lambda(k) = \lambda_0\,\lambda(k-1) + (1-\lambda_0),\\
& W(k) = [\,c_1^m\ c_2^m\ \cdots\ c_{nh}^m\ \ s_1^m\ s_2^m\ \cdots\ s_{nh}^m\ \ w_1^m\ w_2^m\ \cdots\ w_{nh}^m\ \ \theta_1^m\ \theta_2^m\ \cdots\ \theta_{nh}^m\,]^{T},\\
& \varphi(k) = \frac{d\hat y(k)}{dW(k)},\qquad \frac{d\hat y(k)}{dw_j^m} = O_j^{m(2)}(k),\\
& \frac{d\hat y(k)}{dc_j^m} = \frac{w_j^m(k)\,O_j^{m(2)}(k)\bigl(\sum_i O_i^{m(1)}(k) + \theta_j^m O_j^{m(2)}(k-1) - c_j^m\bigr)}{(s_j^m)^2},\\
& \frac{d\hat y(k)}{ds_j^m} = \frac{w_j^m(k)\,O_j^{m(2)}(k)\bigl(\sum_i O_i^{m(1)}(k) + \theta_j^m O_j^{m(2)}(k-1) - c_j^m\bigr)^2}{(s_j^m)^3},\\
& \frac{d\hat y(k)}{d\theta_j^m} = \frac{-\,w_j^m(k)\,O_j^{m(2)}(k)\,O_j^{m(2)}(k-1)\bigl(\sum_i O_i^{m(1)}(k) + \theta_j^m O_j^{m(2)}(k-1) - c_j^m\bigr)}{(s_j^m)^2}.
\end{aligned}\right. \qquad (39)$$

Step 4. Let k = k + 1, then go to Step 2.
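A compact sketch of one RPE iteration of (39) is given below in Python; it is our own illustration with hypothetical names, and phi must be assembled from the derivative expressions listed in (39).

import numpy as np

def rpe_step(W, P, lam, phi, y, y_hat, lam0=0.99):
    """One recursive prediction error update, Eq. (39).

    W   : parameter vector [c; s; w; theta] of the RNNM
    P   : covariance-like matrix, initialised as 1e4 * I (Step 1)
    lam : forgetting factor lambda(k-1)
    phi : d y_hat / dW, gradient of the model output w.r.t. W
    """
    e = y - y_hat                                  # prediction error
    lam = lam0 * lam + (1.0 - lam0)                # forgetting factor recursion
    phi = phi.reshape(-1, 1)
    denom = lam + (phi.T @ P @ phi).item()         # scalar lambda + phi' P phi
    P = (P - (P @ phi) @ (phi.T @ P) / denom) / lam
    W = W + (P @ phi).ravel() * e
    return W, P, lam

# hypothetical usage with a 4*nh-dimensional parameter vector (nh = 5)
W = np.zeros(20); P = 1e4 * np.eye(20); lam = 0.95
W, P, lam = rpe_step(W, P, lam, phi=np.random.rand(20), y=0.9, y_hat=0.8)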

5 Simulation Research
In order to test the effect of the proposed algorithm, the proposed control system was applied
to an exothermic CSTR. The CSTR used for this simulation is taken from Shacham et al.[7].
It is described by a set of nondimensional heat and mass balances:
$$\dot X_1 = -X_1 + Da\,(1 - X_1)\exp\Bigl(\frac{X_2}{1 + X_2/\phi}\Bigr),$$
$$\dot X_2 = -(1 + \delta)X_2 + B\,Da\,(1 - X_1)\exp\Bigl(\frac{X_2}{1 + X_2/\phi}\Bigr) + \delta u,$$
$$Y = X_2,$$

where X1 and X2 represent, respectively, the dimensionless reactant concentration and reactor
temperature. The control input u is the dimensionless cooling jacket temperature.
The physical parameters in the CSTR model equations are Da, φ, B and δ, which correspond
to the Damköhler number, the activation energy, the heat of reaction and the heat transfer
coefficient, respectively. For the nominal values of the system parameters, Da = 0.072, φ = 20,
B = 8, δ = 0.3, the open-loop CSTR exhibits three steady states (X1, X2)A = (0.144, 0.886),
(X1, X2)B = (0.445, 2.750), (X1, X2)C = (0.765, 4.705), where the lower and upper steady
states A and C are stable, whereas the middle one, B, is unstable. The control objective is to
bring the nonlinear CSTR from the stable equilibrium point A to the unstable one B. All
of the results presented are based on reactor temperature control.
In the following simulation, we use a sampling interval of [0, 1]. The RNNM consists of two
input units, five hidden units and one output unit (for brevity this construction is written as
2-5-1), and the RNFNC consists of two input units, each of which is divided into seven
linguistic terms, 49 fuzzy rules and one output unit. We set Ku = 5, Ki = 0.3, α = 0.04,
β = 0.002, σli(0) = 0.25, Ci(0) = [−1, −2/3, −1/3, 0, 1/3, 2/3, 1]^T, θli(0) = 0, and initialize
wj(0) according to Table 1 of [11]. The shape parameters ξ of the X-mapping functions for x1
and x2 are respectively 6 and 5. For the RNNM, we set λ0 = 0.99, λ(0) = 0.95, P(0) = 10^4 I and
s_j^m(0) = 6; the w_j^m(0) are randomly selected in the range [0, 1] and the c_j^m(0) are
randomly selected in the range [−3, 3].

1) We compare the proposed control algorithm with conventional fuzzy control in the
presence of parameter uncertainties. In this simulation, we assume that, after 100 steps, the
values of the heat transfer coefficient and the heat of reaction are suddenly changed to 0.4 and 7,
respectively. Fig.5(a) shows the step response using conventional fuzzy control and the proposed
control algorithm. Fig.5(b) presents the produced control input of these two algorithms (solid
line: the proposed control algorithm; dotted line: conventional fuzzy control). It is clear that
the proposed scheme has the ability to learn from the controlled process output error by
updating the fuzzy rules and can bring the nonlinear CSTR from the stable equilibrium point
A to the unstable one B in the presence of parameter uncertainties.

Fig.5 Comparison of the proposed control algorithm with conventional fuzzy control in the presence of parameter uncertainties

2) We compare the RNFNC controller with the FNFNC controller proposed in [11] in the
presence of measurement noise. For this simulation, we add white noise (zero-mean random
numbers with a standard deviation of 0.02) to the temperature measurement values. Fig.6(a) shows
the step response using RNFNC and FNFNC. Fig.6(b) presents the produced control input
of these two controllers. Fig.6(c) is the error between the desired output and the actual output
(solid line: RNFNC; dotted line: FNFNC). The simulation results demonstrate that the RNFNC
controller works well in the presence of measurement noise. The control input produced by RNFNC
has a smaller amplitude of oscillation than that of FNFNC. Moreover, the output error of RNFNC is
smaller than that of FNFNC. From the above simulation results, we conclude that RNFNC possesses
better performance than FNFNC and is more adaptive to dynamic processes.

3) To evaluate the performance of the RPE algorithm, we use the RPE algorithm and the
gradient learning algorithm, respectively, to realize the RNNM. Fig.7 shows the results of the
two algorithms (solid line: RPE algorithm; dotted line: gradient learning algorithm). It is clear
that the convergence of the RPE algorithm is quicker than that of the gradient learning algorithm.

Fig.6 Comparison of the RNFNC controller with the FNFNC controller in the presence of measurement noise

Fig.7 The convergence of the RPE algorithm and the gradient learning algorithm

6 Conclusion
In this paper, an intelligent control system based on recurrent neural fuzzy network is
presented for complex, uncertain and nonlinear processes. In the proposed system, a recurrent
neural fuzzy network is used as controller (RNFNC) to control a process adaptively, which is a
recurrent multilayer connectionist network for realizing fuzzy inference using dynamic fuzzy
rules, and a recurrent neural network based on the recursive predictive error algorithm (RNNM)
is utilized to estimate the gradient information for optimizing the parameters of the controller.

Compared with many other neural fuzzy control systems, our method uses a recurrent neural
network to realize the fuzzy controller. Moreover, the recursive predictive error algorithm (RPE)
is implemented to construct the RNNM on line. In order to evaluate the performance of the proposed
control system, it is applied to a continuous stirred tank reactor (CSTR). Simulation
comparisons, based on control effect and output error, with a general fuzzy controller and a
feed-forward neural fuzzy network controller (FNFNC), are conducted. In addition, the rates of
convergence of the RNNM using the RPE algorithm and the gradient learning algorithm, respectively,
are also compared. The results show that the proposed control system has on-line adaptation
ability for dynamic system control, outperforms the FNFNC, and is applicable to the control of
complex, uncertain and nonlinear processes.

References

[1] Yan Shi and Masaharu Mizumoto, A new approach of neuro-fuzzy learning algorithm for tuning fuzzy
rules, Fuzzy Sets and Systems, 2000, 112: 99–110.
[2] Mauricio Figueiredo and Fernando Gomide, Design of fuzzy systems using neurofuzzy networks,
IEEE Trans. Neural Networks, 1999, 10(4): 815–829.
[3] C. T. Lin and C. S. G. Lee, Neural-network-based fuzzy logic control and decision system, IEEE
Trans. Computers, 1991, 40(12): 1320–1342.
[4] L. X. Wang, Adaptive Fuzzy Systems and Control, Prentice-Hall, Englewood Cliffs, NJ, 1994.
[5] Yin Wang and Gang Rong, A self-organizing neural-network-based fuzzy system, Fuzzy Sets and
Systems, 1999, 103: 1–11.
[6] Meng Joo Er, Fuzzy neural networks-based quality prediction system for sintering process, IEEE
Trans. Fuzzy Systems, 2000, 8(3): 314–323.
[7] Li Jia and Jinshou Yu, A novel neural fuzzy network and application to modeling octane number,
Hydrocarbon Processing, 2001, (12): 57–60.
[8] S. Horikawa, T. Furuhashi and Y. Uchikawa, On fuzzy modeling using fuzzy neural networks with the
back-propagation algorithm, IEEE Trans. Neural Networks, 1992, 3(5): 801–806.
[9] Chia-Feng Juang and Chin-Teng Lin, A recurrent self-organizing neural fuzzy inference network,
IEEE Trans. Neural Networks, 1999, 10(4): 828–845.
[10] Ching-Hung Lee and Ching-Cheng Teng, Identification and control of dynamic systems using recurrent
fuzzy neural networks, IEEE Trans. Fuzzy Systems, 2000, 8(4): 349–365.
[11] Chyi-Tsong Chen and Shih-Tien Peng, Intelligent process control using neural fuzzy techniques,
Journal of Process Control, 1999, 9: 493–503.
[12] S. A. Billings et al., A comparison of the back-propagation and recursive prediction error algorithms
for training neural networks, Mechanical Systems and Signal Processing, 1991, 233–255.
