IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 64, NO. 5, MAY 2017

Circuit Implementation of Data-Driven TSK-Type Interval Type-2 Neural Fuzzy System With Online Parameter Tuning Ability

Chia-Feng Juang, Senior Member, IEEE, and Kai-Jie Juang

Abstract - This paper proposes a new circuit for implementing a reduced-interval type-2 neural fuzzy system using weighted bound-set boundaries (RIT2NFS-WB) with online tuning ability. The antecedent and consequent parts of the RIT2NFS-WB use interval type-2 fuzzy sets and Takagi-Sugeno-Kang (TSK) rules with interval combination parameters, respectively. In the software implementation, the structure and parameters of the RIT2NFS-WB are learned through firing-strength-based rule generation and gradient descent algorithms, respectively. The software-designed RIT2NFS-WB is then transferred to a circuit implementation with online parameter-tuning ability; the hardware version is called the RIT2NFS-WB(HL). The RIT2NFS-WB(HL) is characterized by its online tuning ability with updatable consequent and weighting parameters. To the best of our knowledge, the RIT2NFS-WB(HL) is the first TSK-type interval type-2 neural fuzzy circuit with online parameter tuning ability in the literature. To take advantage of the inherent parallel processing property of the rules, a parallel processing technique is utilized in the RIT2NFS-WB(HL) to achieve computational speedup. The RIT2NFS-WB(HL) is applied to examples of online system modeling and sequence prediction to demonstrate the system's functionality.

Index Terms - Fuzzy chips, neural fuzzy systems (NFSs), online learning, type-2 fuzzy systems.

Manuscript received January 15, 2016; revised April 12, 2016; accepted May 4, 2016. Date of publication May 30, 2016; date of current version April 10, 2017. This work was supported by the Ministry of Science and Technology, Taiwan, under Grant MOST 104-2221-E-005-039-MY2. The authors are with the Department of Electrical Engineering, National Chung-Hsing University, Taichung 402, Taiwan (e-mail: cfjuang@dragon.nchu.edu.tw; ryjasonut@hotmail.com). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TIE.2016.2574300

I. INTRODUCTION

WITH the increasing complexity and scale of industrial processes, data-driven techniques have emerged as a promising approach to addressing various system problems, such as modeling, monitoring, and control, through the utilization of process data instead of physical models [1], [2]. In data-driven approaches, two main issues are the selection of an agent and its learning through process data. Neural fuzzy systems (NFSs) represent a popular data-driven approach in which a fuzzy system functions as an agent and is trained using neural networks. Although early works on NFSs focused on the use of type-1 fuzzy rules [3], interval type-2 NFSs have attracted more attention in recent years and have been shown to outperform their type-1 counterparts in several data-driven applications [4]-[12]. When implemented, interval type-2 NFSs are substantially more computationally expensive than type-1 NFSs because of the former's inherent interval operations, making their implementation more difficult. Considering the demands of online real-time computation and low-cost implementation in several real applications of data-driven techniques, this paper proposes the circuit implementation of interval type-2 NFSs.

Many type-1 fuzzy chips have been proposed [13]-[19] and successfully applied to real processes such as water bath temperature control [17] and robot control [18]. To handle time-varying processes, several type-1 neural fuzzy chips with on-chip parameter learning abilities have been proposed [20]-[25]. In contrast to the relatively mature techniques for implementing type-1 fuzzy chips, hardware implementation of interval type-2 fuzzy systems (IT2FSs) is still in its start-up stage, mainly because of the complex computations performed by the system, especially the type-reduction operation. Table I shows a comparison of different circuit implementations of IT2FSs. A typical type-reduction operation, such as the Karnik-Mendel (KM) procedure [26], requires an iterative procedure to find the system extended outputs and is costly in hardware implementations [27]. To reduce hardware implementation costs, hardware implementations of the type-reduction operation using lookup tables [28] or simplified, closed-form operations have been proposed [29]-[32]. A hardware implementation using a closed-form operation to approximate the actual extended outputs based on Wu-Mendel uncertainty bounds was proposed in [29]. A more economical hardware implementation of a reduced-interval type-2 NFS using weighted bound-set boundaries (RIT2NFS-WB), which uses fewer adders, multipliers, and dividers than the chip in [29], was proposed and called the RIT2NFS-WB(H) in [31]. The implementation of a simplified type-reduction operation using the average of two type-1 fuzzy systems was proposed in [30]. These simplified type-reduction operations and their hardware implementations demonstrate the tradeoff between chip implementation cost and the difference between the reduced outputs and the normal system outputs. Regarding the implemented types of rules, IT2FS chips with the Mamdani type [27]-[30], the Takagi-Sugeno-Kang (TSK) type with crisp combination parameters [32], and the TSK type with interval combination parameters [8], [31] have been proposed.
TABLE I
COMPARISON OF DIFFERENT CIRCUIT IMPLEMENTATION METHODS FOR IT2FSs

Studies             | [27]    | [28]       | [29]       | [30]       | [33]       | [31]                 | [8]                  | [32]              | This paper
Rule type           | Mamdani | Mamdani    | Mamdani    | Mamdani    | Mamdani    | TSK (interval param.) | TSK (interval param.) | TSK (crisp param.) | TSK (interval param.)
Type reduction      | Normal  | Simplified | Simplified | Simplified | Simplified | Simplified           | Simplified           | Simplified        | Simplified
Normalization term  | Yes     | No         | Yes        | Yes        | No         | Yes                  | No                   | Yes               | Yes
Flexible            | No      | No         | No         | No         | No         | Yes                  | No                   | No                | Yes
Parameter learning  | No      | No         | No         | No         | Yes        | No                   | No                   | No                | Yes

Among these three types of rules, hardware implementation of the last type is the most complex because of the additional circuits required to implement signed interval multiplications and additions.

The system parameters in the aforesaid hardware-implemented IT2FSs [8], [27]-[31] are fixed, i.e., these chips do not possess online learning ability. Recently, an interval type-2 neural fuzzy chip with on-chip incremental learning ability (IT2NFC-OL) was proposed in the literature [33]. The IT2NFC-OL uses Mamdani-type rules, where there are only two tunable consequent parameters for each rule. To reduce the design effort and implementation cost, a simplified type-reduction operation using a simple weighted-sum operation without output normalization terms (i.e., no dividers) is used in the IT2NFC-OL. However, this simplification causes the values of the consequent parameters in the rules, and their meaning, to differ from those with a normalization term [27]-[31].

This paper proposes a new circuit for the implementation of an interval type-2 NFS with online parameter learning ability. The motivations for proposing the new circuit are explained as follows. First, in several applications of fuzzy systems (FSs), such as fuzzy sequence prediction in power electronics [34], [35], fuzzy control [23], [24], [36], and neural fuzzy modeling in signal processing [25], high-speed real-time operation has been shown to be necessary. In addition, regarding the handling of unmodeled or changing dynamics of underlying processes, these applications have also shown that NFSs with online parameter learning ability are necessary. Concerning the implementation platform, the hardware implementation of NFSs benefits from smaller platform size, lower power consumption, lower cost, and higher execution speed, in contrast to software implementations [17], [18], [23]-[25]. Second, interval type-2 NFSs consisting of TSK-type rules with interval combination parameters have been shown to outperform those consisting of Mamdani-type rules and type-1 NFSs [4], [31]. Therefore, the circuit implementation of a TSK-type interval type-2 neural fuzzy chip with online tuning ability is in high demand. To this end, this paper introduces a new learning circuit into the RIT2NFS-WB(H) to endow it with online tuning ability; the new circuit is called the RIT2NFS-WB(HL).

In the RIT2NFS-WB(HL), all the TSK-type consequent parameters and a weighting parameter are tunable, enabling it to handle data-driven time-varying processes. The RIT2NFS-WB(HL) employs a parallel processing technique for computational speedup and is flexible so that it is applicable to interval type-2 NFSs with different numbers of inputs and rules without requiring circuit redesign. Unlike the IT2NFC-OL, which implements interval type-2 NFSs with Mamdani-type rules without output normalization terms, the RIT2NFS-WB(HL) implements the more complex systems with TSK-type rules with general normalization terms. Because of the additional inclusion of the normalization and TSK-type consequent-parameter-tuning operations, three pipeline stages are proposed in the RIT2NFS-WB(HL) instead of the two such stages used in the IT2NFC-OL. The additional pipeline stage is mainly dedicated to processing the tuning operation so that the computation time in each stage can be reduced.

The contributions of the RIT2NFS-WB(HL) are twofold. First, almost all previous hardware circuits implement type-1 FSs/NFSs or interval type-2 FSs, which are relatively simple in terms of circuit design. Although the IT2NFC-OL chip implements an IT2NFS, it considers only Mamdani-type rules, with the shortcomings discussed above, and is simpler in terms of implementation compared to TSK-type IT2NFSs. To the best of our knowledge, the RIT2NFS-WB(HL) is the first TSK-type interval type-2 neural fuzzy circuit with online tuning ability in the literature. Second, in addition to the new circuits proposed to implement the complex functions in a TSK-type IT2NFS, another characteristic of the RIT2NFS-WB(HL) is its flexibility in applications. The RIT2NFS-WB(HL) allows the user to transfer an initial interval type-2 fuzzy model to hardware with different numbers of rules and inputs without redesigning the circuits, which provides flexibility and convenience of use. The IT2NFC-OL and most interval type-2 circuits discussed in Table I use a fixed structure, in which circuit redesign is necessary if the number of inputs or rules changes. The RIT2NFS-WB(HL) is applied to different data-driven examples to verify its performance.

This paper is organized as follows. Section II introduces the functions in the RIT2NFS-WB. Section III introduces the structure and parameter learning in the RIT2NFS-WB. Section IV introduces the designed circuit modules in the RIT2NFS-WB(HL), specifically the circuits implementing the online parameter tuning module. Section V presents the simulation results of the RIT2NFS-WB(HL). Finally, Section VI provides the conclusions.

II. RIT2NFS-WB OPERATIONS

This section describes the input-output functions of the RIT2NFS-WB. Each rule in the TSK-type RIT2NFS-WB is of the following form [31]:

Rule j: If x_1 is \tilde{A}_1^j and ... and x_n is \tilde{A}_n^j,
then y is [c_0^j - s_0^j,\, c_0^j + s_0^j] + \sum_{i=1}^{n} [c_i^j - s_i^j,\, c_i^j + s_i^j]\, x_i, \quad j = 1, ..., M   (1)


where x_1, ..., x_n are input variables scaled to lie within the range [-1, 1]; y is an output variable; \tilde{A}_i^j, i = 1, ..., n, are interval type-2 fuzzy sets; [c_i^j - s_i^j, c_i^j + s_i^j], with s_i^j \ge 0, is a combination interval; and M is the number of rules. Fuzzy set \tilde{A}_i^j uses a Gaussian primary membership function (MF) with width \sigma_i^j and an uncertain center [m_{1i}^j, m_{2i}^j], and the output of the MF is an interval set [\underline{\mu}_{\tilde{A}_i^j}, \bar{\mu}_{\tilde{A}_i^j}], given as follows:

\bar{\mu}_{\tilde{A}_i^j}(x_i) = \begin{cases} \exp\left(-\frac{1}{2}\left(\frac{x_i - m_{1i}^j}{\sigma_i^j}\right)^2\right), & x_i < m_{1i}^j \\ 1, & m_{1i}^j \le x_i \le m_{2i}^j \\ \exp\left(-\frac{1}{2}\left(\frac{x_i - m_{2i}^j}{\sigma_i^j}\right)^2\right), & x_i > m_{2i}^j \end{cases}   (2)

and

\underline{\mu}_{\tilde{A}_i^j}(x_i) = \begin{cases} \exp\left(-\frac{1}{2}\left(\frac{x_i - m_{2i}^j}{\sigma_i^j}\right)^2\right), & x_i \le \frac{m_{1i}^j + m_{2i}^j}{2} \\ \exp\left(-\frac{1}{2}\left(\frac{x_i - m_{1i}^j}{\sigma_i^j}\right)^2\right), & x_i > \frac{m_{1i}^j + m_{2i}^j}{2}. \end{cases}   (3)

By using the algebraic product function, the firing strength \tilde{F}^j is an interval given as follows:

\tilde{F}^j = [\underline{f}_j, \bar{f}_j]   (4)

where

\underline{f}_j = \prod_{i=1}^{n} \underline{\mu}_{\tilde{A}_i^j}(x_i) \quad \text{and} \quad \bar{f}_j = \prod_{i=1}^{n} \bar{\mu}_{\tilde{A}_i^j}(x_i).   (5)

The consequent value in rule j is an interval [w_l^j, w_r^j], given as follows:

w_l^j = \sum_{i=0}^{n} c_i^j x_i - \sum_{i=0}^{n} |x_i|\, s_i^j, \quad x_0 := 1   (6)

and

w_r^j = \sum_{i=0}^{n} c_i^j x_i + \sum_{i=0}^{n} |x_i|\, s_i^j.   (7)

To avoid the use of a computationally expensive iterative procedure, such as the KM procedure [26], to compute the system output y, the RIT2NFS-WB uses the following simplified equation based on weighted bound-set boundaries:

y = \alpha y_l + (1 - \alpha) y_r   (8)

where \alpha \in [0, 0.5] is a weighting parameter that is online tunable and

y_l = \min\left(y_l^{(0)}, y_l^{(M)}\right), \quad y_r = \max\left(y_r^{(0)}, y_r^{(M)}\right)   (9)

where

y_l^{(0)} = \sum_{j=1}^{M} \bar{f}_j w_l^j I_L, \quad y_l^{(M)} = \sum_{j=1}^{M} \underline{f}_j w_l^j I_U   (10)

y_r^{(0)} = \sum_{j=1}^{M} \underline{f}_j w_r^j I_U, \quad y_r^{(M)} = \sum_{j=1}^{M} \bar{f}_j w_r^j I_L   (11)

where

I_L = \frac{1}{\sum_{j=1}^{M} \bar{f}_j}, \quad I_U = \frac{1}{\sum_{j=1}^{M} \underline{f}_j}.   (12)
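As a concrete cross-check of (2)-(12), the following Python sketch (not part of the original paper; all function and variable names are chosen here, and the pairing of I_L and I_U with the upper and lower firing strengths follows the reconstruction of (10)-(12) above) evaluates one forward pass of a small RIT2NFS-WB in floating point.

```python
import numpy as np

def it2_gaussian_mf(x, m1, m2, sigma):
    """Lower/upper membership of an IT2 Gaussian set with uncertain center [m1, m2], per (2)-(3)."""
    if x < m1:
        upper = np.exp(-0.5 * ((x - m1) / sigma) ** 2)
    elif x <= m2:
        upper = 1.0                                    # plateau of the upper MF
    else:
        upper = np.exp(-0.5 * ((x - m2) / sigma) ** 2)
    if x <= 0.5 * (m1 + m2):                           # lower MF switches branches at the midpoint
        lower = np.exp(-0.5 * ((x - m2) / sigma) ** 2)
    else:
        lower = np.exp(-0.5 * ((x - m1) / sigma) ** 2)
    return lower, upper

def rit2nfs_wb_output(x, m1, m2, sigma, c, s, alpha):
    """Reduced output per (4)-(12) for inputs x (shape (n,)) and M rules.

    m1, m2, sigma : antecedent parameters, shape (M, n)
    c, s          : consequent centers/spreads, shape (M, n+1); column 0 is the bias term
    alpha         : weighting parameter in [0, 0.5]
    """
    M, n = m1.shape
    f_lo, f_up = np.ones(M), np.ones(M)
    for j in range(M):                                 # firing strengths, (4)-(5)
        for i in range(n):
            lo, up = it2_gaussian_mf(x[i], m1[j, i], m2[j, i], sigma[j, i])
            f_lo[j] *= lo
            f_up[j] *= up
    xe = np.concatenate(([1.0], x))                    # x0 := 1
    wl = c @ xe - s @ np.abs(xe)                       # (6)
    wr = c @ xe + s @ np.abs(xe)                       # (7)
    I_L, I_U = 1.0 / f_up.sum(), 1.0 / f_lo.sum()      # (12)
    yl = min((f_up * wl).sum() * I_L, (f_lo * wl).sum() * I_U)   # (9)-(10)
    yr = max((f_lo * wr).sum() * I_U, (f_up * wr).sum() * I_L)   # (9), (11)
    return alpha * yl + (1.0 - alpha) * yr             # (8)

# Two rules, two inputs, made-up parameters.
x = np.array([0.2, -0.5])
m1 = np.array([[0.0, -0.6], [0.5, 0.0]]); m2 = m1 + 0.2
sigma = np.full((2, 2), 0.5)
c = np.array([[0.1, 0.4, -0.3], [0.0, -0.2, 0.5]])
s = np.full((2, 3), 0.05)
print(rit2nfs_wb_output(x, m1, m2, sigma, c, s, alpha=0.25))
```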

III. STRUCTURE AND PARAMETER LEARNING

A. Rule Generation

The RIT2NFS-WB uses an average-firing-strength-based method to generate a new rule according to the training data. The number of existing rules at time t is denoted as M(t). For each piece of incoming data \vec{x} = (x_1, ..., x_n), the RIT2NFS-WB finds

J = \arg\max_{1 \le j \le M(t)} \bar{F}^j(\vec{x})   (13)

where \bar{F}^j = (\underline{f}_j + \bar{f}_j)/2 is the average firing strength. If \bar{F}^J(\vec{x}) is smaller than a prespecified threshold \phi_{th} \in (0, 1) or M(t) = 0, then a new rule is generated. For a new rule, the initial uncertain centers and widths of its composing antecedent interval type-2 fuzzy sets in each input variable x_i are assigned as follows:

[m_{1i}^{M(t)+1}, m_{2i}^{M(t)+1}] = [x_i(t) - 0.1,\, x_i(t) + 0.1]   (14)

and

\sigma_i^{M(t)+1} = \begin{cases} \sigma_{fixed} = 0.5, & M(t) = 0 \\ 0.5\left| x_i - \frac{m_{1i}^J + m_{2i}^J}{2} \right|, & M(t) > 0. \end{cases}   (15)

B. Parameter Learning

Parameter learning is performed using the gradient descent algorithm to minimize the following error function:

E = \frac{1}{2}\left[y(t+1) - y^d(t+1)\right]^2   (16)

where y(t+1) and y^d(t+1) denote the system and desired outputs, respectively. The update equations of the consequent parameters c_i^j and s_i^j are given as follows:

c_i^j(t+1) = c_i^j(t) - \eta \frac{\partial E}{\partial c_i^j(t)}   (17)

s_i^j(t+1) = s_i^j(t) - \eta \frac{\partial E}{\partial s_i^j(t)}   (18)

where \eta is a learning coefficient and the other terms are defined by (19) and (20). The update equations of the weighting parameter \alpha are given as follows:

\alpha(t+1) = \alpha(t) - \eta \frac{\partial E}{\partial \alpha(t)}   (21)

where

\frac{\partial E}{\partial \alpha} = \left(y - y^d\right)\left(y_l - y_r\right).   (22)

After the update, if \alpha is smaller than 0, it is set to 0. Similarly, if \alpha is greater than 0.5, it is set to 0.5. Detailed update equations of the antecedent parameters \sigma_i^j, m_{1i}^j, and m_{2i}^j can be found in [31].

IV. CIRCUIT IMPLEMENTATION

Given a training dataset, a RIT2NFS-WB is first built using a software implementation of the structure and parameter learning algorithms. The software-designed RIT2NFS-WB, denoted as the RIT2NFS-WB(S), is then implemented in the RIT2NFS-WB(HL) with the same number of rules and without an additional circuit-implemented structure-learning circuit. Because tuning the antecedent parameters is substantially more complex than tuning the consequent and weighting parameters, only the latter is implemented in the RIT2NFS-WB(HL) to endow it with online parameter tuning ability without heavy implementation cost. The RIT2NFS-WB(HL) is designed to be flexible so that it can implement IT2FSs with different numbers of input variables n and rules M without redesigning the circuits. Fig. 1 shows the architecture of the RIT2NFS-WB(HL), which contains six modules: selection, antecedent, consequent, output processing, output, and learning. The implementation of the six modules is divided into three stages (T1, T2, and T3). Fig. 1 shows that stage T1 computes the selection, antecedent, and consequent modules. Because the computations of the antecedent firing strengths in (5) and the consequent values in (6) and (7) are independent in the RIT2NFS-WB(HL), the antecedent and consequent modules that compute these values are implemented in parallel in the same stage. This parallel computation structure avoids the additional time spent on computing the TSK-type consequent values when using a serial computation structure and, therefore, helps to reduce the execution time. Stage T2 computes the output processing module, and stage T3 computes the output and learning modules. The circuit sends an output y and updates the consequent and weight parameters.

The first five modules compute the system input-output functions in (2)-(12). The circuits in these five modules follow those in the RIT2NFS-WB(H) [31], with the additional inclusion of input/output buses to receive/send signals from/to the learning module. The online tuning ability of the RIT2NFS-WB(HL) is achieved by introducing the new learning module into the RIT2NFS-WB(H), whose parameters are fixed. Section IV-A briefly describes the functions of the first five modules and their connections to the learning module. Detailed implementation circuits of the five modules can be found in [31]. Section IV-B details the circuits in the new learning module proposed to endow the original RIT2NFS-WB(H) with online tuning ability.

A. Circuit Implementation of Input-Output Functions

For the first five modules, the selection module allows the user to specify the number of rules M and input variables n for different applications. This function is implemented by sending out the input-number control signal In[7:0] and the rule-number control signal Rule[7:0] to the multiplexers in the other modules. The antecedent module computes the firing strength pair \underline{f}_j and \bar{f}_j in (5), where the exponential function in (2) and (3) is implemented using a lookup table with a resolution of 10 and 8 bits in the input and output, respectively. A total of M antecedent modules are activated by the rule-number control signal, and these modules run in parallel to find all M firing strength pairs, as shown in Fig. 1. The consequent module computes the consequent values w_l^j and w_r^j in (6) and (7). Like the arrangement of the antecedent modules, a total of M consequent modules are activated by the rule-number control signal, and these modules run in parallel to find all consequent values. The RIT2NFS-WB(HL) uses additional input buses in the consequent module to receive the updated consequent values from the learning module at each clock cycle. The output processing module computes the four values in (10) and (11), namely, y_l^{(0)}, y_l^{(M)}, y_r^{(0)}, and y_r^{(M)}, in parallel. This module uses two dividers to implement I_U and I_L in (12). The RIT2NFS-WB(HL) introduces two additional buses in this module to send I_U and I_L to the learning module. The output module first computes y_l and y_r in (9) using minimum and maximum circuits and then computes the final system output y using (8). The results of the minimum and maximum circuits are also sent to the learning module.
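The lookup-table approximation of the exponential mentioned above can be prototyped as follows. This is a sketch and not the chip's table: the paper only states the 10-bit input and 8-bit output resolutions, so the input range and the rounding used here are assumptions.

```python
import numpy as np

IN_BITS, OUT_BITS, Z_MAX = 10, 8, 4.0   # Z_MAX (range of |x - m| / sigma) is an assumption
z_grid = np.arange(2 ** IN_BITS) * (Z_MAX / 2 ** IN_BITS)
LUT = np.round(np.exp(-0.5 * z_grid ** 2) * (2 ** OUT_BITS - 1)).astype(np.uint8)

def mf_lut(x, m, sigma):
    """Approximate exp(-0.5*((x - m)/sigma)**2) with the 10-bit-in / 8-bit-out table."""
    z = abs(x - m) / sigma
    idx = min(int(z / Z_MAX * 2 ** IN_BITS), 2 ** IN_BITS - 1)
    return LUT[idx] / (2 ** OUT_BITS - 1)

# Table output versus the exact exponential for one membership evaluation.
print(mf_lut(0.3, 0.0, 0.5), np.exp(-0.5 * (0.3 / 0.5) ** 2))
```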
\frac{\partial E}{\partial c_i^j} = \begin{cases} \left(y - y^d\right)\bar{f}_j I_L x_i, & y_l = y_l^{(0)} \text{ and } y_r = y_r^{(M)} \\ \left(y - y^d\right)\left[\alpha \bar{f}_j I_L + (1-\alpha)\underline{f}_j I_U\right] x_i, & y_l = y_l^{(0)} \text{ and } y_r = y_r^{(0)} \\ \left(y - y^d\right)\left[\alpha \underline{f}_j I_U + (1-\alpha)\bar{f}_j I_L\right] x_i, & y_l = y_l^{(M)} \text{ and } y_r = y_r^{(M)} \\ \left(y - y^d\right)\underline{f}_j I_U x_i, & y_l = y_l^{(M)} \text{ and } y_r = y_r^{(0)} \end{cases}   (19)

\frac{\partial E}{\partial s_i^j} = \begin{cases} \left(y - y^d\right)(1-2\alpha)\bar{f}_j I_L |x_i|, & y_l = y_l^{(0)} \text{ and } y_r = y_r^{(M)} \\ \left(y - y^d\right)\left[-\alpha \bar{f}_j I_L + (1-\alpha)\underline{f}_j I_U\right] |x_i|, & y_l = y_l^{(0)} \text{ and } y_r = y_r^{(0)} \\ \left(y - y^d\right)\left[-\alpha \underline{f}_j I_U + (1-\alpha)\bar{f}_j I_L\right] |x_i|, & y_l = y_l^{(M)} \text{ and } y_r = y_r^{(M)} \\ \left(y - y^d\right)(1-2\alpha)\underline{f}_j I_U |x_i|, & y_l = y_l^{(M)} \text{ and } y_r = y_r^{(0)} \end{cases}   (20)
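The four-case derivatives in (19) and (20) and the updates in (17) and (18) can be checked with the short sketch below. It is illustrative only: the function and variable names are chosen here, the power-of-two learning rate mirrors the right-shift implementation described in Section IV-B, and clipping the spreads at zero is an added assumption (the paper only requires s_i^j >= 0 in the rule definition).

```python
import numpy as np

def consequent_gradients(e, x, f_lo, f_up, I_L, I_U, alpha, yl_is_0, yr_is_0):
    """dE/dc and dE/ds for one rule per (19)-(20).

    e                : output error y - y_d
    x                : inputs including the bias term, shape (n+1,), with x[0] = 1
    f_lo, f_up       : the rule's lower/upper firing strengths (scalars)
    yl_is_0, yr_is_0 : True if y_l = y_l^(0) / y_r = y_r^(0) was selected in (9)
    """
    if yl_is_0 and not yr_is_0:           # y_l = y_l^(0), y_r = y_r^(M)
        gc = e * f_up * I_L * x
        gs = e * (1 - 2 * alpha) * f_up * I_L * np.abs(x)
    elif yl_is_0 and yr_is_0:             # y_l = y_l^(0), y_r = y_r^(0)
        gc = e * (alpha * f_up * I_L + (1 - alpha) * f_lo * I_U) * x
        gs = e * (-alpha * f_up * I_L + (1 - alpha) * f_lo * I_U) * np.abs(x)
    elif not yl_is_0 and not yr_is_0:     # y_l = y_l^(M), y_r = y_r^(M)
        gc = e * (alpha * f_lo * I_U + (1 - alpha) * f_up * I_L) * x
        gs = e * (-alpha * f_lo * I_U + (1 - alpha) * f_up * I_L) * np.abs(x)
    else:                                 # y_l = y_l^(M), y_r = y_r^(0)
        gc = e * f_lo * I_U * x
        gs = e * (1 - 2 * alpha) * f_lo * I_U * np.abs(x)
    return gc, gs

# One gradient step on a rule's consequent parameters, per (17)-(18).
eta = 2 ** -6                             # power-of-two rate, matching a shift-based circuit
c, s = np.zeros(3), np.full(3, 0.05)
gc, gs = consequent_gradients(0.1, np.array([1.0, 0.2, -0.5]),
                              0.3, 0.7, 0.8, 1.2, 0.25, True, True)
c_new = c - eta * gc                      # (17)
s_new = np.maximum(s - eta * gs, 0.0)     # (18), spreads clipped at zero (assumption)
print(c_new, s_new)
```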

Fig. 1. Architecture of the RIT2NFS-WB(HL).

Fig. 2. Submodules proposed to implement part of the products in (19) and (20).

Fig. 3. Submodule that computes the final outputs in the four conditions in (19) and (20).

B. Circuit Implementation of the Learning Module

The learning module implements the consequent parameter update equations in (17)-(20) using the submodules in Figs. 2-5 and the weighting parameter update equations in (21) and (22) using the submodule in Fig. 6. The following details the circuits in each submodule.

Submodule in Fig. 2: This submodule finds the partial products in (19) and (20). The upper (lower) half circuit computes I_L \bar{f}_j (I_U \underline{f}_j), followed by its multiplication with the four values \alpha, (1 - \alpha), (1 - 2\alpha), and -\alpha in parallel. Among the four multiplication terms, only the product value in the last term is negative; therefore, the technique of two's complement is used in the last product term. The upper and lower circuits compute the eight multiplication terms in parallel.

Submodule in Fig. 3: This submodule computes the multiplications of the ten outputs from the submodule in Fig. 2 with the error value (y - y^d) and x_i (or |x_i|) to obtain all the product terms for the eight conditions in (19) and (20). The left four circuits in Fig. 3(a)-(d) compute the four cases of the partial value \partial E/\partial c_i^j in (19). For example, the output signal case1(c^j)x_i in Fig. 3(a) denotes the term value (y - y^d)\bar{f}_j I_L x_i in the first case in (19). Similarly, the right four circuits in Fig. 3(a)-(d) compute the four cases of the partial value \partial E/\partial s_i^j in (20).

Fig. 4. Submodule proposed to determine \partial E/\partial c_i^j and \partial E/\partial s_i^j, i = 0, ..., n, of rule j in parallel.

Submodule in Fig. 4: This submodule determines which of the four cases in (19) and (20) holds and sends all the final values of \partial E/\partial c_i^j and \partial E/\partial s_i^j, i = 0, ..., n, of rule j in parallel to the next module according to the specified values of n and M. The maximum number of allowed input variables is denoted as \bar{n}. This submodule contains three types of multiplexers, called MUX-1, MUX-2, and MUX-3, that determine the final outputs. The In signal from the selection module controls MUX-1 to provide the circuit with the flexibility of learning with a specified number of inputs without redesigning the circuit. The condition In[i] = 0 (i = n+1, ..., \bar{n}) means that the input variable x_i is not used; thus, the output of MUX-1 is set to 16'b0. There are a total of eight MUX-1s to account for the four cases each of \partial E/\partial c_i^j and \partial E/\partial s_i^j. Based on the results of the minimum and maximum circuits in the output module, two pairs of control signals are defined. The control signals based on the result of the minimum circuit are signal_y_l^{(0)} and signal_y_l^{(M)}, the values of which are given as follows:

signal\_y_l^{(0)} = \begin{cases} 1, & y_l = y_l^{(0)} \\ 0, & \text{otherwise} \end{cases}, \quad signal\_y_l^{(M)} = \begin{cases} 1, & y_l = y_l^{(M)} \\ 0, & \text{otherwise.} \end{cases}   (23)

Similarly, the control signals based on the result of the maximum circuit are signal_y_r^{(0)} and signal_y_r^{(M)}, the values of which are given as follows:

signal\_y_r^{(0)} = \begin{cases} 1, & y_r = y_r^{(0)} \\ 0, & \text{otherwise} \end{cases}, \quad signal\_y_r^{(M)} = \begin{cases} 1, & y_r = y_r^{(M)} \\ 0, & \text{otherwise.} \end{cases}   (24)

The four control signals are sent to MUX-2 to determine which of the four cases in (19) and (20) holds. The Rule[j] signal from the selection module controls MUX-3 to provide the circuit with the flexibility of learning with different specified numbers of rules. The condition Rule[j] = 0 means that the jth rule is not used; thus, the output of MUX-3 is set to 16'b0.

Fig. 5. Submodule proposed to find the updated consequent values in (17) and (18).

Submodule in Fig. 5: This submodule computes the final consequent parameter update equations in (17) and (18). It first finds the product of the learning rate \eta with the partial differentiation terms \partial E/\partial c_i^j and \partial E/\partial s_i^j using the bit right-shift operation and then adds the result to c_i^j(t) and s_i^j(t) to obtain c_i^j(t+1) and s_i^j(t+1), respectively.

Fig. 6. Submodule proposed to find the updated weight parameter \alpha(t+1) in (21) and (22).

Submodule in Fig. 6: This submodule computes the update of \alpha in (21) and (22). The difference between y_l and y_r received from the output module is multiplied by the error value (y - y^d) to obtain the partial differentiation term \partial E/\partial \alpha in (22). The product of \partial E/\partial \alpha with the learning rate is implemented using the bit right-shift operation, and the result is added to \alpha(t) to obtain the final \alpha(t+1) value in (21). Two cascaded comparators compare the computed \alpha(t+1) value with 0 and 0.5 to ensure that \alpha(t+1) \in [0, 0.5]. Circuits for updating c_i^j(t), s_i^j(t), and \alpha(t) are implemented in parallel to obtain a computational speedup.
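The behavior of the Fig. 5 and Fig. 6 submodules, a learning rate realized as a bit right shift followed by an add and, for \alpha, a clamp to [0, 0.5], can be mimicked in fixed point as follows. The Q1.15 scaling and the shift amount are assumptions for illustration; the paper only states that 16-bit buses and right-shift operations are used.

```python
# Fixed-point sketch of the alpha update in (21)-(22): eta = 2**-SHIFT via a right shift,
# followed by two comparisons that clamp alpha to [0, 0.5].
FRAC_BITS = 15            # Q1.15 two's-complement scaling (assumption)
SHIFT = 6                 # eta = 2**-6 (assumption)

def to_fix(v):   return int(round(v * (1 << FRAC_BITS)))
def from_fix(v): return v / (1 << FRAC_BITS)

def fix_mul(a, b):        # fixed-point multiply with rescaling
    return (a * b) >> FRAC_BITS

def update_alpha(alpha, y, y_d, yl, yr):
    a, e = to_fix(alpha), to_fix(y - y_d)
    dE_dalpha = fix_mul(e, to_fix(yl - yr))        # (22)
    a_new = a - (dE_dalpha >> SHIFT)               # (21), learning rate as a right shift
    a_new = max(0, min(a_new, to_fix(0.5)))        # cascaded comparators: clamp to [0, 0.5]
    return from_fix(a_new)

print(update_alpha(0.25, y=0.40, y_d=0.10, yl=0.20, yr=0.55))
```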
V. SIMULATIONS

This section presents three examples of simulation results for the RIT2NFS-WB(HL). The utilized software tools are the electronic design automation tool ISE and ModelSim SE 6.5. When the RIT2NFS-WB(HL) is specified to have a maximum of seven input variables and seven rules, the total gate count is 621 164. With the inclusion of the online update of the parameters, the number of fuzzy logic inferences per second (FLIPS) is 7.07 million. For the RIT2NFS-WB(H) with the same number of inputs and rules, the number of FLIPS is 24.107 million. As expected, the execution speed of the RIT2NFS-WB(HL) is lower than that of the RIT2NFS-WB(H) because of the additional inclusion of the learning module in stage three.

TABLE II
PERFORMANCES OF THE SOFTWARE-IMPLEMENTED RIT2NFS-WB WITH/WITHOUT ONLINE LEARNING AND VARIOUS MODELS FOR THE SERIES-A PREDICTION PROBLEM IN EXAMPLE 1

Model category | Model            | Rule number | Test RMSE
Other models   | PRMS [38]        | -           | 14.226
Other models   | NN [38]          | -           | 24.6
Type-1 NFSs    | SONFIN [3]       | 9           | 5.938
Type-1 NFSs    | SVM-FM [8]       | 31          | 9.707
IT2NFSs        | T2FLS-G [8]      | 5           | 7.16
IT2NFSs        | SEIT2FNN [4]     | 5           | 5.766
IT2NFSs        | T2NFS-T1(A) [8]  | 5           | 2.985
IT2NFSs        | RIT2NFS-WB(S)    | 5           | 2.627
IT2NFSs        | RIT2NFS-WB(SL)   | 5           | 2.614

TABLE III
TEST RMSES OF THE SOFTWARE- AND CIRCUIT-IMPLEMENTED RIT2NFS-WB WITH/WITHOUT ONLINE LEARNING IN EXAMPLE 2

                 Software                          Circuit
Models           RIT2NFS-WB(S)   RIT2NFS-WB(SL)    RIT2NFS-WB(H)   RIT2NFS-WB(HL)
k = 1 to 200     0.0241          0.0146            0.0251          0.0211
k = 201 to 400   0.0934          0.0362            0.0922          0.0460
k = 401 to 600   0.1784          0.0447            0.1771          0.0564

Fig. 7. Test prediction result of the real time series using the RIT2NFS-WB(SL) in Example 1.

Example 1 (real time-series prediction): This example considers the prediction of the Santa Fe real time series (series A), measured from a real physical system [37]. Five past values, y(t-5), ..., y(t-1), were used to predict y(t). As in [8], [38], 1000 samples were used, 90% and 10% of which were used for training and testing, respectively. In the software implementation, 1000 iterations were performed for training. Five rules were generated when \phi_{th} was set to 0.2. Table II shows the test root-mean-square error (RMSE) of the RIT2NFS-WB(S), where all system parameters in the RIT2NFS-WB(S) are fixed. For comparison, Table II also shows the test RMSE of the software-implemented RIT2NFS-WB with online consequent and weight parameter learning, as in the RIT2NFS-WB(HL); this system is denoted as the RIT2NFS-WB(SL). The RIT2NFS-WB(SL) produced a smaller test error than did the RIT2NFS-WB(S), indicating the effect of the online learning operation. Fig. 7 shows the prediction result of the test data using the RIT2NFS-WB(SL), where the predicted outputs were very close to the actual values.

For comparison, Table II also shows the reported results obtained using different models, including type-1 and type-2 NFSs, applied to the same prediction problem [8], [38]. The IT2NFSs utilized for comparison used the same number of rules and training iterations as the RIT2NFS-WB(S). The results show that both the RIT2NFS-WB(S) and the RIT2NFS-WB(SL) achieved smaller test RMSEs than did all the models used for the comparison.

The RIT2NFS-WB(S) has been shown to outperform different type-1 and interval type-2 NFSs in other applications [31]. Therefore, the following examples focus on presenting the online tuning ability and functionality of the RIT2NFS-WB(HL) and the accuracy difference between the software- and circuit-implemented RIT2NFS-WB models.

Example 2 (online system modeling): This example considers the online modeling of a nonlinear plant governed by the following difference equation:

y_d(k+1) = \frac{y_d(k)}{1 + y_d^2(k)} + u^3(k)   (25)

where u(k) = \sin(2\pi k/100) and y_d(1) = 0. In the software implementation, the inputs of the RIT2NFS-WB(S) are y_d(k) and u(k), and the desired output is y_d(k+1). Training patterns were generated with u(k) = \sin(2\pi k/100), k = 0, ..., 200, and training was performed for 100 iterations. Seven fuzzy rules were generated when \phi_{th} was set to 0.1. Table III shows the test RMSEs of the RIT2NFS-WB(S) and the RIT2NFS-WB(SL) when using the 200 training data as test data. As in Example 1, the RIT2NFS-WB(SL) produced a smaller test error than did the RIT2NFS-WB(S), again indicating the effect of the online learning operation.
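For reference, the training sequence described for this example can be generated as follows (an illustrative sketch; the function name and the returned tuple layout are choices made here, not taken from the paper).

```python
import math

def plant_sequence(K=200):
    """Generate (y_d(k), u(k)) -> y_d(k+1) training triples for the plant in (25)."""
    yd = 0.0                                     # y_d(1) = 0
    data = []
    for k in range(K):
        u = math.sin(2 * math.pi * k / 100)      # u(k) = sin(2*pi*k/100)
        y_next = yd / (1 + yd ** 2) + u ** 3     # (25)
        data.append((yd, u, y_next))
        yd = y_next
    return data

samples = plant_sequence()
print(len(samples), samples[:2])
```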
In the circuit implementation, the external inputs were specified as (n)_2 = 010 and (M)_2 = 111. Table III shows the test RMSEs of the RIT2NFS-WB(H) and the RIT2NFS-WB(HL). Fig. 8 shows the modeling result obtained using the 200 test data points (k = 1, ..., 200) for the software and circuit implementations with/without online learning. The RIT2NFS-WB(HL) produced a smaller test error than did the RIT2NFS-WB(H) because of the on-chip learning ability. Table III and Fig. 8 show that the test error of the RIT2NFS-WB(HL) is similar to, although larger than, that of the RIT2NFS-WB(SL). The performance degradation is mainly due to resolution loss in the computation and the finite floating-point representation accuracy.

Fig. 8. Modeling results (k = 1, ..., 200) of the (a) software- and (b) circuit-implemented RIT2NFS-WB with/without the learning ability in Example 2.

Fig. 9. Modeling results of the RIT2NFS-WB(H) and RIT2NFS-WB(HL) when (a) k = 201, ..., 400 and (b) k = 401, ..., 600 in Example 2.

TABLE IV
TEST RMSES FOR DIFFERENT NUMBERS OF RULES IN THE SOFTWARE-IMPLEMENTED RIT2NFS-WB WITH/WITHOUT ONLINE LEARNING IN EXAMPLE 3

\phi_{th}         0.05      0.3       0.5       0.55      0.6
Rule number       3         5         7         10        12
RIT2NFS-WB(S)     0.11869   0.11674   0.11625   0.11591   0.1165
RIT2NFS-WB(SL)    0.11823   0.11627   0.11593   0.11560   0.1162

To further test the learning ability of the RIT2NFS-WB(HL), the plant model is assumed to be time varying after the time step k = 200; the time-varying plant model is described as follows:

y_d(k+1) = \begin{cases} \frac{y_d(k)}{1 + y_d^2(k)} + u^3(k) + 0.1, & 201 < k \le 400 \\ \frac{y_d(k)}{1 + y_d^2(k)} + u^3(k) + 0.2, & 401 < k \le 600. \end{cases}   (26)

Table III shows the RMSEs of the software- and circuit-implemented RIT2NFS-WB with/without online learning, and Fig. 9 shows the test outputs of the different implementations. An obvious difference between the outputs of the RIT2NFS-WB(H) and the actual outputs is observed because of the change in the plant with time. In contrast to the RIT2NFS-WB(H), the RIT2NFS-WB(HL) obtained values much closer to the actual plant outputs, which verifies the online tuning ability of the circuit. Similar to the test result for k = 1, ..., 200, Table III shows that the test error of the RIT2NFS-WB(HL) is similar to, although larger than, that of the RIT2NFS-WB(SL), which verifies the correct functionality of the designed circuits.

Example 3 (online chaotic sequence prediction): This example presents the prediction of the following Mackey-Glass chaotic time series [31]:

\frac{dx(t)}{dt} = \frac{0.2\, x(t-\tau)}{1 + x^{10}(t-\tau)} - 0.1\, x(t)   (27)

where \tau = 17 and x(0) = 1.2.
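A simple way to reproduce such a series is forward-Euler integration of (27); the sketch below uses a unit step size and a constant initial history, which are assumptions made here (the paper does not state the integration scheme used).

```python
def mackey_glass(n_steps=9000, tau=17, dt=1.0, x0=1.2):
    """Euler integration of the Mackey-Glass equation (27)."""
    x = [x0] * (tau + 1)                 # constant history for the delayed term x(t - tau)
    for _ in range(n_steps):
        x_tau = x[-1 - tau]
        dx = 0.2 * x_tau / (1.0 + x_tau ** 10) - 0.1 * x[-1]
        x.append(x[-1] + dt * dx)
    return x

x = mackey_glass()
# Training pattern at time t: inputs x(t-18), x(t-12), x(t-6), x(t); target x(t+85).
t = 201
print([x[t - 18], x[t - 12], x[t - 6], x[t]], x[t + 85])
```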
The four past values x(t-18), x(t-12), x(t-6), and x(t) were used to predict the future value x(t+85). A total of 3000 patterns generated from time steps t = 201 to 3200 were used for training, and the 3000 patterns generated from t = 5001 to 8000 were used for testing. In the software implementation, 100 iterations were performed for training. To show the influence of the number of rules on the prediction performance, Table IV presents the test RMSEs of the RIT2NFS-WB(S) and the RIT2NFS-WB(SL) with different numbers of rules, generated by setting different values of \phi_{th}. As in Examples 1 and 2, the RIT2NFS-WB(SL) produced smaller errors than did the RIT2NFS-WB(S) for the different numbers of rules. Table IV shows that an initial increase in the number of rules tends to improve the test performance. However, when the number of rules is too large, the test performance may degrade. The reason may be that a larger IT2FS needs a larger number of training iterations to converge to an optimal parameter set or, even when the IT2FS is well trained, the larger model may encounter the overtraining problem.

Table V shows the test RMSEs of the software- and circuit-implemented RIT2NFS-WB with/without online tuning ability when the number of rules was 7. The RMSE difference between the RIT2NFS-WB(HL) and the RIT2NFS-WB(SL) is smaller than 0.001, which again shows the correct functionality of the designed circuits. The RIT2NFS-WB(HL) produced a smaller test error than did the RIT2NFS-WB(H). However, the difference was small because of the well-trained initial model transferred from the RIT2NFS-WB(S) and the time-invariant property of the series.

TABLE V
TEST RMSES OF THE SOFTWARE- AND CIRCUIT-IMPLEMENTED RIT2NFS-WB WITH/WITHOUT ONLINE LEARNING IN EXAMPLE 3

                      Software                          Circuit
Models                RIT2NFS-WB(S)   RIT2NFS-WB(SL)    RIT2NFS-WB(H)   RIT2NFS-WB(HL)
t = 5001, ..., 8000   0.11625         0.11593           0.11704         0.11688

TABLE VI
TEST RMSES OF RIT2NFS-WB(H) AND RIT2NFS-WB(HL) WHEN DIFFERENT DISTORTING VALUES ARE ADDED TO ALL THE CONSEQUENT PARAMETERS IN THE INITIAL MODEL IN EXAMPLE 3

                      Distorting value   RIT2NFS-WB(H)   RIT2NFS-WB(HL)
t = 5001, ..., 8000   0.01               0.11846         0.11777
                      0.05               0.16197         0.13241
                      0.1                0.27350         0.17675

Fig. 10. Prediction values of the RIT2NFS-WB(H) and the RIT2NFS-WB(HL) (a) in the first 1000 test samples (t = 5001 to 6000) and (b) in the 2001st to 3000th test samples (t = 7001 to 8000) in Example 3.

To further demonstrate the effect of the online tuning ability of the RIT2NFS-WB(HL), it is assumed that the initial consequent parameters in the RIT2NFS-WB(HL) are distorted by adding a distorting value to the consequent parameters of every rule. Distorting values of 0.01, 0.05, and 0.1 are tested. For this initially corrupted model, Table VI shows the test RMSEs of the RIT2NFS-WB(H) and the RIT2NFS-WB(HL), where the latter achieves a substantially smaller error than does the former for the different distorting levels. Fig. 10(a) and (b) shows the prediction outputs from t = 5001 to 6000 and from t = 7001 to 8000, respectively, of the RIT2NFS-WB(H) and the RIT2NFS-WB(HL) with a distorting value of 0.1. Fig. 10(a) shows that, at the start of the test, the prediction values from both the RIT2NFS-WB(H) and the RIT2NFS-WB(HL) are significantly larger than the actual values; however, the prediction values from the RIT2NFS-WB(HL) tend to move closer to the actual values because of its online tuning ability. Fig. 10(b) shows that, after online tuning over 2000 time steps, the predicted values obtained by the RIT2NFS-WB(HL) are close to the actual values, whereas the prediction error of the RIT2NFS-WB(H) remains large.

VI. CONCLUSION

This paper has proposed a new circuit for the implementation of an interval type-2 NFS chip with online tuning ability. The proposed RIT2NFS-WB(HL) is different from most existing type-2 fuzzy chips, whose parameters are fixed and which do not possess any online tuning ability. The learning module proposed in the RIT2NFS-WB(HL) tunes all the interval combination parameters in the consequent of a TSK-type rule and the weighting parameter in the weighted bound-set boundary operation, where the operation is used to simplify the type-reduction operation and reduce the circuit implementation cost. The RIT2NFS-WB(HL), with its online tuning ability, is suitable for handling data-driven problems with time-varying characteristics, a property that has been verified through simulations of process modeling and sequence prediction. The limitation of the RIT2NFS-WB(HL) is that the structure should be learned in advance using the software implementation, after which the fixed structure can be transferred to the circuit implementation. In the future, the implementation of structure learning ability in the RIT2NFS-WB(HL) will be studied so that the circuit can evolve online not only the system parameters but also the system structure to handle time-varying processes. The application of the RIT2NFS-WB(HL) to a real process will be studied as well.

REFERENCES

[1] S. Yin, H. Gao, and O. Kaynak, "Data-driven control and process monitoring for industrial applications - Part II," IEEE Trans. Ind. Electron., vol. 62, no. 1, pp. 583-586, Jan. 2015.
[2] S. Yin, X. Li, H. Gao, and O. Kaynak, "Data-based techniques focused on modern industry: An overview," IEEE Trans. Ind. Electron., vol. 62, no. 1, pp. 657-667, Jan. 2015.
[3] C. F. Juang and C. T. Lin, "An on-line self-constructing neural fuzzy inference network and its applications," IEEE Trans. Fuzzy Syst., vol. 6, no. 1, pp. 12-32, Feb. 1998.
[4] C. F. Juang and Y. W. Tsao, "A self-evolving interval type-2 fuzzy neural network with on-line structure and parameter learning," IEEE Trans. Fuzzy Syst., vol. 16, no. 6, pp. 1411-1424, Dec. 2008.
[5] R. H. Abiyev and O. Kaynak, "Type 2 fuzzy neural structure for identification and control of time-varying plants," IEEE Trans. Ind. Electron., vol. 57, no. 12, pp. 4147-4159, Dec. 2010.

[6] G. D. Wu and P. H. Huang, "A vectorization-optimization-method-based type-2 fuzzy neural network for noisy data classification," IEEE Trans. Fuzzy Syst., vol. 21, no. 1, pp. 1-15, Feb. 2013.
[7] Y. Y. Lin, J. Y. Chang, and C. T. Lin, "A TSK-type-based self-evolving compensatory interval type-2 fuzzy neural network (TSCIT2FNN) and its applications," IEEE Trans. Ind. Electron., vol. 61, no. 1, pp. 447-459, Jan. 2014.
[8] C. F. Juang and W. S. Jang, "A type-2 neural fuzzy system learned through type-1 fuzzy rules and its FPGA-based hardware implementation," Appl. Soft Comput., vol. 18, pp. 302-313, May 2014.
[9] C. J. Kim and D. Chwa, "Obstacle avoidance method for wheeled mobile robots using interval type-2 fuzzy neural network," IEEE Trans. Fuzzy Syst., vol. 23, no. 3, pp. 677-687, Jun. 2015.
[10] A. K. Das, S. Sundaram, and N. Sundararajan, "A self-regulated interval type-2 neuro-fuzzy inference system for handling non-stationarities in EEG signals for BCI," IEEE Trans. Fuzzy Syst., to be published, doi: 10.1109/TFUZZ.2016.2540072.
[11] A. Mohammadzadeh, O. Kaynak, and M. Teshnehlab, "Two-mode indirect adaptive control approach for the synchronization of uncertain chaotic systems by the use of a hierarchical interval type-2 fuzzy neural network," IEEE Trans. Fuzzy Syst., vol. 22, no. 5, pp. 1301-1312, Oct. 2014.
[12] M. A. Khanesar, E. Kayacan, M. Reyhanoglu, and O. Kaynak, "Feedback error learning control of magnetic satellites using type-2 fuzzy neural networks with elliptic membership functions," IEEE Trans. Cybern., vol. 45, no. 4, pp. 858-868, Aug. 2015.
[13] H. Eichfeld, T. Kunemund, and M. Menke, "A 12b general-purpose fuzzy logic controller chip," IEEE Trans. Fuzzy Syst., vol. 4, no. 4, pp. 460-475, Nov. 1996.
[14] M. J. Patyra, J. L. Grantner, and K. Koster, "Digital fuzzy logic controller: Design and implementation," IEEE Trans. Fuzzy Syst., vol. 4, no. 4, pp. 439-459, 1996.
[15] A. Gabrielli and E. Gandolfi, "A fast digital fuzzy processor," IEEE Micro, vol. 17, no. 1, pp. 68-79, Jan./Feb. 1999.
[16] G. Ascia, V. Catania, and M. Russo, "VLSI hardware architecture for complex fuzzy system," IEEE Trans. Fuzzy Syst., vol. 7, no. 5, pp. 553-569, Oct. 1999.
[17] C. F. Juang and J. S. Chen, "Water bath temperature control by a recurrent fuzzy controller and its FPGA implementation," IEEE Trans. Ind. Electron., vol. 53, no. 3, pp. 941-949, Jun. 2006.
[18] S. Sanchez-Solano, A. J. Cabrera, I. Baturone, F. J. Moreno-Velo, and M. Brox, "FPGA implementation of embedded fuzzy controllers for robotic applications," IEEE Trans. Ind. Electron., vol. 54, no. 4, pp. 1937-1945, Aug. 2007.
[19] A. H. Zavala and O. C. Nieto, "Fuzzy hardware: A retrospective and analysis," IEEE Trans. Fuzzy Syst., vol. 20, no. 4, pp. 623-635, Aug. 2012.
[20] C. W. Lin, J. S. Wang, and C. C. Yu, "Synchronous pipeline hardware design for a neuro-fuzzy circuit with on-chip learning capability," Lecture Notes Artif. Intell., vol. 4682, pp. 192-201, Aug. 2007.
[21] I. del Campo, K. Basterretxea, J. Echanobe, G. Bosque, and F. Doctor, "System-on-chip development of a neuro-fuzzy embedded agent for ambient-intelligence environments," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 42, no. 2, pp. 501-512, Apr. 2012.
[22] G. Bosque, I. del Campo, and J. Echanobe, "Fuzzy systems, neural networks and neuro-fuzzy systems: A vision on their hardware implementation and platforms over two decades," Eng. Appl. Artif. Intell., vol. 32, pp. 283-331, Jun. 2014.
[23] C. F. Hsu, C. J. Chiu, T. T. Lee, and J. Z. Tsai, "FPGA-based intelligent power regulator IC design for forward DC-DC converters," in Proc. IEEE Int. Conf. Intell. Comput. Intell. Syst., 2009, pp. 833-837.
[24] N. K. Quang, Y. S. Kung, and Q. P. Ha, "FPGA-based control architecture integration for multiple-axis tracking motion systems," in Proc. IEEE/SICE Int. Symp. Syst. Integr., 2011, pp. 591-596.
[25] M. Elloumi, M. Krid, and D. S. Masmoudi, "Hardware implementation of neural-fuzzy network based image denoising approximation," in Proc. 1st Int. Conf. Image Process. Appl. Syst., 2014, pp. 1-5.
[26] J. M. Mendel, Uncertain Rule-Based Fuzzy Logic Systems: Introduction and New Directions. Upper Saddle River, NJ, USA: Prentice-Hall, 2001.
[27] R. Sepulveda, O. Montiel, O. Castillo, and P. Melin, "Embedding a high speed interval type-2 fuzzy controller for a real plant into an FPGA," Appl. Soft Comput., vol. 12, no. 3, pp. 988-998, Mar. 2012.
[28] C. F. Juang and Y. W. Tsao, "A type-2 self-organizing neural fuzzy system and its FPGA implementation," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 6, pp. 1537-1548, Dec. 2008.
[29] M. A. Melgarejo and C. A. Pena-Reyes, "Implementing interval type-2 fuzzy processors," IEEE Comput. Intell. Mag., vol. 2, no. 1, pp. 63-71, Feb. 2007.
[30] R. Sepulveda, O. Montiel, O. Castillo, and P. Melin, "Modeling and simulation of the defuzzification stage of a type-2 fuzzy controller using VHDL code," Control Intell. Syst., vol. 39, no. 1, pp. 33-40, Mar. 2011.
[31] C. F. Juang and K. J. Juang, "Reduced interval type-2 neural fuzzy system using weighted bound-set boundary operation for computation speedup and chip implementation," IEEE Trans. Fuzzy Syst., vol. 21, no. 3, pp. 477-491, Jun. 2013.
[32] M. D. Schrieber and M. Biglarbegian, "Hardware implementation and performance comparison of interval type-2 fuzzy logic controllers for real-time applications," Appl. Soft Comput., vol. 32, pp. 175-188, Jul. 2015.
[33] C. F. Juang and C. Y. Chen, "An interval type-2 neural fuzzy chip with on-chip incremental learning ability for time-varying data sequence prediction and system control," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 1, pp. 216-228, Jan. 2014.
[34] C. Fleischer, W. Waag, Z. Bai, and D. U. Sauer, "Self-learning state-of-available-power prediction for lithium-ion batteries in electrical vehicles," in Proc. IEEE Conf. Veh. Power Propulsion, 2012, pp. 370-375.
[35] H. T. Lin and T. J. Liang, "Prediction of state of charge for Li-Co batteries with fuzzy inference system based fuzzy neural networks," in Proc. 1st Int. Conf. Future Energy Electron., 2013, pp. 891-896.
[36] F. J. Lin and P. H. Chou, "Adaptive control of two-axis motion control system using interval type-2 fuzzy neural network," IEEE Trans. Ind. Electron., vol. 56, no. 1, pp. 178-193, Jan. 2009.
[37] A. S. Weigend and N. A. Gershenfeld, Time Series Prediction: Forecasting the Future and Understanding the Past. Reading, MA, USA: Addison-Wesley, 1994.
[38] S. Singh, "Noise impact on time-series forecasting using an intelligent pattern matching technique," Pattern Recognit., vol. 32, no. 8, pp. 1389-1398, Aug. 1999.

Chia-Feng Juang (M'99-SM'08) received the B.S. and Ph.D. degrees in control engineering from National Chiao-Tung University, Hsinchu, Taiwan, in 1993 and 1997, respectively. Since 2001, he has been with the Department of Electrical Engineering, National Chung-Hsing University (NCHU), Taichung, Taiwan, where he became a Full Professor in 2007 and has been a Distinguished Professor since 2009. He has authored or coauthored 7 book chapters, over 90 journal papers (including over 50 IEEE journal papers), and over 100 conference papers. His current research interests include computational intelligence (CI), chip implementation of CI techniques, intelligent control, computer vision, and evolutionary robots.
Dr. Juang received the Youth Automatic Control Engineering Award from the Chinese Automatic Control Society (CACS), Taiwan, in 2006; the Outstanding Youth Award from the Taiwan System Science and Engineering Society, Taiwan, in 2010; the Excellent Research Award from NCHU, Taiwan, in 2010; the Outstanding Youth Award from the Taiwan Fuzzy Systems Association, Taiwan, in 2014; and the Outstanding Automatic Control Engineering Award from CACS, Taiwan, in 2014. He is an Associate Editor of the IEEE TRANSACTIONS ON FUZZY SYSTEMS, the IEEE TRANSACTIONS ON CYBERNETICS, the Asian Journal of Control, and the Journal of Information Science and Engineering, and an Area Editor of the International Journal of Fuzzy Systems.

Kai-Jie Juang received the B.S. degree in automatic control engineering from Feng Chia University, Taichung, Taiwan, in 2010. He is currently working toward the M.S. degree in the Department of Electrical Engineering, National Chung-Hsing University, Taichung, Taiwan. His research interests include type-2 fuzzy systems, neural fuzzy systems, and fuzzy chips.
