
Proportional Integral Tuning Rules for Adaptive Neural Networks

Steven C. Rogers
Institute for Scientific Research, Fairmont, WV, USA 26554, srogers@isr.us

Abstract

Tuning rules for adaptive neural networks have featured Lyapunov-based approaches in recent years. Although these approaches have some desirable qualities, they lead to complex tuning procedures. To take fuller advantage of the power of adaptive neural networks, tuning rules that are less complex and less computationally expensive are desirable. In addition, tuning rules should be simple and provide rapid, reliable convergence. In this paper a proportional-integral approach to adaptive neural network tuning rules is studied. Simulation on a nonlinear system is used as a demonstration.

Keywords: Adaptive Neural Networks, Tuning Rules, Adaptive Radial Basis Function

Introduction

The need to ensure good, reliable convergence in neural network applications presents challenges for control and signal processing engineers. Adaptive neural networks are used in system identification and control, among other applications. Achieving quality tracking in control applications, or accurate identification, requires responsive adaptation rules. Often, rapid adaptation must be sacrificed to obtain stable responses. Basic tuning rules in current use are based on integral tuning, such as the MIT rule or the Widrow-Hoff approach. These generally do not capture or track higher-frequency signals or dynamics. Lyapunov-based tuning rules [1] are commonly used at present; they satisfy stability requirements but also suffer from poor tracking ability. To the author's knowledge, simple control principles have not been applied to the design of update rules. If they are, a rich set of tuning options opens up that may satisfy both stability and tracking requirements. The paper is organized as follows: adaptive radial basis function neural networks, Lyapunov-based tuning rules, feedback control-based tuning rules, and simulation of an adaptive linear combiner with a proportional-integral (PI) tuning rule.

Adaptive Radial Basis Function Neural Networks

An RBFN (radial basis function network) is a two-layer network whose outputs are a linear combination of the
hidden layer functions. They are frequently used for system identification and control because of their good
properties for on-line or sequential adaptation. The following figure shows the general structure of a radial basis
function neural network [1].
The basis function is generally a Gaussian. Mathematically, the output of an RBFN is

    f(x) = w_0 + \sum_{k=1}^{h} w_k \exp( -\|x - \mu_k\|^2 / \sigma_k^2 )

where x is the input vector of the network, h is the total number of hidden neurons, and μ_k and σ_k refer to the center and width of the kth hidden neuron. ‖·‖ is the Euclidean norm. The function f(·) is the output of the RBFN, which represents the network approximation to the actual output. The coefficient w_k is the connection weight of the kth hidden neuron to the output neurons, and w_0 is the bias term.
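As a concrete illustration, the output equation above can be sketched in a few lines. This is a minimal sketch; Python/NumPy stands in here for the paper's MATLAB environment, and all function and variable names are illustrative:

```python
import numpy as np

def rbfn_output(x, centers, widths, w, w0):
    """Forward pass of a radial basis function network.

    x       : input vector, shape (d,)
    centers : hidden-neuron centers mu_k, shape (h, d)
    widths  : hidden-neuron widths sigma_k, shape (h,)
    w       : connection weights w_k, shape (h,)
    w0      : bias term
    """
    # Gaussian basis: exp(-||x - mu_k||^2 / sigma_k^2) per hidden neuron
    dist2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-dist2 / widths ** 2)
    return w0 + w @ phi
```

With an input sitting exactly on one center, that neuron contributes its full weight (its basis value is 1) while the others are attenuated by their distance.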


Lyapunov-based Tuning Rules

Any good identification scheme that utilizes the RBFN should satisfy two criteria: 1) the parameters of the RBFN are tuned properly to satisfy stability and performance needs, and 2) the parameter adaptation law should be efficient enough to allow real-time operation. A thorough discussion of Lyapunov-based tuning rules is given in [1] and its references. The RAN (resource allocating network) [2] was developed to tune all the RBF parameters. Other tuning rules adjusted only the connection weight vector and left the center and width vectors untuned [3]. It was determined in [2] that the performance of the weight-only tuning approach was generally poor unless the designer had unusually good insight into the locations of the centers and widths.
The basic tuning rule is [1]:

    \theta(n+1) = \theta(n) + \eta \phi(n) P e(n)

where θ is the vector of parameters to be tuned, including the connection weights, centers, and widths; η is the user-selected learning rate (a positive scalar); φ(n) = \partial g / \partial\theta evaluated at θ(n) is the gradient of the function g(·) with respect to the parameter vector; and P is the solution of the Lyapunov-derived equation

    Q = -(A^T P + P A)

Q is a user-selected positive definite matrix; A is a user-selected Hurwitz (stable) matrix. e(n) is the error vector. Additional discussion is in [1].
Another common approach [4] for tuning rules adds a damping (e-modification) term to the weight update:

    w(n+1) = w(n) + \eta \phi(n) e(n) - \eta k \|e(n)\| w(n)
           = (1 - \eta k \|e(n)\|) w(n) + \eta \phi(n) e(n)

The 2nd term moves the discrete pole away from the unit circle, i.e., away from being a pure integrator. Although this may slow down convergence, it improves stability and should remove oscillations.
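A one-line discrete sketch makes the pole shift visible (written here in the standard e-modification form, since the exact symbols of [4] did not survive extraction; Python stands in for MATLAB):

```python
import numpy as np

def weight_update(w, phi, e, eta, k):
    """One step of the damped weight update
         w(n+1) = w(n) + eta*phi*e - eta*k*|e|*w(n).
    Without the k-term the weight dynamics are a pure integrator
    (discrete pole at z = 1); the damping term moves the pole to
    z = 1 - eta*k*|e|, inside the unit circle whenever e != 0."""
    return w + eta * e * phi - eta * k * abs(e) * w
```

Setting phi = 0 isolates the homogeneous dynamics: the weights decay by the factor (1 - eta*k*|e|) each step instead of holding their value as a pure integrator would.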

Feedback Control-Based Tuning Rules

If the update mechanism is considered a plant to be controlled, control system principles may be applied. The following figure of a linear combiner illustrates the idea. The basic linear combiner equations are:

    e = y - \hat{y}
    \hat{y} = w^T \phi
    \dot{w} = \eta \phi e

where e is the error to be driven to zero, y is the actual signal to be tracked, ŷ is the estimate, φ is an appropriate set of plant measurements, w is the connection weight vector, and η is the learning rate. The equations for an RBFN may be diagrammed in an analogous way.
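In discrete time these combiner equations are exactly the Widrow-Hoff (LMS) update mentioned in the introduction. A minimal sketch (Python/NumPy as a stand-in; names are illustrative):

```python
import numpy as np

def lms_track(phi_seq, y_seq, eta):
    """Discrete-time linear combiner: at each step
       yhat = w^T phi,  e = y - yhat,  w <- w + eta * phi * e."""
    w = np.zeros(phi_seq.shape[1])
    errors = []
    for phi, y in zip(phi_seq, y_seq):
        e = y - w @ phi          # tracking error
        w = w + eta * phi * e    # integral (pure gradient) weight update
        errors.append(e)
    return w, np.array(errors)
```

On a noise-free linear target the weights converge to the true combination and the error decays toward zero, which is the behavior the standard learning-rate structure provides.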

The bottom part of the figure shows how a control structure may be inserted into the linear combiner. The simplest control structure is the standard learning rate η. A proportional-integral (PI) structure is the next simplest controller. It has the form

    K_p (s + a) / s

which gives another integrator plus a zero. Note also that K_p may be combined with η. This structure has been used to improve state estimators. Obviously, any control structure may be used, including lead-lag, PID, servo-type PIDs, etc.
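A discrete realization of the PI structure can be sketched as follows. This is an assumption for illustration: the discretization is not specified here, so the controller K_p(s + a)/s acting on the gradient signal u = φe is realized simply as w = K_p·u + K_p·a·Σu, and all names are illustrative:

```python
import numpy as np

def pi_lms_track(phi_seq, y_seq, kp, a, dt=1.0):
    """Linear combiner with a PI tuning structure Kp*(s + a)/s.
    The controller input is the gradient signal u = phi*e; the
    proportional path kp*u adds the zero, the accumulated path
    kp*a*sum(u) is the usual integrator."""
    w = np.zeros(phi_seq.shape[1])
    integ = np.zeros(phi_seq.shape[1])
    errors = []
    for phi, y in zip(phi_seq, y_seq):
        e = y - w @ phi              # tracking error with current weights
        u = phi * e                  # gradient signal into the controller
        integ = integ + a * u * dt   # integral path: (Kp*a/s) contribution
        w = kp * (u + integ)         # w = Kp*u + Kp*a*integral(u)
        errors.append(e)
    return w, np.array(errors)
```

Compared with the integral-only update, the proportional path responds immediately to the current gradient signal, which is what lets the PI structure follow faster-varying targets.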

Simulation

Two linear combiner tuning strategies were compared by using a MATLAB optimization procedure. A signal was generated, and a tracking linear combiner was implemented using the approach from [4] with a PI structure added. The signal shown in the following plot was generated from a linear Navion model taken from [5]. The signal being tracked is a pitch-rate motion in rad/sec, i.e., e = q - q̂. The linear combiner has one set of inputs of roll, pitch, yaw, and elevation rates with a delay, and another set without a delay, making an input vector of 8 signals. The pitch command was a doublet with a frequency of ~1/3 Hz. The PI strategy could easily keep up with the signal, but the standard routine could not be optimized to a realistic set of gains. This is due to the inherently sluggish response of an integral approach to weight tuning. μ ≈ 0.05 is the proportional gain combined with the learning rate. The zero is located at ~1.1 rad/sec; it will attract any pole near the origin that might tend to oscillate.



Conclusions

An approach has been shown toward developing a generic RBF tuning strategy. In the above test the PI approach
was shown to be superior on a linear combiner. Future studies will incorporate different types of signal
characteristics, different control structures, implementation on a complete RBFN, and tests on feedback control
loops for a more complete understanding of the tuning strategy.

References

1. N. Sundararajan et al., Fully Tuned Radial Basis Function Neural Networks for Flight Control, Kluwer Academic Publishers, 2002, ISBN 0-7923-7518-1.
2. J. C. Platt, "A Resource Allocating Network for Function Interpolation," Neural Computation, v. 3, 1991, pp. 213-225.
3. S. Chen and S. A. Billings, "Neural networks for system identification," Int. J. Control, v. 56, 1992, pp. 319-346.
4. F. L. Lewis et al., Neuro-Fuzzy Control of Industrial Systems with Actuator Nonlinearities, Society for Industrial and Applied Mathematics, 2002, ISBN 0-89871-505-9.
5. A. E. Bryson, Applied Linear Optimal Control, Cambridge University Press, 2002, ISBN 0-521-81285-2.
