1. Introduction.
Evolutionary algorithms (EAs) have been applied to evolve network weights and topologies (structures) simultaneously [1, 2]. EC methodologies are sometimes used in combination with other methodologies. For example, Montana and Davis [3] described the use of evolutionary algorithms such as GAs to train NNs. Instead of replacing the entire population each generation, only one or two individuals are produced, which then have to compete for inclusion in the new population. The network weights were represented as real rather than binary numbers. The Montana and Davis paradigm includes an option for improving population members using back-propagation (BP). This hill-climbing capability, however, did not give better results than using GAs alone [1].

Several evolutionary computation (EC) techniques for improving the structure determination of NNs have been developed in the literature. Yao [2] used a new evolutionary system, EPNet, for evolving NNs. The EA used in EPNet is based on Fogel's evolutionary programming. EPNet evolves NN architectures and connection weights (including biases) simultaneously in order to reduce the noise in fitness evaluation. The parsimony of the evolved NNs is encouraged by preferring node/connection deletion to addition. EPNet has been tested on a number of benchmark problems in machine learning and NNs; the experimental results show that EPNet can produce very compact NNs with good generalization ability in comparison with other algorithms. Peng [7] gives a new method to construct causal networks using EP: the evolutionary algorithm is first used to learn the activation values of the nodes, and then it is used again to acquire both the topology and the weights of the causal networks simultaneously. Some researchers have used non-EA-based methods for determining NN structures [4, 5, 6]. Fujita [4] proposed a statistical estimation of the number of hidden units for FNNs built by adding hidden units one by one. He first derives the decrease in output error per hidden unit based on the least-squares approximation, and then theoretically estimates the expected maximum value of this decrease as the largest value among a large number of samples from an ideal distribution of the hidden unit. In other words, the best hidden unit is selected out of a finite number of candidate units that realize various hidden-unit functions. The expected largest value depends not only on the number of candidate units but also on the number of learning data sets, which reflects the complexity and difficulty of the learning task. The expected largest decrease per hidden unit is then used to estimate the total number of hidden units required to reduce the output error to a desired value. De-Shuang [5] makes an exhaustive analysis of the structural properties of FNNs from the viewpoint of optimization, i.e., minimizing the least-squares error cost function (objective function) defined at the network outputs. He discusses the two cases of adding a hidden node and adding an input training sample one by one, and analyzes how the objective function changes in each case. In addition, for the case of adding a hidden unit, he presents an optimization method for the hidden nodes' output vector. The problem of classification performance and structure decision of MLPNNs was discussed briefly in the paper of Dianzhi [6].
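As a concrete illustration of GA-based weight training of the kind Montana and Davis describe, the sketch below evolves real-coded weight vectors for a tiny fixed-topology network with steady-state (one-child-per-generation) replacement. The topology, operators and every parameter here are illustrative assumptions, not the authors' implementation.

```python
import math
import random

random.seed(0)

# A minimal real-coded GA training the weights of a tiny fixed 1-2-1
# feed-forward network (7 weights including biases).
N_W = 7

def forward(w, x):
    h1 = math.tanh(w[0] * x + w[1])
    h2 = math.tanh(w[2] * x + w[3])
    return w[4] * h1 + w[5] * h2 + w[6]

# Toy task: approximate sin(x) on [0, 3.1]
DATA = [(x / 10.0, math.sin(x / 10.0)) for x in range(32)]

def mse(w):
    return sum((forward(w, x) - y) ** 2 for x, y in DATA) / len(DATA)

pop = [[random.uniform(-1, 1) for _ in range(N_W)] for _ in range(30)]
for gen in range(300):
    pop.sort(key=mse)
    # produce one child per generation; it must beat the worst member
    # to enter the population (steady-state replacement)
    p1, p2 = random.sample(pop[:10], 2)
    child = [(a + b) / 2 + random.gauss(0, 0.1) for a, b in zip(p1, p2)]
    if mse(child) < mse(pop[-1]):
        pop[-1] = child

best_err = min(mse(w) for w in pop)
print(round(best_err, 3))
```

Because a child only enters the population when it beats the current worst member, the best error is non-increasing over generations, which is the hill-climbing flavour discussed above.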
2.2. The Simultaneous Evolution of Architectures and Weights.

4. Evolving Both Structure and Weight of the Neuro-Controller.
3. Problem Description.
Neural networks have been widely used in pattern recognition, forecasting, optimization, control of dynamic systems and language learning, but their mapping capability depends on their structure, and there is not yet a problem-independent way to choose a good network topology [7]. Generally, the numbers of input and output nodes are fixed by the definition of the problem. The number of hidden nodes and hidden layers, however, can be varied, and it is known that the number of network weights increases linearly with the number of hidden layers and as the square of the number of hidden nodes [8].
Fig-1. Chromosome representation: the weight genes (W2, ..., W5), any unused positions, and the number of hidden nodes nh at the end of each chromosome in the population pop.
Step 3: First take the number of hidden nodes nh (in this work it is considered to be between 2 and 8 nodes) as an integer from the end of each chromosome in the population; then partially train each network on the training set to evaluate the objective function (for example, the MSE). Calculate the fitness function as in equation (3). The mean square error (MSE) is used as the objective function:

MSE = (1/Np) * SUM[k=1..Np] (y_p(k) - y_m(k))^2   (for SISO plants)   (1)
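A minimal sketch of the Step 3 evaluation, assuming a chromosome laid out as [w1, ..., wn, nh] with the hidden-node count at the end. The reciprocal form of the fitness below is an assumption for illustration; the paper defines its own fitness function.

```python
# Sketch of Step 3: decode nh from the chromosome end and evaluate the
# MSE objective. y_p and y_m stand for the plant and model-reference
# output sequences.

def decode(chromosome, max_weights):
    nh = int(chromosome[-1])          # hidden-node count stored at the end
    assert 2 <= nh <= 8               # range used in this work
    return nh, chromosome[:max_weights]

def mse(y_p, y_m):
    # MSE = (1/Np) * sum_k (y_p(k) - y_m(k))^2 for a SISO plant (eq. 1)
    return sum((p - m) ** 2 for p, m in zip(y_p, y_m)) / len(y_p)

def fitness(y_p, y_m, eps=1e-9):
    return 1.0 / (mse(y_p, y_m) + eps)   # assumed reciprocal fitness

nh, w = decode([0.3, -0.1, 0.7, 0.2, 4], max_weights=4)
print(nh)                                     # → 4
print(round(mse([1.0, 0.5], [1.0, 0.0]), 3))  # → 0.125
```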
The total number of weights for an FNN, not including the bias weights, is:

(n_i + n_o) * n_h + (n_l - 1) * n_h^2   (2)

where n_i is the number of input nodes, n_o the number of output nodes, n_h the number of hidden nodes per layer, and n_l the number of hidden layers.

As mentioned earlier, the number of hidden nodes can be varied; a possible strategy for finding a near-optimal number of hidden nodes is to start with a small number, for example two. Training is then started; when the validation error is seen to have stopped decreasing, training is halted. The number of hidden nodes is then increased by one, the network weights are reinitialized, and training is restarted. The validation error is then compared with its previous value, and the process is repeated until no further improvement is obtained.
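The weight count in equation (2) can be checked with a small helper; the function name and the illustrative numbers below are mine, not the paper's.

```python
def total_weights(n_i, n_o, n_h, n_l):
    # eq. (2): (n_i + n_o)*n_h + (n_l - 1)*n_h**2, biases excluded
    return (n_i + n_o) * n_h + (n_l - 1) * n_h ** 2

# growth is linear in the number of hidden layers and quadratic in the
# number of hidden nodes per layer
print(total_weights(2, 1, 4, 1))   # single hidden layer: (2+1)*4 = 12
print(total_weights(2, 1, 4, 3))   # two extra layers add 2*4**2 = 32 more
```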
FITNESS = 1 / (objective function)   (3)
5. Simulation Results.

Fig-2. The MRAC of Example 1: (a) Optimal Hidden Node Selection. (b) Model-Reference and Plant Output. (c) Control Signal. (d) Output Error. (e) Best MSE.
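The examples that follow all share the same discrete-time MRAC structure, which can be sketched as below. The toy first-order plant, reference model and hand-tuned linear controller are stand-ins (in the paper the controller is the evolved neural network), chosen here so that the plant output tracks the model exactly.

```python
import math

# Generic discrete-time MRAC loop: the controller output u(k) drives the
# plant, and the tracking error y_p - y_m is what the MSE/SAE indices
# measure. Plant, model and controller are toy stand-ins.

def model_ref(ym, r):          # stable first-order reference model
    return 0.6 * ym + 0.4 * r

def plant(yp, u):              # toy stable plant, not one of the paper's
    return 0.5 * yp + u

def controller(yp, r):         # hand-tuned so the closed loop matches
    return 0.1 * yp + 0.4 * r  # the reference model exactly

yp = ym = 0.0
errs = []
for k in range(50):
    r = math.sin(2 * math.pi * k / 25)   # sinusoidal reference input
    u = controller(yp, r)
    yp = plant(yp, u)
    ym = model_ref(ym, r)
    errs.append(abs(yp - ym))

sae = sum(errs)                # sum of absolute tracking errors
print(round(sae, 6))           # → 0.0 (tracking error vanishes)
```

With the controller matched to the model, the closed loop reproduces the reference recursion, so the accumulated error is zero up to floating-point noise; the GA's job in the paper is to find a neural controller that achieves the same effect for nonlinear and multivariable plants.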
Example 1:
The second-order stable nonlinear plant (nonlinear in output and linear in input) can be described by the following difference equation:

y_p(k+1) = [1.5 * y_p(k) * y_p(k-1)] / [1 + y_p^2(k) + y_p^2(k-1)] + sin(y_p(k) + y_p(k-1)) + u(k)   (5)

Example 2:
The plant is described by the difference equation:

y_p(k+1) = ...   (6)

Besides the MSE, the sum of absolute errors (SAE) is also used as an objective function:

SAE = SUM[k] | y_p(k) - y_m(k) |   (7)
Fig-3. The MRAC of Example 2: (a) Optimal Hidden Node Selection. (b) Model-Reference and Plant Output with MSE. (c) Control Signal. (d) Output Error. (e) Best MSE. (f) Model-Reference and Plant Output with SAE. (g) Best SAE.

Example 3:
The gas-turbine plant is described by a 2x2 transfer-function matrix, equation (8), which relates the high-pressure-spool speed and the low-pressure-spool speed to changes in the jet-pressure-nozzle area and the fuel-flow rate. The proposed neuro-genetic controller can be applied to this linear multivariable plant with a sampling time of Ts = 0.1 s, using the complete difference equations of (8). The two-channel model reference (of third order) is chosen to be stable and linear, and is described by the difference equations (9) and (10), respectively [10]:

y_m1(k+1) = ... + y_m1(k-2) + r_1(k)   (9)

y_m2(k+1) = ...   (10)
Fig-4. The MRAC of Example 3: (a) Optimal Hidden Node Selection. (b) First Model-Reference and Plant Output. (c) Second Model-Reference and Plant Output. (d) First Control Signal. (e) Second Control Signal. (f) First Output Error. (g) Second Output Error. (h) Best MSE.

For the previous examples, Table 5.1 shows the optimal node selection by the previously proposed method for the neuro-genetic controller, the output of the controlled plant and the output of the model reference, the control signal, the modeling error, the best MSE against the generations and, finally, the reference input:

r(k) = sin(2*pi*k / 25) + sin(2*pi*k / 10)   (11)

Table 5.1:
Example No. | Optimal Number of Hidden Nodes after 3000 Generations | Output of the Plant / Output of the Model Reference | Control Signal | Output Error | Final Value of MSE after 3000 Generations | Reference Input
Example 1 | - | Fig 2.b | Fig 2.c | Fig 2.d | 9.10E-07 | 0.2
Example 2 | 2 | Fig 3.b | Fig 3.c | Fig 3.d | 4.10E-06 | 0.2
Example 3 | 3 | Fig 4.b, Fig 4.c | Fig 4.d, Fig 4.e | Fig 4.f, Fig 4.g | 1.50E-06 | 0.5, 0.4

6. Discussion.
It is obvious from the contents of Table 5.1 that the neuro-genetic controller could control different plants so that they follow the desired model with acceptable accuracy, and this shows the power of GAs in finding the optimal number of hidden nodes for this controller without reducing its performance.
Fig.2(d), Fig.3(d) and Figs.4((f) and (g)) represent the error between y_p(k) and y_m(k), which is approximately zero.
The best MSE against the generations always goes to zero; this means that the output of the plant tracks the model-reference output; see Fig.3(e) and Figs.4((e) and (h)).
The optimal number of hidden nodes (nh) does not change when the performance index is changed. For example, Fig.3(a) shows the optimal number of hidden nodes for the neuro-controller with both the MSE and SAE performance indices, which is equal to 2 nodes.
The proposed method gives a deterministic rather than heuristic selection method for the structure of NNs.
Figs.4((b) and (c)) show the simulation results of the gas-turbine plant under the control of the neuro-genetic controller. It is evident from these figures that the neuro-genetic controller gives good performance when handling strong loop interaction.
As shown in Fig.2(c), Fig.3(c) and Figs.4((d) and (e)), the generated neural-network control action is a smooth sinusoid without sharp spikes.

7. Conclusions.

References
[2] X. Yao and Y. Liu, A New Evolutionary System for Evolving Artificial Neural Networks, IEEE Transactions on Neural Networks, Vol. 8, No. 3, May 1997.
[3] M. Mitchell, An Introduction to Genetic Algorithms, 1st ed., MIT Press, Cambridge, 1998.
[4] O. Fujita, Statistical Estimation of the Number of Hidden Units for Feedforward Neural Networks, Neural Networks, Vol. 11, pp. 851-859, 1998.
[5] H. De-Shuang, An Analysis of Structure Properties for Image Segmentation, Proc. of ICNN&B'98 (Int. Conference on Neural Networks and Brain), pp. PL463-PL466, Beijing, China, 27-30 Oct. 1998.
[6] Z. Dianzhi and X. Xiquen, Study on Structure Adaptation of Multilayer Feedforward Neural Network, Proc. of ICNN&B'98 (Int. Conference on Neural Networks and Brain), pp. PL496-PL499, Beijing, China, 27-30 Oct. 1998.
[7] G. Peng, An Evolutionary Algorithm to Construct Causal Networks, Proc. of ICNN&B'98 (Int. Conference on Neural Networks and Brain), pp. PL259-PL261, Beijing, China, 27-30 Oct. 1998.
[8] E.J. Rzempoluck, Neural Network Data Analysis Using Simulnet, Springer-Verlag, New York, 1998.
[9] D.T. Pham and D. Karaboga, Design of an Adaptive Fuzzy Logic Controller, in: IEEE, Intelligent Systems Research Laboratory, University of Wales College of Cardiff, Cardiff CF2 1XH, United Kingdom, 1994.
[10] M.S. Ahmed and I.A. Tasadduq, Neural-Net Controller for Nonlinear Plants: Design Approach through Linearization, IEE Proc. Control Theory Appl., Vol. 141, No. 5, 1994.