The synthetic or artificial neuron, which is a simple model of the biological neuron,
was first proposed in 1943 by McCulloch and Pitts. It consists of a summing function
with an internal threshold, and "weighted" inputs as shown below.
The weight value of the connection or link carrying signals from a neuron i to a neuron j is termed w_ij.

[Figure: a link of weight w_ij carrying a signal from neuron i to neuron j; the arrow indicates the direction of signal flow]
Transfer functions
One of the design issues for ANNs is the type of transfer function used to compute the
output of a node from its net activation. Among the popular transfer functions are:
Step function
Signum function
Sigmoid function
Hyperbolic tangent function
In the step function, the neuron produces an output only when its net activation reaches a minimum value, known as the threshold T. For a binary neuron i, whose output is a 0 or 1 value, the step function can be summarised as:

output_i = 0 if activation_i < T
           1 if activation_i >= T
474724368.doc 1
Another transfer function is the signum function:

output_i =  1 if activation_i > 0
            0 if activation_i = 0
           -1 if activation_i < 0
The sigmoid transfer function produces a continuous value in the range 0 to 1. It has the form:

output_i = 1 / (1 + e^(-gain · activation_i))

The parameter gain is determined by the system designer. It affects the slope of the transfer function around zero. The multilayer perceptron uses the sigmoid as the transfer function.
A variant of the sigmoid transfer function is the hyperbolic tangent function. It has the form:

output_i = (e^u - e^(-u)) / (e^u + e^(-u))

where u = gain · activation_i. This function has a shape similar to the sigmoid (shaped like an S), with the difference that the value of output_i ranges between -1 and 1.
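The four transfer functions described above can be sketched as follows (a minimal illustration in Python; the function names, the default gain of 1.0, and the threshold T = 0 are illustrative choices, not from the text):

```python
import math

def step(activation, T=0.0):
    # Hard-limiting step: 0 below the threshold T, 1 at or above it.
    return 1 if activation >= T else 0

def signum(activation):
    # Signum: -1, 0 or 1 depending on the sign of the activation.
    return (activation > 0) - (activation < 0)

def sigmoid(activation, gain=1.0):
    # Continuous output in (0, 1); gain controls the slope around zero.
    return 1.0 / (1.0 + math.exp(-gain * activation))

def tanh_transfer(activation, gain=1.0):
    # S-shaped like the sigmoid, but the output ranges over (-1, 1).
    u = gain * activation
    return (math.exp(u) - math.exp(-u)) / (math.exp(u) + math.exp(-u))
```

Note that sigmoid(0) = 0.5 while tanh_transfer(0) = 0, matching the midpoints of the two S-shaped curves.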
[Figure: functional form of the transfer functions — the step function (output 0 below the threshold T, 1 above), the signum function, the sigmoid (rising from 0 to 1, passing through 0.5 at zero activation), and the hyperbolic tangent (rising from -1 to 1, passing through 0 at zero activation)]
Each of the perceptrons is used to identify small, linearly separable sections of the inputs. The outputs of these perceptrons are then combined by another perceptron to produce the final output.
The hard-limiting (step) function used for producing the output prevents information on the real inputs from flowing on to inner neurons. To solve this problem, the step function is replaced with a continuous function, usually the sigmoid function.
The Architecture of the Multilayer Perceptron
In a multilayer perceptron, the neurons are arranged into an input layer, an output
layer and one or more hidden layers.
The learning rule for the multilayer perceptron is known as "the generalised delta
rule" or the "backpropagation rule".
The generalised delta rule iteratively calculates an error function for each input pattern and backpropagates the error from one layer to the previous one. The weights for a particular node are adjusted in direct proportion to the error in the units to which it is connected.
The error function E_p is defined to be proportional to the square of the difference t_pj - o_pj. The net activation of a unit j for an input pattern p is the weighted sum of its inputs:

net_pj = Σ_i w_ij o_pi    (2)

The output from each unit j is determined by the non-linear transfer function f_j:

o_pj = f_j(net_pj)
The delta rule implements weight changes that follow the path of steepest descent on a surface in weight space. The height of any point on this surface is equal to the error measure E_p. This can be demonstrated by showing that the derivative of the error measure with respect to each weight is proportional to the weight change dictated by the delta rule, with a negative constant of proportionality, i.e.,

Δ_p w_ij ∝ -∂E_p / ∂w_ij
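As a minimal sketch of this update rule, the following Python fragment applies gradient descent to a single sigmoid unit on one training pattern (the learning rate eta, the initial weights, and the example pattern are made-up illustrative values):

```python
import math

def sigmoid(x):
    # Sigmoid transfer function with unit gain.
    return 1.0 / (1.0 + math.exp(-x))

def delta_rule_step(weights, inputs, target, eta=0.5):
    # Forward pass: net_pj = sum_i w_ij * o_pi, then o_pj = f(net_pj).
    net = sum(w * o for w, o in zip(weights, inputs))
    out = sigmoid(net)
    # E_p is proportional to (t_pj - o_pj)^2; for the sigmoid, its negative
    # gradient w.r.t. w_ij works out to (t - o) * out * (1 - out) * o_pi.
    delta = (target - out) * out * (1.0 - out)
    # Move each weight a small step in the steepest downward direction.
    return [w + eta * delta * o for w, o in zip(weights, inputs)]

weights = [0.2, -0.4]
for _ in range(1000):
    weights = delta_rule_step(weights, [1.0, 0.5], 1.0)
```

After repeated applications, the unit's output for the pattern approaches the target value of 1.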
Multilayer Perceptrons as Classifiers
Let us consider a two-layer perceptron with two units in the input layer. If one unit is set to respond with a 1 if the input is above its decision line, and the other responds with a 1 if the input is below its decision line, the second layer produces a 1 if its input is above line 1 and below line 2.
[Figure: two decision lines, line 1 and line 2, bounding the region classified as 1]
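The construction described above can be sketched with step-function units (the two decision lines y = x and y = x + 2, and the unit weights, are arbitrary illustrative choices):

```python
def step(activation, T=0.0):
    # Hard-limiting step: 0 below the threshold T, 1 at or above it.
    return 1 if activation >= T else 0

def between_lines(x, y):
    # First-layer unit: fires when the point is above line 1 (y = x).
    above_line_1 = step(y - x)
    # First-layer unit: fires when the point is below line 2 (y = x + 2).
    below_line_2 = step((x + 2) - y)
    # Second-layer unit: fires only when both first-layer units fire,
    # i.e. an AND implemented with weights (1, 1) and threshold 2.
    return step(above_line_1 + below_line_2, T=2)

# Points between the two lines are classified as 1; points outside as 0.
```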
A three-layer perceptron can therefore produce arbitrarily shaped decision regions and is capable of separating any classes. This statement is referred to as the Kolmogorov theorem.
The energy E_p is a function of the input and the weights. For a given pattern, E_p can be plotted against the weights to give the so-called energy surface. The energy surface is a landscape of hills and valleys, with points of minimum energy corresponding to wells and maximum energy found on peaks.
The generalised delta rule aims to minimise Ep by adjusting weights so that they
correspond to points of lowest energy. It does this by the method of gradient descent
where the changes are made in the steepest downward direction.
All possible solutions are depressions in the energy surface, known as basins of
attraction.
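Gradient descent on the energy surface can be illustrated with a one-dimensional toy example (the quadratic surface E(w) = (w - 3)^2 and the learning rate are invented for this sketch; a real energy surface may have many basins of attraction):

```python
def energy(w):
    # A toy energy surface with a single well at w = 3.
    return (w - 3.0) ** 2

def gradient(w):
    # dE/dw for the surface above.
    return 2.0 * (w - 3.0)

w = 0.0
eta = 0.1
for _ in range(100):
    # Step in the steepest downward direction: delta_w = -eta * dE/dw.
    w -= eta * gradient(w)

# w converges toward the bottom of the well at w = 3.
```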
Learning Difficulties in Multilayer Perceptrons
Occasionally, the multilayer perceptron fails to settle into the global minimum of the energy surface and instead finds itself in one of the local minima. This is due to the gradient descent strategy followed. A number of alternative approaches can be taken to reduce this possibility.
The following two features characterise multilayer perceptrons and artificial neural
networks in general. They are mainly responsible for the "edge" these networks have
over conventional computing systems.
Generalisation
Neural networks are capable of generalisation; that is, they classify an unknown pattern with other known patterns that share the same distinguishing features. This means noisy or incomplete inputs can still be classified, because of their similarity to pure and complete inputs.
Fault Tolerance
Neural networks are highly fault tolerant. This characteristic is also known as
"graceful degradation". Because of its distributed nature, a neural network keeps on
working even when a significant fraction of its neurons and interconnections fail.
Also, relearning after damage can be relatively quick.
Applications of Multilayer Perceptrons
Speech synthesis
A very well known use of the multilayer perceptron is NETtalk [1], a text-to-speech
conversion system, developed by Sejnowski and Rosenberg in 1987.
It consists of 203 input units, 120 hidden units, and 26 output units with over 27000
synapses. Each output unit represents one basic unit of sound, known as a phoneme.
Context is utilised in training by presenting seven successive letters to the input layer; the net learns to pronounce the middle letter. The system achieved 90% correct pronunciation on the training set (80-87% on unseen text), and proved resistant to damage, displaying graceful degradation.
Multilayer perceptrons are also being used for speech recognition in voice-activated control systems.
Financial applications
Examples include bond rating, loan application evaluation and stock market
prediction.
Bond rating involves categorising the bond issuer's capability. There are no hard and fast rules for determining these ratings. Statistical regression is inappropriate because the factors to be used are not well defined. Neural networks trained with backpropagation have consistently outperformed standard statistical techniques [2].
Pattern Recognition
For many of the applications of neural networks, the underlying principle is that of
pattern recognition.
A network for target identification from sonar echoes has been developed. Given only a day of training, the net produced 100% correct identification of the target, compared to 93% scored by a Bayesian classifier.
Networks have been applied to the problems of aircraft identification, and to terrain
matching for automatic navigation.
Limitations of Multilayer Perceptrons
1. Large number of iterations required for learning; not suitable for real-time learning.
2. No guaranteed solution.
3. Scaling problem: they do not scale up well from small research systems to larger real systems, and both too many and too few units slow down learning.
The question one might ask at this point is: does an effective system need to mimic nature exactly?
REFERENCES
Beale, R., & Jackson, T., Neural Computing: An Introduction, Bristol: Hilger, 1990.
(Contains full derivation of the generalised delta rule. Available at Murdoch library.)