
LAB 5 IMPLEMENTATION TUTORIAL

CE889 ARTIFICIAL NEURAL NETWORKS


Tutorial on how to implement a Feed-Forward
Backpropagation Neural Network

Head of Module: Professor Hani Hagras

Lab Assistant: Aysenur Bilgin (abilgin@essex.ac.uk)
Lab Assistant: Andrew Starkey (astark@essex.ac.uk)

Neural Network Architecture

[Figure: a three-layer feed-forward network. Inputs x_j feed the hidden nodes h_i through the hidden weights wh_ij, and the hidden nodes feed the output nodes y_k through the output weights w_ki. The diagram also labels the activation function, the derivative of the activation function, the delta weights, the local and hidden gradients, and the four stages of the algorithm: feed-forward, error calculation, backpropagation, and weight updating.]
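To make the diagram's notation concrete, a minimal sketch of a common choice of activation function is given below. Python with NumPy and the logistic sigmoid are assumptions here, not requirements; the lecture notes define the exact activation used in the module.

```python
import numpy as np

def sigmoid(v):
    """Logistic activation function: phi(v) = 1 / (1 + exp(-v))."""
    return 1.0 / (1.0 + np.exp(-v))

def sigmoid_derivative(v):
    """Derivative of the logistic function: phi'(v) = phi(v) * (1 - phi(v))."""
    s = sigmoid(v)
    return s * (1.0 - s)
```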

Some Tips on Implementation

Write down the mathematical formulas from the theory.

Decide on your notation and index variables (i, j, k), note them down for your
reference, and stick to them throughout your implementation.

Consider your needs and plan your data types.

Define your variables, such as the number of inputs, the parameters in the formulas, etc. (a sketch follows this list).

Study your network model, draw if necessary, and see how formulas apply.

Think through the steps in the algorithm and make sure you understand the
reasoning behind each one.

Work out how to divide the complex task into smaller ones.
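As an illustration of the last few tips, the parameters can be gathered in one place before any of the formulas are coded. This sketch assumes Python; the names and default values are illustrative, not prescribed by the module:

```python
from dataclasses import dataclass

@dataclass
class NetworkConfig:
    n_inputs: int = 2           # number of input nodes (index j)
    n_hidden: int = 4           # number of hidden nodes (index i)
    n_outputs: int = 1          # number of output nodes (index k)
    learning_rate: float = 0.1  # constant in the weight-update formulas
    momentum: float = 0.9       # constant for the momentum term
    max_epochs: int = 1000      # stopping criterion
```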

Backpropagation Learning Algorithm


1. Set the parameters of the NN
a. Network size (number of hidden neurons)
b. Learning rates and other parameters (constants in the formulas)

2. Initialize
a. All inputs and outputs from the training data you have collected
b. Weights randomly in [-1, 1] (to reduce the number of epochs required for
training)
c. Deltas, errors to zero

3. Train for all training samples


a. Forward calculation: Given a set of inputs, calculate the network outputs
b. Calculate error for each output
c. Backpropagation and weight updating:
I. Calculate the local gradients and delta output weights in order to update the
output weights
II. In a similar fashion, calculate the hidden gradients and delta hidden weights in
order to update the hidden weights

4. Calculate the epoch error and go back to step 3 to continue training until
a stopping criterion is satisfied, e.g. a maximum number of epochs (a sketch of the full loop follows this list)
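A minimal end-to-end sketch of steps 1-4 is given below. Python with NumPy, the logistic sigmoid activation, and the randomly generated placeholder data (standing in for the training samples you have collected) are all assumptions for illustration; the bias and momentum terms discussed under "Further Thinking" are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Step 1: parameters of the NN (illustrative values, not prescribed) ---
n_inputs, n_hidden, n_outputs = 2, 4, 1
eta = 0.1            # learning rate
max_epochs = 1000
target_error = 1e-3  # stopping criterion on the mean epoch error

# --- Step 2: initialisation ---
# Placeholder training data; replace with the samples you have collected.
X = rng.random((100, n_inputs))
Y = rng.random((100, n_outputs))
wh = rng.uniform(-1, 1, (n_hidden, n_inputs))   # hidden weights wh_ij
wo = rng.uniform(-1, 1, (n_outputs, n_hidden))  # output weights w_ki

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

for epoch in range(max_epochs):
    epoch_error = 0.0
    # --- Step 3: train on every sample ---
    for x, y in zip(X, Y):
        # a. Forward calculation: inputs -> hidden -> outputs
        h = sigmoid(wh @ x)    # hidden outputs h_i
        out = sigmoid(wo @ h)  # network outputs y_k
        # b. Error for each output
        e = y - out
        epoch_error += 0.5 * np.sum(e ** 2)
        # c. Gradients are computed before either update, so both use the old weights.
        grad_out = e * out * (1.0 - out)              # I. local gradients
        grad_hid = h * (1.0 - h) * (wo.T @ grad_out)  # II. hidden gradients
        wo += eta * np.outer(grad_out, h)  # delta output weights
        wh += eta * np.outer(grad_hid, x)  # delta hidden weights
    # --- Step 4: epoch error and stopping criterion ---
    if epoch_error / len(X) < target_error:
        break
```

Note that this sketch updates the weights after every sample (online learning); a batch variant would accumulate the delta weights over the whole epoch and apply them once.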

Further Thinking

Output normalization: Why do we need to normalize the output? When? How?
o Specify the minimum and maximum values in your training data
o Apply (x - min)/(max - min) (see the sketch after this list)

Keeping track of previous weights in the network: When to store?
o Before updating the weight

Remember the bias and the momentum term: What are they for?
o The bias shifts the activation function, which is needed for successful learning
o The momentum term speeds up the convergence of the weight updates, and hence speeds up the training
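A minimal sketch of the normalization formula and a momentum-style update follows. Python with NumPy, the function names, and the value alpha = 0.9 are assumptions for illustration, not module requirements:

```python
import numpy as np

def normalize(data):
    """Min-max scaling per column: (x - min) / (max - min) maps values into [0, 1]."""
    dmin = data.min(axis=0)
    dmax = data.max(axis=0)
    return (data - dmin) / (dmax - dmin), dmin, dmax

def denormalize(scaled, dmin, dmax):
    """Map normalized network outputs back to the original range."""
    return scaled * (dmax - dmin) + dmin

def momentum_update(weights, delta, prev_delta, alpha=0.9):
    """Blend the previous delta weights into the current update.

    The previous delta must be stored before the weights change,
    which answers the 'when to store?' question above.
    """
    delta = delta + alpha * prev_delta
    return weights + delta, delta  # new weights, and the delta to store for next time

# Example usage with toy values.
data = np.array([[2.0, 10.0], [4.0, 20.0], [6.0, 40.0]])
scaled, dmin, dmax = normalize(data)  # all values now lie in [0, 1]
w = np.zeros((1, 2))
w, prev = momentum_update(w, delta=np.full((1, 2), 0.1), prev_delta=np.zeros((1, 2)))
```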

Useful To Do

Look after your data structures and the memory they use; free them as and when
necessary.

Output the error-per-epoch after training and analyse how the error develops
over the epochs (see the sketch after this list).

Improve the network performance by experimenting with the parameters.
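For the error-per-epoch point, one simple approach (assuming the errors were collected in a Python list during training; the file name is illustrative) is to dump them to a CSV file so the curve can be inspected in any plotting tool:

```python
import csv

epoch_errors = [0.42, 0.31, 0.18, 0.09]  # placeholder values collected during training

# One row per epoch, so the error curve can be plotted and analysed afterwards.
with open("epoch_errors.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["epoch", "error"])
    for epoch, err in enumerate(epoch_errors, start=1):
        writer.writerow([epoch, err])
```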

References

CE889 Lecture notes 3&4 <http://courses.essex.ac.uk/ce/ce889/>

CE889 Lecture notes 5&6 <http://courses.essex.ac.uk/ce/ce889/>
