Neural Networks
*Dr H.K Anasuya Devi, **Vikram Nag Ashoka, **Rakshithkumar N,
*CCE-Faculty, **CCE-Student, Indian Institute of Science, Bangalore-560012,
*E-mail: hkadevi@yahoo.com
1. Introduction
In the 21st century we have made machines
smarter by imparting knowledge to them.
We have made devices smarter by instilling
in them a sense of what the user wants to do,
how the user performs a particular task, and
an understanding of the environment in
which the machine or device is being used.
In the past few years we have gone a step
further and are building products and
services which provide information about
things that we want to know even before we
ask for it. This is possible only because
machines have the capacity to learn the
user's behavior and preferences. We have
developed various techniques to guide
machines in learning, and here we will
explore a technique for replicating one
machine's behavior in another. This falls
within the field of human-computer interaction.
The objective of this paper is to build a
process that can be used to transfer the
expertise of a trained neural network to an
untrained neural network using a feedback
mechanism. Using this approach, we can
reduce both the number of training examples
and the time required to train a neural
network.
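As a rough sketch of this idea (the paper does not specify its feedback mechanism; the single-layer "teacher"/"student" networks, weights, and learning rate below are illustrative assumptions), a trained network can label unlabeled inputs and an untrained network can then be fitted to those outputs, with no hand-labeled training set required:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(w, x):
    # Single-layer network with a sigmoid activation.
    return 1.0 / (1.0 + np.exp(-x @ w))

# Teacher: assume these weights came from prior training.
w_teacher = np.array([1.5, -2.0])

# Student starts untrained.
w_student = np.zeros(2)

x = rng.normal(size=(200, 2))       # unlabeled inputs
y_teacher = forward(w_teacher, x)   # teacher's outputs act as targets

# Fit the student to the teacher's outputs by gradient descent
# on a squared-error loss.
lr = 0.5
for _ in range(2000):
    y_student = forward(w_student, x)
    grad = x.T @ ((y_student - y_teacher)
                  * y_student * (1 - y_student)) / len(x)
    w_student -= lr * grad

# The student's outputs should now track the teacher's closely.
mse = np.mean((forward(w_student, x) - y_teacher) ** 2)
```

The point of the sketch is that the student never sees ground-truth labels, only the trained network's responses, which is what lets the transfer use fewer examples and less time than training from scratch.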
2. Methodology
2.1 Existing techniques for training neural networks
There are many alternative learning methods
and variants for neural networks. In the case
of feedforward multilayer networks, the first
successful algorithm was classical
backpropagation (Rumelhart et al., 1986).
Although this approach is very useful for
training this kind of neural network, its
main drawback is its slow learning speed.
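For reference, classical backpropagation amounts to a forward pass followed by propagating the error gradient backwards through each layer. A minimal sketch for a one-hidden-layer network on the XOR problem (layer sizes, seed, and learning rate are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: the XOR function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights
W2 = rng.normal(size=(4, 1))   # hidden -> output weights
lr = 1.0

losses = []
for _ in range(5000):
    # Forward pass.
    H = sigmoid(X @ W1)
    out = sigmoid(H @ W2)
    losses.append(float(np.mean((out - Y) ** 2)))
    # Backward pass: propagate the error gradient layer by layer.
    d_out = (out - Y) * out * (1 - out)
    d_hidden = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out
    W1 -= lr * X.T @ d_hidden
```

Even on this four-example problem, thousands of iterations are needed, which illustrates the slow-learning drawback the text refers to.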
2.1.1 Various attempts made to increase
the learning speed of Backpropagation
algorithm
In order to solve the problem of slow
learning speed, several variations of the
initial algorithm and also new methods have
been proposed.
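One well-known variation of this kind is adding a momentum term to the weight update (the specific variants the paper has in mind are not listed here; momentum is used purely as an illustrative example). On an ill-conditioned quadratic loss, momentum typically reaches a given tolerance in noticeably fewer iterations than plain gradient descent:

```python
import numpy as np

A = np.diag([1.0, 25.0])   # ill-conditioned quadratic: plain descent is slow

def gradient(w):
    # Gradient of f(w) = 0.5 * w^T A w.
    return A @ w

def steps_to_converge(lr, momentum, tol=1e-6, max_iter=10000):
    w = np.array([1.0, 1.0])
    v = np.zeros_like(w)
    for i in range(max_iter):
        if np.linalg.norm(w) < tol:
            return i
        # Momentum update: velocity accumulates past gradients.
        v = momentum * v - lr * gradient(w)
        w = w + v
    return max_iter

plain = steps_to_converge(lr=0.04, momentum=0.0)
with_momentum = steps_to_converge(lr=0.04, momentum=0.8)
```

Here `with_momentum` converges in fewer steps than `plain`, which is the kind of speed-up these variations aim for.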
• Manufacturing: many machines are used
to work on the same tasks. With this method
we can replicate trained behavior quickly
and perform tasks with ease and a higher
success rate.
• Replicating real-time systems: consider an
example where we have to build a huge
network of systems used for recognizing
human faces. Using this technique, we can
quickly train multiple systems to recognize
the faces that one system has learnt.