Introduction to Neural Networks with Java, Chapter 5: Understanding Back Propagation

Summary
Submitted by jeffheaton on Sun, 12/23/2007 - 21:46

In this chapter you learned how a feed forward back propagation neural network functions, and you saw how such a network is implemented with the JOONE neural network engine. The feed forward back propagation neural network is actually composed of two neural network algorithms. It is not necessary to always use "feed forward" and "back propagation" together, but this is usually the case. The term "feed forward" refers to the method by which a neural network recognizes a pattern, whereas the term "back propagation" describes the process by which the neural network is trained.

A feed forward neural network is a network in which neurons are only connected to the next layer. There are no connections between neurons in previous layers, between neurons and themselves, or between neurons and layers beyond the next layer. As a pattern is processed by a feed forward network, the bias and connection weights are applied.

Neural networks can be trained using backpropagation, which is a form of supervised training. The neural network is presented with the training data, and the results from the neural network are compared with the expected results. The difference between the actual results and the expected results produces an error. Backpropagation is a method whereby the weights and input bias of the neural network are altered in a way that causes this error to be reduced.

The feed forward back propagation neural network is a very common network architecture and can be applied to many problems, but there are other neural network architectures that may be used. In the next chapter we will examine the Kohonen neural network. The most significant difference between the Kohonen neural network and the feed forward backpropagation neural network we just examined is the training method: backpropagation uses supervised training, whereas the Kohonen network, as we will see in the next chapter, uses an unsupervised training method.
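The feed forward pass and the backpropagation weight update summarized above fit in a short, self-contained Java program. The sketch below is illustrative only: it does not use the JOONE library covered in this chapter, and the layer sizes, learning rate, number of training epochs, and XOR training data are assumptions chosen for the example.

import java.util.Random;

// Minimal feed forward / backpropagation sketch. Illustrative only: it does not
// use the JOONE API from this chapter, and the layer sizes, learning rate, and
// XOR training data below are assumptions chosen for the example.
public class FeedForwardSketch {

    static final int INPUTS = 2, HIDDEN = 4, OUTPUTS = 1;
    static final double LEARNING_RATE = 0.7;

    final double[][] wHidden = new double[HIDDEN][INPUTS]; // input -> hidden weights
    final double[] bHidden = new double[HIDDEN];            // hidden layer bias
    final double[][] wOutput = new double[OUTPUTS][HIDDEN]; // hidden -> output weights
    final double[] bOutput = new double[OUTPUTS];           // output layer bias

    final double[] hidden = new double[HIDDEN];
    final double[] output = new double[OUTPUTS];

    FeedForwardSketch(long seed) {
        Random rnd = new Random(seed);
        for (int h = 0; h < HIDDEN; h++) {
            bHidden[h] = rnd.nextDouble() - 0.5;
            for (int i = 0; i < INPUTS; i++) wHidden[h][i] = rnd.nextDouble() - 0.5;
        }
        for (int o = 0; o < OUTPUTS; o++) {
            bOutput[o] = rnd.nextDouble() - 0.5;
            for (int h = 0; h < HIDDEN; h++) wOutput[o][h] = rnd.nextDouble() - 0.5;
        }
    }

    static double sigmoid(double x) { return 1.0 / (1.0 + Math.exp(-x)); }

    // Feed forward: the pattern moves from the input layer to the output layer,
    // applying the connection weights and bias of each layer as it goes.
    double[] feedForward(double[] input) {
        for (int h = 0; h < HIDDEN; h++) {
            double sum = bHidden[h];
            for (int i = 0; i < INPUTS; i++) sum += wHidden[h][i] * input[i];
            hidden[h] = sigmoid(sum);
        }
        for (int o = 0; o < OUTPUTS; o++) {
            double sum = bOutput[o];
            for (int h = 0; h < HIDDEN; h++) sum += wOutput[o][h] * hidden[h];
            output[o] = sigmoid(sum);
        }
        return output;
    }

    // Backpropagation: compare the actual output with the expected output,
    // compute an error term for each layer, then adjust the weights and
    // biases so that the error is reduced.
    void train(double[] input, double[] expected) {
        feedForward(input);

        double[] deltaOut = new double[OUTPUTS];
        for (int o = 0; o < OUTPUTS; o++) {
            double error = expected[o] - output[o];
            deltaOut[o] = error * output[o] * (1.0 - output[o]);
        }

        double[] deltaHidden = new double[HIDDEN];
        for (int h = 0; h < HIDDEN; h++) {
            double sum = 0.0;
            for (int o = 0; o < OUTPUTS; o++) sum += deltaOut[o] * wOutput[o][h];
            deltaHidden[h] = sum * hidden[h] * (1.0 - hidden[h]);
        }

        for (int o = 0; o < OUTPUTS; o++) {
            bOutput[o] += LEARNING_RATE * deltaOut[o];
            for (int h = 0; h < HIDDEN; h++) wOutput[o][h] += LEARNING_RATE * deltaOut[o] * hidden[h];
        }
        for (int h = 0; h < HIDDEN; h++) {
            bHidden[h] += LEARNING_RATE * deltaHidden[h];
            for (int i = 0; i < INPUTS; i++) wHidden[h][i] += LEARNING_RATE * deltaHidden[h] * input[i];
        }
    }

    public static void main(String[] args) {
        FeedForwardSketch net = new FeedForwardSketch(42);
        double[][] patterns = { {0, 0}, {0, 1}, {1, 0}, {1, 1} };
        double[][] ideal = { {0}, {1}, {1}, {0} };
        for (int epoch = 0; epoch < 10000; epoch++)
            for (int p = 0; p < patterns.length; p++) net.train(patterns[p], ideal[p]);
        for (int p = 0; p < patterns.length; p++)
            System.out.printf("%.0f XOR %.0f -> %.3f%n",
                patterns[p][0], patterns[p][1], net.feedForward(patterns[p])[0]);
    }
}

The feedForward method applies the bias and connection weights layer by layer, and the train method computes the error between the actual and expected output and nudges each weight and bias in the direction that reduces that error, which is the backpropagation process examined in this chapter.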


Copyright 2005 - 2010 by Heaton Research, Inc. Heaton Research and Encog are trademarks of Heaton Research.
