
Soft Computing

Neural Networks (1)

John Chaojian Shi
Merchant Marine College
Shanghai Maritime University
Introduction
Objectives
• Give an introduction to basic neural network architectures and learning rules:
– The mathematical analysis of the networks
– The methods of training them
– Their application to practical engineering problems
• pattern recognition
• signal processing
• control systems
Historical Sketch
• Pre-1940: von Hemholtz, Mach, Pavlov, etc.
– General theories of learning, vision, conditioning
– No specific mathematical models of neuron operation
• 1940s: Hebb, McCulloch and Pitts
– Neural-like networks can compute any arithmetic function
– Mechanism for learning in biological neurons
• 1950s: Rosenblatt, Widrow and Hoff
– First practical networks and learning rules
• 1960s: Minsky and Papert
– Demonstrated limitations of existing neural networks; no new learning algorithms were forthcoming, and some research was suspended
• 1970s: Amari, Anderson, Fukushima, Grossberg, Kohonen
– Progress continues, although at a slower pace
• 1980s: Grossberg, Hopfield, Kohonen, Rumelhart, etc.
– Important new developments cause a resurgence in the field
Biology
• Neurons respond slowly
– 10⁻³ s, compared to 10⁻⁹ s for electrical circuits
• The brain uses massively parallel computation
– ≈10¹¹ neurons in the brain
– ≈10⁴ connections per neuron
Neuron Model
and
Network Architectures
Single-Input Neuron
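A single-input neuron computes a = f(wp + b): the scalar input p is scaled by the weight w, the bias b is added, and the resulting net input n passes through the transfer function f. A minimal sketch in Python/NumPy; the particular weight, bias, input, and hard-limit transfer function below are chosen purely for illustration:

```python
import numpy as np

def hardlim(n):
    """Hard-limit transfer function: 1 if n >= 0, else 0."""
    return np.where(n >= 0, 1.0, 0.0)

# Illustrative values (not from the slides): weight w, bias b, input p.
w, b, p = 3.0, -1.5, 2.0
n = w * p + b          # net input
a = hardlim(n)         # neuron output a = f(wp + b)
print(n, a)            # 4.5, 1.0
```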
Transfer Functions
Transfer Functions
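The transfer functions themselves are simple scalar maps. A sketch of three of the standard ones (hard limit, linear, log-sigmoid) as plain NumPy functions, written here for reference rather than taken from the slides:

```python
import numpy as np

def hardlim(n):
    """Hard limit: a = 0 for n < 0, a = 1 for n >= 0."""
    return np.where(n >= 0, 1.0, 0.0)

def purelin(n):
    """Linear: a = n."""
    return n

def logsig(n):
    """Log-sigmoid: a = 1 / (1 + e^-n), squashes n into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-n))

n = np.array([-2.0, 0.0, 2.0])
print(hardlim(n))   # [0. 1. 1.]
print(purelin(n))   # [-2.  0.  2.]
print(logsig(n))    # [0.119  0.5    0.881]
```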
Multiple-Input Neuron
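With R inputs, the scalar product wp becomes an inner product of a 1×R weight row vector with the input vector, n = Wp + b. A minimal sketch; the weights, bias, and input below are invented for illustration:

```python
import numpy as np

W = np.array([[1.0, 0.5, -0.5]])   # 1 x R weight matrix (row vector)
b = np.array([0.2])                # bias
p = np.array([2.0, -1.0, 1.0])     # R-element input vector

n = W @ p + b                      # net input: n = Wp + b
a = 1.0 / (1.0 + np.exp(-n))       # e.g. log-sigmoid transfer function
print(n, a)
```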
Layer of Neurons
Abbreviated Notation
Multilayer Network
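In a multilayer network each layer's output feeds the next layer, so a three-layer network computes a³ = f³(W³ f²(W² f¹(W¹p + b¹) + b²) + b³). A small sketch of that chain; the R – S1 – S2 – S3 sizes, the random weights, and the log-sigmoid transfer functions are illustrative choices only:

```python
import numpy as np

rng = np.random.default_rng(0)

def logsig(n):
    return 1.0 / (1.0 + np.exp(-n))

# Illustrative R - S1 - S2 - S3 sizes: 3 inputs, layers of 4, 4 and 1 neurons.
sizes = [3, 4, 4, 1]
weights = [rng.standard_normal((sizes[i + 1], sizes[i])) for i in range(3)]
biases = [rng.standard_normal(sizes[i + 1]) for i in range(3)]

def forward(p):
    """Propagate an input vector through the three layers."""
    a = p
    for W, b in zip(weights, biases):
        a = logsig(W @ a + b)      # a(m+1) = f(W(m+1) a(m) + b(m+1))
    return a

print(forward(np.array([1.0, -1.0, 0.5])))
```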
Abbreviated Notation
Delays and Integrators
Recurrent Network
An Illustrative Example
Apple/Banana Sorter
Prototype Vectors
Perceptron
Two-Input Case
Apple/Banana Example
Testing the Network
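One way to sketch the perceptron side of the apple/banana sorter, assuming the usual ±1 shape/texture/weight encoding with prototype banana [-1, 1, -1]ᵀ and prototype apple [1, 1, -1]ᵀ; the single hardlims neuron below, which keys on the shape feature, is one workable weight/bias choice rather than necessarily the one on the slides:

```python
import numpy as np

def hardlims(n):
    """Symmetric hard limit: -1 for n < 0, +1 for n >= 0."""
    return np.where(n >= 0, 1, -1)

# Prototype vectors [shape, texture, weight] with +/-1 features (assumed encoding).
p_banana = np.array([-1, 1, -1])
p_apple = np.array([1, 1, -1])

# One workable single-neuron perceptron: respond only to the shape feature.
W = np.array([[1.0, 0.0, 0.0]])
b = 0.0

for name, p in [("banana", p_banana), ("apple", p_apple)]:
    a = hardlims(W @ p + b)
    print(name, int(a[0]))   # banana -> -1, apple -> +1
```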
Hamming Network
Feedforward Layer
Recurrent Layer
Hamming Operation
• First Layer
Hamming Operation
• Second Layer
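A sketch of both Hamming operations under the standard formulation: the feedforward layer correlates the input with each prototype (the rows of W¹ are the prototypes and the biases equal the input dimension R, so the scores stay non-negative), and the recurrent layer iterates a winner-take-all competition a²(t+1) = poslin(W² a²(t)) until only one neuron stays active. The prototypes and the inhibition constant ε below are illustrative:

```python
import numpy as np

def poslin(n):
    """Positive linear transfer function: max(0, n)."""
    return np.maximum(0.0, n)

# Illustrative prototype patterns (rows), e.g. banana and apple feature vectors.
P = np.array([[-1.0, 1.0, -1.0],
              [ 1.0, 1.0, -1.0]])
R = P.shape[1]

# Feedforward layer: inner product with each prototype, plus bias R.
def feedforward(p):
    return P @ p + R

# Recurrent layer: winner-take-all competition with small lateral inhibition.
eps = 0.5
W2 = np.array([[1.0, -eps],
               [-eps, 1.0]])

def compete(a):
    while np.count_nonzero(a) > 1:
        a = poslin(W2 @ a)
    return a

p = np.array([-1.0, -1.0, -1.0])   # a noisy input
a1 = feedforward(p)                # correlation score for each prototype
print(compete(a1))                 # only the best-matching prototype's neuron stays > 0
```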
Hopfield Network
Apple/Banana Problem
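A sketch of the Hopfield-style recurrence for this problem, assuming the saturating-linear form a(t+1) = satlins(Wa(t) + b). The diagonal weights and biases below are illustrative design choices (not necessarily the slide's values): they amplify the distinguishing shape feature and push texture toward +1 and weight toward -1, so the state settles on one of the two prototypes:

```python
import numpy as np

def satlins(n):
    """Saturating linear transfer function: clips n to [-1, 1]."""
    return np.clip(n, -1.0, 1.0)

# Illustrative design: amplify the distinguishing shape feature,
# push texture toward +1 and weight toward -1.
W = np.diag([1.2, 0.2, 0.2])
b = np.array([0.0, 0.9, -0.9])

a = np.array([-0.5, -1.0, 1.0])    # a noisy, incomplete measurement
for _ in range(20):                # iterate a(t+1) = satlins(W a(t) + b)
    a = satlins(W @ a + b)
print(a)                           # converges to the banana prototype [-1, 1, -1]
```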
Summary of the illustration
• Perceptron
– Feedforward Network
– Linear Decision Boundary
– One Neuron for Each Decision
• Hamming Network
– Competitive Network
– First Layer – Pattern Matching (Inner Product)
– Second Layer – Competition (Winner-Take-All)
– # Neurons = # Prototype Patterns
• Hopfield Network
– Dynamic Associative Memory Network
– Network Output Converges to a Prototype Pattern
– # Neurons = # Elements in each Prototype Pattern
Multi-Layer Perceptron
Multilayer Network
• R – S¹ – S² – S³ Network
Example
Elementary Decision Boundaries
Elementary Decision Boundaries
Total Network
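The elementary-decision-boundary construction can be sketched directly: each first-layer hardlim neuron defines one linear boundary wp + b = 0, and a second-layer AND neuron keeps only the points on the correct side of all of them. The particular boundaries below (a triangular region) are invented for illustration, not taken from the slides:

```python
import numpy as np

def hardlim(n):
    return np.where(n >= 0, 1.0, 0.0)

# First layer: each row of W1 / entry of b1 defines one boundary wp + b = 0.
# Illustrative boundaries: x >= 0, y >= 0, x + y <= 2 (a triangular region).
W1 = np.array([[ 1.0,  0.0],
               [ 0.0,  1.0],
               [-1.0, -1.0]])
b1 = np.array([0.0, 0.0, 2.0])

# Second layer: a single AND neuron that fires only if all three conditions hold.
W2 = np.array([[1.0, 1.0, 1.0]])
b2 = np.array([-2.5])

def classify(p):
    a1 = hardlim(W1 @ p + b1)      # which side of each boundary
    return hardlim(W2 @ a1 + b2)   # 1 only if inside the intersection

print(classify(np.array([0.5, 0.5])))   # [1.] inside the triangle
print(classify(np.array([3.0, 3.0])))   # [0.] outside
```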
Questions ?
