Last Lecture:
Neurons, membranes, channels
Structure of Neurons: dendrites, cell body and axon
Input: Synapses on dendrites and cell body (soma)
Output: Axon, myelin for fast signal propagation
Function of Neurons:
Internal voltage controlled by Na/K/Cl… channels
Channels gated by chemicals (neurotransmitters) and membrane voltage
Inputs at synapses release neurotransmitter, causing an increase or decrease in membrane voltage (via currents through channels)
Large positive change in voltage → membrane voltage exceeds threshold → action potential or spike is generated
[Figure: input spikes arriving at the neuron produce an output spike]
Attributes of a neuron
m binary inputs and 1 output (0 or 1)
Synaptic weights $w_{ij}$ ($w_{ij}$ = weight from unit $j$ to unit $i$)
Threshold $\mu_i$
$$n_i(t+1) = \Theta\Big( \sum_j w_{ij}\, n_j(t) - \mu_i \Big)$$
where $n_i$ is the output of unit $i$ and $\Theta$ is the step function:
$\Theta(x) = 1$ if $x \ge 0$ and $0$ if $x < 0$
Output is 1 if and only if the weighted sum of inputs is greater than or equal to the threshold $\mu_i$
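To make this concrete, here is a minimal sketch of such a threshold unit in Python (the names step and unit_output are ours, not from the lecture):

```python
import numpy as np

def step(x):
    """Step function: Theta(x) = 1 if x >= 0, else 0."""
    return 1 if x >= 0 else 0

def unit_output(weights, inputs, threshold):
    """n_i(t+1) = Theta(sum_j w_ij * n_j(t) - mu_i)."""
    return step(np.dot(weights, inputs) - threshold)

# A 2-input unit with weights (1, 1) and threshold 1.5 computes AND:
for a in (0, 1):
    for b in (0, 1):
        print(a, b, '->', unit_output([1.0, 1.0], [a, b], 1.5))
```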
Key attributes
Parallel computation
Distributed representation and storage of data
Learning (networks adapt themselves to solve a problem)
Fault tolerance (insensitive to component failures)
[Figure: network architectures: completely connected; feedforward (directed, acyclic); recurrent (feedback connections)]
$$\text{Output}_i = \Theta\Big( \sum_j w_{ij}\, \xi_j \Big)$$
[Figure: single-layer vs. multilayer network architectures]
Perceptron learning rule:
$$\Delta w_{ij} = \varepsilon\, (Y_i - O_i)\, \xi_j$$
$\varepsilon$ = learning rate
$\xi_j$ = input $j$
$Y_i$ = desired output
$O_i$ = actual output
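A minimal sketch of this rule in Python, with the threshold folded in as a bias weight on a constant input of 1 (our convention; train_perceptron is a hypothetical name):

```python
import numpy as np

def train_perceptron(X, Y, epochs=20, lr=0.5):
    """Repeatedly apply w_ij <- w_ij + eps * (Y_i - O_i) * xi_j."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, y in zip(X, Y):
            o = 1 if np.dot(w, xi) >= 0 else 0  # actual output O_i
            w += lr * (y - o) * xi              # perceptron learning rule
    return w

# Learn OR; the last column is the constant-1 bias input
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
Y = np.array([0, 1, 1, 1])
w = train_perceptron(X, Y)
print([1 if np.dot(w, x) >= 0 else 0 for x in X])  # [0, 1, 1, 1]
```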
[Figure: two-layer network with inputs x and y, a hidden unit, and output o]
E.g., for the network on the right:
Output is 1 if and only if $x + y - 2\,\Theta(x + y - 1.5) - 0.5 > 0$
(i.e., this two-layer network computes XOR of x and y)
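A quick check of this condition over all four binary inputs (a sketch; theta and net are our names):

```python
def theta(x):
    """Step function."""
    return 1 if x >= 0 else 0

def net(x, y):
    hidden = theta(x + y - 1.5)            # fires only for input (1, 1)
    return theta(x + y - 2 * hidden - 0.5)

print([net(x, y) for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 1, 1, 0], i.e. XOR
```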
Multilayer networks: input nodes, one or more layers of hidden units (hidden layers), output neurons
Output function (sigmoid):
$$g(a) = \frac{1}{1 + e^{-a}}$$
(a non-linear squashing function)
[Figure: sigmoid g(a) rising from 0 toward 1 as a increases]
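As a sketch of how these pieces fit together, a forward pass through a one-hidden-layer sigmoid network (the layer sizes and names are our choices):

```python
import numpy as np

def g(a):
    """Sigmoid squashing function g(a) = 1 / (1 + e^-a)."""
    return 1.0 / (1.0 + np.exp(-a))

def forward(x, W_hidden, W_out):
    """One hidden layer; each row of a weight matrix is one unit."""
    h = g(W_hidden @ x)   # hidden-layer activations
    return g(W_out @ h)   # output-neuron activations

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(3, 2))  # 3 hidden units, 2 inputs
W_out = rng.normal(size=(1, 3))     # 1 output neuron
print(forward(np.array([0.5, -1.0]), W_hidden, W_out))
```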
Example: Perceptrons as Constraint Satisfaction Networks
[Figure series: a two-input network (inputs x and y, output "out") shown next to the x-y input plane; successive slides add linear constraints, lines of the form $w_1 x + w_2 y - \mu = 0$ that split the plane into regions labelled =1 and =0, and the output unit combines these half-plane constraints to form the network's decision region]
Learning networks
We want networks that configure themselves
Learn from the input data or from training examples
Generalize from learned data
Cost function measures the network's performance as a differentiable function of the weights:
$$E = \frac{1}{2} \sum_u \sum_i \Big( Y_i^u - \sum_j w_{ij}\, \xi_j^u \Big)^2$$
Then:
$$\Delta w_{ij} = -\varepsilon\, \frac{\partial E}{\partial w_{ij}} = \varepsilon \sum_u \Big( Y_i^u - \sum_k w_{ik}\, \xi_k^u \Big)\, \xi_j^u$$
Changes weights in the direction of smaller errors
Minimizes the mean-squared error over input patterns
Called the Delta rule (= Adaline rule = Widrow-Hoff rule = LMS rule)
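A minimal gradient-descent sketch of the delta rule for a single linear output unit (the names and the synthetic data are ours):

```python
import numpy as np

def delta_rule(X, Y, epochs=200, lr=0.01):
    """Minimize E = 1/2 sum_u (Y^u - w . xi^u)^2 by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        err = Y - X @ w        # (Y^u - sum_j w_j xi_j^u) for every pattern u
        w += lr * (X.T @ err)  # eps * sum_u err^u * xi_j^u
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # input patterns xi^u
Y = X @ np.array([1.0, -2.0, 0.5])   # desired outputs Y^u
print(delta_rule(X, Y))              # approx [1, -2, 0.5]
```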
[Figure: energy landscape of a recurrent network (labels $V_j$, $x_1$, $x_4$); a local minimum ("attractor") of the energy function stores a pattern]
Radial Basis Function Networks
Structure: input nodes, one layer of hidden neurons, output neurons
Propagation function:
$$a_j = \sum_{i=1}^{n} (x_i - \mu_{i,j})^2$$
Output function (Gauss' bell-shaped function):
$$h(a) = e^{-\frac{a^2}{2\sigma^2}}$$
Output of network:
$$\text{out}_j = \sum_i w_{i,j}\, h_i$$
RBF networks
Radial basis functions: hidden units store means and variances
Hidden units compute a Gaussian function of the inputs $x_1, \dots, x_n$ that constitute the input vector x
Learn the weights $w_i$, means $\mu_i$, and variances $\sigma_i$ by minimizing the squared error function (gradient descent learning)
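A minimal sketch of the full RBF forward pass in Python (names are ours). We fold the two formulas together: a_j below is the squared distance, so the Gaussian is written e^(-a/(2*sigma^2)), which equals the slide's h(a) = e^(-a^2/(2*sigma^2)) with a read as the unsquared distance:

```python
import numpy as np

def rbf_forward(x, mus, sigmas, W):
    """RBF net: squared distances -> Gaussian hidden units -> weighted sum.

    x: input vector, shape (n,)
    mus: hidden-unit means mu_ij, shape (num_hidden, n)
    sigmas: hidden-unit widths sigma_j, shape (num_hidden,)
    W: output weights, shape (num_hidden, num_outputs)
    """
    a = np.sum((x - mus) ** 2, axis=1)     # a_j = sum_i (x_i - mu_ij)^2
    h = np.exp(-a / (2.0 * sigmas ** 2))   # Gaussian output function
    return h @ W                           # out_k = sum_j w_jk * h_j

x = np.array([0.2, 0.8])
mus = np.array([[0.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
sigmas = np.array([0.5, 0.5, 0.5])
W = np.ones((3, 1))
print(rbf_forward(x, mus, sigmas, W))
```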
RBF Networks and Multilayer Perceptrons
[Figure: side-by-side comparison of an RBF network and an MLP, each drawn with input nodes, a hidden layer, and output neurons]
Recurrent networks
Employ feedback (positive, negative, or both)
Not necessarily stable
Symmetric connections can ensure stability
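As an illustration of why symmetric connections help, a small sketch of a recurrent network of +/-1 units with symmetric, zero-diagonal weights (a Hopfield-style construction of ours, not from the slides); asynchronous updates never increase the network's energy, so the state settles into an attractor:

```python
import numpy as np

def settle(W, state, sweeps=10, seed=1):
    """Asynchronous updates; with W symmetric and zero-diagonal,
    each flip lowers (or keeps) the energy, so the state converges."""
    rng = np.random.default_rng(seed)
    for _ in range(sweeps * len(state)):
        i = rng.integers(len(state))
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

pattern = np.array([1, -1, 1, -1])
W = np.outer(pattern, pattern).astype(float)  # symmetric (Hebbian) weights
np.fill_diagonal(W, 0)
noisy = np.array([1, -1, -1, -1])             # one bit corrupted
print(settle(W, noisy.copy()))                # recovers [1, -1, 1, -1]
```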
[Figure series: a 10 x 10 array of neurons with 2D inputs (x, y). Initial weights w1 and w2 are random, with lines connecting neighboring neurons; snapshots show the weights after 10, 20, 40, and 80 iterations, by which point the topography of the inputs has been captured]