
Synaptic Plasticity and Learning

[Figure: drawing by Ramón y Cajal]

Long Term Potentiation (LTP)


LTP = Experimentally observed increase in synaptic strength
that lasts for hours or days

[Figure: panels A and B — increase in EPSP size for the same input over time]
Image Source: Wikimedia Commons

Long Term Depression (LTD)


LTD = Experimentally observed decrease in synaptic strength
that lasts for hours or days

[Figure: panel B — decrease in EPSP size for the same input over time]
Image Source: Wikimedia Commons

Hebb's Learning Rule

If neuron A repeatedly takes part in firing neuron B, then the
synapse from A to B is strengthened.

"Neurons that fire together wire together!"
Image Source: Wikimedia Commons

Formalizing Hebb's Rule

Consider a single linear neuron with steady-state output:

    v = w \cdot u = w^T u = u^T w

Basic Hebb rule:

    \tau_w \frac{dw}{dt} = v u

Discrete implementation:

    w(t + \Delta t) = w(t) + \frac{\Delta t}{\tau_w} v u

or, with learning rate \epsilon:

    w_{i+1} = w_i + \epsilon v u   (i.e., w \to w + \epsilon v u)
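As a rough illustration, here is a minimal NumPy sketch of the discrete Hebb update above; the 2-D input distribution, learning rate, and number of steps are illustrative assumptions, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions: 2-D inputs and a small learning rate (eps = dt / tau_w).
eps = 0.01
w = rng.standard_normal(2) * 0.1       # initial weight vector

for _ in range(1000):
    u = rng.multivariate_normal([0, 0], [[2.0, 1.0], [1.0, 1.0]])  # input sample
    v = w @ u                          # linear neuron output: v = w^T u
    w = w + eps * v * u                # discrete Hebb update: w <- w + eps * v * u

print("final w:", w)
```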

What is the average effect of the Hebb rule?

Hebb rule:

    \tau_w \frac{dw}{dt} = v u

Average effect of the rule:

    \tau_w \frac{dw}{dt} = \langle v u \rangle = \langle u u^T \rangle w = Q w

Q is the input correlation matrix:

    Q = \langle u u^T \rangle
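A small sketch of how these averaged quantities could be estimated numerically, assuming an illustrative 2-D input distribution: Q is computed as a sample average of u u^T and then drives one Euler step of the averaged rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs: samples u drawn from an assumed 2-D distribution.
U = rng.multivariate_normal([0, 0], [[2.0, 1.0], [1.0, 1.0]], size=5000)

# Input correlation matrix Q = <u u^T>, estimated as a sample average.
Q = (U.T @ U) / len(U)

# Averaged Hebb update: tau_w dw/dt = Q w  (one Euler step with assumed tau_w, dt).
tau_w, dt = 1.0, 0.01
w = np.array([0.1, -0.2])
w = w + (dt / tau_w) * (Q @ w)

print("Q =\n", Q)
print("w after one averaged step:", w)
```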

Covariance Rule

The Hebb rule only increases synaptic weights (LTP). What about LTD?

Covariance rule:

    \tau_w \frac{dw}{dt} = (v - \langle v \rangle) u

(Note: LTD for low or no output given some input)

Average effect of the rule:

    \tau_w \frac{dw}{dt} = \langle (v - \langle v \rangle) u \rangle
                         = (\langle u u^T \rangle - \langle u \rangle \langle u \rangle^T) w
                         = C w

where C is the input covariance matrix:

    C = \langle u u^T \rangle - \langle u \rangle \langle u \rangle^T
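A minimal sketch of the covariance rule in NumPy; the input distribution, learning rate, and the running-average estimate of <v> are assumptions made for illustration (the slides do not specify how <v> is obtained).

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative assumptions: 2-D inputs with nonzero mean and a small learning rate.
eps = 0.01
w = rng.standard_normal(2) * 0.1
v_avg = 0.0

for _ in range(2000):
    u = rng.multivariate_normal([2, 2], [[1.0, 0.5], [0.5, 1.0]])
    v = w @ u
    v_avg = 0.99 * v_avg + 0.01 * v        # running estimate of <v>
    w = w + eps * (v - v_avg) * u          # covariance rule: LTD when v < <v>

print("final w:", w)
```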

Are these learning rules stable?

Does w converge to a stable value or explode?
Look at what happens to the length of w over time.

Hebb rule:   \tau_w \frac{dw}{dt} = v u

    \frac{d\|w\|^2}{dt} = 2 w^T \frac{dw}{dt} = \frac{2}{\tau_w} w^T (v u) = \frac{2 v^2}{\tau_w} \geq 0

    => \|w\| grows without bound!

Covariance rule:   \tau_w \frac{dw}{dt} = (v - \langle v \rangle) u

    \frac{d\|w\|^2}{dt} = 2 w^T \frac{dw}{dt} = \frac{2}{\tau_w} w^T (v - \langle v \rangle) u = \frac{2}{\tau_w} v (v - \langle v \rangle)

Averaging the right-hand side:

    \frac{d\langle \|w\|^2 \rangle}{dt} = \frac{2}{\tau_w} (\langle v^2 \rangle - \langle v \rangle^2) \geq 0

    => \|w\| grows without bound!
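A quick simulation sketch (assumed inputs and learning rate) showing the unbounded growth of ||w|| under the plain Hebb rule:

```python
import numpy as np

rng = np.random.default_rng(2)
eps = 0.01
U = rng.multivariate_normal([0, 0], [[2.0, 1.0], [1.0, 1.0]], size=3000)

w = rng.standard_normal(2) * 0.1
norms = []
for u in U:
    v = w @ u
    w = w + eps * v * u        # plain Hebb rule
    norms.append(np.linalg.norm(w))

# ||w|| keeps increasing, consistent with d||w||^2/dt = 2 v^2 / tau_w >= 0
print("||w|| at steps 0, 1000, 2000, 2999:",
      [norms[i] for i in (0, 1000, 2000, 2999)])
```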

Oja's Rule for Hebbian Learning

Oja's rule:

    \tau_w \frac{dw}{dt} = v u - \alpha v^2 w     (\alpha > 0)

Stable?

    \frac{d\|w\|^2}{dt} = 2 w^T \frac{dw}{dt} = \frac{2}{\tau_w} w^T (v u - \alpha v^2 w)
                        = \frac{2}{\tau_w} (v^2 - \alpha v^2 \|w\|^2)
                        = \frac{2 v^2}{\tau_w} (1 - \alpha \|w\|^2)

At steady state:  \|w\|^2 = 1/\alpha,  i.e.,  \|w\| = 1/\sqrt{\alpha}

w does not grow without bound, i.e.,

Oja's rule is stable!
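A minimal sketch showing Oja's rule stabilizing the weight norm near 1/sqrt(alpha); the inputs, learning rate, and alpha below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
eps, alpha = 0.01, 1.0          # assumed learning rate and alpha
U = rng.multivariate_normal([0, 0], [[2.0, 1.0], [1.0, 1.0]], size=20000)

w = rng.standard_normal(2) * 0.1
for u in U:
    v = w @ u
    w = w + eps * (v * u - alpha * v**2 * w)   # Oja's rule

print("||w|| =", np.linalg.norm(w), " expected 1/sqrt(alpha) =", 1.0 / np.sqrt(alpha))
```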

Summary: Hebbian Learning

Hebb rule:  \tau_w \frac{dw}{dt} = v u
  Unstable (unless a constraint on ||w|| is imposed)

Covariance rule:  \tau_w \frac{dw}{dt} = (v - \langle v \rangle) u
  Unstable (unless a constraint on ||w|| is imposed)

Oja's rule:  \tau_w \frac{dw}{dt} = v u - \alpha v^2 w
  Stable

What does Hebbian Learning do anyway?

Start with the averaged Hebb rule:

    \tau_w \frac{dw}{dt} = Q w

How do we solve this equation to find w(t)?

Eigenvectors to the rescue (again)!

Write w(t) in terms of the eigenvectors e_i of Q:

    w(t) = \sum_i c_i(t) e_i

Substitute into the Hebb rule differential equation and simplify as before:

    \tau_w \frac{dc_i}{dt} = \lambda_i c_i   i.e.,   c_i(t) = c_i(0) \exp(\lambda_i t / \tau_w)

    w(t) = \sum_i c_i(t) e_i = \sum_i c_i(0) \exp(\lambda_i t / \tau_w) e_i

For large t, the term with the largest eigenvalue dominates:

    w(t) \propto e_1

(For Oja's rule:  w(t) \to e_1 / \sqrt{\alpha})
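A sketch checking this numerically, assuming an illustrative zero-mean input distribution; Oja's rule (alpha = 1) is used as the stabilized form of Hebbian learning so the final direction can be compared with the principal eigenvector e_1 of Q.

```python
import numpy as np

rng = np.random.default_rng(4)
U = rng.multivariate_normal([0, 0], [[3.0, 1.0], [1.0, 1.0]], size=20000)
Q = (U.T @ U) / len(U)

# Principal eigenvector e_1 of Q (largest eigenvalue).
evals, evecs = np.linalg.eigh(Q)
e1 = evecs[:, np.argmax(evals)]

# Oja-stabilized Hebbian learning (alpha = 1), so the final direction is well defined.
w = rng.standard_normal(2) * 0.1
for u in U:
    v = w @ u
    w += 0.005 * (v * u - v**2 * w)

cosine = abs(w @ e1) / (np.linalg.norm(w) * np.linalg.norm(e1))
print("alignment |cos(w, e1)| =", round(cosine, 3))   # close to 1
```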

The Brain can do Statistics!*

Hebbian learning implements Principal Component Analysis (PCA)

[Figure: three panels of 2-D input data with initial and final weight vectors —
 Hebb rule with input mean (0,0); Hebb rule with input mean (2,2);
 Covariance rule with input mean (2,2)]

Hebbian learning learns a weight vector aligned with the principal
eigenvector of the input correlation/covariance matrix
(i.e., the direction of maximum variance).

*See last week's lecture for "The Brain can do Calculus!"

Image Source: Dayan & Abbott textbook
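A small numerical sketch of the contrast the panels illustrate, under an assumed input distribution with mean (2,2): the top eigenvector of the correlation matrix Q (which governs the averaged Hebb rule) versus that of the covariance matrix C (which governs the averaged covariance rule).

```python
import numpy as np

rng = np.random.default_rng(5)
# Assumed inputs with nonzero mean (2,2), loosely mirroring the panels above.
U = rng.multivariate_normal([2, 2], [[1.0, -0.8], [-0.8, 1.0]], size=20000)

Q = (U.T @ U) / len(U)                    # correlation matrix <u u^T>
C = np.cov(U.T, bias=True)                # covariance matrix <u u^T> - <u><u>^T

for name, M in [("Hebb rule (Q)", Q), ("Covariance rule (C)", C)]:
    evals, evecs = np.linalg.eigh(M)
    e1 = evecs[:, np.argmax(evals)]
    print(name, "principal eigenvector:", np.round(e1, 2))
# With a nonzero input mean, Q's top eigenvector points toward the mean,
# while C's points along the direction of maximum variance.
```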

What about this input data?

[Figure: two separated clusters of 2-D input data with an initial weight vector.
 What does the covariance rule learn?]

Image Source: Dayan & Abbott textbook

PCA does not correctly describe the data

What should a network of neurons learn from such data?

Next Lecture: Competitive Learning, Generative Models, and Unsupervised Learning

Image Source: Dayan & Abbott textbook
