Competitive Learning
Unsupervised networks receive no supervision from a teacher; they adapt their weights based only on the input patterns themselves. One popular scheme for such adaptation is the competitive learning rule, which allows the units to compete for the exclusive right to respond to a particular input pattern. Competitive learning can be viewed as a sophisticated clustering technique whose objective is to divide a set of input patterns into a number of clusters such that patterns of the same cluster exhibit a certain degree of similarity.
A basic competitive learning network has one layer of input nodes and one layer of output
nodes.
[Figure: A basic competitive learning network, with input units x1, x2, x3 fully connected through weights wij to output units 1 and 2 (input layer and output layer).]
Each output unit j first computes its activation as the inner product of the input vector and its weight vector:

$a_j = \sum_{i=1}^{n} x_i w_{ij} = X^T W_j = W_j^T X$
Then the output unit with the highest activation is selected for further processing; this is what "competitive" refers to. Assuming unit k has the maximal activation, the weights
leading to this unit are updated according to the competitive, or so-called winner-take-all, learning rule:

$w_k(t+1) = \frac{w_k(t) + \eta\,(x(t) - w_k(t))}{\left\| w_k(t) + \eta\,(x(t) - w_k(t)) \right\|}$
The weight updating formula includes a normalization operation to ensure that the updated
weight is always of unit length. Notably, only the weights at the winner output unit k are
updated; all other weights remain unchanged.
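A minimal NumPy sketch of this winner-take-all scheme follows. The data set, network size, and learning rate eta are illustrative assumptions, not values from the text; the loop implements the activation, winner selection, and normalized update described above.

import numpy as np

rng = np.random.default_rng(0)
eta = 0.1                                   # learning rate (assumed value)
n_inputs, n_units = 3, 2

# Initialize unit-length weight vectors, one per output unit.
W = rng.normal(size=(n_units, n_inputs))
W /= np.linalg.norm(W, axis=1, keepdims=True)

# A few unit-length input patterns (illustrative data).
X = rng.normal(size=(10, n_inputs))
X /= np.linalg.norm(X, axis=1, keepdims=True)

for x in X:
    a = W @ x                               # activations a_j = W_j^T x
    k = np.argmax(a)                        # winner-take-all: highest activation
    w_new = W[k] + eta * (x - W[k])         # move the winner toward the input
    W[k] = w_new / np.linalg.norm(w_new)    # renormalize to unit length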
The update formula implements a sequential scheme for finding the cluster centers of a data set whose entries are of unit length. When an input x is presented to the network, the
weight vector closest to x rotates toward it. Consequently, weight vectors move toward those
areas where most inputs appear and eventually the weight vectors become the cluster centers
for the data set.
A more general dissimilarity measure for competitive learning is the Euclidean distance, for which the activation of output unit j is

$a_j = \left( \sum_{i=1}^{n} (x_i - w_{ij})^2 \right)^{1/2} = \left\| X - W_j \right\|$

In this case, the winning unit is the one with the smallest a_j, that is, the unit whose weight vector is closest to the input.
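A minimal sketch of this Euclidean-distance variant, again with illustrative sizes and learning rate; note that the inputs need not be of unit length here, so no renormalization step is required.

import numpy as np

rng = np.random.default_rng(1)
eta = 0.1                                # assumed learning rate
W = rng.normal(size=(2, 3))              # 2 output units, 3 inputs (illustrative)
X = rng.normal(size=(10, 3))             # inputs, not necessarily unit length

for x in X:
    d = np.linalg.norm(W - x, axis=1)    # a_j = ||x - W_j|| for each output unit j
    k = np.argmin(d)                     # winner = smallest dissimilarity
    W[k] += eta * (x - W[k])             # move the winner toward the input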
The Hebbian learning rule updates the weight of the connection from unit u_i to unit u_j as

$\Delta w_{ij} = \eta\, f(a_i, a_j)$

where f is a function of both the postsynaptic and presynaptic signals, η is the learning rate (a positive constant), and a_i and a_j represent the activations of u_i and u_j, respectively. Thus, if both u_i and u_j are activated, the weight of the connection from u_i to u_j should be increased. The Hebbian rule works well as long as all the input patterns are orthogonal or uncorrelated; this requirement of orthogonality places serious limitations on the Hebbian learning rule. The delta rule can handle correlated patterns when learning is carried out in supervised mode.
Kohonen Map
The weights of the winning unit c and of all units in its neighborhood NB_c are moved toward the input, while all other weights stay unchanged:

$w_i(t+1) = \begin{cases} w_i(t) + \eta(t)\,(x(t) - w_i(t)), & i \in NB_c \\ w_i(t), & \text{otherwise} \end{cases}$
SOM preserves topology; topology here means local or neighborhood properties. Kohonen networks attempt to map input patterns to nodes such that nearness is preserved.
The best-known application of Kohonen's self-organizing networks is his attempt to construct a neural phonetic typewriter capable of transcribing speech into written text from an unlimited vocabulary, with an accuracy of 92% to 97%.
The essential features of SOM are:
- A continuous input space of activation patterns (incoming signal patterns).
- A topology of the network in the form of a lattice of neurons, which defines a discrete output space.
- A time-varying neighborhood function that is defined around a winner neuron.
- A learning rate parameter that starts at an initial value and then decreases gradually with time but never goes to zero (see the sketch below).
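A minimal sketch of a one-dimensional SOM that ties these features together. The lattice size, the Gaussian neighborhood function (standing in for the hard neighborhood NB_c of the update rule above), and the decay schedules are illustrative assumptions, not choices prescribed by the text.

import numpy as np

rng = np.random.default_rng(2)
n_units, n_inputs, n_steps = 10, 2, 1000
W = rng.normal(size=(n_units, n_inputs))        # lattice of neurons (1-D chain)
positions = np.arange(n_units)                  # lattice coordinates of the units

for t in range(n_steps):
    x = rng.uniform(size=n_inputs)              # sample from a continuous input space
    c = np.argmin(np.linalg.norm(W - x, axis=1))    # winner neuron c
    eta = 0.5 * (1 - t / n_steps) + 0.01        # learning rate decays but stays > 0
    sigma = 3.0 * np.exp(-t / (n_steps / 4))    # neighborhood width shrinks over time
    # Gaussian neighborhood around the winner on the lattice (plays the role of NB_c).
    h = np.exp(-((positions - c) ** 2) / (2 * sigma ** 2))
    W += eta * h[:, None] * (x - W)             # move winner and neighbors toward x

Because the neighborhood is defined on the lattice rather than in input space, nearby neurons learn to respond to nearby inputs, which is how the map preserves topology.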