
Unsupervised Learning

Competitive Learning
Unsupervised networks adapt their weights without supervision from any teacher, relying only on the input patterns. One popular scheme for such adaptation is the competitive learning rule, which lets the output units compete for the exclusive right to respond to a particular input pattern. Competitive learning can be viewed as a sophisticated clustering technique whose objective is to divide a set of input patterns into a number of clusters such that patterns within the same cluster exhibit a certain degree of similarity.
A basic competitive learning network has one layer of input nodes and one layer of output
nodes.
[Figure: an input layer of three units x1, x2, x3 fully connected through weights wij to an output layer of two units]
Fig. 1 A competitive learning network


An input pattern x is a sample point in the n-dimensional real or binary vector space. There are as many output neurons as there are classes, and each output node represents one pattern category. Every input unit i is connected to every output unit j by a weight wij. The number of input units equals the input dimension, while the number of output units equals the number of clusters (categories). A cluster's center position is specified by the weight vector connected to the corresponding output unit. In the network of Fig. 1 there are three input units and two output units, i.e. the three-dimensional input data are divided into two clusters, and the cluster centers, represented by the weights, are updated via the competitive learning rule.
The input vector X and the weight vector Wj of an output unit j are generally assumed to be normalized to unit length. The activation value aj of output unit j is then calculated as the inner product of the input and weight vectors:
a_j = \sum_{i=1}^{n} x_i w_{ij} = X^T W_j = W_j^T X

Then the output unit with the highest activation is selected for further processing; this is what "competitive" implies. Assuming that unit k has the maximal activation, the weights leading to this unit are updated according to the competitive, or winner-take-all, learning rule
w_k(t+1) = \dfrac{w_k(t) + \eta \, (x(t) - w_k(t))}{\left\| w_k(t) + \eta \, (x(t) - w_k(t)) \right\|}
The weight-updating formula includes a normalization operation to ensure that the updated weight vector is always of unit length. Notably, only the weights of the winning output unit k are updated; all other weights remain unchanged.
The update formula implements a sequential scheme for finding the cluster centers of a data set whose entries are of unit length. When an input x is presented to the network, the weight vector closest to x rotates toward it. Consequently, the weight vectors move toward those areas where most inputs appear, and eventually they become the cluster centers of the data set.
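The following is a minimal NumPy sketch of this winner-take-all scheme, assuming unit-length inputs and weights; the function name, number of clusters, learning rate, and number of epochs are illustrative choices, not part of the original text.

import numpy as np

def competitive_learning(X, n_clusters=2, eta=0.1, epochs=50, seed=0):
    # Winner-take-all competitive learning with normalized weights (sketch).
    rng = np.random.default_rng(seed)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)      # unit-length input patterns
    W = rng.normal(size=(n_clusters, X.shape[1]))
    W /= np.linalg.norm(W, axis=1, keepdims=True)         # unit-length weight vectors
    for _ in range(epochs):
        for x in X:
            a = W @ x                                     # activations a_j = W_j . x
            k = int(np.argmax(a))                         # winning unit k
            w_new = W[k] + eta * (x - W[k])               # move the winner toward x
            W[k] = w_new / np.linalg.norm(w_new)          # renormalize to unit length
    return W                                              # rows approximate the cluster centers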
Euclidean distance provides a more general dissimilarity measure for competitive learning, in which the activation of output unit j is
a_j = \left( \sum_{i=1}^{n} (x_i - w_{ij})^2 \right)^{1/2} = \| X - W_j \|

The weights of the output unit k with the smallest dissimilarity measure are updated as

w_k(t+1) = w_k(t) + \eta \, (x(t) - w_k(t))

In this case normalization is not necessary.
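As a brief illustration, a single update step of this Euclidean-distance variant might look as follows; the function name and learning rate are illustrative assumptions.

import numpy as np

def euclidean_competitive_step(W, x, eta=0.1):
    # The unit whose weight vector is closest to x (smallest dissimilarity) wins.
    k = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    W[k] += eta * (x - W[k])     # w_k(t+1) = w_k(t) + eta (x(t) - w_k(t)), no normalization needed
    return W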


Hebbian Learning Rule
According to Hebb's rule, the effect of a unit ui in the input layer on a unit uj in the output layer is determined by the product of the activation ai of ui and the weight of the connection from ui to uj. The Hebbian learning rule specifies how much the weight of the connection between two units should be increased or decreased in proportion to the product of their activations. The rule, proposed by Hebb in 1949, states that the connection between two neurons is strengthened if the neurons fire simultaneously. That is, the rule determines the change in the weight of the connection from ui to uj by
\Delta w_{ij} = f(a_i, a_j) = \eta \, a_i a_j

where f is a function of both the postsynaptic and presynaptic signals, \eta is the learning rate (a positive constant), and ai, aj are the activations of ui and uj respectively. Thus, if both ui and uj are activated, the weight of the connection from ui to uj is increased. The Hebbian rule works well as long as all the input patterns are orthogonal or uncorrelated; this requirement of orthogonality places serious limitations on the Hebbian learning rule. The delta rule, applied in supervised mode, can handle non-orthogonal patterns.

Kohonen Map

Kohonen self-organizing networks, also known as Kohonen feature maps, topology-preserving maps, or Self-Organizing Maps (SOM), are a competition-based learning network paradigm for data clustering. The principal goal of the SOM is to transform an incoming signal pattern of arbitrary dimension into a one- or two-dimensional discrete map, and to perform this transformation adaptively in a topologically ordered fashion. The network consists of a layer of neurons all receiving the inputs, as in competitive learning, but the location of the nodes within the network has geometrical significance. Networks of this type impose a neighborhood constraint on the output units, so that a certain topological property of the input data is reflected in the output units' weights.

Fig. 2 An architecture of SOM


The learning procedure in these networks is similar to competitive learning: a similarity (dissimilarity) measure is selected, and the winning unit is the one with the largest (smallest) activation. For the SOM, however, not only the winning unit's weights are updated, but also all of the weights in a neighborhood around the winning unit. The neighborhood function N(i, j) measures the extent to which nodes i and j are neighbors, and the neighborhood's size generally decreases slowly with each iteration, i.e. the value of the neighborhood function shrinks over time.
Steps of training SOM
Initialization: initialize the weights with small values picked from a random number generator.
Sampling: draw a sample X from the input space; this vector represents the activation pattern presented to the network.
Competition (similarity matching): for each input pattern, the neurons in the network compute their respective values of a discriminant function, which provides the basis for competition among the neurons. The neuron with the largest value of the discriminant function is declared the winner of the competition. If Euclidean distance is chosen as the dissimilarity measure, then the winning unit c satisfies
\| X - W_c \| = \min_i \| X - W_i \|

where c refers to the winning unit.



Weight updating: the neighborhood of c, denoted NBc, is the set of indices corresponding to the units in a neighborhood around the winner c. The weights of the winner and of its neighborhood units are then updated by

\Delta W_i = \eta \, (X - W_i), \quad i \in NB_c
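Putting the steps together, the following is a minimal NumPy sketch of SOM training on a one-dimensional lattice of output units; the lattice size, the Gaussian neighborhood function, and the decay schedules are illustrative assumptions rather than part of the original text.

import numpy as np

def train_som(X, n_units=10, epochs=100, eta0=0.5, radius0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.random((n_units, X.shape[1]))                     # initialization: small random weights
    positions = np.arange(n_units)                            # node coordinates on the 1-D lattice
    for t in range(epochs):
        eta = eta0 * (1.0 - t / epochs) + 1e-3                # learning rate decreases but never reaches zero
        radius = max(radius0 * (1.0 - t / epochs), 0.5)       # neighborhood size shrinks with each iteration
        for x in X[rng.permutation(len(X))]:                  # sampling: draw inputs in random order
            c = int(np.argmin(np.linalg.norm(W - x, axis=1))) # competition: winner c by Euclidean distance
            d = np.abs(positions - c)                         # lattice distance to the winner
            h = np.exp(-(d ** 2) / (2 * radius ** 2))         # neighborhood function N(c, i)
            W += eta * h[:, None] * (x - W)                   # weight updating for c and its neighbors
    return W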

The SOM preserves topology, where topology means local or neighborhood properties: Kohonen networks attempt to map input patterns to nodes such that nearness is preserved. The best-known application of Kohonen's self-organizing networks is Kohonen's neural phonetic typewriter, which is capable of transcribing speech into written text from an unlimited vocabulary with an accuracy of 92% to 97%.
The essential features of the SOM are:
- A continuous input space of activation patterns (incoming signal patterns).
- A topology of the network in the form of a lattice of neurons, which defines a discrete output space.
- A time-varying neighborhood function defined around the winner neuron.
- A learning rate parameter that starts at an initial value and then decreases gradually with time but never goes to zero.
