
Self Organizing Maps

By: Ibrahim Isleem


Supervisor: Dr. Akram Abu Garad.
Abstract
 Pioneered in 1982 by the Finnish professor and researcher Dr.
Teuvo Kohonen, the self-organizing map is an unsupervised
learning model, intended for applications in
which maintaining a topology between the input and output
spaces is important. The notable characteristic of
this algorithm is that input vectors that are close
(similar) in high-dimensional space are also mapped to
nearby nodes in the 2D space. It is in essence a method
for dimensionality reduction, as it maps high-dimensional
inputs to a low-dimensional (typically two-dimensional) discretized
representation while conserving the underlying structure of
its input space.
Introduction
 Self-organizing neural networks are used to cluster
input patterns into groups of similar patterns.  They're
called "maps" because they assume a topological
structure among their cluster units; effectively
mapping weights to input data.  The Kohonen network
is probably the best example, because it's simple, yet
introduces the concepts of self-organization and
unsupervised learning easily.
Outline

 What is a Self-Organizing Map?
 Terminology used
 Components of SOM
 Structure of the map
 The SOM Algorithm
 Training Algorithm
 Advantages and Disadvantages
 Unsupervised Learning
 Kohonen Self Organizing Maps
 Applications of SOM
What is a Self-Organizing Map?

 A self-organizing map (SOM) is a type of
artificial neural network (ANN) that is
trained using unsupervised learning to
produce a low-dimensional (typically two-dimensional),
discretized representation of
the input space of the training samples,
called a map, and is therefore a method
for dimensionality reduction.
What is a Self-Organizing Map?

 Self-organizing maps differ from other
artificial neural networks in that they apply
competitive learning as opposed to error-correction
learning (such as backpropagation
with gradient descent), and in that
they use a neighborhood function to
preserve the topological properties of the
input space.
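The neighborhood function mentioned above is commonly taken to be a Gaussian of the grid distance to the winning node; the slides do not fix a particular function, so the Gaussian form and the `sigma` width parameter below are illustrative assumptions:

```python
import math

def gaussian_neighborhood(dist_to_bmu, sigma):
    """Influence of the winning node on a node at grid distance dist_to_bmu.

    Returns 1.0 at the BMU itself and decays smoothly with distance;
    shrinking sigma over training narrows the neighborhood.
    """
    return math.exp(-dist_to_bmu ** 2 / (2 * sigma ** 2))

# The BMU gets full influence; farther grid nodes get progressively less.
h_bmu = gaussian_neighborhood(0.0, sigma=1.0)   # 1.0
h_far = gaussian_neighborhood(2.0, sigma=1.0)   # ~0.135
```

Because the influence falls off with grid distance rather than cutting off abruptly, nearby map nodes receive similar updates, which is what preserves the topology of the input space.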
Terminology used

 Clustering
 Unsupervised learning
 Euclidean Distance

p = (p₁, p₂, ..., pₙ)
q = (q₁, q₂, ..., qₙ)

ED(p, q) = √( Σᵢ₌₁ⁿ (pᵢ − qᵢ)² )
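The Euclidean distance formula above translates directly into plain Python:

```python
import math

def euclidean_distance(p, q):
    """ED(p, q) = sqrt(sum_i (p_i - q_i)^2) for equal-length sequences."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

# Classic 3-4-5 right triangle:
print(euclidean_distance((0, 0), (3, 4)))  # 5.0
```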
Components of SOM
• Sample data

• Weights

• Output nodes
Structure of the map

• 2-dimensional or 1-dimensional grid

• Each grid point represents an output node


• The grid is initialized with random vectors
The SOM Algorithm

• The self-organizing map algorithm can be broken
up into six steps:

1. Each node's weights are initialized.
2. A vector is chosen at random from the set of
training data and presented to the network.
3. Every node is examined to calculate which
one's weights are most like the input vector.
The winning node is commonly known as
the Best Matching Unit (BMU).
The SOM Algorithm

4. Then the neighbourhood of the BMU is
calculated. The number of neighbors decreases
over time.
5. The winning node is rewarded by becoming
more like the sample vector. Its neighbors also
become more like the sample vector: the closer a
node is to the BMU, the more its weights are
altered, and the farther away a neighbor is from
the BMU, the less it learns.
6. Repeat from step 2 for N iterations.
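Step 3, finding the Best Matching Unit, is simply a nearest-neighbor search over the node weight vectors; a minimal sketch in Python (the flat list of weight tuples is an illustrative representation, not a prescribed data structure):

```python
import math

def find_bmu(weights, x):
    """Return the index of the node whose weight vector is closest
    (in Euclidean distance) to the input vector x."""
    def dist(w):
        return math.sqrt(sum((wi - xi) ** 2 for wi, xi in zip(w, x)))
    return min(range(len(weights)), key=lambda i: dist(weights[i]))

weights = [(0.0, 0.0), (1.0, 1.0), (0.9, 0.1)]
print(find_bmu(weights, (1.0, 0.0)))  # node 2 is nearest to the input
```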
Training Algorithm

Initialize map
For t from 0 to 1:
    Select a sample x(t)
    Find the best matching unit c
    Scale the neighbors:
        mᵢ(t + 1) = mᵢ(t) + α(t) [x(t) − mᵢ(t)],  i ∈ N_c(t)
    Increase t by a small amount
End for
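The weight-update rule mᵢ(t + 1) = mᵢ(t) + α(t)[x(t) − mᵢ(t)] can be sketched as a tiny runnable training loop; the linearly decaying learning rate and the fixed radius-1 grid neighborhood N_c are illustrative assumptions, not the only valid choices:

```python
import random

def train_som_1d(samples, n_nodes=5, dim=2, n_iters=200, seed=0):
    """Minimal 1-D Kohonen SOM: nodes sit on a line, and the BMU plus
    its immediate grid neighbors move toward each presented sample."""
    rng = random.Random(seed)
    # Initialize the map with random weight vectors.
    m = [[rng.random() for _ in range(dim)] for _ in range(n_nodes)]
    for t in range(n_iters):
        alpha = 0.5 * (1 - t / n_iters)           # decaying learning rate a(t)
        x = rng.choice(samples)                   # pick a training sample x(t)
        # Best matching unit = node with the closest weight vector.
        bmu = min(range(n_nodes),
                  key=lambda i: sum((m[i][d] - x[d]) ** 2 for d in range(dim)))
        # Update the BMU and its grid neighbors N_c(t):
        for i in range(max(0, bmu - 1), min(n_nodes, bmu + 2)):
            for d in range(dim):
                # m_i(t+1) = m_i(t) + a(t) [x(t) - m_i(t)]
                m[i][d] += alpha * (x[d] - m[i][d])
    return m

# Two well-separated clusters; after training, some node weights
# should settle near each of them.
data = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.0)]
trained = train_som_1d(data)
```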
Advantages and Disadvantages

 Advantages
• Very easy to understand.
• Works well in practice.
• SOM is an algorithm that projects high-dimensional data onto
a two-dimensional map.
• SOMs still have many practical applications in pattern
recognition, speech analysis, industrial and medical
diagnostics, and data mining.

 Disadvantages
• Computationally expensive.
• Every SOM is different.
• A large quantity of good-quality, representative training
data is required.
• There is no generally accepted measure of the 'quality' of a SOM.
Unsupervised Learning

How could we know what constitutes "different" clusters?

Green apple and banana example: two features, shape and color.
Kohonen SOM (Self Organizing Maps)

 The BMU and nearby units will adjust their
positions toward the input vector.

 Self-organizing NNs are also called Topology
Preserving Maps, which leads to the idea of a
neighborhood of the clustering unit.

 During the self-organizing process, the weight
vectors of the winning unit and its neighbors are
updated.
Kohonen SOM (Self Organizing Maps)

 Architecture of SOM
 Made up of input nodes and
computational nodes.
 Each computational node is connected to
each input node to form a lattice.
Kohonen SOM (Self Organizing Maps)

 Architecture of SOM
 There are no interconnections among the
computational nodes.

 The number of input nodes is determined
by the dimension of the input vector.
Kohonen SOM (Self Organizing Maps)

 Structure of Neighborhoods
Kohonen Self Organizing Maps

Architecture

(figure: the Kohonen layer, with winning neuron i and its weight vector wᵢ, connected to the input vector X)

X = [x₁, x₂, ..., xₙ] ∈ Rⁿ
wᵢ = [wᵢ₁, wᵢ₂, ..., wᵢₙ] ∈ Rⁿ
Applications of SOM

1) Organization of massive document collections.

2) Meteorology and oceanography.

3) Seismic facies analysis for oil and gas exploration.

4) And more…
THANK YOU
