TAM is a two-layer network that uses unsupervised learning. It is a cyclic sequential encoder with feedback paths. It learns to associate bipolar or binary sequential patterns: A1 with A2, A2 with A3, A3 with A4, ..., An-1 with An, and An with A1.
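The cyclic association A1 -> A2 -> ... -> An -> A1 can be sketched in Python. The outer-product (correlation) encoding and the sign-thresholded recall used below are the standard temporal-associative-memory rule; they are assumptions here, since the text above does not state the learning rule explicitly.

```python
import numpy as np

def tam_weights(patterns):
    """Encode a cyclic sequence of bipolar patterns A1->A2->...->An->A1.

    Outer-product (correlation) encoding: each pattern is associated
    with its successor, and An wraps around to A1.
    """
    W = np.zeros((patterns.shape[1], patterns.shape[1]))
    for i in range(len(patterns)):
        a, b = patterns[i], patterns[(i + 1) % len(patterns)]  # An -> A1 wrap
        W += np.outer(a, b)
    return W

def tam_recall(x, W):
    """One forward step: recall the successor of pattern x."""
    return np.sign(x @ W)

# Example: a cycle of three 4-bit bipolar patterns
A = np.array([[ 1, -1,  1, -1],
              [ 1,  1, -1, -1],
              [-1,  1,  1, -1]])
W = tam_weights(A)
print(tam_recall(A[0], W))  # recalls A[1]
```

Because these particular patterns are mutually orthogonal, each recall step reproduces the next pattern in the cycle exactly.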
ARCHITECTURE OF TAM
BSB: DEFINITION
Let W be a symmetric weight matrix whose largest eigenvalues have positive real components. Let x(0) be the initial state vector of the model, representing an input activation pattern. Assuming there are N neurons in the model, the state vector of the model has dimension N and the weight matrix W is an N x N matrix.
The activation function is a piecewise-linear function that operates on y_j(n), the jth component of the vector y(n):

x_j(n+1) = phi(y_j(n)) = +1      if y_j(n) > +1
                       = y_j(n)  if -1 <= y_j(n) <= +1     (3)
                       = -1      if y_j(n) < -1
Equation (3) constrains the state vector of the BSB model to lie within an N-dimensional unit cube centered on the origin.
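A minimal sketch of one BSB iteration in Python. The linear feedback form y = x + beta*W*x is the standard BSB update and is an assumption here (the text above defines only the clipping of equation (3)); beta is an illustrative feedback gain.

```python
import numpy as np

def bsb_step(x, W, beta=0.1):
    """One BSB iteration: linear feedback followed by piecewise-linear clipping.

    The feedback form y = x + beta * (W @ x) is the usual BSB update
    (an assumption here; the text defines only the clipping in eq. (3)).
    """
    y = x + beta * (W @ x)
    # Equation (3): clip each component to [-1, +1], keeping the state
    # inside the N-dimensional unit hypercube centered on the origin.
    return np.clip(y, -1.0, 1.0)

# Example: a 3-neuron model with a symmetric weight matrix
W = np.array([[0.0, 0.5, 0.2],
              [0.5, 0.0, 0.3],
              [0.2, 0.3, 0.0]])
x = np.array([0.9, -0.4, 0.1])
for _ in range(50):  # iterate; the state drifts toward a corner of the cube
    x = bsb_step(x, W)
print(x)
```

The clipping guarantees the trajectory never leaves the unit hypercube, whatever W does.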
LAM ALGORITHM
SELF-ORGANIZING MAP (SOM)
A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map. SOMs are useful for visualizing low-dimensional views of high-dimensional data, akin to multidimensional scaling. The model was first described as an artificial neural network by the Finnish professor Teuvo Kohonen, and is sometimes called a Kohonen map.
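A minimal SOM training sketch in Python. The grid shape, learning-rate schedule, and Gaussian neighbourhood decay below are illustrative assumptions, not taken from the text above.

```python
import numpy as np

def train_som(data, grid=(5, 5), epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal SOM: schedules and grid size are illustrative choices."""
    rng = np.random.default_rng(seed)
    rows, cols = grid
    # node positions on the 2-D map, and randomly initialised weight vectors
    pos = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    weights = rng.random((rows * cols, data.shape[1]))
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5   # shrinking neighbourhood
        for x in rng.permutation(data):
            # best-matching unit: node whose weights are closest to the input
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            # Gaussian neighbourhood on the map pulls nearby nodes toward x
            d2 = np.sum((pos - pos[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

# Example: map 3-D vectors onto a 5x5 grid
data = np.random.default_rng(1).random((100, 3))
weights = train_som(data)
print(weights.shape)  # (25, 3)
```

Each map node ends up with a weight vector representing a region of the input space, giving the discretized two-dimensional representation described above.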
LVQ was invented by Teuvo Kohonen. An LVQ system is represented by prototypes W=(w(i),...,w(n)) which are defined in the feature space of observed data. In winner-take-all training algorithms one determines, for each data point, the prototype which is closest to the input according to a given distance measure. The position of this so-called winner prototype is then adapted, i.e. the winner is moved closer if it correctly classifies the data point or moved away if it classifies the data point incorrectly.
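The winner-take-all adaptation described above can be sketched as a single LVQ1 update step. The Euclidean distance measure and the learning rate are illustrative choices.

```python
import numpy as np

def lvq1_step(x, label, prototypes, proto_labels, lr=0.1):
    """One LVQ1 update (winner-take-all).

    The prototype closest to x is moved toward x if it classifies the
    data point correctly, and away from x otherwise.
    """
    winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
    if proto_labels[winner] == label:
        prototypes[winner] += lr * (x - prototypes[winner])  # attract
    else:
        prototypes[winner] -= lr * (x - prototypes[winner])  # repel
    return winner

# Example: two prototypes, one per class
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
w = lvq1_step(np.array([0.2, 0.1]), 0, protos, labels)
print(protos[0])  # winner was correct, so it moved toward the data point
```

Repeating this step over a labelled data set positions the prototypes so that nearest-prototype classification approximates the class boundaries.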
ADVANTAGE OF LVQ
It creates prototypes that are easy to interpret for experts in the respective application domain. LVQ systems can be applied to multi-class classification problems in a natural way.
USES:
Optical character recognition
Converting speech to phonemes
(RBF) NETWORKS
RBFNs are artificial neural networks for application to problems of supervised learning:
Find the surface in multidimensional space that best fits the training data.
Use this multidimensional surface to interpolate the test data.
(Figure: generalization by an RBF network, showing the output h(x).)
RADIAL FUNCTIONS
Characteristic feature: their response decreases (or increases) monotonically with distance from a central point. The center, the distance scale, and the precise shape of the radial function are parameters of the model, all fixed if it is linear. Typical radial functions are:
The Gaussian RBF (monotonically decreases with distance from the center).
The multiquadric RBF (monotonically increases with distance from the center).
A GAUSSIAN FUNCTION
A Gaussian RBF monotonically decreases with distance from the center. Gaussian-like RBFs are local (they give a significant response only in a neighborhood near the center) and are more commonly used than multiquadric-type RBFs, which have a global response.
A MULTIQUADRIC RBF
A multiquadric RBF, in the case of scalar input, monotonically increases with distance from the centre.
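The two radial functions above can be written out directly. The width parameter sigma and the specific multiquadric form sqrt(r^2 + sigma^2) are common conventions assumed here.

```python
import numpy as np

def gaussian_rbf(x, c, sigma=1.0):
    """Gaussian RBF: local response, decreases monotonically away from c."""
    return np.exp(-np.linalg.norm(x - c) ** 2 / (2 * sigma ** 2))

def multiquadric_rbf(x, c, sigma=1.0):
    """Multiquadric RBF: global response, increases monotonically away from c."""
    return np.sqrt(np.linalg.norm(x - c) ** 2 + sigma ** 2)

c = np.zeros(1)
print(gaussian_rbf(np.array([0.0]), c))      # 1.0 at the centre
print(gaussian_rbf(np.array([2.0]), c))      # smaller far from the centre
print(multiquadric_rbf(np.array([2.0]), c))  # larger far from the centre
```

The Gaussian decays toward zero away from the centre (local), while the multiquadric keeps growing (global), matching the contrast drawn above.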
RBF NETWORK
The basic architecture for an RBF network is a 3-layer network, as shown in the figure. The input layer is simply a fan-out layer and does no processing. The second, or hidden, layer performs a nonlinear mapping from the input space into a (usually) higher-dimensional space in which the patterns become linearly separable.
hidden layer (weights correspond to cluster centres, output function usually Gaussian)
THREE LAYERS
Input layer
Hidden layer
Output layer
OUTPUT LAYER
The final layer performs a simple weighted sum with a linear output. If the RBF network is used for function approximation (matching a real number), then this output is fine. However, if pattern classification is required, then a hard-limiter or sigmoid function could be placed on the output neurons to give 0/1 output values.
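The three layers above (fan-out input, Gaussian hidden layer, linear weighted-sum output) fit in a short sketch. Placing the centres on a grid, the fixed sigma, and the least-squares fit of the output weights are illustrative assumptions; in practice centres often come from clustering the training data.

```python
import numpy as np

def rbf_design_matrix(X, centres, sigma):
    """Hidden-layer outputs: one Gaussian response per (sample, centre) pair."""
    d2 = np.sum((X[:, None, :] - centres[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

# Function approximation example: fit sin(x) on [0, 2*pi]
X = np.linspace(0, 2 * np.pi, 40)[:, None]   # input layer: just fans out X
y = np.sin(X[:, 0])
centres = np.linspace(0, 2 * np.pi, 10)[:, None]
H = rbf_design_matrix(X, centres, sigma=0.7)  # hidden layer: nonlinear mapping

# Output layer: linear weighted sum, weights fitted by least squares
w, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = H @ w
print(np.max(np.abs(pred - y)))
```

Because the output is linear in the weights, training the final layer reduces to solving a linear least-squares problem, which is one reason RBF networks train quickly.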
ADVANTAGES/DISADVANTAGES
An RBF network trains faster than an MLP. The hidden layer is easier to interpret than the hidden layer in an MLP. Although an RBF network is quick to train, once training is finished and it is being used it is slower than an MLP, so where speed is a factor an MLP may be more appropriate.