
TEMPORAL ASSOCIATIVE MEMORY

It is a two-layer network that uses unsupervised learning. It is a cyclic sequential encoder with feedback paths. It learns to associate bipolar or binary sequential patterns: A1 with A2, A2 with A3, A3 with A4, ..., An-1 with An, and An with A1.
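As a minimal sketch of this idea, the fragment below stores a short cyclic sequence of bipolar patterns in a single correlation matrix and replays it by repeated recall. The outer-product (Hebbian) encoding and the example patterns are assumptions for illustration; the slides do not give the exact learning rule.

import numpy as np

# Bipolar patterns A1..A3 (dimension 4); hypothetical example data.
A = np.array([[ 1, -1,  1, -1],
              [ 1,  1, -1, -1],
              [-1,  1,  1, -1]])

# Encode the cyclic sequence A1->A2, A2->A3, A3->A1 with outer products
# (an assumed Hebbian-style rule; the slide does not give the exact formula).
W = sum(np.outer(A[i], A[(i + 1) % len(A)]) for i in range(len(A)))

def step(x, W):
    # One recall step: matrix product followed by a bipolar threshold.
    return np.where(x @ W >= 0, 1, -1)

x = A[0]
for _ in range(len(A)):        # stepping the memory replays the cycle A1 -> A2 -> A3 -> A1
    x = step(x, W)
    print(x)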

ARCHITECTURE OF TAM

LINEAR ASSOCIATIVE MEMORY (LAM)


In 1972, James Anderson developed a linear model called the Linear Associative Memory (LAM). Connections between neurons are strengthened every time they are activated. An extension of the linear associator is the Brain-State-in-a-Box model.

BRAIN-STATE-IN-A-BOX MODEL


The brain-state-in-a-box (BSB) model was first described by Anderson (1977). The BSB model is basically a positive feedback system with amplitude limitation. It consists of a highly interconnected set of neurons that feed back upon themselves.

BSB: DEFINITION
Let W be a symmetric weight matrix whose largest eigenvalues have positive real components, and let x(0) be the initial state vector of the model, representing an input activation pattern. Assuming there are N neurons in the model, the state vector has dimension N and the weight matrix W is an N x N matrix.


The BSB algorithm is completely defined by the following pair of equations:


y(n) = x(n) + β W x(n)
x(n+1) = φ(y(n))

where β is a small positive constant called the feedback factor, x(n) is the state vector of the model at discrete time n, and φ(·) is the activation function.

BLOCK DIAGRAM OF THE BRAIN-STATE-IN-A-BOX (BSB) MODEL

SIGNAL-FLOW GRAPH OF THE LINEAR ASSOCIATOR REPRESENTED BY THE WEIGHT MATRIX


PIECEWISE-LINEAR ACTIVATION FUNCTION

The activation function φ is a piecewise-linear function that operates on yj(n), the jth component of the vector y(n):

xj(n+1) = φ(yj(n))
        = +1      if yj(n) > +1
        = yj(n)   if -1 < yj(n) < +1      (3)
        = -1      if yj(n) < -1


PIECEWISE-LINEAR ACTIVATION FUNCTION USED IN THE BSB MODEL

Equation (3) constrains the state vector of the BSB model to lie within an N-dimensional unit cube centered on the origin.
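A minimal sketch of the BSB iteration defined by the two equations above and the piecewise-linear limiter of Equation (3); the weight matrix W, the feedback factor, and the initial state below are illustrative values only.

import numpy as np

def bsb(x0, W, beta=0.1, steps=50):
    # Iterate the BSB model: y(n) = x(n) + beta*W*x(n), x(n+1) = phi(y(n)).
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        y = x + beta * (W @ x)          # positive feedback through W
        x = np.clip(y, -1.0, 1.0)       # piecewise-linear limiter, Equation (3)
    return x

# Illustrative symmetric weight matrix with a positive largest eigenvalue,
# and a small initial activation inside the unit cube.
W  = np.array([[0.5, 0.2],
               [0.2, 0.5]])
x0 = np.array([0.3, 0.1])
print(bsb(x0, W))   # the state is driven toward a corner of the cube [-1, 1]^2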

LAM ALGORITHM
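The algorithm itself appears as a figure in the original slides. As a minimal sketch, assuming the Hebbian outer-product rule implied by the earlier description (connections strengthened whenever the connected neurons are co-activated), a linear associator can be built and queried as follows; the key/value vectors are made-up examples.

import numpy as np

# Hypothetical key/value pairs to associate (keys should be roughly orthonormal
# for purely linear recall to be exact).
keys   = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0]])
values = np.array([[ 1.0, -1.0],
                   [-1.0,  1.0]])

# Hebbian (outer-product) learning: every co-activation strengthens the connection.
W = sum(np.outer(v, k) for k, v in zip(keys, values))

# Linear recall: the stored value is read out with a single matrix multiplication.
print(W @ keys[0])   # ~ values[0]
print(W @ keys[1])   # ~ values[1]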


LEARNING VECTOR QUANTIZATION (LVQ)


Learning Vector Quantization (LVQ) is a prototype-based supervised classification algorithm. LVQ is the supervised counterpart of vector quantization systems. LVQ can be understood as a special case of an artificial neural network; more precisely, it applies a winner-take-all learning-based approach. It is related to self-organizing maps (SOM).


SELF-ORGANIZING MAP (SOM)

A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map. SOMs are useful for producing low-dimensional views of high-dimensional data, akin to multidimensional scaling. The model was first described as an artificial neural network by the Finnish professor Teuvo Kohonen, and is sometimes called a Kohonen map.
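The slide does not spell out the training rule, but a minimal sketch of the standard Kohonen update (find the best-matching unit, then pull its grid neighborhood toward the input) looks roughly like this; the grid size, learning rate, and neighborhood width are illustrative.

import numpy as np

rng = np.random.default_rng(0)

# A 10x10 map of weight vectors in a 3-dimensional input space (illustrative sizes).
grid_h, grid_w, dim = 10, 10, 3
weights = rng.random((grid_h, grid_w, dim))
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

def train_step(x, weights, lr=0.1, sigma=2.0):
    # One Kohonen update: find the best-matching unit, then pull its neighborhood toward x.
    dist = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dist), dist.shape)        # best-matching unit
    grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))                 # Gaussian neighborhood
    weights += lr * h[..., None] * (x - weights)
    return weights

data = rng.random((500, dim))        # hypothetical training samples
for x in data:
    weights = train_step(x, weights)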


LVQ was invented by Teuvo Kohonen. An LVQ system is represented by prototypes W = (w(1), ..., w(n)) which are defined in the feature space of the observed data. In winner-take-all training algorithms, one determines, for each data point, the prototype which is closest to the input according to a given distance measure. The position of this so-called winner prototype is then adapted: the winner is moved closer if it correctly classifies the data point, or moved away if it classifies the data point incorrectly.
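A minimal sketch of this winner-take-all update (the basic LVQ1 rule): the nearest prototype is attracted to the data point when its label matches and repelled otherwise. The learning rate, prototypes, and data below are illustrative.

import numpy as np

def lvq1_step(x, y, prototypes, proto_labels, lr=0.05):
    # One LVQ1 update: move the winner toward x if its label is correct, away otherwise.
    i = np.argmin(np.linalg.norm(prototypes - x, axis=1))   # winner prototype
    sign = 1.0 if proto_labels[i] == y else -1.0
    prototypes[i] += sign * lr * (x - prototypes[i])
    return prototypes

# Illustrative two-class data with one prototype per class (values are made up).
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
X = np.array([[0.1, 0.2], [0.9, 1.1], [0.2, 0.0], [1.2, 0.8]])
y = np.array([0, 1, 0, 1])

for xi, yi in zip(X, y):
    protos = lvq1_step(xi, yi, protos, labels)
print(protos)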


ADVANTAGE OF LVQ
It creates prototypes that are easy to interpret for experts in the respective application domain. LVQ systems can be applied to multi-class classification problems in a natural way.

Uses: optical character recognition, converting speech to phonemes.


RADIAL BASIS FUNCTION (RBF) NETWORKS

RBF networks are artificial neural networks for application to problems of supervised learning:

Regression
Classification
Time series prediction


RADIAL BASIS FUNCTION (RBF) NETWORKS

A kind of supervised neural network. The design of the network is treated as a curve-fitting problem:

Learning: find a surface in multidimensional space that best fits the training data.
Generalization: use this multidimensional surface to interpolate the test data.

RADIAL BASIS FUNCTION (RBF) NETWORKS

Approximate a function with a linear combination of radial basis functions:

F(x) = Σ wi hi(x)

where h(x) is usually a Gaussian function.

RADIAL FUNCTIONS

Characteristic feature: their response decreases (or increases) monotonically with distance from a central point. The center, the distance scale, and the precise shape of the radial function are parameters of the model, all fixed if it is linear. Typical radial functions are:

The Gaussian RBF (monotonically decreases with distance from the center).
The multiquadric RBF (monotonically increases with distance from the center).


A GAUSSIAN FUNCTION

A Gaussian RBF monotonically decreases with distance from the center. Gaussian-like RBFs are local (they give a significant response only in a neighborhood near the center) and are more commonly used than multiquadric-type RBFs, which have a global response.

A MULTIQUADRIC RBF

A multiquadric RBF, in the case of scalar input, increases monotonically with distance from the centre.
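For concreteness, the two typical radial functions can be written as below. The Gaussian form matches the formula given later on the RBF slide; the multiquadric form sqrt((x - c)^2 + r^2) is the standard one and is stated here as an assumption, since the slides give no formula for it.

import numpy as np

c, r = 0.0, 1.0                      # center and width (distance scale)

def gaussian_rbf(x, c=c, r=r):
    # Local response: decreases monotonically with distance from the center.
    return np.exp(-((x - c) ** 2) / r ** 2)

def multiquadric_rbf(x, c=c, r=r):
    # Global response: increases monotonically with distance from the center
    # (standard multiquadric form, assumed here since the slides give no formula).
    return np.sqrt((x - c) ** 2 + r ** 2)

x = np.linspace(-5, 5, 11)
print(gaussian_rbf(x))      # ~0 far from the center, 1 at the center
print(multiquadric_rbf(x))  # grows without bound away from the center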

RADIAL BASIS FUNCTION NETWORKS


RBFs are usually used in a single-layer network. An RBF network is nonlinear if the basis functions can move or change size, or if there is more than one hidden layer.


RBF NETWORK

The basic architecture of an RBF network is a 3-layer network, as shown in the figure. The input layer is simply a fan-out layer and does no processing. The second or hidden layer performs a nonlinear mapping from the input space into a (usually) higher-dimensional space in which the patterns become linearly separable.

Figure: a 3-layer RBF network with an input layer (fan-out), a hidden layer (weights correspond to cluster centres, output function usually Gaussian), and an output layer (linear weighted sum).


THREE LAYERS

Input layer: source nodes that connect the network to its environment.

Hidden layer: hidden units provide a set of basis functions; high dimensionality.

Output layer: linear combination of the hidden functions.



OUTPUT LAYER

The final layer performs a simple weighted sum with a linear output. If the RBF network is used for function approximation (matching a real number), then this output is fine. However, if pattern classification is required, then a hard limiter or sigmoid function could be placed on the output neurons to give 0/1 output values.

RADIAL BASIS FUNCTION

f(x) = Σ (j = 1 to m) wj hj(x)

hj(x) = exp( -(x - cj)² / rj² )

where cj is the center of a region and rj is the width of the receptive field.
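A minimal sketch of such a network for 1-D regression, using the Gaussian hj(x) above as the hidden layer and a linear weighted sum as the output layer. The fixed, evenly spaced centers, the common width, and the least-squares training of the output weights are assumptions for illustration; the slides do not prescribe how cj, rj, or wj are chosen.

import numpy as np

rng = np.random.default_rng(0)

def design_matrix(X, centers, widths):
    # h_j(x) = exp(-(x - c_j)^2 / r_j^2) for every sample x and every hidden unit j.
    d2 = (X[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / widths[None, :] ** 2)

# Hypothetical 1-D regression problem: a noisy sine curve.
X = np.linspace(0, 2 * np.pi, 100)
y = np.sin(X) + 0.1 * rng.standard_normal(100)

# Hidden layer: centers spread over the input range, a common fixed width rj.
centers = np.linspace(0, 2 * np.pi, 10)
widths = np.full(10, 0.8)

# Output layer: linear weighted sum, solved here by least squares
# (one common way to train the output weights; not specified on the slides).
H = design_matrix(X, centers, widths)
w, *_ = np.linalg.lstsq(H, y, rcond=None)

y_hat = H @ w                          # f(x) = sum_j wj hj(x)
print(np.mean((y_hat - y) ** 2))       # small training error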


ADVANTAGES/DISADVANTAGES
An RBF network trains faster than an MLP. The hidden layer is easier to interpret than the hidden layer in an MLP. Although an RBF network is quick to train, once training is finished and the network is in use it is slower than an MLP, so where speed is a factor an MLP may be more appropriate.

