
Neural Networks

Chapter 20.5
- Introduction & Basics
- Perceptrons
- Perceptron Learning and PLR
- Beyond Perceptrons
- Two-Layered Feed-Forward Neural Networks

Introduction
"'Artificial Neural Networks' are massively parallel interconnected networks of simple (usually adaptive) elements and their hierarchical organizations which are intended to interact with the objects of the real world in the same way as biological nervous systems do." Teuvo Kohonen

Introduction

Known as:

- Neural Networks (NNs)
- Artificial Neural Networks (ANNs)
- Connectionist Models
- Parallel Distributed Processing (PDP) Models

Neural Networks are a fine-grained, parallel, distributed computing model.


Introduction

NN is similar to the brain:
- knowledge is acquired experientially (learning)
- knowledge is stored in connections (weights)

The brain is composed of neuron cells:
- dendrites collect input from other neurons
- a single axon sends output to other neurons
- neurons connect at synapses that have varying strength

This model is greatly simplified.

Introduction

Constraints on human info. processing:
- number of neurons: 10^11
- number of connections: 10^4 per neuron
- neuron death rate: 10^5 per day
- neuron birth rate: ~0
- connection birth rate: very slow
- performance: about 10^2 msec, i.e., about 100 sequential neuron firings for "many" tasks


Introduction

Attractions of NN approach:

- can be massively parallel
  - MIMD, optical computing, analog systems
- can do complex tasks
  - from a large collection of simple processing elements, interesting complex global behavior emerges
  - pattern recognition (handwriting, facial expressions, etc.)
  - forecasting (stock prices, power grid demand)
  - adaptive control (autonomous vehicle control, robot control)
- is a robust computation
  - can handle noisy and incomplete data thanks to a fine-grained, distributed, and continuous knowledge representation


Introduction

Attractions of NN approach:

- fault tolerant
  - ok to have faulty elements and bad connections
  - isn't dependent on a fixed set of elements and connections
- degrades gracefully
  - continues to function, at a lower level of performance, when portions of the network are faulty
- uses inductive learning
- useful as a psychological model
- useful for a wide variety of high-performance applications



Basics of Neural Networks

Neural network composition:

- a large number of units
  - simple neuron-like processing elements (PEs)
- connected by a large number of links
  - directed from one unit to another
- with a weight associated with each link
  - positive or negative real values
  - the means of long-term storage
  - adjusted by learning
- and an activation level associated with each unit
  - the result of the unit's processing
  - the unit's output

Basics of Neural Networks

Neural network configurations:

- represent as a graph
  - nodes: units
  - edges: links
- common configurations:
  - single-layered
  - multi-layered
  - feedback
  - layer skipping
  - fully connected (N^2 links)

[figure: an example network with units grouped into Layer 1, Layer 2, and Layer 3]


Basics of Neural Networks

Unit composition:

- a set of input links
  - from other units or sensors of the environment
- a set of output links
  - to other units or effectors of the environment
- and an activation function
  - computes the activation level based on local info: the inputs from neighbors and the weights
  - is a simple function of the linear combination of its inputs

[figure: a unit combining inputs weighted w1..wn, applying the activation function, and producing the output]


Basics of Neural Networks

Given n inputs, the unit's activation is defined by:
a = g( (w1 * x1) + (w2 * x2) + ... + (wn * xn) )
where:
- wi are the weights
- xi are the input values
- g() is a simple non-linear function; letting in_i be the weighted sum of the inputs, common choices are:
  - step: activation flips from 0 to 1 when in_i >= threshold T
  - sign: activation flips from -1 to +1 when in_i >= 0
  - sigmoid: activation transitions smoothly from 0 to 1 around in_i = 0, computed as 1/(1 + e^-x) where x is in_i

[figure: plots of the step (with threshold T), sign, and sigmoid activation functions]
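These three activation functions are easy to sketch in Python (a minimal sketch; the function names are mine, not from the slides):

```python
import math

def step(in_sum, threshold):
    """Step activation: flips from 0 to 1 at the threshold."""
    return 1 if in_sum >= threshold else 0

def sign(in_sum):
    """Sign activation: flips from -1 to +1 at zero."""
    return 1 if in_sum >= 0 else -1

def sigmoid(in_sum):
    """Sigmoid activation: smooth transition from 0 to 1 around zero."""
    return 1 / (1 + math.exp(-in_sum))

def activation(weights, inputs, g=sigmoid):
    """a = g(w1*x1 + w2*x2 + ... + wn*xn)"""
    in_sum = sum(w * x for w, x in zip(weights, inputs))
    return g(in_sum)
```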

Perceptrons: Linear Threshold Units (LTU)

LTUs were studied in the 1950s!
- mainly as single-layered nets, since an effective learning algorithm was known for that case

Perceptrons:
- a simple 1-layer network; units act independently
- composed of linear threshold units (LTUs)
- a unit's inputs, xi, are weighted by wi and combined
- a step function computes the activation level, a

[figure: an LTU summing the weighted inputs x1..xi..xn (weights w1..wn) and applying a step function to produce a]


Perceptrons: Linear Threshold Units (LTU)

The threshold is just another weight (called the bias):

(w1 * x1) + (w2 * x2) + ... + (wn * xn) >= t

is equivalent to

(w1 * x1) + (w2 * x2) + ... + (wn * xn) + (t * -1) >= 0

[figure: an LTU with an extra input fixed at -1 whose weight is the threshold t]


Perceptrons: AND Example

AND Perceptron:
- inputs are 0 or 1
- output is 1 when both x1 and x2 are 1
- weights: .5 on x1, .5 on x2, and .75 on the bias input (fixed at -1)

Checking the weighted sums:
- .5*1 + .5*1 + .75*(-1) = .25, so output = 1
- .5*1 + .5*0 + .75*(-1) = -.25, so output = 0
- .5*0 + .5*0 + .75*(-1) = -.75, so output = 0

2-D input space:
- 4 possible data points
- the threshold acts like a separating line

[figure: the four points of the x1-x2 input space, with a line separating (1,1) from the other three]
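A quick sketch verifying the AND perceptron's weights over all four inputs (Python; the helper name ltu is mine):

```python
def ltu(weights, bias, inputs):
    """Linear threshold unit: bias is the weight on an extra input fixed at -1."""
    in_sum = sum(w * x for w, x in zip(weights, inputs)) + bias * -1
    return 1 if in_sum >= 0 else 0

# AND perceptron from the slide: input weights .5, .5 and bias weight .75
for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", ltu([.5, .5], .75, [x1, x2]))
# prints 0 0 -> 0, 0 1 -> 0, 1 0 -> 0, 1 1 -> 1
```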


Perceptrons: OR Example

OR Perceptron:
- inputs are 0 or 1
- output is 1 when either x1 and/or x2 is 1
- weights: .5 on x1, .5 on x2, and .25 on the bias input (fixed at -1)

Checking the weighted sums:
- .5*1 + .5*1 + .25*(-1) = .75, so output = 1
- .5*1 + .5*0 + .25*(-1) = .25, so output = 1
- .5*0 + .5*0 + .25*(-1) = -.25, so output = 0

2-D input space:
- 4 possible data points
- the threshold acts like a separating line

[figure: the four points of the x1-x2 input space, with a line separating (0,0) from the other three]


Perceptron Learning

How might perceptrons learn?

The programmer specifies:
- the number of units in each layer
- the connectivity between units

So the only unknowns are the weights.

Perceptrons learn by changing their weights:
- supervised learning is used
- the correct output is given for each training example
  - an example is a list of values for the input units
  - the correct output is a list of desired values for the output units


Perceptron Learning: Algorithm


1. Initialize the weights in the network (usually with random values)
2. Repeat until all examples are correctly classified or some other stopping criterion is met:
   for each example e in the training set do
     a. O = neural_net_output(network, e)
     b. T = desired output, i.e., the Target or Teacher's output
     c. update_weights(e, O, T)

Unlike other learning techniques, perceptrons need to see all of the training examples multiple times. Each pass through all of the training examples is called an epoch.
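A sketch of this loop in Python; the update rule itself is given on the next slide, so it is passed in as a parameter (the function and variable names are mine):

```python
import random

def train_perceptron(examples, n_inputs, update_weights, max_epochs=100):
    """examples: list of (inputs, target) pairs. Returns the learned weights.
    Weight index 0 is the bias weight on an extra input fixed at -1."""
    weights = [random.uniform(-0.5, 0.5) for _ in range(n_inputs + 1)]
    for epoch in range(max_epochs):
        all_correct = True
        for inputs, target in examples:
            x = [-1] + list(inputs)            # prepend the bias input
            in_sum = sum(w * xi for w, xi in zip(weights, x))
            output = 1 if in_sum >= 0 else 0   # step activation
            if output != target:
                all_correct = False
                update_weights(weights, x, target, output)
        if all_correct:                        # stopping criterion
            break
    return weights
```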

Perceptron Learning: The Rule

How should the weights be updated?
- Determining how to update the weights is a case of the credit assignment problem.

Perceptron Learning Rule:
wi = wi + Δwi, where Δwi = α * xi * (T - O)
- xi is the value associated with the ith input unit
- α is a constant between 0.0 and 1.0 called the learning rate
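The rule as a drop-in update_weights for the training loop sketched earlier (the value of alpha here is my choice, not from the slides):

```python
def make_plr_update(alpha=0.25):
    """Perceptron Learning Rule: delta_wi = alpha * xi * (T - O)."""
    def update_weights(weights, x, target, output):
        for i in range(len(weights)):
            weights[i] += alpha * x[i] * (target - output)
    return update_weights
```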


Perceptron Learning: The Rule

Δwi = α * xi * (T - O)
- note that it doesn't depend on wi

When won't the weight be changed (i.e., Δwi = 0)?
- correct output, i.e., T = O, gives α * xi * 0 = 0
- zero input, i.e., xi = 0, gives α * 0 * (T - O) = 0

What should happen to the weight if T = 1 and O = 0?
- Increase it, so that next time the weighted sum may exceed the threshold, causing the output to be 1.

What should happen to the weight if T = 0 and O = 1?
- Decrease it, so that next time the weighted sum may not exceed the threshold, causing the output to be 0.


Perceptron Learning: Example

In-class example: learning OR (see the sketch below)
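Putting the earlier sketches together for the OR example (the learned weights depend on the random initialization, so exact values vary):

```python
or_examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights = train_perceptron(or_examples, n_inputs=2,
                           update_weights=make_plr_update(alpha=0.25))
print(weights)  # bias weight followed by w1, w2; exact values vary per run
```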


Perceptron Learning Rule (PLR)

PLR is also called the Delta Rule or the Widrow-Hoff Rule.
PLR is a variant of a rule proposed by Rosenblatt in 1960.

PLR is based on an idea of Hebb's:
- the strength of a connection between two units should be adjusted in proportion to the product of their simultaneous activations
- the product is used as a means of measuring the correlation between the values output by the two units

Perceptron Learning Rule (PLR)

PLR is a "local" learning rule only local information in the network is needed to update a weight PLR performs gradient descent in "weight space" this rule iteratively adjusts all of the weights so that at for each training example the error is monotonically non-increasing, i.e. ~decreases


Perceptron Learning Rule (PLR)

The Perceptron Convergence Theorem says that if a set of examples is learnable, then PLR will find the necessary weights:
- in a finite number of steps
- independent of the initial weights

In other words, if a solution exists, PLR's gradient descent is guaranteed to find an optimal solution (i.e., 100% correct classification) for any 1-layer neural network.


Limits of Perceptron Learning

What are the limitations of perceptron learning?
- A single perceptron's output is determined by the separating hyperplane defined by
  (w1 * x1) + (w2 * x2) + ... + (wn * xn) = t
- So perceptrons can only learn functions that are linearly separable (in input space).


Perceptrons: XOR Example

XOR Perceptron:
- inputs are 0 or 1
- output is 1 when x1 is 1 and x2 is 0, or when x1 is 0 and x2 is 1
- input weights .5 and .5, but what bias weight could work? (shown as "???" on the slide)

2-D input space with 4 possible data points: how do you separate the positives from the negatives using a straight line?

[figure: the four points of the x1-x2 input space; (0,1) and (1,0) are positive, (0,0) and (1,1) are negative, and no single line separates them]

Perceptron Learning Summary


In general, the goal of learning in a perceptron is to adjust the separating hyperplane that divides an n-dimensional input space, where n is the number of input units, by modifying the weights (and biases) until all of the examples with target value 1 are on one side of the hyperplane and all of the examples with target value 0 are on the other side.


Beyond Perceptrons

Perceptrons as a computing model are too weak because they can only learn linearly-separable functions. To enhance the computational ability, general neural networks have multiple layers of units.

The challenge is to find a learning rule that works for multi-layered networks.


Beyond Perceptrons

A feed-forward multi-layered network computes a function of the inputs and the weights.
- Input units (on left or bottom):
  - activation is determined by the environment
- Output units (on right or top):
  - activation is the result
- Hidden units (between input and output units):
  - cannot be observed directly
- Perceptrons have input units followed by one layer of output units, i.e., no hidden units.


Beyond Perceptrons

NNs with one hidden layer of a sufficient number of units can compute functions associated with convex classification regions in input space. NNs with two hidden layers are universal computing devices, although the complexity of the function is limited by the number of units:
- if too few, the network will be unable to represent the function
- if too many, the network will memorize examples and is subject to overfitting


Two-Layered Feed-Forward Neural Networks


Notation:
- Ik: the inputs; the input units' activations are the inputs themselves, ak = Ik
- Wk,j: weights on links from input unit k to hidden unit j
- aj: activations of the hidden units
- Wj,i: weights on links from hidden unit j to output unit i
- ai: activations of the output units, which are the network outputs (a1 = O1, a2 = O2)

[figure: a two-layered feed-forward network with inputs I1..I6, a hidden layer, and two output units O1 and O2]

Two-Layered Feed-Forward Neural Networks


- Two-layered: count the layers of units that compute an activation.
- Feed-forward: each unit in a layer connects forward to all of the units in the next layer, with no cycles:
  - no links within the same layer
  - no links to prior layers
  - no skipping of layers

[figure: the same network, with the hidden units labeled Layer 1 and the output units labeled Layer 2]


Neural Networks

Chapter 20.5
- Two-Layered Feed-Forward Neural Networks
- Solving XOR
- Learning in Multi-Layered Feed-Forward NN
- Back-Propagation
- Computing the Change for Weights
- Other Issues & Applications

Conquering XOR

XOR Perceptron?:
- inputs are 0 or 1
- output is 1 when I1 is 1 and I2 is 0, or when I1 is 0 and I2 is 1

Each unit in the hidden layer acts like a perceptron learning a separating line:
- the top hidden unit acts like an OR perceptron (input weights .5 and .5, bias weight .25)
- the bottom hidden unit acts like an AND perceptron (input weights .5 and .5, bias weight .75)
- the output unit weights the OR unit by .5 and the AND unit by -.5

[figure: I1 and I2 feed both hidden units (OR and AND), which feed the output unit O; the input-space plot shows the OR and AND separating lines]

Conquering XOR

XOR Perceptron?:
- inputs are 0 or 1
- output is 1 when I1 is 1 and I2 is 0, or when I1 is 0 and I2 is 1

The output unit combines the hidden units' separating lines by intersecting the "half-planes" they define:
- when OR is 1 and AND is 0, the output O is 1

[figure: the input space with the OR and AND separating lines; the positive points (0,1) and (1,0) lie between the two lines]
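A sketch of this hand-built XOR network in Python. The slide gives the hidden units' weights and biases but not the output unit's threshold, so the .25 used below is my assumption (any threshold in (0, .5] works):

```python
def ltu(weights, bias, inputs):
    """Linear threshold unit: bias is the weight on an extra input fixed at -1."""
    in_sum = sum(w * x for w, x in zip(weights, inputs)) + bias * -1
    return 1 if in_sum >= 0 else 0

def xor_net(i1, i2):
    h_or  = ltu([.5, .5], .25, [i1, i2])        # top hidden unit: OR
    h_and = ltu([.5, .5], .75, [i1, i2])        # bottom hidden unit: AND
    return ltu([.5, -.5], .25, [h_or, h_and])   # output threshold assumed

for i1 in (0, 1):
    for i2 in (0, 1):
        print(i1, i2, "->", xor_net(i1, i2))
# prints 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0
```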

Learning in Multi-Layered Feed-Forward NN

PLR doesn't work in multi-layered feed-forward nets, since the desired values for the hidden units aren't known. We must again solve the Credit Assignment Problem:
- determine which weights to credit/blame for the output error in the network
- determine which weights in the network should be updated, and how to update them


Learning in Multi-Layered Feed-Forward NN

Back-Propagation:
- a method for learning the weights in these networks
- generalizes PLR
- (re)discovered by Rumelhart, Hinton, and Williams in 1986

Back-Propagation approach:
- a gradient-descent algorithm that minimizes the error on the training data
- errors are propagated through the network, starting at the output units and working backwards towards the input units


Back-Propagation Algorithm
1. Initialize the weights in the network (usually with random values, as in the perceptron learning algorithm)
2. Repeat until all examples are correctly classified or some other stopping criterion is met:
   for each example e in the training set do
     a. forward pass: Oi = neural_net_output(network, e)
     b. Ti = desired output, i.e., the Target or Teacher's output
     c. calculate the error (Ti - Oi) at the output units
     d. backward pass:
        i. compute Δwj,i for all weights from the hidden layer to the output layer
        ii. compute Δwk,j for all weights from the inputs to the hidden layer
     e. update_weights(network, ΔWj,i, ΔWk,j)


Computing the Change for Weights

Back-propagation performs a gradient descent search in weight space to learn the network weights. Given a network with n weights:
- each configuration of weights is a vector, W, of length n that defines an instance of the network
- W can be considered a point in an n-dimensional weight space, where each dimension is associated with one of the connections in the network


Computing the Change for Weights

Given a training set of m examples:
- each network defined by the vector W has an associated total error, E, on all of the training data
- E, the sum of squared errors (SSE), is defined as:
  E = E1 + E2 + ... + Em
  where each Ei is the squared error of the network on the ith training example

Given n output units in the network:
  Ei = ((T1 - O1)^2 + (T2 - O2)^2 + ... + (Tn - On)^2) / 2
- Ti is the target value for the ith output unit
- Oi is the network output value for the ith output unit
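These two formulas transcribe directly into Python (a minimal sketch; the names are mine):

```python
def example_error(targets, outputs):
    """Squared error on one example: Ei = sum over outputs of (T - O)^2, halved."""
    return sum((t - o) ** 2 for t, o in zip(targets, outputs)) / 2

def total_error(all_targets, all_outputs):
    """Total SSE over all m training examples: E = E1 + E2 + ... + Em."""
    return sum(example_error(t, o) for t, o in zip(all_targets, all_outputs))
```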


Computing the Change for Weights


Visualized as a 2-D error surface over weight space:
- each point in the w1-w2 plane is a weight configuration
- each point has an associated total error E
- the surface represents the errors for all weight configurations
- the goal is to find a lower point on the error surface (a local minimum)
- gradient descent follows the direction of steepest descent, i.e., where E decreases the most

[figure: an error surface E plotted over the w1-w2 plane]


Computing the Change for Weights

The gradient is defined as:
∇E = [∂E/∂w1, ∂E/∂w2, ..., ∂E/∂wn]
Then change the ith weight by:
Δwi = -α * ∂E/∂wi
Computing the derivatives for calculating the gradient direction requires an activation function that is continuous, differentiable, non-decreasing, and easily computed:
- can't use the step function as in LTUs
- instead use the sigmoid function 1/(1 + e^-x), where x is in_i, the weighted sum of the inputs
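The sigmoid and its convenient derivative, g' = g(1 - g), in Python (a sketch):

```python
import math

def g(x):
    """Sigmoid: continuous, differentiable, non-decreasing, easily computed."""
    return 1 / (1 + math.exp(-x))

def g_prime(x):
    """Derivative of the sigmoid, via the identity g'(x) = g(x) * (1 - g(x))."""
    gx = g(x)
    return gx * (1 - gx)
```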

Computing the Change for Weights: Two-Layer Neural Network


For the weights between the hidden and output units, generalizing PLR for the sigmoid activation gives:
Δwj,i = -α * ∂E/∂wj,i
      = α * aj * (Ti - Oi) * g'(in_i)
      = α * aj * (Ti - Oi) * Oi * (1 - Oi)
where:
- wj,i is the weight on the link from hidden unit j to output unit i
- α is the learning rate parameter
- aj is the activation (i.e., output) of hidden unit j
- Ti is the teacher's output for output unit i
- Oi is the actual output of output unit i
- g' is the derivative of the sigmoid activation function, g' = g(1 - g)
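A numeric sanity check of this update with made-up values (all the numbers here are mine):

```python
alpha, a_j, T_i, O_i = 0.1, 0.8, 1.0, 0.6
delta_w_ji = alpha * a_j * (T_i - O_i) * O_i * (1 - O_i)
print(delta_w_ji)  # 0.1 * 0.8 * 0.4 * 0.6 * 0.4 = 0.00768 (up to float rounding)
```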


Two-Layered Feed-Forward Neural Networks


Δw1,2 = the product of:
- α, the learning rate
- a1, the activation along the link (hidden unit 1's output)
- (T2 - O2), the error at output unit 2
- O2 * (1 - O2), which is g'(in_2)

[figure: the two-layered network highlighting the link w1,2 from hidden unit 1 to output unit 2]


Computing the Change for Weights: Two-Layer Neural Network

For the weights between the inputs and the hidden units:
- there are no teacher-supplied correct output values for the hidden units
- so the error at these units is inferred by "back-propagating":
  - the error at an output unit is "distributed" back to each of the hidden units in proportion to the weight of the connection between them
  - the total error is distributed to all of the hidden units that contributed to that error
- each hidden unit accumulates some error from each of the output units to which it is connected


Computing the Change for Weights: Two-Layer Neural Network

For the weights between the inputs and the hidden units:
Δwk,j = -α * ∂E/∂wk,j
      = α * Ik * g'(in_j) * Σi( wj,i * (Ti - Oi) * g'(in_i) )
      = α * Ik * aj * (1 - aj) * Σi( wj,i * (Ti - Oi) * Oi * (1 - Oi) )
where:
- wk,j is the weight on the link from input k to hidden unit j
- wj,i is the weight on the link from hidden unit j to output unit i
- α is the learning rate parameter
- aj is the activation (i.e., output) of hidden unit j
- Ti is the teacher's output for output unit i
- Oi is the actual output of output unit i
- Ik is the kth input value
- g' is the derivative of the sigmoid activation function, g' = g(1 - g)


Two-Layered Feed-Forward Neural Networks


Δw1,2 = the product of:
- α, the learning rate
- I1, the activation along the link (the input value)
- a2 * (1 - a2), which is g'(in_2)
- the error distributed back from the outputs, each term weighting an output's error by the connecting weight and g'(in_i):
  Σi( w2,i * (Ti - Oi) * Oi * (1 - Oi) )
  = w2,1 * (T1 - O1) * O1 * (1 - O1) + w2,2 * (T2 - O2) * O2 * (1 - O2)

[figure: the network highlighting the link w1,2 from input I1 to hidden unit 2, and hidden unit 2's links w2,1 and w2,2 to the two output units]


Example: Comments about A4


1. Initialize the weights in the network (usually with random values)
2. Repeat until all examples are correctly classified or some other stopping criterion is met:
   for each example e in the training set do
     a. forward pass: Oi = neural_net_output(network, e)
        (compute the weighted sum, then the sigmoid activation)
     b. Ti = desired output, i.e., the Target or Teacher's output
     c. calculate the error (Ti - Oi) at the output units
     d. backward pass:
        i. compute Δwj,i = α * aj * (Ti - Oi) * Oi * (1 - Oi)
        ii. compute Δwk,j = α * Ik * aj * (1 - aj) * Σ( wj,i * (Ti - Oi) * Oi * (1 - Oi) )
     e. update_weights(network, ΔWj,i, ΔWk,j)
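A compact sketch of one training epoch implementing exactly these update formulas for a two-layered network (a sketch under my own naming, not the assignment's required code; bias inputs are omitted, as in the formulas above):

```python
import math

def g(x):
    """Sigmoid activation."""
    return 1 / (1 + math.exp(-x))

def backprop_epoch(W_kj, W_ji, examples, alpha=0.5):
    """One epoch of back-propagation on a two-layered feed-forward net.
    W_kj[k][j]: weight from input k to hidden unit j.
    W_ji[j][i]: weight from hidden unit j to output unit i.
    examples: list of (inputs I, targets T) pairs."""
    for I, T in examples:
        # forward pass: weighted sums, then sigmoid activations
        a = [g(sum(I[k] * W_kj[k][j] for k in range(len(I))))
             for j in range(len(W_ji))]
        O = [g(sum(a[j] * W_ji[j][i] for j in range(len(a))))
             for i in range(len(T))]
        # error terms at the output units: (Ti - Oi) * Oi * (1 - Oi)
        d_out = [(T[i] - O[i]) * O[i] * (1 - O[i]) for i in range(len(O))]
        # input -> hidden: dW_kj = alpha * Ik * aj*(1-aj) * sum_i(W_ji * d_out_i)
        for k in range(len(I)):
            for j in range(len(a)):
                back = sum(W_ji[j][i] * d_out[i] for i in range(len(O)))
                W_kj[k][j] += alpha * I[k] * a[j] * (1 - a[j]) * back
        # hidden -> output: dW_ji = alpha * aj * (Ti - Oi) * Oi*(1-Oi)
        for j in range(len(a)):
            for i in range(len(O)):
                W_ji[j][i] += alpha * a[j] * d_out[i]
```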


Other Issues

How should a network's error rate be estimated?
- Report the average error rate, using an evaluation method such as cross-validation run multiple times with different random initial weights.

How should the learning rate parameter be set?
- Use a tuning set or cross-validation to train with several candidate values of alpha, then select the value that gives the lowest error.


Other Issues

How many hidden layers should be used?
- usually just one hidden layer is used

How many hidden units should be in a layer?
- too few, and the concept can't be learned
- too many:
  - examples are just memorized
  - "overfitting", poor generalization
- Use a tuning set or cross-validation to determine experimentally the number of units that minimizes error.


Other Issues

How many examples should be in the training set?
- The larger the better, but training takes longer.
- To obtain 1 - e correct classification on the test set:
  - the training set should be of size approximately n/e, where:
    - n is the number of weights in the network
    - e is the test-set error fraction, between 0 and 1
  - train to classify 1 - e/2 of the training set correctly
- e.g., if n = 80 and e = 0.1 (i.e., 10% error on the test set):
  - the training set should be of size 800
  - training until 95% correct classification should produce 90% correct classification on the test set


Other Issues

When should training stop?
- too soon, and the concept isn't learned
- too late:
  - "overfitting", poor generalization
  - the error rate will go up on the testing set
- Train the network until the error rate on a tuning set begins increasing, rather than training until the error (i.e., the SSE) is minimized.


Applications

NETtalk (Sejnowski & Rosenberg, 1987):
- learns to say text by mapping character strings to phonemes

Neurogammon (Tesauro & Sejnowski, 1989):
- learns to play backgammon

Speech recognition (Waibel, 1989):
- learns to convert spoken words to text

Character recognition (Le Cun et al., 1989):
- learns to convert page images to text


Applications: ALVINN

ALVINN (Pomerleau, 1988): learns to control vehicle steering so as to stay in the middle of its lane.

Topology: a two-layered feed-forward network using back-propagation learning.

Topology, input:
- the input is a 480*512 pixel image, captured 15 times per second
- the color image is preprocessed to obtain a 30*32 pixel image
- each pixel is one byte, an integer from 0 to 255 corresponding to the brightness of the image
- the network has 960 input units (30*32)

Applications: ALVINN

Topology, output:
- the output is one of 30 discrete steering positions
  - output unit 1 means sharp left; output unit 30 means sharp right
- the target output is a set of 30 values forming a Gaussian distribution with a variance of 10, centered on the desired steering direction d: Oi = e^(-(i-d)^2/10)
- the actual steering output is determined by computing a least-squares best fit of the output units' values to a Gaussian distribution with a variance of 10; the peak of this distribution is taken as the steering direction
- the error for learning is: target output - actual output

Topology, hidden:
- only 4 hidden units, with complete connectivity: 960 input units to 4 hidden units to 30 output units
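A sketch of building the 30-value Gaussian target vector from the slide's formula Oi = e^(-(i-d)^2/10) (the function and variable names are mine):

```python
import math

def steering_targets(d, n_units=30):
    """Target outputs: a Gaussian (variance 10) centered on desired direction d."""
    return [math.exp(-(i - d) ** 2 / 10) for i in range(1, n_units + 1)]

targets = steering_targets(d=15.5)  # e.g., a desired direction near straight ahead
```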


Applications: ALVINN

Learning:
- continuously learns on the fly by observing:
  - a human driver (takes ~5 minutes from random initial weights)
  - itself (an epoch of training every 2 seconds thereafter)
- problems with using real, continuous data:
  - there aren't negative examples
  - the network may overfit the data in recent images (e.g., a straight road) at the expense of past images (e.g., a road with curves)
- solutions:
  - generate negative examples by synthesizing views of the road that are incorrect for the current steering
  - maintain a buffer of 200 real and synthesized images that keeps some images for many different steering directions

Applications: ALVINN

Results:
- has driven at speeds up to 70 mph
- has driven continuously for distances up to 90 miles
- has driven across the continent, during different times of the day and with different traffic conditions
- can drive on:
  - single-lane roads and highways
  - multi-lane highways
  - paved bike paths
  - dirt roads

see for yourself (video)


Summary

Advantages

- parallel processing architecture
- robust with respect to node failure
- fine-grained, distributed representation of knowledge
- robust with respect to noisy data
- incremental algorithm (i.e., learns as you go)
- simple computations
- empirically shown to work well for many problem domains


Summary

Disadvantages

- slow training (i.e., takes many epochs)
- poor interpretability (i.e., difficult to extract rules)
- ad hoc network topologies (i.e., layouts)
- hard to debug, because distributed representations preclude content checking
- may converge to a local, not global, minimum of error
- may be hard to describe a problem in terms of features with numerical values
- not known how to model higher-level cognitive mechanisms with the NN model

