[Figure: perceptron model. Inputs I1, I2, I3 are multiplied by weights W1, W2, W3 and summed into u; the activation function outputs O = 1 if u >= 0, otherwise O = 0.]
ECE_MTECH_SEMINAR AMIT PRASAD
12
Perceptron Example
Inputs: 2 and 1; weights: 0.5 and 0.3; bias: -1
u = 2(0.5) + 1(0.3) + (-1) = 0.3, so O = 1
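The slide's example can be checked with a short sketch. The weights and bias come from the slide; the step activation is assumed to fire at u >= 0:

```python
# Perceptron sketch for the example above: weighted sum plus bias,
# passed through a step activation assumed to fire at u >= 0.
def perceptron(inputs, weights, bias):
    u = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if u >= 0 else 0

# 2(0.5) + 1(0.3) + (-1) = 0.3, which is >= 0, so the output is 1
print(perceptron([2, 1], [0.5, 0.3], -1))  # 1
```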
Learning Procedure:
Randomly assign weights (between 0 and 1)
Present inputs from the training data
Get output O and nudge the weights so the result moves toward the desired output T
Repeat; stop when there are no errors, or after enough epochs
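The learning procedure above can be sketched as follows. The learning rate, epoch limit, and the AND-gate training data are illustrative assumptions, not from the slides:

```python
import random

def train(data, lr=0.1, epochs=100):
    """Perceptron learning sketch: random init, then error-driven nudges."""
    w = [random.random(), random.random()]  # random weights in [0, 1)
    b = random.random()
    for _ in range(epochs):
        errors = 0
        for inputs, target in data:
            u = sum(i * wi for i, wi in zip(inputs, w)) + b
            o = 1 if u >= 0 else 0
            err = target - o
            if err != 0:
                errors += 1
                # nudge weights toward the desired output T
                w = [wi + lr * err * i for wi, i in zip(w, inputs)]
                b += lr * err
        if errors == 0:  # stop when no errors
            break
    return w, b

# AND gate: linearly separable, so the perceptron can learn it
and_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(and_data)
```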
USEFULNESS AND CAPABILITY
NONLINEARITY
INPUT-OUTPUT MAPPING
ADAPTIVITY
EVIDENTIAL RESPONSE
FAULT TOLERANCE
VLSI IMPLEMENTABILITY
NEUROBIOLOGICAL ANALOGY
A BASIC ANN ALGORITHM
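The slide's algorithm figure is lost in this extraction. As a sketch: each neuron in a basic ANN computes a weighted sum of its inputs plus a bias and passes it through an activation function, layer by layer. The network sizes, weights, and logistic activation below are illustrative assumptions:

```python
import math

# One forward step of a tiny feed-forward ANN (2 inputs, 2 hidden
# neurons, 1 linear output); all weight values are illustrative.
def forward(x, w_hidden, b_hidden, w_out, b_out):
    # hidden layer: weighted sums squashed by a logistic activation
    h = [1 / (1 + math.exp(-(sum(xi * wij for xi, wij in zip(x, w_col)) + b)))
         for w_col, b in zip(w_hidden, b_hidden)]
    # output neuron: weighted sum of hidden activations plus bias
    return sum(hi * wi for hi, wi in zip(h, w_out)) + b_out

y = forward([1.0, 0.5], [[0.2, 0.4], [0.6, -0.1]], [0.0, 0.1], [0.3, 0.7], -0.2)
```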
STEP FUNCTION
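The step function's figure is lost in this extraction; as a sketch, it is the threshold activation used by the perceptron, with the threshold at 0 assumed:

```python
# Step (threshold) activation: fires (1) when the net input u reaches
# the threshold, assumed here to be 0; otherwise outputs 0.
def step(u, threshold=0.0):
    return 1 if u >= threshold else 0

print(step(0.3))   # 1
print(step(-0.5))  # 0
```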
BASIC ANN MODEL
LOGISTIC FUNCTION
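The logistic function's figure is lost in this extraction; as a sketch, it is the smooth, differentiable sigmoid that squashes the net input into the range (0, 1):

```python
import math

# Logistic (sigmoid) activation: 1 / (1 + e^(-u)), mapping any real
# net input u into (0, 1), with logistic(0) = 0.5.
def logistic(u):
    return 1.0 / (1.0 + math.exp(-u))

print(logistic(0))  # 0.5
```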
BASIC ANN MODEL
LINEAR FUNCTION
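The linear function's figure is lost in this extraction; as a sketch, it is the identity activation (optionally scaled), where the output is simply the net input:

```python
# Linear (identity) activation: output equals the net input u,
# optionally scaled by a slope (the slope parameter is an assumption).
def linear(u, slope=1.0):
    return slope * u

print(linear(0.3))  # 0.3
```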
LEARNING LAWS
HEBB'S LAW
HOPFIELD'S LAW
HEBB'S LAW
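The slide's content is lost in this extraction. Hebb's law states that when an input and the neuron's output are active together, the connecting weight is strengthened. A minimal sketch, with the learning rate as an assumed parameter:

```python
# Hebbian update sketch: delta_w = lr * x * y, so weights grow where
# input x and output y are active together (lr value is an assumption).
def hebb_update(w, x, y, lr=0.1):
    return [wi + lr * xi * y for wi, xi in zip(w, x)]

w = hebb_update([0.0, 0.0], [1, 0], 1)
print(w)  # [0.1, 0.0]
```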
HOPFIELD'S LAW
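The slide's content is lost in this extraction. One common statement of Hopfield's law extends Hebb's: if the input and the desired output are both active, increment the weight by the learning rate; otherwise decrement it. A sketch under that reading, with the learning rate as an assumed value:

```python
# Hopfield-law update sketch: strengthen a weight when its input and
# the desired output are both active, otherwise weaken it.
def hopfield_update(w, x, desired, lr=0.1):
    return [wi + lr if (xi == 1 and desired == 1) else wi - lr
            for wi, xi in zip(w, x)]

print(hopfield_update([0.5, 0.5], [1, 0], 1))  # [0.6, 0.4]
```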
LIMITATIONS
Low learning rate: hard problems require large and complex networks
Forgetful: the network forgets old data when trained on new data
Imprecision: it does not provide precise numerical answers
Black-box approach: we cannot see how the trained network transforms the data
Limited flexibility: an implementation works only for the one system it was trained on
CONCLUSION
Finally, I want to say that in 200 to 300 years
neural networks will be so developed that they
can find the errors of even human beings,
will be able to rectify them, and will make
humans more intelligent.