
PRESENTATION ON

ARTIFICIAL NEURAL NETWORK


BY:
AMIT PRASAD
M.TECH [ECE], 3RD SEM, ROLL NO.: 16971
GUIDE:
PROF. [DR.] K.K. SAINI
HEAD [R&D], ECE
DRONACHARYA COLLEGE OF ENGINEERING,
GURGAON
CONTENTS
1. INTRODUCTION
2. HISTORY
3. COMPARISON WITH OLDER TECHNOLOGIES
4. NATURAL NEURAL NETWORK
5. BIOLOGICAL INSPIRATION
6. ARTIFICIAL NEURAL NETWORK
7. PERCEPTRON
8. WORKING ANALYSIS WITH AN ALGORITHM
9. USEFULNESS & CAPABILITY
10. LEARNING LAWS
11. LIMITATIONS (CURRENT)
12. CONCLUSION
INTRODUCTION
The most intelligent device: the HUMAN BRAIN
The machine that revolutionized the whole world: the COMPUTER
The inefficiencies of the computer have led to the evolution of the
ARTIFICIAL NEURAL NETWORK
NEURAL NETWORK HISTORY
History traces back to the 50s but became popular in the
80s with work by Rumelhart, Hinton, and McClelland
A General Framework for Parallel Distributed Processing in
Parallel Distributed Processing: Explorations in the Microstructure
of Cognition
Peaked in the 90s. Today:
Hundreds of variants
Less a model of the actual brain than a useful tool, but still some
debate
Numerous applications
Handwriting, face, speech recognition
Vehicles that drive themselves
Models of reading, sentence production, dreaming
Debate for philosophers and cognitive scientists
Can human consciousness or cognitive abilities be explained by a
connectionist model or does it require the manipulation of
symbols?

NATURAL NEURAL NETWORK
Billions of neurons
Unique functionality
A neuron is the basic processing
element of the brain
Comparison of Brains and
Traditional Computers
Brains:
200 billion neurons, 32 trillion synapses
Element size: 10^-6 m
Energy use: 25 W
Processing speed: 100 Hz
Parallel, Distributed
Fault Tolerant
Learns: Yes
Intelligent/Conscious: Usually

Traditional computers:
1 billion bytes RAM, but trillions of bytes on disk
Element size: 10^-9 m
Energy use: 30-90 W (CPU)
Processing speed: 10^9 Hz
Serial, Centralized
Generally not Fault Tolerant
Learns: Some
Intelligent/Conscious: Generally No
Biological Inspiration
Although heterogeneous, at a low level
the brain is composed of neurons
A neuron receives input from other neurons
(generally thousands) through its synapses
Inputs are approximately summed
When the input exceeds a threshold, the neuron
sends an electrical spike that travels from the
cell body, down the axon, to the next neuron(s)
Biological Inspiration
ARTIFICIAL NEURAL NETWORK
How ANNs differ from the conventional
computer
Conventional computer: a single processor
sequentially dictates every piece of the action
ANN: a very large number of processing
elements, each of which individually deals with
a piece of the big problem
PERCEPTRON
Initial proposal of connectionist networks
Rosenblatt, 50s and 60s
Essentially a linear discriminant composed of
nodes and weights
[Figure: a perceptron with inputs I1, I2, I3 weighted by W1, W2, W3, a bias u, and output O; a second version replaces the hard threshold with a general activation function]

$$O = \begin{cases} 1 & \text{if } \sum_i W_i I_i + u > 0 \\ 0 & \text{otherwise} \end{cases}$$
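A minimal sketch in Python of the threshold rule above, assuming the bias u is simply added to the weighted sum before thresholding (the function name is ours):

```python
def perceptron_output(inputs, weights, u):
    """Fire 1 if the weighted sum of the inputs plus the bias exceeds 0."""
    s = sum(w * x for w, x in zip(weights, inputs)) + u
    return 1 if s > 0 else 0

# The example on the next slide: inputs 2 and 1, weights 0.5 and 0.3, u = -1
print(perceptron_output([2, 1], [0.5, 0.3], -1))  # 2(0.5) + 1(0.3) - 1 = 0.3 > 0 -> 1
```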
Perceptron Example
[Figure: a two-input perceptron with inputs 2 and 1, weights 0.5 and 0.3, and bias u = -1]
2(0.5) + 1(0.3) + (-1) = 0.3 > 0, so O = 1
Learning Procedure:
Randomly assign weights (between 0-1)
Present inputs from training data
Get output O; nudge the weights to move the results toward our
desired output T
Repeat; stop when no errors, or enough epochs completed
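A minimal sketch of this learning procedure in Python, assuming binary 0/1 targets and a fixed learning rate (the function and variable names are ours):

```python
import random

def train_perceptron(data, lr=0.1, max_epochs=100):
    """Perceptron learning: random initial weights, then nudge the
    weights after each error until no errors remain or enough epochs
    have completed. data is a list of (inputs, target) pairs."""
    n = len(data[0][0])
    weights = [random.random() for _ in range(n)]  # random in [0, 1)
    u = random.random()                            # bias term
    for _ in range(max_epochs):
        errors = 0
        for inputs, target in data:
            s = sum(w * x for w, x in zip(weights, inputs)) + u
            output = 1 if s > 0 else 0
            err = target - output                  # desired T minus O
            if err:
                errors += 1
                # nudge each weight toward the desired output
                weights = [w + lr * err * x for w, x in zip(weights, inputs)]
                u += lr * err
        if errors == 0:                            # stop when no errors
            break
    return weights, u

# Example usage: learn the logical AND function
w, u = train_perceptron([([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)])
```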
USEFULNESS AND CAPABILITY

NONLINEARITY
INPUT-OUTPUT MAPPING
ADAPTIVITY
EVIDENTIAL RESPONSE
FAULT TOLERANCE
VLSI IMPLEMENTABILITY
NEUROBIOLOGICAL ANALOGY
A BASIC ANN ALGORITHM
STEP FUNCTION
BASIC ANN MODEL
LOGISTIC FUNCTION
BASIC ANN MODEL
LINEAR FUNCTION
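A minimal sketch of the three activation functions named on the preceding slides, assuming their standard textbook forms (the parameter names are ours):

```python
import math

def step(x, threshold=0.0):
    """Step function: output 1 once the input crosses the threshold."""
    return 1.0 if x > threshold else 0.0

def logistic(x):
    """Logistic (sigmoid) function: squashes any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def linear(x, slope=1.0):
    """Linear function: output directly proportional to the input."""
    return slope * x
```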
LEARNING LAWS
HEBB'S LAW
HOPFIELD'S LAW
HEBB'S LAW
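Hebb's law is commonly stated as: if two connected neurons are active at the same time, strengthen the weight between them. A minimal sketch of the usual formulation, delta_w = lr * x * y (the symbols and function name are ours):

```python
def hebb_update(w, x, y, lr=0.1):
    """Hebbian update: strengthen weight w when the input activation x
    and the output activation y are active together (delta_w = lr * x * y)."""
    return w + lr * x * y
```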

HOPFIELD'S LAW
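Hopfield's law is usually given as a variant of Hebb's: if the desired output and the input are both active or both inactive, increment the weight by the learning rate; otherwise decrement it. A minimal sketch, assuming binary 0/1 activations (the names are ours):

```python
def hopfield_update(w, x, desired, lr=0.1):
    """Hopfield update: increment w by the learning rate when the input
    and the desired output agree (both 0 or both 1), else decrement."""
    return w + lr if x == desired else w - lr
```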

LIMITATIONS
Low learning rate: hard problems require large and
complex networks
Forgetful: tends to forget old data when trained on new data
Imprecision: does not provide precise numerical answers
Black-box approach: we cannot see how the network
internally transforms the training data
Limited flexibility: an implementation is tied to the one
system it was trained for
CONCLUSION
At last, I want to say that in 200 to 300 years
neural networks will be so developed that they
will be able to find the errors of even human
beings, rectify them, and make humans more
intelligent
