
A TECHNICAL SEMINAR REPORT ON

GESTURE RECOGNITION SYSTEM

Submitted in partial fulfillment of requirement for the award of the degree


of

BACHELOR OF TECHNOLOGY
In
COMPUTER SCIENCE AND ENGINEERING

Submitted by

TADAKA SRESHTA 16BD1A05CC


DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
KESHAV MEMORIAL INSTITUTE OF TECHNOLOGY
(Approved by AICTE, Affiliated to JNTUH)
Narayanaguda, Hyderabad, Telangana-29
2019-20

KESHAV MEMORIAL INSTITUTE OF TECHNOLOGY


(Approved by AICTE, Affiliated to JNTUH)
Narayanaguda, Hyderabad.

CERTIFICATE

This is to certify that the seminar work entitled “Gesture Recognition System”

is a bona fide work carried out in the seventh semester by “TADAKA SRESHTA
16BD1A05CC” in partial fulfilment of the requirements for the award of Bachelor of
Technology in “COMPUTER SCIENCE & ENGINEERING” from JNTU Hyderabad
during the academic year 2019-2020, under the guidance of “B. Anantharam,
Assistant Professor”. No part of this work has been submitted earlier for the
award of any degree.

SIGNATURE OF GUIDE SIGNATURE OF CSE-HOD

INDEX
Table of Contents

1. Abstract

2. Introduction

3. Architecture

4. Design

5. Hand and Arm Gestures

6. Hand Motion Tracking Techniques

7. Applications of Gesture Recognition

8. Conclusion

9. References
ARTIFICIAL NEURAL NETWORK

ABSTRACT

This report presents the Artificial Neural Network (ANN) as a tool for
analysing the different parameters of a system. An Artificial Neural Network
is an information-processing paradigm inspired by the way biological nervous
systems, such as the brain, process information. An ANN consists of multiple
layers of simple processing elements called neurons. Each neuron performs two
functions: collection of inputs and generation of an output. The report
provides an overview of the theory, learning rules, and applications of the
most important neural network models, along with their definitions and style
of computation. The mathematical model of the network throws light on the
concepts of inputs, weights, the summing function, the activation function,
and outputs. The report then discusses how the type of learning is chosen for
adjusting the weights as parameters change. Finally, the analysis of a system
is completed by ANN implementation, ANN training, and an assessment of
prediction quality.
HISTORY

The history of ANNs stems from the 1940s, the decade of the first electronic
computer.

However, the first important step took place in 1957, when Rosenblatt
introduced the first concrete neural model, the perceptron. Rosenblatt also
took part in constructing the first successful neurocomputer, the Mark I
Perceptron.

In 1982, Hopfield brought out his idea of a neural network. Unlike the neurons
in an MLP, the Hopfield network consists of only one layer whose neurons are
fully connected with each other.

The application area of MLP networks remained rather limited until the
breakthrough in 1986, when a general back-propagation algorithm for the
multi-layered perceptron was introduced by Rumelhart and McClelland.
Examinations of humans' central nervous systems inspired the concept of
artificial neural networks. In an artificial neural network, simple artificial nodes,
known as "neurons", "neurodes", "processing elements" or "units", are
connected together to form a network which mimics a biological neural
network.
There is no single formal definition of what an artificial neural network is.
However, a class of statistical models may commonly be called "neural" if it:
1. contains sets of adaptive weights, i.e. numerical parameters that are tuned
by a learning algorithm, and
2. is capable of approximating non-linear functions of its inputs.

The adaptive weights can be thought of as connection strengths between
neurons, which are activated during training and prediction.
Neural networks resemble biological neural networks in that functions are
performed collectively and in parallel by the units, rather than through a
clear delineation of subtasks assigned to individual units. The term "neural
network" usually refers to models employed in statistics, cognitive psychology,
and artificial intelligence. Neural network models that emulate the central
nervous system and the rest of the brain are part of theoretical and
computational neuroscience.

In modern software implementations of artificial neural networks, the approach
inspired by biology has been largely abandoned in favour of a more practical
approach based on statistics and signal processing. In some of these systems,
neural networks, or parts of neural networks (such as artificial neurons), form
components in larger systems that combine both adaptive and non-adaptive
elements. While the more general approach of such systems is more suitable for
real-world problem solving, it has little to do with the traditional
artificial-intelligence connectionist models. What they do have in common,
however, is the principle of non-linear, distributed, parallel, and local
processing and adaptation. Historically, the use of neural network models
marked a directional shift in the late eighties from high-level (symbolic)
artificial intelligence, characterized by expert systems with knowledge
embodied in if-then rules, to low-level (sub-symbolic) machine learning,
characterized by knowledge embodied in the parameters of a dynamical system.

The human brain has many remarkable characteristics, such as massive
parallelism, distributed representation and computation, learning ability,
generalization ability, and adaptivity; these seem simple but are in fact very
complicated. It has always been a dream of computer scientists to create a
computer that could solve complex perceptual problems this fast. ANN models
are an effort to apply the same method the human brain uses to solve
perceptual problems.

Three periods of development for ANNs:
- 1940s: McCulloch and Pitts: initial work
- 1960s: Rosenblatt: perceptron convergence theorem; Minsky and Papert: work
showing the limitations of the simple perceptron
- 1980s: Hopfield, Werbos, and Rumelhart: Hopfield's energy approach and the
back-propagation learning algorithm

INTRODUCTION

In machine learning and cognitive science, artificial neural networks (ANNs)


are a family of models inspired by biological neural networks (the central
nervous systems of animals, in particular the brain) and are used to estimate or
approximate functions that can depend on a large number of inputs and are
generally unknown. Artificial neural networks are generally presented as
systems of interconnected "neurons" which exchange messages between each
other. The connections have numeric weights that can be tuned based on
experience, making neural nets adaptive to inputs and capable of learning.
For example, a neural network for handwriting recognition is defined by a set of
input neurons which may be activated by the pixels of an input image. After
being weighted and transformed by a function (determined by the network's
designer), the activations of these neurons are then passed on to other neurons.
This process is repeated until finally, an output neuron is activated. This
determines which character was read.
Like other machine learning methods – systems that learn from data – neural
networks have been used to solve a wide variety of tasks that are hard to solve
using ordinary rule-based programming, including computer vision and speech
recognition.

The first wave of interest in neural networks (also known as connectionist
models or parallel distributed processing) emerged after the introduction of
simplified neurons by McCulloch and Pitts in 1943 (McCulloch & Pitts, 1943).
These neurons were presented as models of biological neurons and as conceptual
components for circuits that could perform computational tasks. When Minsky
and Papert published their book Perceptrons in 1969 (Minsky and Papert, 1969),
in which they showed the deficiencies of perceptron models, most neural
network funding was redirected and researchers left the field. Only a few
researchers continued their efforts, most notably Teuvo Kohonen, Stephen
Grossberg, James Anderson, and Kunihiko Fukushima. Interest in neural networks
re-emerged only after some important theoretical results were attained in the
early eighties (most notably the discovery of error back-propagation) and new
hardware developments increased processing capacities. This renewed interest
is reflected in the number of scientists, the amounts of funding, the number
of large conferences, and the number of journals associated with neural
networks. Nowadays most universities have a neural networks group within their
psychology, physics, computer science, or biology departments.

Artificial neural networks can be most adequately characterized as
computational models with particular properties, such as the ability to adapt
or learn, to generalize, or to cluster or organize data, whose operation is
based on parallel processing. However, since many of the above-mentioned
properties can also be attributed to existing (non-neural) models, the
intriguing question is to what extent the neural approach proves to be better
suited for certain applications than existing models. To date, no unequivocal
answer to this question has been found. Parallels with biological systems are
often described; however, so little is still known (even at the lowest cell
level) about biological systems that the models we use for our artificial
neural systems seem to be an oversimplification of the biological models.

Data mining is the process of identifying patterns and establishing
relationships. It has been defined as "the non-trivial extraction of implicit,
previously unknown, and potentially useful information from data". Data mining
analyses large amounts of data stored in a data warehouse for useful
information, making use of artificial intelligence techniques, neural
networks, and advanced statistical tools (such as cluster analysis) to reveal
trends, patterns, and relationships that might otherwise remain undetected.

An artificial neural network consists of a pool of simple processing units
which communicate by sending signals to each other over a large number of
weighted connections. The major aspects of a parallel distributed model
include:
- a set of processing units (cells);
- a state of activation for every unit, which is equivalent to the output of
the unit;
- connections between the units, where each connection is generally defined by
a weight;
- a propagation rule, which determines the effective input of a unit from its
external inputs;
- an activation function, which determines the new level of activation based
on the effective input and the current activation;
- an external input for each unit; and
- a method for information gathering (the learning rule).
NEURON IN BRAIN

Although heterogeneous, at a low level the brain is composed of neurons.

1. A neuron receives input from other neurons (generally thousands) through
its synapses.
2. Inputs are approximately summed.
3. When the input exceeds a threshold, the neuron sends an electrical spike
that travels from the body, down the axon, to the next neuron(s).

Brains learn by:
1. altering the strength of connections between neurons, and
2. creating or deleting connections.

Hebb's Postulate (Hebbian Learning): When an axon of cell A is near enough to
excite a cell B and repeatedly or persistently takes part in firing it, some
growth process or metabolic change takes place in one or both cells such that
A’s efficiency, as one of the cells firing B, is increased.

Long-Term Potentiation (LTP):
1. Cellular basis for learning and memory.
2. LTP is the long-lasting strengthening of the connection between two nerve
cells in response to stimulation.
3. Discovered in many regions of the cortex.
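Hebb's postulate is often condensed into the update rule Δw = η·x·y: a weight grows in proportion to the correlation between presynaptic activity x and postsynaptic activity y. The following is a minimal sketch of that rule; the learning rate and input values are illustrative, not taken from the report.

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    """Hebb's rule: strengthen each weight in proportion to the
    correlation between its input x and the output y (dw = eta * x * y)."""
    return w + eta * x * y

# Illustrative values: a neuron with three inputs.
w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])   # presynaptic activity
y = 1.0                         # postsynaptic activity
w = hebbian_update(w, x, y)
print(w)  # weights on the co-active inputs grow: [0.1, 0.0, 0.1]
```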
A neuron (neurone or nerve cell) is an electrically excitable cell that processes
and transmits information through electrical and chemical signals. These signals
between neurons occur via synapses, specialized connections with other cells.
Neurons can connect to each other to form neural networks. Neurons are
the core components of the nervous system, which includes the brain, and spinal
cord of the central nervous system (CNS), and the ganglia of the peripheral
nervous system (PNS). Specialized types of neurons include: sensory neurons
which respond to touch, sound, light and all other stimuli affecting the cells of
the sensory organs, that then send signals to the spinal cord and brain; motor
neurons that receive signals from the brain and spinal cord, to cause muscle
contractions, and affect glandular outputs, and interneurons which connect
neurons to other neurons within the same region of the brain or spinal
cord, in neural networks. A typical neuron possesses a cell body (soma),
dendrites, and an axon. The term neurite is used to describe either a dendrite or
an axon, particularly in its undifferentiated stage. Dendrites are thin structures
that arise from the cell body, often extending for hundreds of micrometres
and branching multiple times, giving rise to a complex ”dendritic tree”. An axon
is a special cellular extension that arises from the cell body at a site called the
axon hillock and travels for a distance, as far as 1 meter in humans or even more
in other species. The cell body of a neuron frequently gives rise to multiple
dendrites, but never to more than one axon, although the axon may branch
hundreds of times before it terminates. At the majority of synapses, signals are
sent from the axon of one neuron to a dendrite of another. There are, however,
many exceptions to these rules: neurons that lack dendrites, neurons that have
no axon, synapses that connect an axon to another axon or a dendrite to
another dendrite, etc. All neurons are electrically excitable, maintaining voltage
gradients across their membranes by means of metabolically driven ion pumps,
which combine with ion channels embedded in the membrane to generate
intracellular-versus-extracellular concentration differences of ions such as
sodium, potassium, chloride, and calcium. Changes in the cross-membrane
voltage can alter the function of voltage-dependent ion channels. If the voltage
changes by a large enough amount, an all-or-none electrochemical pulse called
an action potential is generated, which travels rapidly along the cell’s axon,
and activates synaptic connections with other cells when it arrives. Neurons do
not undergo cell division.

In most cases, neurons are generated by special types of stem cells. A type of
glial cell, called astrocytes (named for being somewhat star-shaped), have also
been observed to turn into neurons by virtue of the stem cell characteristic
pluripotency. In humans, neurogenesis largely ceases during adulthood but in
two brain areas, the hippocampus and olfactory bulb, there is strong evidence
for the generation of substantial numbers of new neurons.

Since the 1960s, database and information technology has been evolving
systematically from primitive processing systems to sophisticated and powerful
database systems. Research and development in database systems since the 1970s
has led to relational database systems, data modelling tools, and indexing and
data organization techniques. In addition, users gained convenient and
flexible data access through query languages, query processing, and user
interfaces. Efficient methods for on-line transaction processing (OLTP), where
a query is viewed as a read-only transaction, have contributed substantially
to the evolution and wide acceptance of relational technology as a major tool
for the efficient storage, retrieval, and management of large amounts of data.

ANN BASIC STRUCTURE

The idea of ANNs is based on the belief that the working of the human brain,
which makes the right connections, can be imitated using silicon and wires as
living neurons and dendrites.
The human brain is composed of about 100 billion nerve cells called neurons.
Each is connected to thousands of other cells by axons. Stimuli from the
external environment, or inputs from sensory organs, are accepted by the
dendrites. These inputs create electric impulses, which quickly travel through
the neural network. A neuron can then either send the message on to another
neuron to handle the issue or not send it forward. The working of the human
neural system is as shown below:

ANNs are composed of multiple nodes, which imitate the biological neurons of
the human brain. The neurons are connected by links and interact with each
other. The nodes can take input data and perform simple operations on it. The
result of these operations is passed to other neurons. The output at each node
is called its activation or node value.
Each link is associated with a weight. ANNs are capable of learning, which
takes place by altering the weight values. The following illustration shows a
simple ANN.

The basic artificial neuron is as follows:
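The figure itself is not reproduced here; as a rough illustration of what such a neuron computes, the sketch below implements the summing function followed by a sigmoid activation. All weights, inputs, and the bias are illustrative values.

```python
import math

def neuron(inputs, weights, bias):
    """A basic artificial neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation function."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias  # summing function
    return 1.0 / (1.0 + math.exp(-s))                       # sigmoid activation

# Illustrative values for a two-input neuron.
print(neuron([0.5, -1.0], [0.8, 0.2], bias=0.1))
```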


TYPES OF ANN
There are two Artificial Neural Network topologies: FeedForward and Feedback.
FeedForward ANN:
The information flow is unidirectional. A unit sends information to another
unit from which it does not receive any information. There are no feedback
loops. FeedForward networks are used in pattern generation, recognition, and
classification. They have fixed inputs and outputs.

One-hidden-layer Neural network:


The figure below illustrates a one-hidden-layer feedforward network with
inputs x1, ..., xn and output y. Each arrow in the figure symbolizes a
parameter in the network. The network is divided into layers. The input layer
consists of just the inputs to the network. Then follows a hidden layer, which
consists of any number of neurons, or hidden units, placed in parallel. Each
neuron performs a weighted summation of the inputs, which is then passed
through a nonlinear activation function σ, also called the neuron function.

Mathematically, the network output is formed by another weighted summation of
the outputs of the neurons in the hidden layer. This summation on the output
is called the output layer. In the figure there is only one output in the
output layer, since it is a single-output problem. Generally, the number of
output neurons equals the number of outputs of the approximation problem. The
neurons in the hidden layer of the network are similar in structure to those
of the perceptron, with the exception that their activation functions can be
any differentiable function. The output of this network is given by

y = \sum_{j=1}^{n_h} w_j^{(2)} \, \sigma\!\left( \sum_{i=1}^{n} w_{ji}^{(1)} x_i + b_j^{(1)} \right) + b^{(2)}

where n is the number of inputs and nh is the number of neurons in the hidden
layer.
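Read as code, the formula can be evaluated directly. The sketch below assumes a sigmoid for the neuron function σ; the input values, weight matrices, and layer sizes are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def one_hidden_layer_output(x, W1, b1, w2, b2):
    """y = sum_j w2[j] * sigmoid(sum_i W1[j,i] * x[i] + b1[j]) + b2"""
    h = sigmoid(W1 @ x + b1)   # nh hidden-unit activations
    return w2 @ h + b2         # weighted summation in the output layer

n, nh = 2, 3                   # number of inputs and hidden neurons
rng = np.random.default_rng(0)
x = np.array([0.5, -1.0])
W1, b1 = rng.normal(size=(nh, n)), rng.normal(size=nh)
w2, b2 = rng.normal(size=nh), 0.0
print(one_hidden_layer_output(x, W1, b1, w2, b2))
```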

FeedBack ANN :

Here, feedback loops are allowed. Feedback networks are used in
content-addressable memories.
Working of ANNs :

In the topology diagrams shown, each arrow represents a connection between two
neurons and indicates the pathway for the flow of information. Each connection
has a weight, a number that controls the signal between the two neurons.
If the network generates a "good or desired" output, there is no need to
adjust the weights. However, if the network generates a "poor or undesired"
output, i.e. an error, then the system alters the weights in order to improve
subsequent results.
ANNs are capable of learning and they need to be trained. There are several
learning strategies −

Supervised Learning − It involves a teacher that is more knowledgeable than
the ANN itself. For example, the teacher feeds in some example data for which
the teacher already knows the answers.

Consider pattern recognition: the ANN comes up with guesses while recognizing.
The teacher then provides the ANN with the answers. The network compares its
guesses with the teacher’s "correct" answers and makes adjustments according
to the errors.
In supervised training, both the inputs and the outputs are provided. The
network then processes the inputs and compares its resulting outputs against the
desired outputs. Errors are then propagated back through the system, causing
the system to adjust the weights which control the network. This process occurs
over and over as the weights are continually tweaked. The set of data which
enables the training is called the "training set." During the training of a network
the same set of data is processed many times as the connection weights are ever
refined.
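As a concrete, minimal instance of this loop, the sketch below uses Rosenblatt's perceptron learning rule (the simplest supervised update, mentioned in the history section): guess, compare with the teacher's answer, and adjust the weights in proportion to the error. The training set (logical AND) and learning rate are illustrative.

```python
import numpy as np

def perceptron_epoch(w, b, X, t, eta=0.1):
    """One pass over the training set: for each example, compare the
    network's guess with the teacher's answer and adjust the weights
    in proportion to the error."""
    for x, target in zip(X, t):
        guess = 1 if np.dot(w, x) + b > 0 else 0
        error = target - guess
        w += eta * error * x
        b += eta * error
    return w, b

# Illustrative training set: logical AND.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 0, 0, 1])
w, b = np.zeros(2), 0.0
for _ in range(10):              # the same data is processed many times
    w, b = perceptron_epoch(w, b, X, t)
print(w, b)
```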

Unsupervised Learning − It is required when there is no example data set with
known answers, for example when searching for a hidden pattern. In this case,
clustering, i.e. dividing a set of elements into groups according to some
unknown pattern, is carried out based on the existing data sets.

At the present time, unsupervised learning is not well understood. This
adaptation to the environment is the promise which would enable
science-fiction types of robots to continually learn on their own as they
encounter new situations and new environments. Life is filled with situations
where exact training sets do not exist. Some of these situations involve
military action, where new combat techniques and new weapons might be
encountered. Because of this unexpected aspect of life, and the human desire
to be prepared, there continues to be research into, and hope for, this field.
Yet, at the present time, the vast bulk of neural network work is in systems
with supervised learning. Supervised learning is achieving results.
Unsupervised learning is also called adaptive learning.

Reinforcement Learning − This strategy is built on observation. The ANN makes
a decision by observing its environment. If the observation is negative, the
network adjusts its weights so as to make a different, required decision the
next time.
BACK PROPAGATION

The back-propagation algorithm (Rumelhart and McClelland, 1986) is used in
layered feed-forward ANNs. This means that the artificial neurons are
organized in layers and send their signals forward, and the errors are then
propagated backwards. The network receives inputs through the neurons in the
input layer, and the output of the network is given by the neurons of the
output layer. There may be one or more intermediate hidden layers.
The back-propagation algorithm uses supervised learning, which means that we
provide the algorithm with examples of the inputs and outputs we want the
network to compute; the error (the difference between the actual and expected
results) is then calculated. The idea of the back-propagation algorithm is to
reduce this error until the ANN learns the training data. Training begins with
random weights, and the goal is to adjust them so that the error will be
minimal. The back-propagation network has gained importance due to the
shortcomings of other available networks. It is a multi-layer network
(multi-layer perceptron) that contains at least one hidden layer in addition
to the input and output layers.
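A minimal sketch of this procedure for a one-hidden-layer network with sigmoid activations and squared error is shown below. The data set (XOR), learning rate, hidden-layer size, and epoch count are illustrative choices, not the configuration used in the report.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # illustrative data (XOR)
t = np.array([[0.0], [1.0], [1.0], [0.0]])

nh = 4                                             # hidden neurons, fixed by trial and error
W1, b1 = rng.normal(size=(2, nh)), np.zeros(nh)    # training begins with random weights
W2, b2 = rng.normal(size=(nh, 1)), np.zeros(1)
eta = 0.5                                          # illustrative learning rate

for _ in range(5000):
    # Forward pass: signals travel from the input layer to the output layer.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Backward pass: errors are propagated backwards through the layers.
    e = y - t                                # difference between actual and expected
    d_out = e * y * (1 - y)                  # output-layer delta (sigmoid derivative)
    d_hid = (d_out @ W2.T) * h * (1 - h)     # hidden-layer delta
    # Gradient-descent weight updates.
    W2 -= eta * h.T @ d_out; b2 -= eta * d_out.sum(axis=0)
    W1 -= eta * X.T @ d_hid; b1 -= eta * d_hid.sum(axis=0)

print(y.round(2))   # the outputs approach the targets as the error is reduced
```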
The number of hidden layers and the number of neurons in each hidden layer
must be fixed based on the application, the complexity of the problem, and the
number of inputs and outputs. Use of the non-linear log-sigmoid transfer
function enables the network to simulate the non-linearity present in
practical systems. Owing to these numerous advantages, the back-propagation
network was chosen for the present work.
Implementation of the back-propagation model consists of two phases: the
first phase is known as training, while the second is called testing.
Training in back-propagation is based on the gradient descent rule, which
tends to adjust the weights and reduce the system error in the network. The
input layer has neurons equal in number to the inputs; similarly, the output
layer has as many neurons as there are outputs. The number of hidden-layer
neurons is decided by trial and error using the experimental data.
In this work, both the ANN implementation and the training were developed
using the Neural Network Toolbox of MATLAB. Several separate ANNs were built,
rather than one large ANN including all the output variables.
This strategy allowed better adjustment of the ANN for each specific problem,
including the optimization of the architecture for each output.
One of the most relevant aspects of a neural network is its ability to
generalize, that is, to predict cases that are not included in the training
set. One of the problems that occurs during neural network training is called
overfitting: the error on the training set is driven to a very small value,
but when new data is presented to the network, the error is large. The network
has memorized the training examples, but it has not learned to generalize to
new situations. One method for improving network generalization is to use a
network that is just large enough to provide an adequate fit: the larger the
network, the more complex the functions it can create. There are two other
methods for improving generalization that are implemented in the MATLAB Neural
Network Toolbox software: regularization and early stopping.
The typical performance function used for training feed-forward neural
networks is the mean sum of squares of the network errors.
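Written out, that performance function is MSE = (1/N) Σ (y_k − t_k)². The sketch below shows this error measure together with a generic early-stopping loop, one of the two generalization methods mentioned above; `step` and `val_error` are hypothetical placeholders standing in for a real training epoch and a validation-set evaluation.

```python
import numpy as np

def mse(y, t):
    """Mean sum of squares of the network errors."""
    return np.mean((y - t) ** 2)

def train_with_early_stopping(step, val_error, max_epochs=1000, patience=20):
    """Stop training when the validation error has not improved for
    `patience` consecutive epochs. `step` performs one training epoch and
    `val_error` returns the error on held-out data; both are placeholders
    to be supplied by an actual training loop."""
    best, since_best = float("inf"), 0
    for _ in range(max_epochs):
        step()
        err = val_error()
        if err < best:
            best, since_best = err, 0
        else:
            since_best += 1
        if since_best >= patience:   # training error may keep falling,
            break                    # but generalization has stopped improving
    return best
```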
The weights are corrected according to the magnitude of the error, in the way
defined by the learning algorithm. This kind of learning is also called
learning with a teacher, since a control process knows the correct answer for
the set of selected input vectors. Unsupervised learning is used when, for a
given input, the exact numerical output the network should produce is unknown.
Assume, for example, that some points in two-dimensional space are to be
classified into three clusters. For this task we can use a classifier network
with three output lines, one for each class. Each of the three computing units
at the output must specialize by firing only for inputs corresponding to
elements of its own cluster. If one unit fires, the others must keep silent.
In this case we do not know a priori which unit is going to specialize on
which cluster; generally, we do not even know how many well-defined clusters
are present. Since no teacher is available, the network must organize itself
in order to be able to associate clusters with units.
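A minimal sketch of such self-organization is winner-take-all (competitive) learning: for each input, only the unit whose weight vector is closest "fires", and only that unit's weights move toward the input. The cluster centres, learning rate, and epoch count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative 2-D points drawn around three cluster centres.
centres = np.array([[0.0, 0.0], [3.0, 3.0], [0.0, 3.0]])
X = np.vstack([c + 0.3 * rng.normal(size=(30, 2)) for c in centres])
rng.shuffle(X)

W = rng.normal(size=(3, 2))        # one weight vector per output unit
eta = 0.05
for _ in range(50):
    for x in X:
        winner = np.argmin(np.linalg.norm(W - x, axis=1))  # only one unit fires
        W[winner] += eta * (x - W[winner])   # move the winner toward the input

print(W.round(1))   # each unit ends up specialized on one cluster centre
```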
BIOLOGICAL NEURAL NETWORK
 When a signal reaches a synapse, certain chemicals called neurotransmitters
are released.
 Process of learning: the effectiveness of a synapse can be adjusted by the
signals passing through it.
 Cerebral cortex: a large flat sheet of neurons, about 2 to 3 mm thick with a
surface area of about 2200 cm², containing about 10^11 neurons.
 Impulses between neurons last milliseconds, and the amount of information
sent is also small (a few bits).
 Critical information is not transmitted directly, but stored in the
interconnections. The term "connectionist model" originated from this idea.

PROPERTIES OF ANN
Computational power
The multilayer perceptron is a universal function approximator, as proven by the
universal approximation theorem. However, the proof is not constructive
regarding the number of neurons required or the settings of the weights.
Work by Hava Siegelmann and Eduardo D. Sontag has provided a proof that a
specific recurrent architecture with rational-valued weights (as opposed to
full-precision real-valued weights) has the full power of a universal Turing
machine, using a finite number of neurons and standard linear connections.
They have further shown that the use of irrational values for weights results
in a machine with super-Turing power.
Capacity
Artificial neural network models have a property called 'capacity', which
roughly corresponds to their ability to model any given function. It is related to
the amount of information that can be stored in the network and to the notion of
complexity.
Convergence
Nothing can be said in general about convergence since it depends on a number
of factors. Firstly, there may exist many local minima. This depends on the cost
function and the model. Secondly, the optimization method used might not be
guaranteed to converge when far away from a local minimum. Thirdly, for a
very large amount of data or parameters, some methods become impractical. In
general, it has been found that theoretical guarantees regarding convergence are
an unreliable guide to practical application.

Generalization and statistics


In applications where the goal is to create a system that generalizes well to
unseen examples, the problem of over-training has emerged. This arises in
convoluted or over-specified systems when the capacity of the network
significantly exceeds the needed free parameters. There are two schools of
thought for avoiding this problem. The first is to use cross-validation and
similar techniques to check for the presence of over-training and to select
hyperparameters that minimize the generalization error. The second is to use
some form of regularization. This is a concept that emerges naturally in a
probabilistic (Bayesian) framework, where regularization can be performed by
placing a larger prior probability over simpler models, but also in
statistical learning theory, where the goal is to minimize two quantities: the
'empirical risk' and the 'structural risk', which roughly correspond to the
error over the training set and the predicted error on unseen data due to
overfitting.
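As a small illustration of the regularization idea, the sketch below combines the empirical risk with a weight-decay (L2) penalty that favours simpler models; the trade-off parameter λ is an illustrative choice.

```python
import numpy as np

def regularized_cost(y, t, weights, lam=0.01):
    """Empirical risk (error over the training set) plus a structural
    penalty that favours simpler models with smaller weights."""
    empirical_risk = np.mean((y - t) ** 2)
    structural_penalty = lam * sum(np.sum(w ** 2) for w in weights)
    return empirical_risk + structural_penalty

# Illustrative call with dummy outputs, targets, and one weight matrix.
print(regularized_cost(np.array([0.9, 0.1]), np.array([1.0, 0.0]),
                       [np.ones((2, 2))]))
```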

CHARACTERISTICS OF ANN
Computers are good at calculation: they take inputs, process them, and give
results on the basis of calculations carried out by a particular algorithm
programmed in the software. ANNs, by contrast, improve their own rules: the
more decisions they make, the better those decisions may become. These
characteristics are the ones that should be present in intelligent systems
such as robots and other artificial-intelligence-based applications.

Commonly used neuron activation functions include the threshold, sigmoid, and
Gaussian functions (figures omitted).
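As a rough sketch, the three activation functions can be written as follows; the exact parameterizations (e.g. the Gaussian width) are illustrative.

```python
import numpy as np

def threshold(x):
    """Hard-limit activation: fires 1 when the input exceeds zero."""
    return np.where(x > 0, 1.0, 0.0)

def sigmoid(x):
    """Smooth S-shaped activation squashing the input into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def gaussian(x):
    """Bell-shaped activation peaking at zero input."""
    return np.exp(-x ** 2)

x = np.array([-2.0, 0.0, 2.0])
print(threshold(x), sigmoid(x).round(2), gaussian(x).round(2))
```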
APPLICATIONS OF ANN

They can perform tasks that are easy for a human but difficult for a machine −
Aerospace − Aircraft autopilots, aircraft fault detection.

Automotive − Automobile guidance systems.

Military − Weapon orientation and steering, target tracking, object


discrimination, facial recognition, signal/image identification.

Electronics − Code sequence prediction, IC chip layout, chip failure analysis,


machine vision, voice synthesis.

Financial − Real estate appraisal, loan advisor, mortgage screening, corporate


bond rating, portfolio trading program, corporate financial analysis, currency
value prediction, document readers, credit application evaluators.

Industrial − Manufacturing process control, product design and analysis,


quality inspection systems, welding quality analysis, paper quality prediction,
chemical product design analysis, dynamic modeling of chemical process
systems, machine maintenance analysis, project bidding, planning, and
management.
Medical − Cancer cell analysis, EEG and ECG analysis, prosthetic design,
transplant time optimizer.

Speech − Speech recognition, speech classification, text to speech conversion.

Telecommunications − Image and data compression, automated information


services, real-time spoken language translation.

Transportation − Truck brake system diagnosis, vehicle scheduling, routing


systems.

Software − Pattern Recognition in facial recognition, optical character


recognition, etc.

Time Series Prediction − ANNs are used to make predictions on stocks and
natural calamities.

Signal Processing − Neural networks can be trained to process an audio signal
and filter it appropriately in hearing aids.

Control − ANNs are often used to make steering decisions of physical vehicles.

Anomaly Detection − As ANNs are expert at recognizing patterns, they can also
be trained to generate an output when something unusual occurs that does not
fit the pattern.
ADVANTAGES OF ANN

 It involves human-like thinking.

 ANNs handle noisy or missing data.

 They can work with a large number of variables or parameters.

 They provide general solutions with good predictive accuracy.

 The system has the property of continuous learning.

 They deal with the non-linearity of the world in which we live.

 A neural network can perform tasks that a linear program cannot.

 When an element of the neural network fails, the network can continue
without any problem, owing to its parallel nature.

 A neural network learns and does not need to be reprogrammed.

 It can be implemented in any application without any problem.

DISADVANTAGES OF ANN

 The neural network needs training to operate.

 The architecture of a neural network is different from the architecture of
microprocessors, and therefore needs to be emulated.

 Large neural networks require high processing time.


CONCLUSION:

The computing world has a lot to gain from neural networks. Their ability to
learn by example makes them very flexible and powerful. Furthermore, there is
no need to devise an algorithm in order to perform a specific task; i.e. there
is no need to understand the internal mechanisms of that task. They are also
very well suited for real-time systems because of their fast response and
computational times, which are due to their parallel architecture.
Neural networks also contribute to other areas of research such as neurology
and psychology. They are regularly used to model parts of living organisms and
to investigate the internal mechanisms of the brain.
Perhaps the most exciting aspect of neural networks is the possibility that
someday 'conscious' networks might be produced. A number of scientists argue
that consciousness is a 'mechanical' property and that 'conscious' neural
networks are a realistic possibility.
Finally, we can say that even though neural networks have a huge potential we
will only get the best of them when they are integrated with computing, AI,
fuzzy logic and related subjects.

Artificial Neural Networks are an imitation of biological neural networks,
but much simpler ones. Many factors affect the performance of ANNs, such as
the transfer functions, the size of the training sample, the network topology,
and the weight-adjusting algorithm.
