CONTENTS
- Soft computing
- Introduction to neural networks
- Human & artificial neurons
- Neural network topologies
- Training of artificial neural networks
- Perceptrons
- Backpropagation algorithm
- Applications
- Advantages & disadvantages
- Conclusion
Soft computing
Soft computing refers to a collection of computational techniques in computer science, machine learning, and some engineering disciplines which study, model, and analyze very complex phenomena: those for which more conventional methods have not yielded low-cost, analytic, and complete solutions. Soft computing uses inexact ("soft") techniques, in contrast with classical artificial intelligence and hard computing techniques. Hard computing is bound by the computer-science concept of NP-completeness, which means, in layman's terms, that the resources needed to solve a problem can grow explosively with the size of the problem. Soft computing helps surmount NP-complete problems by using inexact methods to give useful but approximate answers to intractable problems.
Soft computing techniques include:
- Evolutionary computation (EC)
- Swarm intelligence
- Chaos theory
Neural Network
An Artificial Neural Network (ANN) is an information processing
paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. Like people, ANNs learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. The same is true of ANNs.
A biological Neuron
- Dendrites (input): receive activations from other cells; dendrites are from 1 mm up to 1 m long
- Cell nucleus (processing): evaluates the incoming activation
- Axon (output, also called the neurite): forwards the activation
- Synapse: transfers the activation to other cells, e.g. to the dendrites of other neurons; a cell has about 1,000 to 10,000 connections to other cells
A simple neuron
- Many inputs and one output
- Two modes of operation: a training mode and a using mode
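The using mode of such a neuron can be sketched in code; the weights and threshold below are hypothetical values, not part of the original slides:

```python
# A minimal sketch of the "many inputs, one output" neuron described
# above. Weights, inputs, and the threshold are illustrative choices.

def neuron(inputs, weights, threshold=0.5):
    """Fire (output 1) when the weighted input sum reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

print(neuron([1, 0, 1], [0.4, 0.3, 0.3]))  # weighted sum 0.7 >= 0.5 -> 1
print(neuron([0, 1, 0], [0.4, 0.3, 0.3]))  # weighted sum 0.3 <  0.5 -> 0
```

In training mode the weights (and threshold) would be adjusted; in using mode, as here, they are fixed and the neuron simply maps inputs to an output.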
Firing rules
1. A taught set of patterns determines when the neuron fires: the neuron fires (outputs 1) for input patterns it has been taught to fire on.
[Table: taught patterns over the inputs X1, X2, X3, each with an associated output of 0 or 1]
Unsupervised Learning
In this paradigm the system is supposed to discover statistically salient features of the input population, such as clusters of patterns within the input. Unlike the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather, the system must develop its own representation of the input stimuli.
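As a concrete illustration of this paradigm, k-means clustering discovers groupings in unlabeled data; the one-dimensional data and the choice of k below are hypothetical, and k-means is used here only as a simple stand-in for unsupervised learning in general:

```python
import random

# A toy sketch of unsupervised learning: k-means discovers two clusters
# in unlabeled 1-D data. The data points and k are illustrative choices.
random.seed(0)

def kmeans(points, k=2, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda j: abs(p - centers[j]))
            clusters[nearest].append(p)
        # Move each center to the mean of its assigned points.
        centers = [sum(c) / len(c) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return sorted(centers)

data = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95]
print(kmeans(data))  # two centers, one near 0.15 and one near 0.95
```

Note that no target outputs are supplied anywhere: the grouping emerges from the statistics of the input alone, which is exactly the point made above.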
Reinforcement Learning
This type of learning may be considered an intermediate form
of the two types above. The learning machine performs some action on the environment and gets a feedback response from the environment. The learning system grades its action as good (rewarding) or bad (punishable) based on the environmental response and adjusts its parameters accordingly. Generally, parameter adjustment is continued until an equilibrium state occurs, after which there will be no more changes in its parameters. Self-organizing neural learning may be categorized under this type of learning.
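The act-then-grade loop described above can be sketched in a toy form; the environment, the hidden optimum 0.7, and the step size are all hypothetical illustrations, not details from the original slides:

```python
import random

# A toy sketch of the reinforcement loop: act, observe the environmental
# response, keep the adjustment only if it was rewarded. The environment
# function and the hidden optimum 0.7 are illustrative assumptions.
random.seed(0)

def environment(action):
    """Reward actions close to the (hidden) optimum 0.7."""
    return -abs(action - 0.7)

def learn(steps=200, lr=0.1):
    action = 0.0
    for _ in range(steps):
        # Probe a nearby action; keep the change only if it is rewarded more.
        candidate = action + random.uniform(-lr, lr)
        if environment(candidate) > environment(action):
            action = candidate  # rewarded: keep the parameter adjustment
    return action

print(round(learn(), 2))  # settles near the optimum 0.7
```

The run illustrates the equilibrium behaviour mentioned above: once the action is near the optimum, further proposals are rarely rewarded and the parameter effectively stops changing.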
Perceptrons
One type of ANN system is based on a unit called a perceptron.
The space H of candidate hypotheses considered in perceptron learning is the set of all possible real-valued weight vectors.
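A perceptron can be sketched together with the standard perceptron training rule, w_i <- w_i + eta * (t - o) * x_i; the Boolean AND data and the learning rate below are hypothetical choices used only for illustration:

```python
# A sketch of the perceptron unit and the standard perceptron training
# rule, learning Boolean AND. Data, eta, and epochs are illustrative.

def predict(weights, x):
    """Threshold unit: output 1 if w . x > 0, else -1 (x[0] is a bias input)."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else -1

def train(examples, eta=0.1, epochs=20):
    weights = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for x, t in examples:
            o = predict(weights, x)
            # Perceptron rule: w_i <- w_i + eta * (t - o) * x_i.
            for i in range(len(weights)):
                weights[i] += eta * (t - o) * x[i]
    return weights

# Boolean AND with a constant bias input of 1 in position 0.
data = [([1, 0, 0], -1), ([1, 0, 1], -1), ([1, 1, 0], -1), ([1, 1, 1], 1)]
w = train(data)
print([predict(w, x) for x, _ in data])  # -> [-1, -1, -1, 1]
```

Each learned weight vector is one point in the hypothesis space H mentioned above; training moves through H until a vector consistent with the examples is found.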
In order to derive a weight learning rule for linear units, let us begin by considering the training error of a hypothesis (weight vector) relative to the training examples: E(w) = 1/2 * sum over d in D of (t_d - o_d)^2, where D is the set of training examples, t_d is the target output for example d, and o_d is the output of the linear unit for example d.
The gradient ∇E specifies the direction that produces the steepest increase in E. The negative of this vector therefore gives the direction of steepest decrease. The training rule for gradient descent is w <- w + Δw, where Δw = -η ∇E(w) and η is a positive constant called the learning rate,
or in component form Δw_i = -η ∂E/∂w_i, which makes it clear that steepest descent is achieved by altering each component w_i of the weight vector in proportion to ∂E/∂w_i.
Differentiating E gives ∂E/∂w_i = -sum over d of (t_d - o_d) x_id, so the weight update rule for standard gradient descent can be summarized as Δw_i = η * sum over d of (t_d - o_d) x_id, where x_id is the i-th input component of training example d.
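The batch update rule above can be sketched for a linear unit o = w · x; the training data (generated from a hypothetical linear target t = 2*x1 - x2) and the learning rate are illustrative:

```python
# A sketch of standard (batch) gradient descent for a linear unit:
# Delta w_i = eta * sum_d (t_d - o_d) * x_id, accumulated over all
# examples before each update. Data and eta are illustrative choices.

def train_linear_unit(examples, eta=0.05, epochs=100):
    weights = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        delta = [0.0] * len(weights)
        for x, t in examples:
            o = sum(w * xi for w, xi in zip(weights, x))  # linear output
            for i, xi in enumerate(x):
                delta[i] += eta * (t - o) * xi
        weights = [w + d for w, d in zip(weights, delta)]
    return weights

# Targets generated by the hypothetical linear function t = 2*x1 - x2.
data = [([1, 0], 2), ([0, 1], -1), ([1, 1], 1), ([2, 1], 3)]
w = train_linear_unit(data)
print([round(wi, 2) for wi in w])  # converges toward [2, -1]
```

Because the whole training set is summed before each weight change, this is the "standard" (batch) form; updating after every single example instead would give the stochastic (incremental) variant.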
Backpropagation Algorithm
Architecture of Backpropagation
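The algorithm itself can be sketched as follows; the 2-2-1 sigmoid network, the Boolean AND training data, and the learning rate are all illustrative assumptions, not details from the original slides:

```python
import math
import random

# A minimal sketch of backpropagation for a small feed-forward network
# (2 inputs, 2 hidden sigmoid units, 1 sigmoid output), trained here on
# Boolean AND. Layer sizes, eta, and epochs are illustrative choices.

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(net, x):
    """Forward pass: return hidden activations and the network output."""
    w_hidden, w_out = net
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x + [1.0])))
         for row in w_hidden]
    o = sigmoid(sum(w * hi for w, hi in zip(w_out, h + [1.0])))
    return h, o

def train(examples, hidden=2, eta=0.5, epochs=5000):
    # Random initial weights; each unit also has a bias weight.
    w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(hidden)]
    w_out = [random.uniform(-1, 1) for _ in range(hidden + 1)]
    net = (w_hidden, w_out)
    for _ in range(epochs):
        for x, t in examples:
            h, o = forward(net, x)
            # Output error term: delta_o = o(1 - o)(t - o).
            delta_o = o * (1 - o) * (t - o)
            # Propagate the error back to the hidden layer.
            delta_h = [h[j] * (1 - h[j]) * w_out[j] * delta_o
                       for j in range(hidden)]
            # Weight updates: w <- w + eta * delta * input.
            for j, hj in enumerate(h + [1.0]):
                w_out[j] += eta * delta_o * hj
            for j in range(hidden):
                for i, xi in enumerate(x + [1.0]):
                    w_hidden[j][i] += eta * delta_h[j] * xi
    return net

data = [([0.0, 0.0], 0), ([0.0, 1.0], 0), ([1.0, 0.0], 0), ([1.0, 1.0], 1)]
net = train(data)
print([round(forward(net, x)[1]) for x, _ in data])  # e.g. [0, 0, 0, 1]
```

The key step is that each hidden unit's error term delta_h is computed from the output unit's error delta_o weighted by the connecting weight: the error is propagated backwards through the network, which is what gives the algorithm its name.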
Applications
Aerospace
High performance aircraft autopilots, flight path simulations, aircraft
control systems, autopilot enhancements, aircraft component simulations, aircraft component fault detectors
Automotive
Automobile automatic guidance systems, warranty activity analyzers
Banking
Check and other document readers, credit application evaluators
Defense
Weapon steering, target tracking, object discrimination, facial
recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification
Electronics
Code sequence prediction, integrated circuit chip layout, process
control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling
Financial
Real estate appraisal, loan advising, mortgage screening, corporate bond rating, credit line use analysis, portfolio trading programs, corporate financial analysis, currency price prediction
Manufacturing
Manufacturing process control, product design and analysis, process and machine diagnosis, real-time particle identification, visual quality inspection systems, beer testing, welding quality analysis, paper quality prediction, computer chip quality analysis, analysis of grinding operations, chemical product design analysis, machine maintenance analysis, project bidding, planning and management, dynamic modeling of chemical process systems
Robotics
Trajectory control, forklift robots, manipulator controllers, vision systems
Speech
Speech recognition, speech compression, vowel classification, text-to-speech synthesis
Securities
Market analysis, automatic bond rating, stock trading advisory systems
Telecommunications
Image and data compression, automated information services, real-time translation of spoken language, customer payment processing systems
Transportation
Truck brake diagnosis systems, vehicle scheduling, routing systems
Advantages
A neural network can perform tasks that a linear program cannot.
Thanks to their parallel nature, neural networks can continue operating without problems even when an element fails. A neural network learns and does not need to be reprogrammed, and it can be applied in virtually any application.
Disadvantages
The neural network needs training to operate.
The architecture of a neural network differs from the
architecture of microprocessors, so it needs to be emulated. Large neural networks require high processing time.
Conclusion
The ability to learn by example makes neural networks very flexible and
powerful: there is no need to understand the internal mechanisms of the task. They are also well suited to real-time systems because of their fast response and short computation times, which are due to their parallel architecture.