
Prepared By :

CONTENTS

Soft computing
Introduction to neural networks
Human & artificial neurons
Neural network topologies
Training of artificial neural networks
Perceptrons
Backpropagation algorithm
Applications
Advantages & disadvantages
Conclusion

Soft computing
Soft computing refers to a collection of computational techniques in computer science, machine learning and some engineering disciplines which study, model, and analyze very complex phenomena: those for which more conventional methods have not yielded low-cost, analytic, and complete solutions. Soft computing uses "soft" techniques, in contrast to classical artificial intelligence and hard computing techniques. Hard computing is bound by the computer science concept of NP-completeness, which means, in layman's terms, that there is a direct connection between the size of a problem and the amount of resources needed to solve it. Soft computing helps surmount NP-complete problems by using inexact methods to give useful but approximate answers to intractable problems.

Components of soft computing


Neural networks (NN)
Fuzzy systems (FS)
Evolutionary computation (EC)
Swarm intelligence
Chaos theory

Neural Network
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. Like people, ANNs learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process. Learning in biological systems involves adjustments to the synaptic connections that exist between the neurons. This is also true of ANNs.

A biological Neuron
Dendrites (input): receive activations from other cells; from 1 mm up to 1 m long.
Cell nucleus (processing): evaluation of the activation.
Axon (neurite) (output): forwards the activation.
Synapse: transfers the activation to other cells, e.g. to the dendrites of other neurons; a cell has about 1,000 to 10,000 connections to other cells.

Natural vs. Artificial Neuron


[Figure: neuron labeled with dendrites, cell body and nucleus, axon (neurite), and synapses]

A simple neuron

Many inputs and one output. Two modes of operation: training mode and using mode.

Firing rules
1: taught set of patterns to fire
0: taught set of patterns not to fire
0/1: undefined (pattern not in the taught set)

Taught set (the neuron is taught to fire for 101 and 111, and not to fire for 000 and 001):

X1:  0   0   0   0   1   1   1   1
X2:  0   0   1   1   0   0   1   1
X3:  0   1   0   1   0   1   0   1
OUT: 0   0   0/1 0/1 0/1 1   0/1 1

After generalising by the nearest taught pattern (Hamming distance), only the patterns equidistant from both taught sets, 011 and 100, remain undefined:

X1:  0   0   0   0   1   1   1   1
X2:  0   0   1   1   0   0   1   1
X3:  0   1   0   1   0   1   0   1
OUT: 0   0   0   0/1 0/1 1   1   1
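This nearest-pattern firing rule can be sketched in a few lines, consistent with the taught set above (fire for 101 and 111, don't fire for 000 and 001); the function names are my own:

```python
def hamming(a, b):
    """Number of positions at which two binary patterns differ."""
    return sum(x != y for x, y in zip(a, b))

def fire(pattern, taught_fire, taught_nofire):
    """Firing rule: fire (1) if the pattern is nearer to the taught-to-fire
    set, stay quiet (0) if nearer to the taught-not-to-fire set, and return
    None for the undecided case where both sets are equally close."""
    d_fire = min(hamming(pattern, p) for p in taught_fire)
    d_nofire = min(hamming(pattern, p) for p in taught_nofire)
    if d_fire < d_nofire:
        return 1
    if d_nofire < d_fire:
        return 0
    return None  # equidistant: the 0/1 entries in the truth table

# Taught set from the tables above
taught_fire = [(1, 0, 1), (1, 1, 1)]
taught_nofire = [(0, 0, 0), (0, 0, 1)]
```

For example, 110 fires (its nearest taught pattern is 111), 010 does not (it is closest to 000), and 011 stays undefined.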

A more complicated neuron

Inputs are weighted. The neuron fires if X1W1 + X2W2 + ... + XnWn > T, where T is the threshold.
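The threshold rule can be sketched as follows; the function name and example weights are illustrative:

```python
def neuron_fires(inputs, weights, threshold):
    """Weighted neuron: fire (1) when the weighted input sum
    X1*W1 + X2*W2 + ... exceeds the threshold T, otherwise 0."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation > threshold else 0
```

With weights (0.5, 0.5) and threshold 0.9, the unit behaves as a two-input AND: only when both inputs are active does the sum pass the threshold.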

Neural Network topologies


Feed-forward Neural network

Recurrent neural network
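The distinction can be sketched with a forward pass: in a feed-forward network, activity flows from inputs through hidden units to outputs with no cycles, whereas a recurrent network also feeds activations back. A minimal feed-forward pass, assuming sigmoid units (the weights in the usage example are made up):

```python
import math

def sigmoid(z):
    """Standard logistic activation, squashing any real value into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sum per unit, then sigmoid."""
    return [sigmoid(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weights, biases)]

def feed_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Two-layer feed-forward pass: input -> hidden -> output, no cycles."""
    return layer(layer(x, w_hidden, b_hidden), w_out, b_out)
```

For example, `feed_forward([1.0, 0.0], [[0.5, -0.5], [0.3, 0.8]], [0.0, 0.0], [[1.0, -1.0]], [0.0])` maps a two-value input through two hidden units to a single output in (0, 1).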

Training of artificial neural networks


Supervised learning or associative learning
It is a technique in which the network is trained by providing it with input and matching output patterns. These input-output pairs can be provided by an external teacher, or by the system that contains the neural network (self-supervised).

Unsupervised learning or self-organisation


It is a technique in which an (output) unit is trained to respond to clusters of patterns within the input. In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather, the system must develop its own representation of the input stimuli.

Reinforcement Learning
This type of learning may be considered an intermediate form of the above two types. Here the learning machine performs some action on the environment and gets a feedback response from it. The learning system grades its action as good (rewarding) or bad (punishable) based on the environmental response and adjusts its parameters accordingly. Generally, parameter adjustment continues until an equilibrium state occurs, after which there are no more changes in the parameters. Self-organizing neural learning may be categorized under this type of learning.

Perceptrons
One type of ANN system is based on a unit called a perceptron.

The perceptron function can be written as o(x) = sgn(w · x), where sgn(y) = 1 if y > 0 and -1 otherwise; the threshold is folded into the weight vector as w0 for a constant input x0 = 1.

The space H of candidate hypotheses considered in perceptron learning is the set of all possible real-valued weight vectors.

Representational Power of Perceptrons
A single perceptron represents a linear decision surface in the input space, so it can represent only linearly separable functions: AND and OR are representable, but XOR is not.

EECP0720 Expert Systems Artificial Neural Networks

The Perceptron Training Rule


One way to learn an acceptable weight vector is to begin with random weights, then iteratively apply the perceptron to each training example, modifying the perceptron weights whenever it misclassifies an example. This process is repeated, iterating through the training examples as many times as needed, until the perceptron classifies all training examples correctly. Weights are modified at each step according to the perceptron training rule, which revises the weight wi associated with input xi as wi ← wi + η (t − o) xi, where t is the target output, o is the perceptron output and η is a small positive constant called the learning rate.
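The training loop just described can be sketched as follows; the OR dataset, learning rate and function names are illustrative choices:

```python
def predict(weights, x):
    """Perceptron output: +1 if w0 + w1*x1 + ... + wn*xn > 0, else -1."""
    s = weights[0] + sum(w * xi for w, xi in zip(weights[1:], x))
    return 1 if s > 0 else -1

def train_perceptron(examples, eta=0.1, epochs=100):
    """Perceptron training rule: wi <- wi + eta*(t - o)*xi, applied to each
    misclassified example, repeated until every example is classified
    correctly (guaranteed to converge for linearly separable data)."""
    n = len(examples[0][0])
    weights = [0.0] * (n + 1)          # weights[0] is the threshold weight w0
    for _ in range(epochs):
        errors = 0
        for x, t in examples:
            o = predict(weights, x)
            if o != t:
                errors += 1
                weights[0] += eta * (t - o)          # constant input x0 = 1
                for i, xi in enumerate(x, start=1):
                    weights[i] += eta * (t - o) * xi
        if errors == 0:
            break
    return weights

# The linearly separable OR function, with targets in {-1, +1}
data = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train_perceptron(data)
```

After training, the learned weight vector classifies all four OR patterns correctly.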

Gradient Descent and Delta Rule


In order to derive the delta training rule, let us consider the training of an unthresholded perceptron; that is, a linear unit for which the output o is given by o = w · x.

In order to derive a weight learning rule for linear units, let us consider the training error of a hypothesis relative to the training examples: E(w) = ½ Σd∈D (td − od)², where D is the set of training examples, td is the target output for example d and od is the output of the linear unit for d.

Derivation of the Gradient Descent Rule


The vector of partial derivatives of E with respect to each weight is called the gradient of E with respect to w, written ∇E(w) = [∂E/∂w0, ∂E/∂w1, ..., ∂E/∂wn].

The gradient specifies the direction that produces the steepest increase in E. The negative of this vector therefore gives the direction of steepest decrease. The training rule for gradient descent is w ← w + Δw, where Δw = −η ∇E(w) and η is the learning rate.

Derivation of the Gradient Descent Rule (cont.)


The negative sign is present because we want to move the weight vector in the direction that decreases E. This training rule can also be written in its component form wi ← wi + Δwi, where Δwi = −η ∂E/∂wi, which makes it clear that steepest descent is achieved by altering each component wi of w in proportion to ∂E/∂wi.

Derivation of the Gradient Descent Rule (cont.)


The vector of derivatives that forms the gradient can be obtained by differentiating E: ∂E/∂wi = Σd (td − od)(−xid), where xid denotes the input component xi for training example d.

The weight update rule for standard gradient descent can therefore be summarized as Δwi = η Σd (td − od) xid.
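The update rule above can be sketched as batch gradient descent for a linear unit; the dataset (generated from hypothetical target weights 2 and −1), learning rate and step count below are illustrative:

```python
def gradient_descent(examples, eta=0.05, steps=500):
    """Batch gradient descent for a linear unit o = w . x: each step adds
    eta * sum_d (t_d - o_d) * x_id to every weight wi, descending the
    error surface E(w) = 1/2 * sum_d (t_d - o_d)^2."""
    n = len(examples[0][0])
    w = [0.0] * n
    for _ in range(steps):
        delta = [0.0] * n
        for x, t in examples:
            o = sum(wi * xi for wi, xi in zip(w, x))   # linear unit output
            for i, xi in enumerate(x):
                delta[i] += eta * (t - o) * xi          # accumulate over D
        w = [wi + d for wi, d in zip(w, delta)]
    return w

# Noise-free data generated by t = 2*x1 - 1*x2 (hypothetical target weights)
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), -1.0), ((1.0, 1.0), 1.0)]
w = gradient_descent(data)
```

Because the data are noise-free and the error surface of a linear unit has a single global minimum, the weights converge to the generating values.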

BACKPROPAGATION Algorithm

Architecture of Backpropagation

Backpropagation Learning Algorithm

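Since the algorithm slides themselves are not reproduced here, the following is a minimal sketch of standard backpropagation for a two-layer sigmoid network, not this deck's exact formulation; the XOR task, hyperparameters and function names are illustrative choices:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_backprop(examples, n_hidden=3, eta=0.5, epochs=2000, seed=1):
    """Backpropagation for a 2-layer sigmoid network. The forward pass
    computes hidden and output activations; the backward pass propagates
    the output error delta_o = o*(1-o)*(t-o) back to the hidden layer as
    delta_h = h*(1-h)*w*delta_o, then updates each weight by eta*delta*input."""
    rng = random.Random(seed)
    n_in = len(examples[0][0])
    # Each weight row carries a trailing bias term (constant input 1.0)
    w_h = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
           for _ in range(n_hidden)]
    w_o = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden + 1)]

    def forward(x):
        h = [sigmoid(sum(w * v for w, v in zip(row, list(x) + [1.0])))
             for row in w_h]
        o = sigmoid(sum(w * v for w, v in zip(w_o, h + [1.0])))
        return h, o

    for _ in range(epochs):
        for x, t in examples:
            h, o = forward(x)
            delta_o = o * (1 - o) * (t - o)                       # output error
            delta_h = [hj * (1 - hj) * w_o[j] * delta_o           # backpropagated
                       for j, hj in enumerate(h)]
            for j, hj in enumerate(h + [1.0]):                    # update output layer
                w_o[j] += eta * delta_o * hj
            for j in range(n_hidden):                             # update hidden layer
                for i, xi in enumerate(list(x) + [1.0]):
                    w_h[j][i] += eta * delta_h[j] * xi
    return lambda x: forward(x)[1]

# XOR: not linearly separable, so a single perceptron cannot learn it,
# but a network with a hidden layer trained by backpropagation can.
xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
net = train_backprop(xor)
```

Training reduces the squared error on the XOR patterns relative to the randomly initialized network.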

Applications
Aerospace
High-performance aircraft autopilots, flight path simulations, aircraft control systems, autopilot enhancements, aircraft component simulations, aircraft component fault detectors

Automotive
Automobile automatic guidance systems, warranty activity analyzers

Banking
Check and other document readers, credit application evaluators

Defense
Weapon steering, target tracking, object discrimination, facial recognition, new kinds of sensors, sonar, radar and image signal processing including data compression, feature extraction and noise suppression, signal/image identification

Electronics
Code sequence prediction, integrated circuit chip layout, process control, chip failure analysis, machine vision, voice synthesis, nonlinear modeling

Financial
Real estate appraisal, loan advising, mortgage screening, corporate bond rating, credit line use analysis, portfolio trading programs, corporate financial analysis, currency price prediction

Manufacturing
Manufacturing process control, product design and analysis, process and machine diagnosis, real-time particle identification, visual quality inspection systems, beer testing, welding quality analysis, paper quality prediction, computer chip quality analysis, analysis of grinding operations, chemical product design analysis, machine maintenance analysis, project bidding, planning and management, dynamic modeling of chemical process systems

Robotics
Trajectory control, forklift robots, manipulator controllers, vision systems

Speech
Speech recognition, speech compression, vowel classification, text-to-speech synthesis

Securities
Market analysis, automatic bond rating, stock trading advisory systems

Telecommunications
Image and data compression, automated information services, real-time translation of spoken language, customer payment processing systems

Transportation
Truck brake diagnosis systems, vehicle scheduling, routing systems

Advantages
A neural network can perform tasks that a linear program cannot.
When an element of the neural network fails, the network can continue thanks to its parallel nature. A neural network learns and does not need to be reprogrammed. It can be applied to a wide range of problems.

Disadvantages
The neural network needs training to operate.
The architecture of a neural network is different from that of a microprocessor and therefore needs to be emulated.
Large neural networks require high processing time.

Conclusion
The ability to learn by example makes neural networks very flexible and powerful. There is no need to understand the internal mechanisms of the task. They are also well suited for real-time systems because of their fast response and short computation times, which are due to their parallel architecture.

THANK YOU FOR YOUR PATIENCE
