
Advanced Neural Networks

By
Dr. Muhammad Moinuddin

Aims of this Course


Fundamentals of NN
Various architectures of NN
Learning Methods
Algorithm Development
Programming skills to implement NN
Applying NN to Specific Problem

Course Outline/Contents

Introduction
Biological Inspiration
Motivation: Applications
Advantages of NN

Learning Methods

Error correction learning
Memory-based learning
Hebbian learning
Competitive learning
Boltzmann learning

Unsupervised Learning
Correlation based (e.g. Hebbian Learning)
Competition based (e.g. Winner-Takes-All)

Course Outline/Contents (Contd.)

Supervised Learning
Error-correction based (e.g. LMS)
Match based (e.g. Fuzzy ART)

Adaptive Linear Combiner/Adaptive Linear Element
Basic ALE: The Adaptive Filtering Problem
Method of Steepest Descent
Least-Squares Solution
Least Mean Square (LMS) Algorithm
Newton's Method
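The LMS algorithm in the outline above reduces to a one-line weight update, w = w + mu * e * x. A minimal sketch, assuming a made-up two-tap linear system to be identified:

```python
# Minimal LMS (least mean square) sketch for an adaptive linear combiner.
# Hypothetical setup: identify the unknown weights w_true from samples.
import random

random.seed(0)
w_true = [2.0, -1.0]          # unknown system to identify (assumed)
w = [0.0, 0.0]                # adaptive weights, started at zero
mu = 0.05                     # step size (learning rate)

for _ in range(2000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]   # input sample
    d = sum(wt * xi for wt, xi in zip(w_true, x))        # desired output
    y = sum(wi * xi for wi, xi in zip(w, x))             # filter output
    e = d - y                                            # error signal
    w = [wi + mu * e * xi for wi, xi in zip(w, x)]       # LMS update

print([round(wi, 2) for wi in w])   # converges toward w_true
```

The step size mu trades convergence speed against stability, a point the Method of Steepest Descent material develops in detail.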

Course Outline/Contents (Contd.)

Fundamentals of NN
Perceptron
Neuron
Synaptic Weights
Activation Function

Multilayer Perceptron NN/Multilayer Feedforward NN
Backpropagation Algorithm
Its Variants

Radial Basis Function NN
Cover's Theorem
Various Algorithms

Course Outline/Contents (Contd.)

Self-Organizing Map (SOM) NN
Learning Vector Quantization

Temporal Processing with NN
Architectures
Spatio-Temporal Models
Temporal Backpropagation Algorithm

Recurrent NN
Architectures
Feedback structure
Backpropagation Through Time
Hopfield NN

Course Outline/Contents (Contd.)

Applications of NN
Function Approximation/Curve Fitting
Pattern Recognition/Classification
Robot path planning
Network routing algorithms
Time-Series Prediction
Estimation
Optimization
Control

Introduction

Inspiration from Neurobiology

A neuron is a many-input/one-output unit.
Its output can be either excited or not excited.
Incoming signals from other neurons determine whether the neuron will excite ("fire").
The output is subject to attenuation in the synapses, which are the junction parts of the neuron.
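This biological picture maps directly onto the standard artificial model: a weighted sum of the inputs (the synapses) followed by a threshold that decides whether the unit fires. A minimal sketch, with hand-picked illustrative weights:

```python
# A single artificial neuron: weighted sum of inputs passed through a
# hard threshold ("fire or not"), mirroring the biological description.
def neuron(inputs, weights, bias):
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if activation > 0 else 0   # 1 = "fire", 0 = stay quiet

# Example: weights chosen by hand so the neuron behaves like logical AND
print(neuron([1, 1], [0.6, 0.6], -1.0))  # both inputs active -> fires (1)
print(neuron([1, 0], [0.6, 0.6], -1.0))  # only one input -> stays quiet (0)
```

The weights play the role of synaptic attenuation; learning, covered later, is the process of adjusting them instead of choosing them by hand.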

Human Brain

It is a complex, non-linear, and parallel computer.

Block diagram of the nervous system:
Stimulus → Receptors → Neural Net → Effectors → Response

Human Brain Efficiency


There are approximately 10 billion neurons in the human cortex and 60 trillion synapses or connections (Shepherd and Koch, 1990), which gives massively parallel interconnections.
The energetic efficiency of the brain is approximately 10^-16 joules per operation per second, whereas that of the best computers in use today is about 10^-6.

Methodology of NN

Neural networks do not simply memorise the patterns.
They can extract and recognise patterns (the style).
They generalise from what they have already seen to make predictions.
Therefore, neural networks (biological and artificial) are good at such tasks, unlike conventional computers.

Motivation for Using NN

Some Real-Life Examples

Non-linear Estimation: Function Approximation
Pattern Learning (Non-linear Curve Fitting)
Adaptive Inverse Control
Dynamic Parameter Estimation: e.g. Wireless Channel Estimation

How Does a NN Classify?

[Figure: left, an event sample scattered in the (X, Y) plane with Signal and Background regions; right, three cut lines a1x+b1y+c1, a2x+b2y+c2, a3x+b3y+c3 enclosing the signal region.]

An event sample is characterized by two variables, X and Y (left figure).
A linear combination of cuts can separate signal from background (right figure).

Define the step function S(ax + by + c):
S = 0 if the point (x, y) is OUT of the cut
S = 1 if the point (x, y) is IN the cut

Separate signal from background with the following function:

C(x, y) = S( S(a1x + b1y + c1) + S(a2x + b2y + c2) + S(a3x + b3y + c3) − 2 )
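This cut-combination classifier can be sketched in a few lines; the cut coefficients below are made-up values defining a small illustrative region:

```python
# Sketch of the cut-combination classifier: three half-plane step
# functions are summed, and a point is labelled "signal" only when it
# lies inside all three cuts. Cut coefficients are made-up examples.
def S(z):
    return 1 if z > 0 else 0   # step function: 0 = OUT, 1 = IN

cuts = [(1, 0, -1),            # x > 1
        (-1, 0, 3),            # x < 3
        (0, 1, -1)]            # y > 1

def C(x, y):
    inside = sum(S(a * x + b * y + c) for a, b, c in cuts)
    return S(inside - 2)       # fires only if all three cuts are satisfied

print(C(2, 2))   # inside all three cuts -> 1 (signal)
print(C(0, 0))   # outside -> 0 (background)
```

The outer S(... − 2) is exactly the formula above: the sum of the three inner step functions exceeds 2 only when every cut is satisfied, which is the behaviour a small feedforward network of threshold neurons reproduces.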

Where Can We Use NN?

When we can't formulate an algorithmic solution.
When we can get lots of examples of the behavior we require (learning from experience).
When we need to pick out structure from existing data.

Advantages of Neural Networks


Nonlinearity
Input-Output Mapping
Adaptive Architecture
Evidential Response
Fault Tolerance
Parallel Processing (useful for VLSI Implementation)

NN Structure

NNs incorporate the two fundamental components of biological neural nets:

1. Neurones (nodes)
2. Synapses (weights)

Neurone vs. Node

Learning Process

What is Learning?

Learning of an ANN:
How to adapt the network architecture
Learning methods:
Unsupervised learning
Reinforcement learning
Supervised learning

Learning Steps

Learning Methods
Error correction learning
Memory-based learning
Hebbian learning
Competitive learning
Boltzmann learning

Error correction learning

Example: Backpropagation Algorithm

Error correction learning (Contd.)

Memory-based learning

Example: The nearest-neighbor rule
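A minimal sketch of the nearest-neighbor rule: the network "learns" simply by storing all training examples, and classifies a new point by the label of its closest stored example. The data below is a made-up toy set:

```python
# Memory-based learning: store every (input, label) pair, then classify
# a new point by the label of the nearest stored example.
import math

memory = [((0.0, 0.0), "background"), ((0.2, 0.1), "background"),
          ((1.0, 1.0), "signal"),     ((0.9, 1.2), "signal")]

def classify(point):
    def dist(stored):
        (sx, sy), _ = stored
        return math.hypot(point[0] - sx, point[1] - sy)  # Euclidean distance
    return min(memory, key=dist)[1]

print(classify((0.1, 0.0)))   # nearest stored example is "background"
print(classify((1.1, 0.9)))   # nearest stored example is "signal"
```

All the computation happens at classification time rather than at training time, which is the defining trade-off of memory-based methods.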

Hebbian learning

Simple Examples:
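A minimal sketch of the basic Hebbian update, with illustrative values: the weight change is proportional to the product of pre- and post-synaptic activity, dw = eta * x * y ("neurons that fire together wire together"):

```python
# Basic Hebbian update: dw_i = eta * x_i * y, so a weight grows only
# when its input and the output are active together. Values illustrative.
eta = 0.5
w = [0.0, 0.0]
pairs = [([1.0, 0.0], 1.0),   # input 1 active together with the output
         ([0.0, 1.0], 0.0)]   # input 2 active while the output is silent

for x, y in pairs:
    w = [wi + eta * xi * y for wi, xi in zip(w, x)]

print(w)   # only the correlated connection is strengthened
```

Note that the plain rule only ever strengthens weights; the variants discussed next (e.g. with forgetting terms) address that unbounded growth.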

Hebbian learning (Contd.)

Competitive learning
The output neurons compete to become active (fire).
Only a single neuron is active at any one time.
Example: Self-Organizing Map (SOM)
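The winner-takes-all idea above can be sketched in a few lines: only the neuron whose weight vector is closest to the input fires, and only that winner's weights move toward the input. The two prototype neurons and the 2-D data below are made up:

```python
# Competitive learning sketch (winner-takes-all): for each input, pick
# the single closest prototype and move only it toward the input.
import math

protos = [[0.0, 0.0], [1.0, 1.0]]   # initial weight vectors of two neurons
eta = 0.5
data = [[0.1, 0.2], [0.9, 1.1], [0.0, 0.1], [1.2, 0.9]]

for x in data:
    dists = [math.dist(x, p) for p in protos]
    win = dists.index(min(dists))                 # the single active neuron
    protos[win] = [p + eta * (xi - p)             # move winner toward input
                   for p, xi in zip(protos[win], x)]

print([[round(v, 2) for v in p] for p in protos])
```

Each prototype ends up near one cluster of the data; the SOM extends this by also updating the winner's neighbours on a grid.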

Competitive learning (Contd.)

Reinforcement Learning

Reinforcement Learning (Contd.)

Supervised Learning
Example: Backpropagation
The desired output of each training example is known.
Error = difference between the actual and desired outputs.
Change each weight relative to the error size.
Calculate the output-layer error, then propagate it back to the previous layers.
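The steps above can be sketched end to end for a tiny network: one hidden layer, one sigmoid output, trained here on a toy logical-OR data set (the layer size, rate, and data are all illustrative choices, not part of the original slides):

```python
# Minimal backpropagation sketch: compute the output-layer error first,
# then propagate it back to the hidden layer, updating weights in
# proportion to the error. Toy logical-OR training set.
import math, random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # (input, desired)

H = 4                                               # hidden-layer size
W1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
eta = 0.5

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(W1[j], x)) + b1[j])
         for j in range(H)]
    y = sigmoid(sum(w * hj for w, hj in zip(W2, h)) + b2)
    return h, y

for _ in range(5000):
    for x, d in data:
        h, y = forward(x)
        delta_out = (d - y) * y * (1 - y)           # output-layer error term
        delta_h = [delta_out * W2[j] * h[j] * (1 - h[j])  # propagated back
                   for j in range(H)]
        W2 = [W2[j] + eta * delta_out * h[j] for j in range(H)]
        b2 += eta * delta_out
        for j in range(H):
            W1[j] = [W1[j][i] + eta * delta_h[j] * x[i] for i in range(2)]
            b1[j] += eta * delta_h[j]

print([round(forward(x)[1], 2) for x, _ in data])   # approaches 0, 1, 1, 1
```

The sigmoid derivative y(1 − y) appears in both delta terms; the full derivation is covered in the Backpropagation Algorithm section of the course.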
