SYSTEM MODELING
Gábor Horváth
Neural Networks for System Modeling • Gábor Horváth, 2005 Budapest University of Technology and Economics
Outline
• Introduction
• System identification: a short overview
– Classical results
– Black box modeling
• Neural networks architectures
– An overview
– Neural networks for system modeling
• Applications
Introduction
• The goal of this course:
to show why and how neural networks can be
applied for system identification
– Basic concepts and definitions of system identification
• classical identification methods
• different approaches in system identification
– Neural networks
• classical neural network architectures
• support vector machines
• modular neural architectures
– The questions of the practical applications, answers based
on a real industrial modeling task (case study)
System identification
System identification: a short overview
• Modeling
• Identification
– Model structure selection
– Model parameter estimation
• Non-parametric identification
– Using general model structure
• Black-box modeling
– Input-output modeling, the description of the behaviour of a
system
Modeling
• What is a model?
• Why do we need models?
• What models can be built?
• How to build models?
Modeling
• What is a model?
– Some (formal) description of a system, a separable part
of the world.
Represents essential aspects of a system
– Main features:
• All models are imperfect. Only some aspects are taken
into consideration, while many other aspects are
neglected.
• Easier to work with models than with the real systems
– Key concepts: separation, selection, parsimony
Modeling
• Separation:
– the boundaries of the system have to be defined.
– system is separated from all other parts of the world
• Selection:
Only certain aspects are taken into consideration e.g.
– information relation, interactions
– energy interactions
• Parsimony:
It is desirable to use as simple a model as possible
– Occam's razor (William of Ockham, 14th-century English philosopher):
The most likely hypothesis is the simplest one that is consistent with
all observations.
The simpler of two theories or models is to be preferred.
Modeling
• Why do we need models?
– To understand the world around (or its defined part)
– To simulate a system
• to predict the behaviour of the system (prediction, forecasting),
• to determine faults and the cause of misoperations,
fault diagnosis, error detection,
• to control the system to obtain prescribed behaviour,
• to increase observability: to estimate such parameters which are
not directly observable (indirect measurement),
• system optimization.
– Using a model
• we can avoid making real experiments,
• we do not disturb the operation of the real system,
• safer than working with the real system,
• etc...
Modeling
• What models can be built?
– Approaches
• functional models
– parts and their connections, based on their functional role
in the system
• physical models
– based on physical laws, analogies (e.g. electrical
analog circuit model of a mechanical system)
• mathematical models
– mathematical expressions (algebraic, differential
equations, logic functions, finite-state machines, etc.)
Modeling
• What models can be built?
– A priori information
• physical models, “first principle” models
use laws of nature
• models based on observations (experiments)
the real physical system is required for
obtaining observations
– Aspects
• structural models
• input-output (behavioral) models
Identification
• What is identification?
– Identification is the process of deriving a
(mathematical) model of a system using
observed data
Measurements
• Empirical process
– to obtain experimental data (observations),
• primary information collection, or
• to obtain additional information to the a
priori one.
– to use the experimental data for obtaining
(determining) the free parameters (features) of
a model.
– to validate the model
Identification (measurement)
[Flowchart]
The goal of modeling
  ↓
Collecting a priori knowledge → A priori model          (identification)
  ↓
Experiment design → Observations, determining
features, parameters                                    (measurement)
  ↓
Model validation (if it fails: correction, back to the earlier steps)
  ↓
Final model
Model classes
Model classes
• Based on the system characteristics
– Static – dynamic
– Deterministic – stochastic
– Continuous-time – discrete-time
– Lumped parameter – distributed parameter
– Linear – non-linear
– Time invariant – time variant
– …
Model classes
• Based on the modeling approach
– parametric
• known model structure
• limited number of unknown parameters
– nonparametric
• no definite model structure
• described in many points (frequency characteristics,
impulse response)
– semi-parametric
• general class of functional forms are allowed
• the number of parameters can be increased
independently of the size of the data
Model classes
• Based on the a priori information (physical insight)
– white-box: structure known, parameters known
– gray-box: structure partially known (e.g. structure known,
parameters missing)
– black-box: structure unknown, parameters unknown
Identification
• Main steps
– collect information
– model set selection
– experiment design and data collection
– determine model parameters (estimation)
– model validation
Identification
• Collect information
– physical insight (a priori information)
understanding the physical behaviour
– only observations or experiments can be designed
– application
• what operating conditions
– one operating point
– a large range of different conditions
• what purpose
– scientific
basic research
– engineering
to study the behavior of a system,
to detect faults,
to design control systems,
etc.
Identification
• Model set selection
– static – dynamic
– linear – non-linear
– non-linear
• linear-in-the-parameters
• non-linear-in-the-parameters
– white-box – black-box
– parametric – non-parametric
Identification
• Model structure selection
– known model structure (available a priori
information)
– no physical insights, general model structure
• general rule: always use as simple model as
possible (Occam’s razor)
– linear
– feed-forward
Experiment design and data collection
• Excitation
– input signal selection
– design of excitation
• time domain or frequency domain identification
(random signal, multi-sine excitation, impulse
response, frequency characteristics)
• persistent excitation
Excitation
• Step function
• Multisine
Excitation
• Step function
Excitation
• Random signal (autoregressive moving
average (ARMA) process)
– obtained by filtering white noise
– filter is selected according to the desired
frequency characteristic
– an ARMA(p,q) process can be characterized
• in time domain
• in lag (correlation) domain
• in frequency domain
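The filtered-white-noise construction above can be sketched as follows. The AR and MA coefficients are illustrative (stable) choices, not values from the lecture:

```python
import numpy as np

# Sketch of the slide above: an ARMA excitation signal obtained by
# filtering white noise through a difference equation. The coefficient
# values a, b are assumptions for illustration.
rng = np.random.default_rng(0)
N = 2000
e = rng.standard_normal(N)      # white-noise input
a = [0.5, -0.2]                 # AR coefficients (p = 2)
b = [1.0, 0.4]                  # MA coefficients (q = 1), b[0] weights e(k)

u = np.zeros(N)
for k in range(N):
    ar = sum(a[i] * u[k - 1 - i] for i in range(len(a)) if k - 1 - i >= 0)
    ma = sum(b[j] * e[k - j] for j in range(len(b)) if k - j >= 0)
    u[k] = ar + ma
```

The resulting u(k) can then be characterized in the time, correlation, or frequency domain; its spectrum is shaped by the chosen filter coefficients.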
Excitation
• Pseudorandom binary sequence
– The signal switches between two levels with given probability

u(k+1) =  u(k)   with probability p
         −u(k)   with probability 1 − p

– Frequency characteristics depend on the probability p
– Example
[Figure: time function and autocorrelation function of the sequence;
the autocorrelation falls to −1/N outside zero lag, with period N·Tc]
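The switching rule above can be sketched directly. The levels ±1 and p = 0.8 are illustrative choices:

```python
import numpy as np

# Sketch of the rule u(k+1) = u(k) with probability p, -u(k) with
# probability 1-p. Levels and p are assumptions for illustration.
def prbs(n, p, rng):
    u = np.empty(n)
    u[0] = 1.0
    for k in range(n - 1):
        u[k + 1] = u[k] if rng.random() < p else -u[k]
    return u

sig = prbs(10000, 0.8, np.random.default_rng(1))
```

A larger p gives longer runs between sign changes, i.e. more low-frequency energy, which is how the probability shapes the frequency characteristics.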
Excitation
• Multisine

u(k) = Σ_{k=1}^{K} U_k cos( 2π f_max k/N + φ(k) )

– where f_max is the maximum frequency of the excitation signal,
K is the number of frequency components

• Crest factor

CF = max|u(t)| / u_rms(t)

minimizing CF with the selection of the phases φ
[Figure: multisine with minimal crest factor]
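A small sketch of a multisine and its crest factor CF = max|u| / u_rms. The component grid k/N and the Schroeder phase rule φ_k = −πk(k−1)/K are assumptions for illustration; the slide only states that CF is minimized over the phases:

```python
import numpy as np

# Sketch: K-component multisine with selectable phases, and the crest
# factor from the slide above. Frequency grid and phase rule are
# illustrative assumptions.
def multisine(K, N, phases):
    t = np.arange(N)
    return sum(np.cos(2 * np.pi * k * t / N + phases[k - 1])
               for k in range(1, K + 1))

def crest_factor(u):
    return np.max(np.abs(u)) / np.sqrt(np.mean(u ** 2))

K, N = 10, 1024
zero_phase = multisine(K, N, np.zeros(K))
k = np.arange(1, K + 1)
schroeder = multisine(K, N, -np.pi * k * (k - 1) / K)  # Schroeder phases
```

With all phases zero, every component peaks together at t = 0, so the crest factor is large; Schroeder phases spread the peaks and lower it.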
Excitation
• Persistent excitation
– The excitation signal must be „rich” enough to
excite all modes of the system
– Mathematical formulation of persistent excitation
• For linear systems
– Input signal should excite all frequencies,
amplitude not so important
• For nonlinear systems
– Input signal should excite all frequencies and
amplitudes
– Input signal should sample the full regressor
space
The role of excitation: small excitation signal
(nonlinear system identification)
[Figure: model output, plant output, and error over 2000 samples]
The role of excitation: large excitation signal
(nonlinear system identification)
[Figure: model output, plant output, and error over 2000 samples]
Modeling (some examples)
• Resistor modeling
• Model of a duct (an anti-noise problem)
• Model of a steel converter (model of a
complex industrial process)
• Model of a signal (time series modeling)
Modeling (example)
• Resistor modeling
– the goal of modeling: to get a description of a
physical system (electrical component)
– parametric model
• linear model, constant parameter (DC)
U = R·I
• variant model
U = R(I)·I
• frequency dependent (AC): resistor R with parallel capacitance C
U(f) = Z(f)·I(f),   Z(f) = U(f)/I(f) = R / ( j2πf·RC + 1 )
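The AC model above can be sketched numerically. The component values are illustrative, not from the lecture:

```python
import numpy as np

# Sketch of the frequency-dependent model: Z(f) = R / (j*2*pi*f*R*C + 1),
# a resistor R with parallel parasitic capacitance C. Values are assumed.
def impedance(f, R, C):
    return R / (1j * 2 * np.pi * f * R * C + 1)

R, C = 1e3, 1e-9                  # assumed: 1 kOhm, 1 nF
f_c = 1 / (2 * np.pi * R * C)     # corner frequency of the RC model
```

At f = 0 the model reduces to the DC case U = R·I, and at the corner frequency |Z| has dropped to R/√2.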
Modeling (example)
• Resistor modeling
– nonparametric model
[Figure: nonparametric characteristics: U–I curve (DC) and frequency response (AC)]
Modeling (example)
• Resistor modeling
– parameter estimation based on noisy measurements
[Figure: measured current and voltage corrupted by measurement noise
n_I and n_U; the linear resistor parameter is estimated from the noisy
U–I data]
Modeling (example)
• Model of a duct
– the goal of modeling: to design a controller for
noise compensation.
active noise control problem
Modeling (example)
Modeling (example)
• Model of a duct
– physical modeling: general knowledge about acoustical
effects; propagation of sound, etc.
– no physical insight. Input: sound pressure, output: sound
pressure
– what signals: stochastic or deterministic: periodic, non-
periodic, combined, etc.
– what frequency range
– time invariant or not
– fixed solution, adaptive solution. Model structure is fixed,
model parameters are estimated and adjusted: adaptive
solution
Modeling (example)
• Model of a duct
– nonparametric model of the duct (H1)
– FIR filter with 10-100 coefficients
[Figure: magnitude (dB) versus frequency (Hz) of the duct model, 0–1000 Hz]
Modeling (example)
• Nonparametric models: impulse responses
Modeling (example)
• The effect of active noise compensation
Modeling (example)
• Model of a steel
converter (LD converter)
Modeling (example)
• Model of a steel converter (LD converter)
– the goal of modeling: to control steel-making
process to get predetermined quality steel
– physical insight:
• complex physical-chemical process with many inputs
• heat balance, mass balance
• many unmeasurable (input) variables (parameters)
– no physical insight:
• there are input-output measurement data
– no possibility to design input signal, no possibility
to cover the whole range of operation
Modeling (example)
• Time series modeling
– the goal of modeling: to predict the future
behaviour of a signal (forecasting)
• financial time series
• physical phenomena e.g. sunspot activity
• electrical load prediction
• an interesting project: Santa Fe competition
• etc.
– signal modeling = system modeling
Time series modeling
[Figure: time series, about 1200 samples]
Time series modeling
[Figure: time series, first 200 samples]
Time series modeling
• Output of a neural model
References and further readings
Box, G.E.P. and Jenkins, G.M.: "Time Series Analysis: Forecasting and Control", Revised Edition,
Holden Day, 1976.
Eykhoff, P. "System Identification, Parameter and State Estimation", Wiley, New York, 1974.
Goodwin, G.C. and R. L. Payne, "Dynamic System Identification", Academic Press, New York, 1977.
Horváth, G. "Neural Networks in Systems Identification", (Chapter 4 in: S. Ablameyko, L. Goras, M.
Gori and V. Piuri (Eds.) Neural Networks in Measurement Systems) NATO ASI, IOS Press, pp.
43-78. 2002.
Horváth, G., Dunay, R.: "Application of Neural Networks to Adaptive Filtering for Systems with
External Feedback Paths." Proc. of the International Conference on Signal Processing
Applications and Technology. Vol. II. pp. 1222-1227. Dallas, TX. 1994.
Ljung, L. "System Identification - Theory for the User". Prentice-Hall, N.J. 2nd edition, 1999.
Pintelon, R. and Schoukens, J. "System Identification. A Frequency Domain Approach", IEEE Press,
New York, 2001.
Pataki, B., Horváth, G., Strausz, Gy. and Talata, Zs. "Inverse Neural Modeling of a Linz-Donawitz
Steel Converter", e & i Elektrotechnik und Informationstechnik, Vol. 117. No. 1. 2000. pp. 13-17.
Rissanen, J. "Modelling by Shortest Data Description", Automatica, Vol. 14. pp. 465-471, 1978.
Sjöberg, J., Q. Zhang, L. Ljung, A. Benveniste, B. Delyon, P.-Y. Glorennec, H. Hjalmarsson, and A.
Juditsky: "Non-linear Black-box Modeling in System Identification: a Unified Overview",
Automatica, 31:1691-1724, 1995.
Söderström, T. and P. Stoica, "System Identification", Prentice Hall, Englewood Cliffs, NJ. 1989.
Weigend, A.S. and N.A. Gershenfeld, "Forecasting the Future and Understanding the Past", Vol. 15,
Santa Fe Institute Studies in the Sciences of Complexity, Reading, MA: Addison-Wesley, 1994.
Identification (linear systems)
• Parametric identification (parameter estimation)
– LS estimation
– ML estimation
– Bayes estimation
• Nonparametric identification
– Transient analysis
– Correlation analysis
– Frequency analysis
Parametric identification
[Block diagram] The input u excites the system, y = f(u, n), where n is
the noise. The same input drives the model, y_M = f_M(u, Θ). A criterion
function C(y, y_M) compares the two outputs, and a parameter adjustment
algorithm tunes Θ to minimize the criterion.
Parametric identification
• Parameter estimation
– linear system
y(i) = u(i)ᵀΘ + n(i) = Σ_{j=1}^{L} u_j(i) Θ_j + n(i),   i = 1, 2, ..., N

in matrix form:  y = UΘ + n,  U = [ u(1)ᵀ ; ... ; u(N)ᵀ ],
yᵀ = y_Nᵀ = [ y(1) ... y(N) ]

– linear-in-the-parameter model

y_M(i) = u(i)ᵀΘ̂ = Σ_j u_j(i) Θ̂_j,   y_M = UΘ̂

ε(Θ̂) = y − y_M

V(Θ̂) = V(ε(Θ̂)) = V(y − y_M) = V(y − UΘ̂)
Parametric identification
• LS estimation
quadratic loss function

V(Θ̂) = ½ εᵀε = ½ Σ_{i=1}^{N} ε(i)² = ½ Σ_{i=1}^{N} ( y(i) − u(i)ᵀΘ̂ )²
     = ½ ( y_N − UΘ̂ )ᵀ ( y_N − UΘ̂ )

LS estimate

Θ̂_LS = arg min_Θ̂ V(Θ̂),   ∂V(Θ̂)/∂Θ̂ = 0

Θ̂_LS = ( U_Nᵀ U_N )⁻¹ U_Nᵀ y_N
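The closed-form LS estimate above can be checked on synthetic data. The true parameters and noise level are illustrative:

```python
import numpy as np

# Sketch of the LS estimate: y = U*Theta + n, Theta_LS = (U^T U)^{-1} U^T y.
# Theta_true and the noise level are assumptions for illustration.
rng = np.random.default_rng(2)
N, L = 200, 3
Theta_true = np.array([1.5, -0.7, 0.3])
U = rng.standard_normal((N, L))
y = U @ Theta_true + 0.05 * rng.standard_normal(N)

Theta_ls = np.linalg.inv(U.T @ U) @ U.T @ y        # textbook formula
Theta_ls2, *_ = np.linalg.lstsq(U, y, rcond=None)  # numerically preferred
```

In practice `np.linalg.lstsq` (or a QR/SVD solver) is preferred over forming and inverting UᵀU explicitly, though both give the same estimate on well-conditioned data.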
Parametric identification
• Weighted LS estimation
– weighted quadratic loss function

V(Θ̂) = ½ Σ_{i,k=1}^{N} ( y(i) − u(i)ᵀΘ̂ ) q_ik ( y(k) − u(k)ᵀΘ̂ )
     = ½ ( y_N − UΘ̂ )ᵀ Q ( y_N − UΘ̂ )

– weighted LS estimate

Θ̂_WLS = ( U_Nᵀ Q U_N )⁻¹ U_Nᵀ Q y_N

with Q = Σ⁻¹ (the inverse of the noise covariance):

Θ̂_WLS = ( U_Nᵀ Σ⁻¹ U_N )⁻¹ U_Nᵀ Σ⁻¹ y_N
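A sketch of the weighted estimate with Q = Σ⁻¹: the noise here is heteroscedastic (two different standard deviations), so WLS downweights the noisy half of the data. All numbers are illustrative:

```python
import numpy as np

# Sketch of WLS with Q = Sigma^{-1}; the two noise levels and the true
# parameters are assumptions for illustration.
rng = np.random.default_rng(3)
N, L = 400, 2
Theta_true = np.array([2.0, -1.0])
U = rng.standard_normal((N, L))
sigma = np.where(np.arange(N) < N // 2, 0.01, 1.0)   # two noise levels
y = U @ Theta_true + sigma * rng.standard_normal(N)

Q = np.diag(1.0 / sigma ** 2)                        # Q = Sigma^{-1}
Theta_wls = np.linalg.inv(U.T @ Q @ U) @ U.T @ Q @ y
```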
Parametric identification
• Maximum likelihood estimation
– we select the estimate which makes the given
observations most probable
[Figure: likelihood functions f(y|Θ̂_1), ..., f(y|Θ̂_ML), ..., f(y|Θ̂_k)
over the measurements y]
– likelihood function, log-likelihood function
f(y_N|Θ̂),   log f(y_N|Θ̂)
– maximum likelihood estimate
Θ̂_ML = arg max_Θ̂ f(y_N|Θ̂),   ∂/∂Θ̂ log f(y_N|Θ̂) = 0
Parametric identification
• Properties of ML estimates
– consistency
lim_{N→∞} P{ |Θ̂_ML(N) − Θ| > ε } = 0 for any ε > 0
– asymptotic normality
Θ̂_ML(N) converges to a normal random variable as N → ∞
– asymptotic variance
lim_{N→∞} var( Θ̂_ML(N) − Θ ) = ( −E{ ∂² log f(y_N|Θ) / ∂Θ² } )⁻¹
– Gauss–Markov if f(y_N|Θ̂) is Gaussian
Parametric identification
• Bayes estimation
– the parameter Θ is a random variable with known pdf
a priori
a posteriori
f(Θ)
f(Θ│y)
Parametric identification
• Bayes estimation with different cost functions
– median:   C(Θ̂|Θ) = |Θ̂ − Θ|
– MAP (uniform cost):   C(Θ̂|Θ) = 0 if |Θ̂ − Θ| ≤ Δ, Const otherwise
– mean:   C(Θ̂|Θ) = (Θ̂ − Θ)²
[Figure: the three cost functions and the a posteriori density f(Θ|y),
with the MAP, MEAN and MEDIAN estimates marked]
Parametric identification
• Recursive estimations
– y(k) is predicted as y_M(k) = u(k)ᵀΘ̂
Parametric identification
• Recursive estimations
– least mean square (LMS)

Θ̂(k+1) = Θ̂(k) + μ(k) ε(k) u(k)
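The LMS update above can be sketched with a constant step size μ, identifying a 2-parameter linear model from streaming data. All values are illustrative:

```python
import numpy as np

# Sketch of the LMS recursion with constant mu; Theta_true, mu and the
# noiseless data are assumptions for illustration.
rng = np.random.default_rng(4)
Theta_true = np.array([0.8, -0.5])
Theta_hat = np.zeros(2)
mu = 0.05

for _ in range(5000):
    u = rng.standard_normal(2)              # regressor u(k)
    y = u @ Theta_true                      # system output y(k)
    eps = y - u @ Theta_hat                 # prediction error eps(k)
    Theta_hat = Theta_hat + mu * eps * u    # LMS update
```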
Parametric identification
• Recursive estimations
– recursive least squares (RLS)

Θ̂(k+1) = Θ̂(k) + K(k+1) ε(k)

K(k+1) = P(k) Uᵀ(k+1) [ I + U(k+1) P(k) Uᵀ(k+1) ]⁻¹

P(k+1) = P(k) − P(k) Uᵀ(k+1) [ I + U(k+1) P(k) Uᵀ(k+1) ]⁻¹ U(k+1) P(k)
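The RLS recursion can be sketched for one scalar measurement per step, where the bracketed inverse reduces to a scalar. The initialization P(0) = large·I and all numeric values are illustrative:

```python
import numpy as np

# Sketch of RLS with scalar measurements; Theta_true, the noise level
# and P(0) are assumptions for illustration.
rng = np.random.default_rng(5)
Theta_true = np.array([1.0, 0.5, -0.3])
Theta_hat = np.zeros(3)
P = 1e6 * np.eye(3)                         # P(0): large initial covariance

for _ in range(500):
    u = rng.standard_normal(3)              # regressor row
    y = u @ Theta_true + 0.01 * rng.standard_normal()
    eps = y - u @ Theta_hat                 # prediction error eps(k)
    K = P @ u / (1.0 + u @ P @ u)           # gain K(k+1)
    Theta_hat = Theta_hat + K * eps
    P = P - np.outer(K, u @ P)              # update of P(k+1)
```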
Parametric identification
• Recursive estimations
– recursive Bayes: the a posteriori density f(Θ|y)

f(Θ|y₁) = f(y₁|Θ) f(Θ) / ∫ f(y₁|Θ) f(Θ) dΘ

f(Θ|y₁, y₂) = f(y₂|y₁, Θ) f(y₁, Θ) / ∫ f(y₂|y₁, Θ) f(y₁, Θ) dΘ

f(Θ|y₁, ..., y_k) = f(y_k|y₁, ..., y_{k−1}, Θ) f(y₁, ..., y_{k−1}, Θ)
                    / ∫ f(y_k|y₁, ..., y_{k−1}, Θ) f(y₁, ..., y_{k−1}, Θ) dΘ

each new observation y_k updates the previous a posteriori density

• Parameter estimation: required a priori information
– Least squares: least a priori information
– Maximum likelihood:
  conditional probability density f(y_N|Θ̂)
– Bayes: most a priori information
  a priori probability density f(Θ)
  conditional probability density f(y_N|Θ̂)
  cost function C(Θ̂|Θ)
Non-parametric identification
• Frequency-domain analysis
– frequency characteristic, frequency response
– spectral analysis
• Time-domain analysis
– impulse response
– step response
– correlation analysis
Non-parametric identification (frequency
domain)
• Special input signals
– sinusoid
– multisine

u(t) = Σ_{k=1}^{K} U_k e^{ j( 2π f_max k/N + φ(k) ) }

crest factor:  CF = max|u(t)| / u_rms(t)

minimizing CF with the selection of the phases φ
Non-parametric identification (frequency
domain)
• Frequency response
– Power density spectrum, periodogram
– Calculation of periodogram
– Effect of finite registration length
– Windowing (smoothing)
References and further readings
Eykhoff, P. System Identification, Parameter and State Estimation, Wiley, New York, 1974.
Ljung, L. ”System Identification - Theory for the User” Prentice-Hall, N.J. 2nd edition, 1999.
Goodwin, G.C. and R.L. Payne, Dynamic System Identification, Academic Press, New York,
1977.
Rissanen, J. “Stochastic Complexity in Statistical Inquiry”, Series in Computer Science”. Vol. 15
World Scientific, 1989.
Sage, A.P. and J.L. Melsa, Estimation Theory with Application to Communications and Control,
McGraw-Hill, New York, 1971.
Pintelon, R. and J. Schoukens, System Identification. A Frequency Domain Approach, IEEE
Press, New York, 2001.
Söderström, T. and P. Stoica, System Identification, Prentice Hall, Englewood Cliffs, NJ. 1989.
Van Trees, H.L. Detection Estimation and Modulation Theory, Part I. Wiley, New York, 1968.
Black box modeling
Black-box modeling
• Why do we use black-box models?
– the lack of physical insight: physical modeling is not
possible
– the physical knowledge is too complex, there are
mathematical difficulties; physical modeling is possible
in principle but not possible in practice
– there is no need for physical modeling, (only the
behaviour of the system should be modeled)
– black-box modeling may be much simpler
Black-box modeling
• Steps of black-box modeling
– select a model structure
– determine the size of the model (the number of
parameters)
– use observed (measured) data to adjust the model
(estimate the model order - the number of
parameters - and the numerical values of the
parameters)
– validate the resulted model
Black-box modeling
• Model structure selection
Dynamic models: yM (k ) = f (Θ, ϕ(k )) with φ(k) regressor-vectors
how to choose the φ(k) regressor vectors?
past inputs
ϕ(k ) = [u(k − 1), u(k − 2), . . . , u(k − N )]
past inputs and outputs
ϕ(k ) = [u (k − 1), u (k − 2), . . . , u (k − N ), yM (k − 1), yM (k − 2), . . . , yM (k − P )]
past inputs and system outputs
ϕ(k ) = [u (k − 1), u (k − 2), . . . , u (k − N ), y(k − 1), y (k − 2), . . . , y(k − P )]
past inputs, system outputs and errors
ϕ(k ) = [u (k − 1), . . . , u (k − N ), y (k − 1), . . . , y (k − P ), ε (k − 1), . . . , ε (k − L )]
past inputs, outputs and errors
ϕ(k ) = [u (k − 1), . . ., u (k − N ), y M (k − 1), . . ., y M (k − P ), ε (k − 1), . . ., ε (k − L ), ε u (k − 1), ..., ε u (k − K )]
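The regressor choices above are simple to materialize in code. Here is a sketch for the "past inputs and system outputs" variant φ(k) = [u(k−1), ..., u(k−N), y(k−1), ..., y(k−P)]; the helper name is ours, not from the lecture:

```python
import numpy as np

# Sketch: stack phi(k) for every k with a full history into a matrix,
# one row per time step. Function name and toy data are illustrative.
def build_regressors(u, y, N, P):
    start = max(N, P)
    Phi = [np.concatenate([u[k - N:k][::-1], y[k - P:k][::-1]])
           for k in range(start, len(u))]
    return np.array(Phi)

u = np.arange(10.0)     # toy input  u(k) = k
y = 2 * u               # toy output y(k) = 2k
Phi = build_regressors(u, y, N=2, P=1)
# Row 0 corresponds to k = 2: [u(1), u(0), y(1)]
```

A matrix like `Phi` is exactly the U matrix used in the LS estimates of the previous section.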
Black-box identification
• Linear dynamic model structures
FIR
y_M(k) = a₁u(k−1) + a₂u(k−2) + ... + a_N u(k−N)
ARX
y_M(k) = a₁u(k−1) + ... + a_N u(k−N) + b₁y(k−1) + ... + b_P y(k−P)
OE
y_M(k) = a₁u(k−1) + ... + a_N u(k−N) + b₁y_M(k−1) + ... + b_P y_M(k−P)
ARMAX
y_M(k) = a₁u(k−1) + ... + a_N u(k−N) + b₁y(k−1) + ... + b_P y(k−P) + c₁ε(k−1) + ... + c_L ε(k−L)
BJ
y_M(k) = a₁u(k−1) + ... + a_N u(k−N) + b₁y_M(k−1) + ... + b_P y_M(k−P) + c₁ε(k−1) + ... + c_L ε(k−L) + d₁ε_u(k−1) + ... + d_K ε_u(k−K)
NARX
y_M(k) = f( u(k−1), ..., u(k−N), y(k−1), ..., y(k−P) )
NOE
y_M(k) = f( u(k−1), ..., u(k−N), y_M(k−1), ..., y_M(k−P) )
NARMAX
y_M(k) = f( u(k−1), ..., u(k−N), y(k−1), ..., y(k−P), ε(k−1), ..., ε(k−L) )
NBJ
y_M(k) = f( u(k−1), ..., u(k−N), y(k−1), ..., y(k−P), ε(k−1), ..., ε(k−L), ε_u(k−1), ..., ε_u(k−K) )
Black-box identification
• How to choose nonlinear mapping?
y_M(k) = f(Θ, φ(k))

– linear-in-the-parameter models

y_M(k) = Σ_{j=1}^{n} α_j f_j(φ(k)),   Θ = [α₁ α₂ ... α_n]ᵀ

– nonlinear-in-the-parameters

y_M(k) = Σ_{j=1}^{n} α_j f_j(β_j, φ(k)),   Θ = [α₁ α₂ ... α_n, β₁ β₂ ... β_n]ᵀ
Black-box identification
• Model validation, model order selection
– residual test
– Information Criterion:
• AIC Akaike Information Criterion
• BIC Bayesian Information Criterion
• NIC Network Information Criterion
• etc.
– Rissanen MDL (Minimum Description Length)
– cross validation
Black-box identification
• Model validation: residual test
– are the residuals white (a white-noise process with mean 0)?
– are the residuals normally distributed?
– are the residuals symmetrically distributed?
Black-box identification
• Model validation: residual test
autocorrelation test:

Ĉ_εε(τ) = 1/(N − τ) · Σ_{k=τ+1}^{N} ε(k) ε(k − τ)

r_εε = 1/Ĉ_εε(0) · ( Ĉ_εε(1) ... Ĉ_εε(m) )ᵀ

√N · r_εε → N(0, I) in distribution
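The autocorrelation test above can be sketched directly: for a well-fitted model, √N·r_εε should behave like N(0, I), so most entries fall inside ±1.96. The residuals here are simulated as ideal white noise:

```python
import numpy as np

# Sketch of the residual autocorrelation test; m = 20 lags and the
# simulated white residuals are illustrative choices.
def autocorr_test(eps, m):
    N = len(eps)
    c0 = np.mean(eps * eps)                      # C_hat(0)
    r = np.array([np.sum(eps[tau:] * eps[:-tau]) / (N - tau)
                  for tau in range(1, m + 1)]) / c0
    return np.sqrt(N) * r

rng = np.random.default_rng(6)
white = rng.standard_normal(5000)                # ideal residuals
stats = autocorr_test(white, m=20)
inside = np.mean(np.abs(stats) < 1.96)           # should be near 0.95
```

If a fitted model's residuals produce many entries outside ±1.96, the residuals are not white and the model structure or order should be reconsidered.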
Black-box identification
• Model validation: residual test
– cross-correlation test:

Ĉ_uε(τ) = 1/(N − τ) · Σ_{k=τ+1}^{N} ε(k) u(k − τ)

r_uε(m) = 1/Ĉ_uε(0) · ( Ĉ_uε(τ+1) ... Ĉ_uε(τ+m) )ᵀ

√N · r_uε → N(0, R̂_uu) in distribution

R̂_uu = 1/(N − m) · Σ_{k=m+1}^{N} [ u_{k−1} ... u_{k−m} ]ᵀ [ u_{k−1} ... u_{k−m} ]
Black-box identification
• residual test
[Figure: autocorrelation function of the prediction error (lags 0–25) and
cross-correlation function of past input and prediction error (lags −25 to +25)]
Black-box identification
• Model validation, model order selection
– the importance of a priori knowledge
(physical insight)
– under- or over-parametrization
– Occam’s razor
– variance-bias trade-off
Black-box identification
• Model validation, model order selection
– criteria: noise term + penalty term
• AIC (Akaike information criterion):
AIC(Θ̂) = (−2) log(maximum likelihood) + 2p
AIC(p) = (−2) log L(Θ̂_N) + 2p
• NIC (network information criterion):
extension of AIC for neural networks
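For Gaussian residuals the AIC above reduces (up to a constant) to N·log(σ̂²) + 2p. A sketch of order selection on data generated from a quadratic; the data-generating model and noise level are illustrative:

```python
import numpy as np

# Sketch: AIC-based order selection for polynomial models; the true
# quadratic and noise level are assumptions for illustration.
rng = np.random.default_rng(7)
x = np.linspace(-1, 1, 200)
y = 1.0 + 0.5 * x - 2.0 * x ** 2 + 0.1 * rng.standard_normal(200)

def aic(order):
    coeffs = np.polyfit(x, y, order)
    resid = y - np.polyval(coeffs, x)
    p = order + 1                                # number of parameters
    return len(y) * np.log(np.mean(resid ** 2)) + 2 * p

scores = {order: aic(order) for order in range(1, 7)}
best = min(scores, key=scores.get)
```

The under-parametrized order-1 model pays a large residual (bias) term, while high orders pay the 2p penalty, which is exactly the noise-plus-penalty trade-off stated above.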
Black-box identification
• Model validation, model order selection
– cross validation
• testing the model on new data (from
the same problem)
• leave out one cross validation
• leave out k cross validation
Black-box identification
• Model validation, model order selection
– variance-bias trade-off
difference between the model and the real
system
• model class is not properly selected: bias
• actual parameters of the model are not
correct: variance
Black-box identification
• Model validation, model order selection
– variance-bias trade-off
V(Θ) = E{ ( y − f(Θ) )² } = σ² + E{ ( f₀(Θ, φ(k)) − f(Θ̂, φ(k)) )² }

E{V(Θ)} = σ² + E{ ( f₀(Θ, φ(k)) − f(Θ̂, φ(k)) )² }
        ≈ σ² + E{ ( f₀(Θ, φ(k)) − f(Θ*(m), φ(k)) )² }
             + E{ ( f(Θ*(m), φ(k)) − f(Θ̂, φ(k)) )² }

where the first term is the noise, the second the bias, and the third
the variance.
Black-box identification
• Model validation, model order selection
– approaches
• A sequence of models are used with increasing m
Validation using cross validation or some criterion e.g.
AIC, MDL, etc.
• A complex model structure is used with a lot of
parameters (over-parametrized model)
Select important parameters
– regularization
– early stopping
– pruning
Neural modeling
Neural networks
• Why neural networks?
– There are many other black-box modeling approaches:
e.g. polynomial regression.
– Difficulty: the curse of dimensionality
– For an N-dimensional problem and an M-th order polynomial,
the number of independently adjustable parameters grows as N^M.
– To get a trained neural network with good generalization
capability, the dimension of the input space has a significant
effect on the size of the required training data set.
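The growth can be made concrete by counting coefficients. A full polynomial of order M in N variables has C(N+M, M) monomials, which for fixed M grows roughly as N^M (a small illustrative sketch, not from the slides):

```python
from math import comb

def n_polynomial_params(n_inputs, order):
    """Number of coefficients of a full polynomial model of the given
    order in n_inputs variables: C(N + M, M). For fixed order M this
    grows roughly as N^M, which is the curse of dimensionality."""
    return comb(n_inputs + order, order)

# e.g. a 3rd-order polynomial: 10 inputs -> 286 coefficients,
# 100 inputs -> 176851 coefficients.
```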
Neural networks
Other black-box structures
• Wavelets
– mother function (wavelet), dilation, translation
• Volterra series
y_M(k) = Σ_{l=0}^∞ g_l u(k−l) + Σ_{l=0}^∞ Σ_{s=0}^∞ g_{ls} u(k−l)u(k−s) + Σ_{l=0}^∞ Σ_{s=0}^∞ Σ_{r=0}^∞ g_{lsr} u(k−l)u(k−s)u(k−r) + …
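In practice the series is truncated to finite order and finite memory. A sketch of a second-order, finite-memory Volterra model evaluation (illustrative, not from the slides):

```python
def volterra_second_order(u, g1, g2):
    """Evaluate a truncated second-order, finite-memory Volterra model:
    y(k) = sum_l g1[l] u(k-l) + sum_l sum_s g2[l][s] u(k-l) u(k-s).
    u is the input sequence; samples before k = 0 are taken as zero."""
    def u_at(i):
        return u[i] if 0 <= i < len(u) else 0.0
    y = []
    for k in range(len(u)):
        lin = sum(g1[l] * u_at(k - l) for l in range(len(g1)))
        quad = sum(g2[l][s] * u_at(k - l) * u_at(k - s)
                   for l in range(len(g2)) for s in range(len(g2[0])))
        y.append(lin + quad)
    return y
```

Note how the number of kernel coefficients explodes with order and memory length, which is the same curse of dimensionality mentioned for polynomial models.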
Other black-box structures
• Fuzzy models, fuzzy neural models
– general nonlinear modeling approach
• Narendra structures
– combined linear dynamic and nonlinear static systems
Combined models
• Narendra structures
References and further readings
Akaike, H. “Information Theory and an Extension of the Maximum Likelihood Principle” Second Intnl.
Symposium on Information Theory. Akadémiai Kiadó, Budapest, pp. 267-281. 1972.
Akaike, H. “A New Look at the Statistical Model Identification” IEEE Trans. on Automatic Control, Vol.
19. No. 6. pp. 716-723. 1974.
Haykin, S.: "Neural Networks. A Comprehensive Foundation" Prentice Hall, N. J.1999.
L. Ljung, ”System Identification - Theory for the User” Prentice-Hall, N.J. 2nd edition, 1999.
Narendra, K. S. and Parthasarathy, K. "Identification and Control of Dynamical Systems Using Neural
Networks," IEEE Trans. Neural Networks, Vol. 1. 1990.
Noboru Murata, Shuji Yoshizawa and Shun-Ichi Amari “Network Information Criterion - Determining
the Number of Hidden Units for an Artificial Neural Network Model” IEEE Trans. on Neural Networks,
Vol. 5. No. 6. pp. 865-871.
Pataki, B., Horváth, G., Strausz, Gy. and Talata, Zs. "Inverse Neural Modeling of a Linz-Donawitz
Steel Converter" e & i Elektrotechnik und Informationstechnik, Vol. 117. No. 1. 2000. pp. 13-17.
M.B. Priestley, “ Non-linear and Non-stationary Time Series Analysis” Academic Press, London,
1988.
Rissanen, J. “Stochastic Complexity in Statistical Inquiry”, Series in Computer Science”. Vol. 15
World Scientific, 1989.
J. Sjöberg, Q. Zhang, L. Ljung, A. Benveniste, B. Delyon, P.-Y. Glorennec, H. Hjalmarsson, and A.
Juditsky: "Non-linear Black-box Modeling in System Identification: a Unified Overview", Automatica,
31:1691-1724, 1995.
A. S Weigend,. - N.A Gershenfeld "Forecasting the Future and Understanding the Past" Vol.15. Santa
Fe Institute Studies in the Science of Complexity, Reading, MA. Addison-Wesley, 1994.
Neural networks
Outline
• Introduction
• Neural networks
– elementary neurons
– classical neural structures
– general approach
– computational capabilities of NNs
• Learning (parameter estimation)
– supervised learning
– unsupervised learning
– analytic learning
• Support vector machines
– SVM architectures
– statistical learning theory
• General questions of network design
– generalization
– model selection
– model validation
Neural networks
• Elementary neurons
– linear combiner
– basis-function neuron
• Classical neural architectures
– feed-forward
– feedback
• General approach
– nonlinear function of regressors
– linear combination of basis functions
• Computational capabilities of NNs
– approximation of function
– classification
Neural networks (a definition)
Neural networks (main features)
• Main features
– complex nonlinear input-output mapping
– adaptivity, learning capability
– distributed architecture
– fault tolerance
– VLSI implementation
– neurobiological analogy
The elementary neuron (1)
• Linear combiner with nonlinear activation function
[Figure: neuron with inputs x₀ = 1, x₁ … x_N, weights w₀ … w_N]
s = wᵀx,   y = f(s)

Activation functions:
a.) hard limiter: y = +1 if s > 0, y = −1 if s ≤ 0
b.) piecewise linear: y = +1 if s > 1, y = s if −1 ≤ s ≤ 1, y = −1 if s < −1
c.) bipolar sigmoid: y = (1 − e^{−Ks}) / (1 + e^{−Ks}),   K > 0
d.) logistic sigmoid: y = 1 / (1 + e^{−Ks}),   K > 0
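A minimal sketch of the combiner-plus-activation structure above (function and parameter names are illustrative, not from the slides):

```python
import math

def neuron(x, w, activation="logistic", K=1.0):
    """Linear combiner followed by an activation function:
    s = w^T x (with x[0] = 1 serving as the bias input), y = f(s).
    Activations from the slide: hard limiter, logistic sigmoid
    1/(1+e^{-Ks}) and bipolar sigmoid (1-e^{-Ks})/(1+e^{-Ks})."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    if activation == "hard":
        return 1.0 if s > 0 else -1.0
    if activation == "logistic":
        return 1.0 / (1.0 + math.exp(-K * s))
    if activation == "bipolar":
        return (1.0 - math.exp(-K * s)) / (1.0 + math.exp(-K * s))
    raise ValueError(activation)
```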
Elementary neuron (2)
• Neuron with basis function
y = Σᵢ wᵢ gᵢ(x)
e.g. with a radial argument u = (x₁ − c₁)² + (x₂ − c₂)² + … + (x_N − c_N)²
Classical neural networks
• static (no memory, feed-forward)
– single layer networks
– multi-layer networks
• MLP
• RBF
• CMAC
• dynamic (memory or feedback)
– feed-forward (storage elements)
– feedback
• local feedback
• global feedback
Feed-forward architecture
• Single layer network: Rosenblatt’s perceptron
[Figure: perceptron with inputs x₀ = 1, x₁ … x_N, weights w₀ … w_N]
s = wᵀx,   y = sgn(s)
Feed-forward architecture
• Single layer network
[Figure: single layer network with inputs x₁ … x_N, weight matrix W, outputs y₁ … y_M]
[Figure: two-layer feed-forward network with weight matrices W⁽¹⁾ and W⁽²⁾; the first-layer outputs serve as second-layer inputs, y⁽¹⁾ = x⁽²⁾, and the network output is y⁽²⁾ = y]
Feed-forward architecture
• Network with one trainable layer (basis function
networks)
[Figure: a fixed or trained (supervised or unsupervised) non-linear mapping x(k) → φ(k) = (φ₁(k), …, φ_M(k)), followed by a linear trainable layer with weights w₀ … w_M: y(k) = wᵀφ(k)]
Radial basis function (RBF) network
• Network with one trainable layer
[Figure: RBF network; radial (e.g. Gaussian) basis functions gᵢ(x) = φᵢ with centres cᵢ and widths σᵢ, linear trainable output layer w]
[Figure: CMAC network; the input x from the space of possible input vectors selects a binary association vector a with C = 4 active elements, the trainable weight vector w gives the output y = wᵀa(x)]
Feed-forward architecture
• Dynamic multi-layer network
[Figure: l-th layer of a dynamic MLP; each static synapse is replaced by an FIR filter between layers, with layer outputs y⁽ˡ⁻¹⁾ = x⁽ˡ⁾]
Feed-forward architecture
• Dynamic multi layer network (single trainable layer)
[Figure: the first layer of the network (a non-linear mapping) produces φ₁(k) … φ_M(k); each component passes through an FIR filter giving z₁(k) … z_M(k), which are summed to form y(k)]
Feedback architecture
• Lateral feedback (single layer)
[Figure: single-layer network with feed-forward parameters and lateral connections among the output units]
yᵢ = Σⱼ wᵢⱼ xⱼ
Feedback architecture
• Local feedback (MLP)
[Figure: MLP with local feedback connections (a: self-feedback, b: lateral feedback, c: inter-layer feedback) between input x and output y]
Feedback architecture
• Global feedback (sequential network)
[Figure: sequential network; tapped delay lines (TDL) feed x(k), x(k−1), …, x(k−N) and the fed-back outputs y(k−1), y(k−2), …, y(k−M) into a multi-input single-output static network producing y(k)]
Feedback architecture
• Hopfield network (global feedback)
[Figure: fully connected Hopfield network over units x₁, x₂, …, x_N]
Basic neural network architectures
• General approach
– Regressors
• current inputs (static networks)
• current inputs and past outputs (dynamic networks)
• past inputs and past outputs (dynamic networks)
– Basis functions
• non-linear-in-the-parameter network
• linear-in-the-parameter networks
Basic neural network architectures
• Non-linear dynamic model structures based on regressor
– NFIR
y (k ) = f ( x(k ), x(k − 1), . . . , x(k − N ))
[Figure: tapped delay line x(k), x(k−1), …, x(k−N) feeding a multi-input single-output static network, output y(k)]
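Building the regressor for such a structure is a simple windowing operation. A sketch (illustrative, not from the slides; pre-time-zero samples are taken as zero):

```python
def nfir_regressor(x, k, N):
    """NFIR regressor at time k: [x(k), x(k-1), ..., x(k-N)] built
    from the input sequence x; samples before time 0 are zero-padded.
    The static network f is then applied to this vector."""
    return [x[k - i] if k - i >= 0 else 0.0 for i in range(N + 1)]
```

The NARX and NOE regressors below are formed the same way, additionally stacking delayed system outputs d (NARX) or delayed model outputs y (NOE).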
Basic neural network architectures
• Non-linear dynamic model structures based on regressor
– NARX
y(k) = f( x(k), …, x(k−N), d(k−1), …, d(k−M) )
[Figure: delay lines on the input x(k) and on the system output d(k) feed the multi-input single-output static network; y(k) is the model output]
Basic neural network architectures
• Non-linear dynamic model structures based on regressor
– NOE
y(k) = f( x(k), …, x(k−N), y(k−1), …, y(k−M) )
[Figure: as NARX, but the delayed model outputs y(k−1), …, y(k−M) are fed back instead of the system outputs]
Basic neural network architectures
• Non-linear dynamic model structures based on regressor
– NARMAX
y(k) = f( x(k), …, x(k−N), d(k−1), …, d(k−M), ε(k−1), …, ε(k−L) )
– NBJ (nonlinear Box-Jenkins)
y(k) = f( x(k), …, x(k−N), y(k−1), …, y(k−M), ε(k−1), …, ε(k−L), ε_x(k−1), …, ε_x(k−K) )
Basic neural network architectures
• Nonlinear function of regressor
y(k) = f( w, φ(k) )
– linear-in-the-parameter models (basis function models)
y(k) = Σ_{j=1}^n w_j f_j(φ(k)),   w = [w₁ w₂ … w_n]ᵀ
– nonlinear-in-the-parameter models
y(k) = Σ_{j=1}^n w_j⁽²⁾ f_j( w⁽¹⁾, φ(k) ),   w = [w₁⁽²⁾ w₂⁽²⁾ … w_n⁽²⁾, w⁽¹⁾]ᵀ
Basic neural network architectures
• Basis functions f j (ϕ(k ))
– MLP (with a single nonlinear hidden layer)
• sigmoidal basis function: sgm(s) = 1 / (1 + e^{−Ks})
y(k) = Σ_{j=1}^n w_j⁽²⁾ f_j( w⁽¹⁾, φ(k) ),   f_j( w⁽¹⁾, φ(k) ) = sgm( φ(k)ᵀ w_j⁽¹⁾ + w_{j0}⁽¹⁾ )
– RBF (Gaussian basis functions)
y(k) = Σ_j w_j f_j(φ(k)) = Σ_j w_j f( ‖φ − c_j‖ ),   f( ‖φ − c_j‖ ) = exp( −‖φ − c_j‖² / 2σ_j² )
– CMAC (rectangular basis functions, splines)
Basic neural network architectures
• CMAC (rectangular basis functions)
[Figure: two-dimensional CMAC; quantization intervals along u₁ and u₂, overlapping regions of one overlay, points of the main diagonal and subdiagonal]
Basic neural network architectures
• General basis functions of compact support
(higher-order CMAC)
• B-splines: a two-dimensional basis function with compact support
can be built as the tensor product of second-order B-splines
Capability of networks
Capability of networks
Supervised learning networks:
• function approximation
• classification
• association
Unsupervised learning networks:
• clustering
• data compression
• significant component selection
• Optimization
Capability of networks
• Approximation of functions
– Main statements: some FF neural nets (MLP, RBF) are
universal approximators (in some sense)
– Kolmogorov’s Theorem (representation theory): any
continuous real-valued N-variable function defined on
[0,1]N can be represented using properly chosen functions
of one variable (non constructive).
f(x₁, x₂, …, x_N) = Σ_{q=0}^{2N} φ_q( Σ_{p=1}^N ψ_{pq}(x_p) )
Capability of networks
• Approximation of function (MLP)
– Arbitrary continuous function f : RN→R on a compact
subset K of RN can be approximated to any desired
degree of accuracy (maximal error) if and only if the
activation function g(x) is non-constant, bounded and
monotone increasing.
(Hornik, Cybenko, Funahashi, Leshno, Kurkova, etc.)
f̂(x₁, …, x_N) = Σ_{i=1}^M cᵢ g( Σ_{j=0}^N wᵢⱼ xⱼ ),   x₀ = 1
Capability of networks
• Approximation of function (MLP)
– Arbitrary continuous function f : RN→R on a compact
subset of RN can be approximated to any desired
degree of accuracy (in the L2 sense) if and only if the
activation function is non-polynomial (Hornik, Cybenko,
Funahashi, Leshno, Kurkova, etc.)
f̂(x₁, …, x_N) = Σ_{i=1}^M cᵢ g( Σ_{j=0}^N wᵢⱼ xⱼ ),   x₀ = 1
Capability of networks
• Classification
– Perceptron: linear separation
– MLP: universal classifier
f(x) = j iff x ∈ X⁽ʲ⁾,   f: K → {1, 2, …, k}
K compact subset of R^N
X⁽ʲ⁾, j = 1, …, k disjoint subsets of K
K = ∪_{j=1}^k X⁽ʲ⁾ and X⁽ⁱ⁾ ∩ X⁽ʲ⁾ is empty if i ≠ j
Capability of networks
• Approximation of function (RBF)
f̂(x) = Σ_{i=1}^M wᵢ g( (x − cᵢ) / σᵢ )
if g: R^N → R is a non-zero, continuous, integrable function.
Computational capability of the CMAC
Computational capability of the CMAC
[Figure: CMAC; the input x from the space of possible input vectors selects the binary association vector a (C = 4 active elements), the trainable weight vector w gives the output y = wᵀa]
Computational capability of the CMAC
• Arrangement of basis functions: uni-variate case
[Figure: C = 4 overlays of rectangular basis functions, each overlay shifted by one quantization interval along x]
Computational capability of the CMAC
• Arrangement of basis functions: multi-variate case
C overlays, C = 4
Number of basis functions:
M = ⌈ (1 / C^{N−1}) ∏_{i=0}^{N−1} (Rᵢ + C − 1) ⌉
[Figure: two-dimensional quantization along u₁ and u₂ with overlapping regions of one overlay; points of the main diagonal and subdiagonal]
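The basis-count formula above is directly computable. A sketch (illustrative, not from the slides; Rᵢ denotes the number of quantization intervals of input dimension i):

```python
import math

def cmac_basis_count(resolutions, C):
    """Number of basis functions of a multivariate CMAC with C overlays
    and R_i quantization intervals in dimension i:
    M = ceil( (1 / C^(N-1)) * prod_i (R_i + C - 1) )."""
    N = len(resolutions)
    prod = 1
    for R in resolutions:
        prod *= (R + C - 1)
    return math.ceil(prod / C ** (N - 1))

# The count still grows exponentially with the input dimension N,
# which motivates the hash-coding discussed later.
```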
CMAC approximation capability
[Figure: C overlays of basis functions]
Consistency equations: f(a) − f(b) = f(c) − f(d)
CMAC modeling capability
CMAC generalization capability
Important parameters:
C generalization parameter
dtrain distance between adjacent training data
Interesting behavior: if C = l·dtrain, the network linearly
interpolates between the training points
CMAC generalization error
[Figures: CMAC generalization error; multidimensional case, without and with regularization]
CMAC generalization error univariate case (max)
[Figure: absolute value of the maximal relative error as a function of C/dtrain (1 to 8)]
Application of networks (based on the capability)
• Regression: function approximation
– modeling of static and dynamic systems, signal
modeling, system identification
– filtering, control, etc.
• Pattern association
– association
• autoassociation (similar input and output)
(dimension reduction, data compression)
• heteroassociation (different input and output)
• Pattern recognition, clustering
– classification
Application of networks (based on the capability)
• Optimization
– optimization
• Data compression, dimension reduction
– principal component analysis (PCA), linear
networks
– nonlinear PCA, non-linear networks
– signal separation, BSS, independent component
analysis (ICA).
Data compression, PCA networks
• Karhunen-Loève transformation
y = Φx,   Φ = [φ₁, φ₂, …, φ_N]ᵀ,   φᵢᵀφⱼ = δᵢⱼ, further ΦᵀΦ = I → Φᵀ = Φ⁻¹

x = Σ_{i=1}^N yᵢ φᵢ,   x̂ = Σ_{i=1}^M yᵢ φᵢ,   M ≤ N

ε² = E{ ‖x − x̂‖² } = E{ ‖Σ_{i=1}^N yᵢ φᵢ − Σ_{i=1}^M yᵢ φᵢ‖² } = Σ_{i=M+1}^N E{ yᵢ² }

ε̂² = ε² − Σ_{i=M+1}^N λᵢ( φᵢᵀφᵢ − 1 ) = Σ_{i=M+1}^N [ φᵢᵀ C_xx φᵢ − λᵢ( φᵢᵀφᵢ − 1 ) ],   C_xx = E{ x xᵀ }

∂ε̂²/∂φᵢ = 2 C_xx φᵢ − 2 λᵢ φᵢ = 0   →   C_xx φᵢ = λᵢ φᵢ

ε² = Σ_{i=M+1}^N φᵢᵀ C_xx φᵢ = Σ_{i=M+1}^N φᵢᵀ λᵢ φᵢ = Σ_{i=M+1}^N λᵢ
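The derivation says the optimal basis vectors are eigenvectors of C_xx, and the truncation error is the sum of the discarded eigenvalues. The leading eigenvector can be found without any library, e.g. by power iteration (a sketch, not the slides' method; plain lists stand in for vectors):

```python
def principal_component(cov, iters=200):
    """Leading eigenvector of a covariance matrix C_xx by power
    iteration: repeatedly apply C_xx and renormalize. This is the
    basis vector retained first by the Karhunen-Loeve transformation;
    discarding component i costs lambda_i in mean squared error."""
    n = len(cov)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return v
```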
Data compression, PCA networks
• Principal component analysis (Karhunen-Loève transformation)
y = Φx
[Figure: rotation of the coordinate system (x₁, x₂) into the principal axes (y₁, y₂)]
Nonlinear data compression
• Non-linear problem (curvilinear component analysis)
[Figure: data on a curved manifold in (x₁, x₂) unfolded onto the single coordinate y₁]
ICA networks
• A linear transformation is sought that restores the
original components from mixed observations
• Many different approaches have been developed,
depending on the definition of independence
(entropy, mutual information, Kullback-Leibler information,
non-Gaussianity)
• The weights can be obtained using a nonlinear network
(during training)
• Nonlinear version of the Oja rule
The task of independent component analysis
References and further readings
Brown, M. - Harris, C.J. and Parks, P "The Interpolation Capability of the Binary CMAC", Neural Networks,
Vol. 6, pp. 429-440, 1993
Brown, M. and Harris, C.J. "Neurofuzzy Adaptive Modeling and Control" Prentice Hall, New York, 1994.
Hassoun, M. H.: "Fundamentals of Artificial Neural Networks", MIT Press, Cambridge, MA. 1995.
Haykin, S.: "Neural Networks. A Comprehensive Foundation" Prentice Hall, N. J.1999.
Hertz, J. - Krogh, A. - Palmer, R. G. "Introduction to the Theory of Neural Computation", Addison-Wesley
Publishing Co. 1991.
Horváth, G. "CMAC: Reconsidering an Old Neural Network" Proc. of the Intelligent Control Systems and
Signal Processing, ICONS 2003, Faro, Portugal. pp. 173-178, 2003.
Horváth, G. "Kernel CMAC with Improved Capability" Proc. of the International Joint Conference on Neural
Networks, IJCNN’2004, Budapest, Hungary. 2004.
Lane, S.H. - Handelman, D.A. and Gelfand, J.J "Theory and Development of Higher-Order CMAC Neural
Networks", IEEE Control Systems, Vol. Apr. pp. 23-30, 1992.
Miller, T.W. III. Glanz, F.H. and Kraft, L.G. "CMAC: An Associative Neural Network Alternative to
Backpropagation" Proceedings of the IEEE, Vol. 78, pp. 1561-1567, 1990
Szabó, T. and Horváth, G. "Improving the Generalization Capability of the Binary CMAC” Proc. of the
International Joint Conference on Neural Networks, IJCNN’2000. Como, Italy, Vol. 3, pp. 85-90, 2000.
Learning
Learning in neural networks
• Learning: parameter estimation
– supervised learning, learning with a teacher
x, y, d;   training set: { xᵢ, dᵢ }, i = 1 … P
– unsupervised learning: x, y (no desired output)
– analytical learning
Supervised learning
• Model parameter estimation: x, y, d
[Figure: the system d = f(x, n) (with noise n) and the neural model y = f_M(x, w) receive the same input x; the criterion function C(d, y) = C(ε) drives the parameter adjustment algorithm]
Supervised learning
• Criterion function
– quadratic criterion function:
C(d, y) = C(ε) = E{ (d − y)ᵀ(d − y) } = E{ Σⱼ (dⱼ − yⱼ)² }
– other criterion functions
• e.g. the ε-insensitive criterion
[Figure: ε-insensitive cost C(ε), zero inside the band [−ε, ε]]
– regularized criterion functions: C (d, y ) = C (ε ) + λC R
adding a penalty (regularization) term
Supervised learning
• Criterion minimization
ŵ = arg min_w C( d, y(w) )
• Analytical solution
Supervised learning
• Error correction rules
– perceptron rule: w(k+1) = w(k) + μ ε(k) x(k)
– gradient methods: w(k+1) = w(k) + μ Q( −∇(k) )
– conjugate gradient: w(k+1) = w(k) + α_k g_k,   gⱼᵀ R g_k = 0 if j ≠ k
Perceptron training
w(k+1) = w(k) + μ ε(k) x(k)
[Figure: perceptron with inputs x₀ = 1, x₁ … x_N, s = wᵀx, y = sgn(s)]
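The rule above updates only on misclassified samples (there ε = d − y = ±2). A minimal training-loop sketch (illustrative, not from the slides; x[0] = 1 serves as the bias input):

```python
def train_perceptron(samples, epochs=100, mu=1.0):
    """Perceptron rule w(k+1) = w(k) + mu * eps(k) * x(k) with
    eps = d - sgn(w^T x); samples are (x, d) pairs, d in {-1, +1}.
    For linearly separable data the loop stops with zero errors."""
    n = len(samples[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        errors = 0
        for x, d in samples:
            s = sum(wi * xi for wi, xi in zip(w, x))
            y = 1.0 if s > 0 else -1.0
            if y != d:
                errors += 1
                w = [wi + mu * (d - y) * xi for wi, xi in zip(w, x)]
        if errors == 0:
            break
    return w
```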
Gradient method
• Analytical solution
– linear-in-the-parameter model: y(k) = wᵀ(k) x(k)
– quadratic criterion function
C(k) = E{ ( d(k) − wᵀ(k) x(k) )² }
     = E{ d²(k) } − 2 E{ d(k) xᵀ(k) } w(k) + wᵀ(k) E{ x(k) xᵀ(k) } w(k)
     = E{ d²(k) } − 2 pᵀ w(k) + wᵀ(k) R w(k)
– Wiener-Hopf equation
w* = R⁻¹ p,   R = E{ x xᵀ },   p = E{ x d }
Gradient method
• Iterative solution
w(k+1) = w(k) + μ( −∇(k) )
– gradient
∇(k) = ∂C(k)/∂w(k) = 2R( w(k) − w* )
– condition of convergence
0 < μ < 1/λ_max,   λ_max: maximal eigenvalue of R
Gradient method
• LMS: iterative solution based on instantaneous error
ε(k) = d(k) − xᵀ(k) w(k),   Ĉ(k) = ε²(k)
– instantaneous gradient
∇̂(k) = ∂Ĉ(k)/∂w(k) = 2ε(k) ∂ε(k)/∂w(k) = −2ε(k) x(k)
– weight updating
w(k+1) = w(k) + μ( −∇̂(k) ) = w(k) + 2με(k) x(k)
– condition of convergence
0 < μ < 1/λ_max
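A sketch of the LMS loop above (illustrative, not from the slides; one update per incoming sample, using the instantaneous gradient only):

```python
def lms(xs, ds, mu=0.05):
    """LMS: w(k+1) = w(k) + 2 mu eps(k) x(k) with the instantaneous
    error eps(k) = d(k) - w^T(k) x(k). Converges in the mean toward
    the Wiener solution w* = R^{-1} p for suitably small mu."""
    w = [0.0] * len(xs[0])
    for x, d in zip(xs, ds):
        eps = d - sum(wi * xi for wi, xi in zip(w, x))
        w = [wi + 2.0 * mu * eps * xi for wi, xi in zip(w, x)]
    return w
```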
Gradient methods
• Example of convergence
[Figure: weight trajectories from w(0) toward w* on the error-surface contours]
a.) steepest descent, small μ   b.) steepest descent, large μ   c.) conjugate gradient
Gradient methods
• Single neuron with nonlinear activation function
ε(k) = d(k) − y(k) = d(k) − sgm(s(k)) = d(k) − sgm( wᵀ(k) x(k) )
w(k+1) = w(k) + 2μ(k) ε(k) sgm′(s(k)) x(k) = w(k) + 2μ(k) δ(k) x(k)
Gradient methods
• Multi-layer network: error backpropagation (BP)
wᵢ⁽ˡ⁾(k+1) = wᵢ⁽ˡ⁾(k) + 2μ ( Σ_{r=1}^{N_{l+1}} δ_r⁽ˡ⁺¹⁾(k) w_{ri}⁽ˡ⁺¹⁾(k) ) sgm′( sᵢ⁽ˡ⁾(k) ) x⁽ˡ⁾(k)
           = wᵢ⁽ˡ⁾(k) + 2μ δᵢ⁽ˡ⁾(k) x⁽ˡ⁾(k)
δᵢ⁽ˡ⁾(k) = ( Σ_{r=1}^{N_{l+1}} δ_r⁽ˡ⁺¹⁾(k) w_{ri}⁽ˡ⁺¹⁾(k) ) sgm′( sᵢ⁽ˡ⁾(k) )
l = layer index
i = processing element index
k = iteration index
MLP training: BP
[Figure: two-layer MLP trained by backpropagation; the forward pass through W⁽¹⁾ and W⁽²⁾ produces y₁ … y_n, the errors εᵢ = dᵢ − yᵢ are propagated back through f′(.) to form δ⁽²⁾ and δ⁽¹⁾, which drive the 2μ-scaled updates of W⁽²⁾ and W⁽¹⁾]
Designing of an MLP
• important questions
– the size of the network (model order: number of
layers, number of hidden units)
– the value of the learning rate, μ
– initial values of the parameters (weights)
– validation, cross validation learning and testing set
selection
– the way of learning, batch or sequential
– stopping criteria
Designing of an MLP
• The size of the network: the number of hidden
units (model order)
– theoretical results: upper limits
• Practical approaches: two different strategies
– from simple to complex
• adding new neurons
– from complex to simple
• pruning
– regularization
– (OBD, OBS, etc)
Designing of an MLP
• Cross validation for model selection
[Figure: training and test error C vs. model complexity; bias (underfitting) on the left, variance (overfitting) on the right]
Designing of an MLP
• Structure selection
[Figure: training error C vs. number of training cycles for different structures (b)-(f)]
Designing of an MLP
• Generalization, overfitting
[Figure: a smooth generalizing fit vs. an overfitted curve passing exactly through the training points]
Designing of an MLP
• Early stopping for avoiding overfitting
[Figure: training error decreases monotonically, while the test error rises again if training is not stopped at the optimum point]
Designing of an MLP
• Regularization
– parametric penalty
C_r(w) = C(w) + λ Σ_{i,j} |wᵢⱼ|
Δwᵢⱼ = μ( −∂C/∂wᵢⱼ ) − μλ sgn(wᵢⱼ)
or restricting the penalty to the small weights:
C_r(w) = C(w) + λ Σ_{|wᵢⱼ| ≤ Θ} |wᵢⱼ|
– nonparametric penalty
C_r(w) = C(w) + λ Φ( f̂(x) ),   where Φ( f̂(x) ) is some measure of smoothness
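One step of the parametric-penalty update can be sketched as follows (illustrative, not from the slides; `grad` stands for the unregularized gradient ∂C/∂w at the current weights):

```python
def l1_regularized_step(w, grad, mu=0.1, lam=0.01):
    """One update for C_r(w) = C(w) + lambda * sum |w_ij|:
    delta w_ij = mu * (-dC/dw_ij) - mu * lambda * sgn(w_ij).
    The constant pull toward zero shrinks small weights, which is
    one route to the pruning mentioned earlier."""
    def sgn(v):
        return (v > 0) - (v < 0)
    return [wi + mu * (-gi) - mu * lam * sgn(wi)
            for wi, gi in zip(w, grad)]
```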
MLP as linear data compressor
(autoassociative network)
• Subspace transformation
[Figure: autoassociative MLP; input x, hidden layer of M neurons gives the compressed output z, output layer of N neurons reproduces x (desired output = input in the learning phase) through weight matrices W and W′]
Nonlinear data compression (autoassociative network)
[Figure: autoassociative network with nonlinear (f) and linear (l) layers; the bottleneck layer gives the compressed output z = (z₁, z₂)]
RBF (Radial Basis Function)
[Figure: RBF network with centres cᵢ and widths σᵢ, basis outputs gᵢ(x) = φᵢ, linear output weights w]
y = Σᵢ wᵢ gᵢ(x) = Σᵢ wᵢ g( ‖x − cᵢ‖ ) = wᵀφ(x),   gᵢ(x) = exp( −‖x − cᵢ‖² / 2σᵢ² )
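A sketch of the forward computation above (illustrative, not from the slides):

```python
import math

def rbf_output(x, centres, sigmas, w):
    """Gaussian RBF network output:
    y = sum_i w_i * exp( -||x - c_i||^2 / (2 sigma_i^2) ).
    Each basis function responds only near its own centre c_i."""
    y = 0.0
    for c, s, wi in zip(centres, sigmas, w):
        d2 = sum((xj - cj) ** 2 for xj, cj in zip(x, c))
        y += wi * math.exp(-d2 / (2.0 * s * s))
    return y
```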
RBF training
• Linear-in-the parameter structure
– analytical solution
– LMS
• Centres (nonlinear-in-the-parameters)
– K-means
– clustering
– unsupervised learning
Designing of an RBF
• Important questions
– the size of the network (number of hidden units)
(model order)
– the value of learning rate, μ
– initial values of parameters (centres, weights)
– validation, learning and testing set selection
– the way of learning, batch or sequential
– stopping criteria
CMAC network
• Network with one trainable layer
[Figure: CMAC; the input x selects the binary association vector a (C = 4 active elements), the trainable weight vector w gives the output y = wᵀa]
CMAC network
• Network with hash-coding
[Figure: CMAC with hash-coding; the sparse association vector a is compressed into the vector z, and the output y = Σᵢ wᵢ zᵢ is formed over the C = 4 active elements of the compressed association vector]
CMAC modeling capability
• Analytical solution: w* = A†d,   A† = Aᵀ( A Aᵀ )⁻¹
• Iterative algorithm (LMS)
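The minimum-norm solution above can be computed without any matrix library by solving the small system (A Aᵀ)z = d and setting w = Aᵀz (a sketch, not from the slides; A has one row per training point and one column per basis function, with 1 where the point activates that basis function):

```python
def cmac_weights(A, d):
    """Minimum-norm solution w* = A^T (A A^T)^{-1} d of A w = d,
    via Gaussian elimination on (A A^T) z = d, then w = A^T z."""
    m = len(A)
    G = [[sum(A[i][k] * A[j][k] for k in range(len(A[0])))
          for j in range(m)] for i in range(m)]
    z = list(d)
    # forward elimination with partial pivoting on [G | z]
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(G[r][col]))
        G[col], G[piv] = G[piv], G[col]
        z[col], z[piv] = z[piv], z[col]
        for r in range(col + 1, m):
            f = G[r][col] / G[col][col]
            for c in range(col, m):
                G[r][c] -= f * G[col][c]
            z[r] -= f * z[col]
    # back substitution
    for col in range(m - 1, -1, -1):
        z[col] = (z[col] - sum(G[col][c] * z[c]
                               for c in range(col + 1, m))) / G[col][col]
    return [sum(A[i][j] * z[i] for i in range(m))
            for j in range(len(A[0]))]
```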
Networks with unsupervised learning
• Self-organizing networks
– Hebbian rule
– Competitive learning
– Main task
• clustering, detection of similarities
(normalized Hebbian + competitive)
• data compression (PCA, KLT) (normalized Hebbian)
Unsupervised learning
• Hebbian learning: Δw = η x y
• Competitive learning (normalized Hebbian rule):
Δw_{i*} = μ( x − w_{i*} ),   where the winner i* satisfies w_{i*}ᵀ x ≥ wᵢᵀ x ∀i
[Figure: single-layer competitive network, inputs x₁ … x_N, winning output y_{i*}]
PCA network
• Oja rule
w̃(k+1) = w(k) + μ x y
w(k+1) = w̃(k+1) / ‖w̃(k+1)‖ = w̃(k+1) ‖w̃(k+1)‖⁻¹
‖w̃(k+1)‖² = ‖w(k)‖² + 2μ wᵀ(k) x(k) y(k) + O(μ²) = ‖w(k)‖² + 2μ y²(k) + O(μ²)
‖w̃(k+1)‖⁻¹ = ( ‖w̃(k+1)‖² )^{−1/2} = 1 − μ y²(k) + O(μ²)
w(k+1) = [ w(k) + μ x(k) y(k) ] [ 1 − μ y²(k) + O(μ²) ] ≅ w(k) + μ y(k)[ x(k) − w(k) y(k) ]
Δw = μ y( x − y w ) = μ( y x − y² w )
[Figure: single linear neuron, y = wᵀx]
It can be proved that w converges to the eigenvector with the largest eigenvalue.
Neural Networks for System Modeling • Gábor Horváth, 2005 Budapest University of Technology and Economics
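The Oja update Δw = μ y (x − y w) can be demonstrated on correlated data (an illustrative sketch, not the lecture's code; the data set is an assumption chosen so the principal axis is known):

```python
import numpy as np

# Oja's rule extracts the first principal component of the input.
rng = np.random.default_rng(1)
t = rng.normal(size=2000)
X = np.column_stack([t, t + 0.1 * rng.normal(size=2000)])  # axis ~ [1, 1]

w = rng.normal(size=2)
mu = 0.01
for x in X:
    y = w @ x
    w += mu * y * (x - y * w)   # Hebbian term minus the normalizing decay

w /= np.linalg.norm(w)
# w now points (up to sign) along the dominant eigenvector of E{x x^T}
```

After one pass, w is aligned with the largest-eigenvalue eigenvector of the sample correlation matrix.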
PCA network
• Oja rule as a maximum problem (gradient search)
[Figure: single linear neuron, y = wᵀx]
f(w) = E{y²} / wᵀw = wᵀRw / wᵀw,    R = E{x xᵀ}
∇f(w) = (2 / wᵀw) [ Rw − (wᵀRw / wᵀw) w ]
For ||w|| = 1:
∇f(w) = 2E{x xᵀ}w − 2E{wᵀx xᵀw}w = 2E{x y} − 2E{y²}w
Instantaneous gradient:  ∇f(w) ≈ 2 x y − 2 y² w = 2 y (x − w y)
w(k+1) = w(k) + μ y(k)[x(k) − w(k)y(k)]
Solution: gradient method with the instantaneous gradient
PCA networks
• GHA network (Sanger network)
[Figure: single-layer linear network with inputs x1 ... xN, outputs
y1 ... yM and weight matrix W]
Δw₁ = μ (y₁ x⁽¹⁾ − y₁² w₁)
x⁽²⁾ = x⁽¹⁾ − (w₁ᵀx⁽¹⁾) w₁ = x⁽¹⁾ − y₁ w₁            (deflation)
Δw₂ = μ (y₂ x⁽²⁾ − y₂² w₂) = μ (y₂ x⁽¹⁾ − y₁y₂ w₁ − y₂² w₂)
Δwᵢ = μ (yᵢ x⁽ⁱ⁾ − yᵢ² wᵢ) = μ (yᵢ x⁽¹⁾ − Σ_{j=1}^{i−1} yᵢ yⱼ wⱼ − yᵢ² wᵢ)
In matrix form:  ΔW = η [ y xᵀ − LT(y yᵀ) W ],  where LT(·) takes the
lower triangular part
Oja rule + Gram-Schmidt orthogonalization
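The matrix form of the GHA update can be sketched directly (an illustrative sketch, not the lecture's code; the axis-aligned data set is an assumption so the principal directions are known in advance):

```python
import numpy as np

# Sanger's GHA: delta W = eta * (y x^T - LT(y y^T) W), LT = lower triangle.
# The rows of W converge (up to sign) to the leading principal directions.
def gha_step(W, x, eta=0.01):
    y = W @ x
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

rng = np.random.default_rng(2)
X = rng.normal(size=(4000, 3)) * np.array([2.0, 1.0, 0.5])  # known PCs
W = 0.1 * rng.normal(size=(2, 3))
for _ in range(2):              # two passes over the data
    for x in X:
        gha_step(W, x)
# row 0 of W approximates the first principal axis, row 1 the second
```

The lower-triangular term implements the Gram-Schmidt-like deflation: each row only "sees" the data with the preceding components removed.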
PCA networks
• Oja rule for multiple outputs (subspace problem)
[Figure: single-layer linear network with inputs x1 ... xN, outputs
y1 ... yM and weight matrix W]
ΔW = η [ y xᵀ − (y yᵀ) W ]
PCA networks
[Figure: two multi-output architectures — a purely feed-forward network
with weight matrix W, and a network with additional lateral connections Q
between the outputs (APEX-type structure).]
ICA networks
• A linear transformation is sought that restores the original
components from the mixed observations
• Many different approaches have been developed,
depending on how independence is defined
(entropy, mutual information, Kullback-Leibler information,
non-Gaussianity)
• The weights can be obtained using a nonlinear network
(during training)
• Nonlinear version of the Oja rule
ICA training rule (one of the possible methods)
x(k) = A s(k) + n(k) = Σ_{i=1}^{M} aᵢ sᵢ(k) + n(k),    y(k) = B x(k) = ŝ(k)
Kurtosis-based contrast function:
C(y) = Σ_{i=1}^{M} cum(yᵢ⁴) = Σ_{i=1}^{M} E{yᵢ⁴} − 3E²{yᵢ²}
W(k+1) = W(k) + η(k) [ v(k) − W(k) g(y(k)) ] g(yᵀ(k)),    g(t) = tanh(t)
Networks with unsupervised learning
• clustering
• detection of similarities
• data compression (PCA, KLT)
• Independent component analysis (ICA)
Networks with unsupervised learning
[Figure: single-layer network with inputs xᵢ, outputs yⱼ and weight
vectors wⱼ.]
Independent component analysis
x(k) = A s(k) + n(k) = Σ_{i=1}^{M} aᵢ sᵢ(k) + n(k)
y(k) = B x(k) = ŝ(k)
[Figure: ICA network architecture — the observed mixed signal x
(dimension L) is first whitened by the matrix V; the separating matrix W
then produces the restored signal y: input mixed signal → whitened
signal → restored signal.]
References and further readings
Diamantaras, K. I .- Kung, S. Y. "Principal Component Neural Networks Theory and Applications",
John Wiley and Sons, New York. 1996.
Girolami, M. - Fyfe, C. "Higher Order Cumulant Maximisation Using Nonlinear Hebbian and
Anti-Hebbian Learning for Adaptive Blind Separation of Source Signals", Proc. of the IEEE/IEE
International Workshop on Signal and Image Processing, IWSIP-96, Advances in Computational
Intelligence, Elsevier publishing, pp. 141-144, 1996.
Hassoun, M. H.: "Fundamentals of Artificial Neural Networks", MIT Press, Cambridge, MA. 1995.
Haykin, S.: "Neural Networks. A Comprehensive Foundation" Prentice Hall, N. J.1999.
Hertz, J. - Krogh, A. - Palmer, R. G. "Introduction to the Theory of Neural Computation", Addison-
Wesley Publishing Co. 1991.
Karhunen, J. - Joutsensalo, J. "Representation and Separation of Signals using Nonlinear PCA Type
Learning", Neural Networks, Vol. 7. No. 1. 1994. pp. 113-127.
Karhunen, J. - Hyvärinen, A. - Vigario, R. - Hurri, J. - and Oja, E. "Applications of Neural Blind Source
Separation to Signal and Image Processing", Proceedings of the IEEE 1997 International Conference
on Acoustics, Speech, and Signal Processing (ICASSP'97), April 21 - 24. 1997. Munich, Germany,
pp. 131-134.
Kung, S. Y. - Diamantaras, C.I. "A Neural Network Learning Algorithm for Adaptive Principal
Component Extraction (APEX)", International Conference on Acoustics, Speech and Signal
Processing, Vol. 2. 1990. pp. 861-864.
Narendra, K. S. - Parthasarathy, K. "Gradient Methods for the Optimization of Dynamical Systems
Containing Neural Networks", IEEE Trans. on Neural Networks, Vol. 2. No. 2. pp. 252-262. 1991.
Sanger, T. "Optimal Unsupervised Learning in a Single-layer Linear Feedforward Neural Network",
Neural Networks, Vol. 2. No. 6. 1989. pp. 459-473.
Widrow, B. - Stearns, S. D. "Adaptive Signal Processing", Prentice-Hall, Englewood Cliffs, N. J. 1985.
Dynamic neural architectures
Dynamic neural structures
• Feed-forward networks
– NFIR: FIR-MLP, FIR-RBF,etc.
– NARX
• Feedback networks
– RTRL
– BPTT
• Main differences from static networks
– time dependence (for all dynamic networks)
– feedback (for feedback networks: NOE, NARMAX,
etc.)
– training: not for single sample pairs, but for sample
sequences (sequential networks)
Feed-forward architecture
• NFIR: FIR-MLP
(winner of the Santa Fe competition for laser signal
prediction)
[Figure: an MLP in which every scalar weight of the l-th layer is replaced
by an FIR filter; the filtered outputs of the previous layer,
y_j^(l−1) = x_j^(l), are summed into s_i^(l) and passed through the
nonlinearity f(.).]
Feed-forward architecture
FIR-MLP training: temporal backpropagation
ε² = Σ_{k=1}^{K} ε²(k)
∂ε²/∂w_ij^(l) = Σ_k ∂ε²(k)/∂w_ij^(l) = Σ_k [∂ε²/∂s_i^(l)(k)] [∂s_i^(l)(k)/∂w_ij^(l)]
Note:  ∂ε²(k)/∂w_ij^(l) ≠ [∂ε²/∂s_i^(l)(k)] [∂s_i^(l)(k)/∂w_ij^(l)]
– output layer
w_ij^(L)(k+1) = w_ij^(L)(k) + 2μ ε_i f′(s_i^(L)(k)) x_j^(L)(k)
– hidden layer
w_ij(k+1) = w_ij(k) + 2μ δ_i(k−M) x_j(k−M)
δ_i(k−M) = f′(s_i(k)) Σ_m Δ_mᵀ(k−M) w_mi
Δ_m^(l)(k) = [ δ_m^(l)(k), δ_m^(l)(k+1), ..., δ_m^(l)(k + M_m^(l)) ]
Recurrent network
• Architecture
[Figure: a static feed-forward network with weight matrix W; the outputs
y₁(k+1) ... y_M(k+1) are fed back through unit delays z⁻¹ and appended to
the external inputs x₁(k) ... x_N(k).]
Recurrent network
• Training: real-time recurrent learning (RTRL)
ε²(k) = Σ_{l∈C} ε_l²(k);    Δw_ij(k) = −μ ∂ε²(k)/∂w_ij(k)
∂ε²(k)/∂w_ij(k) = 2 Σ_{l∈C} ε_l(k) ∂ε_l(k)/∂w_ij(k) = −2 Σ_{l∈C} ε_l(k) ∂y_l(k)/∂w_ij(k)
∂y_l(k+1)/∂w_ij(k) = f′(s_l(k)) [ Σ_{r∈B} w_lr(k) ∂y_r(k)/∂w_ij(k) + δ_li u_j(k) ]
w_ij(k+1) = w_ij(k) + 2μ Σ_{l∈C} ε_l(k) f′(s_l(k)) [ Σ_{r∈B} w_lr(k) ∂y_r(k)/∂w_ij(k) + δ_li u_j(k) ]
u_i(k) = x_i(k) if i ∈ A (external inputs),   y_i(k) if i ∈ B (fed-back outputs)
ε_i²(k) = (d_i(k) − y_i(k))² if i ∈ C(k),   0 otherwise
Recurrent network
• Training: backpropagation through time (BPTT)
unfolding in time
[Figure: the recurrent processing element PE1 with feedback weight w11 is
unfolded into a chain of copies PE1¹, PE1², PE1³, PE1⁴ receiving x(1),
x(2), x(3), ...; the same weight w11 appears in every copy.]
Dynamic neural structures
• Combined linear dynamic and non-linear
dynamic architectures
feed-forward architectures
∂ε(k)/∂w_ij = ∂y(k)/∂w_ij = H(z) ∂v/∂w_ij
∂ε(k)/∂w_ij^(2) = ∂y(k)/∂w_ij^(2) = Σ_l [∂y(k)/∂v_l] [∂v_l/∂w_ij^(2)]
Dynamic backpropagation
Dynamic neural structures
a.) ∂ε(k)/∂w_ij = ∂y(k)/∂w_ij,  where (because of the feedback)
∂y(k)/∂w_ij = [∂N(v)/∂v] H(z) [∂y(k)/∂w_ij] + ∂N(v)/∂w_ij
Dynamic system modeling
• Example: modeling of a discrete-time system
y(k+1) = 0.3 y(k) + 0.6 y(k−1) + f[u(k)]
– where
f(u) = u³ + 0.3u² − 0.4u
– Training signal: uniformly distributed random input with
two different amplitudes
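Generating training data for the example plant can be sketched as follows (an illustrative sketch, not the lecture's code; the amplitude values are assumptions for the "small" and "large" excitations):

```python
import numpy as np

# Plant: y(k+1) = 0.3 y(k) + 0.6 y(k-1) + f(u(k)),
# with f(u) = u^3 + 0.3 u^2 - 0.4 u, started from rest.
def f(u):
    return u**3 + 0.3 * u**2 - 0.4 * u

def simulate_plant(u):
    y = np.zeros(len(u) + 1)
    for k in range(1, len(u)):
        y[k + 1] = 0.3 * y[k] + 0.6 * y[k - 1] + f(u[k])
    return y

rng = np.random.default_rng(0)
u_small = rng.uniform(-1.0, 1.0, size=2000)   # small-amplitude excitation
u_large = rng.uniform(-2.0, 2.0, size=2000)   # large-amplitude excitation
y_small = simulate_plant(u_small)
y_large = simulate_plant(u_large)
# a model trained only on the small excitation sees only a small part of
# the nonlinearity f, and extrapolates poorly on the large excitation
```

The linear part is stable (poles at about 0.94 and −0.64), so the simulation is well behaved for both amplitudes.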
Dynamic system modeling
• The role of excitation: small excitation signal
[Figure: model output, plant output and error over 2000 time steps.]
Dynamic modeling
• The role of excitation: large excitation signal
[Figure: model output, plant output and error over 2000 time steps.]
References and further readings
Hassoun, M. H.: "Fundamentals of Artificial Neural Networks", MIT Press, Cambridge, MA. 1995.
Haykin, S.: "Neural Networks. A Comprehensive Foundation" Prentice Hall, N. J.1999.
Hertz, J. - Krogh, A. - Palmer, R. G. "Introduction to the Theory of Neural Computation", Addison-
Wesley Publishing Co. 1991.
Widrow, B. - Stearns, S. D. "Adaptive Signal Processing", Prentice-Hall, Englewood Cliffs, N. J. 1985.
Narendra, K. S. - Parthasarathy, K. "Identification and Control of Dynamical Systems Using Neural
Networks," IEEE Trans. Neural Networks, Vol. 1. 1990. pp. 4-27.
Narendra, K. S. - Parthasarathy, K. "Identification and Control of Dynamic Systems Using Neural
Networks", IEEE Trans. on Neural Networks, Vol. 2. 1991. pp. 252-262.
Wan, E. A. "Temporal Backpropagation for FIR Neural Networks", Proc. of the 1990 IJCNN, Vol. I. pp.
575-580.
Weigend, A. S. - Gershenfeld, N. A. "Forecasting the Future and Understanding the Past" Vol.15.
Santa Fe Institute Studies in the Science of Complexity, Reading, MA. Addison-Wesley, 1994.
Williams, R. J. - Zipser, D. "A Learning Algorithm for Continually Running Fully Recurrent Neural
Networks", Neural Computation, Vol. 1. 1989. pp. 270-280.
Support vector machines
Outline
• Why we need a new approach
• Support vector machines
– SVM for classification
– SVM for regression
– Other kernel machines
• Statistical learning theory
– Validation (measure of quality: risk)
– Vapnik-Chervonenkis dimension
– Generalization
Support vector machines
• A new approach:
gives answers to questions left open by the
classical approach
– the size of the network
– the generalization capability
Support vector machines
• Classification
[Figure: two linearly separable classes, the optimal separating
hyperplane and its margin.]
wᵀx + b = 0   (optimal hyperplane)
wᵀxᵢ + b ≥ +1, if xᵢ ∈ X₁
and
wᵀxᵢ + b ≤ −1, if xᵢ ∈ X₂
(wᵀxᵢ + b) yᵢ ≥ 1,  ∀i
Support vector machines
• Geometric interpretation
[Figure: distance d(x) of a point x from the hyperplane wᵀx + b = 0]
The formulation of optimality:
d(w, b, x) = |wᵀx + b| / ||w||
ρ(w, b) = min_{xᵢ; yᵢ=1} d(w, b, xᵢ) + min_{xᵢ; yᵢ=−1} d(w, b, xᵢ)
        = min_{xᵢ; yᵢ=1} |wᵀxᵢ + b| / ||w|| + min_{xᵢ; yᵢ=−1} |wᵀxᵢ + b| / ||w|| = 2 / ||w||
Support vector machines
• Criterion function (primal problem)
min ||w||² → max margin
Φ(w) = ½ ||w||²   with the conditions  (wᵀxᵢ + b) yᵢ ≥ 1, ∀i
a constrained optimization problem
(KKT conditions, saddle point)
J(w, b, α) = ½ ||w||² − Σ_{i=1}^{P} αᵢ {[wᵀxᵢ + b] yᵢ − 1},    max_α min_{w,b} J(w, b, α)
conditions
∂J/∂w = w − Σ_{i=1}^{P} αᵢ xᵢ yᵢ = 0   →   w = Σ_{i=1}^{P} αᵢ xᵢ yᵢ
∂J/∂b = Σ_{i=1}^{P} αᵢ yᵢ = 0          →   Σ_{i=1}^{P} αᵢ yᵢ = 0
Support vector machines
• Lagrange function (dual problem)
max_α W(α) = max_α { Σ_{i=1}^{P} αᵢ − ½ Σ_{i=1}^{P} Σ_{j=1}^{P} αᵢ αⱼ yᵢ yⱼ (xᵢᵀxⱼ) }
with  Σ_{i=1}^{P} αᵢ yᵢ = 0,   αᵢ ≥ 0 for all i
support vectors:  xᵢ with αᵢ > 0        optimal hyperplane:  w* = Σ_{i=1}^{P} αᵢ yᵢ xᵢ
output
y(x) = w*ᵀx + b = Σ_{i=1}^{P} αᵢ yᵢ xᵢᵀx + b
Support vector machines
• Non-separable case: soft margin with slack variables ξᵢ
[Figure: separating hyperplane and margin; samples violating the margin
have ξᵢ > 0.]
yᵢ [wᵀxᵢ + b] ≥ 1 − ξᵢ,   i = 1, ..., P
0 ≤ αᵢ ≤ C
support vectors:  xᵢ with αᵢ > 0        optimal hyperplane:  w* = Σ_{i=1}^{P} αᵢ yᵢ xᵢ
Support vector machines
• Nonlinear separation, feature space
– separating hypersurface (hyperplane in the φ space)
wᵀφ(x) + b = 0,    Σ_{j=0}^{M} wⱼ φⱼ(x) = 0
– decision surface
Σ_{i=1}^{P} αᵢ yᵢ K(xᵢ, x) = Σ_{i=1}^{P} ( αᵢ yᵢ Σ_{j=0}^{M} φⱼ(xᵢ) φⱼ(x) ) = 0
– kernel function (Mercer conditions)
K(xᵢ, xⱼ) = φᵀ(xᵢ) φ(xⱼ)
– criterion function
W(α) = Σ_{i=1}^{P} αᵢ − ½ Σ_{i=1}^{P} Σ_{j=1}^{P} αᵢ αⱼ yᵢ yⱼ K(xᵢ, xⱼ)
Feature space
[Figure: the nonlinear mapping φ takes the input (x) space into a
higher-dimensional feature (φ) space, where the separating curve becomes
a separating plane.]
y = Σ_{j=0}^{M} wⱼ φⱼ(x) = wᵀφ(x)
Kernel space
• Kernel trick
the feature representation (nonlinear transformation) is not used
explicitly; only kernel function values (scalar values) are used
K(xᵢ, xⱼ) = φᵀ(xᵢ) φ(xⱼ)
y(x) = w*ᵀφ(x) + b = Σ_{i=1}^{P} αᵢ yᵢ φᵀ(xᵢ) φ(x) + b = Σ_{i=1}^{P} αᵢ yᵢ K(xᵢ, x) + b
– Polynomial
K(x, xᵢ) = (xᵀxᵢ + 1)^d,   d = 1, ...
– RBF
K(x, xᵢ) = exp( −||x − xᵢ||² / 2σ² )
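The two kernels above can be sketched and, for the polynomial case, checked against an explicit feature map (an illustrative sketch, not the lecture's code):

```python
import numpy as np

# The two kernels quoted above.
def poly_kernel(x, xi, d=2):
    return (x @ xi + 1.0) ** d

def rbf_kernel(x, xi, sigma=1.0):
    return np.exp(-np.sum((x - xi) ** 2) / (2.0 * sigma ** 2))

# For d = 2 and 1-D inputs, (x*xi + 1)^2 equals the inner product of the
# explicit feature map phi(x) = [1, sqrt(2) x, x^2].
def phi(x):
    return np.array([1.0, np.sqrt(2.0) * x, x * x])

x, xi = 0.5, -1.5
print(np.isclose(poly_kernel(np.atleast_1d(x), np.atleast_1d(xi)),
                 phi(x) @ phi(xi)))   # True: kernel = phi(x)^T phi(xi)
```

This is exactly the kernel trick: the scalar K(x, xᵢ) is computed without ever forming the feature vector φ(x).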
Support vector machines
– kernel function
[Figure: example kernel function.]
SVR (regression)
ε-insensitive loss (criterion) function
[Figure: the loss C(ε) is zero inside the ±ε band and grows linearly
outside it.]
Cε(y, f(x, α)) = |y − f(x, α)|ε = { 0                   if |y − f(x, α)| ≤ ε
                                  { |y − f(x, α)| − ε   otherwise
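The loss above can be written in one line (an illustrative sketch, not the lecture's code):

```python
import numpy as np

# eps-insensitive loss: C_eps(y, f) = max(0, |y - f| - eps)
def eps_insensitive(y, f, eps=0.1):
    return np.maximum(0.0, np.abs(y - f) - eps)

print(eps_insensitive(1.0, 1.05))   # 0.0  (inside the tolerance band)
print(eps_insensitive(1.0, 1.30))   # ~0.2 (linear penalty outside the band)
```

Errors smaller than ε cost nothing, which is what makes the SVR solution sparse: samples inside the ε-tube do not become support vectors.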
SVR (regression)
f(x) = Σ_{j=0}^{M} wⱼ φⱼ(x)
Minimize:
Φ(w, ξ, ξ′) = ½ wᵀw + C ( Σ_{i=1}^{P} (ξᵢ + ξ′ᵢ) )
Constraints:
yᵢ − wᵀφ(xᵢ) ≤ ε + ξᵢ
wᵀφ(xᵢ) − yᵢ ≤ ε + ξ′ᵢ
ξᵢ ≥ 0,  ξ′ᵢ ≥ 0,    i = 1, 2, ..., P
SVR (regression)
Lagrange function
J(w, ξ, ξ′, α, α′, γ, γ′) = C Σ_{i=1}^{P} (ξᵢ + ξ′ᵢ) + ½ wᵀw
    − Σ_{i=1}^{P} αᵢ [ wᵀφ(xᵢ) − yᵢ + ε + ξᵢ ]
    − Σ_{i=1}^{P} α′ᵢ [ yᵢ − wᵀφ(xᵢ) + ε + ξ′ᵢ ] − Σ_{i=1}^{P} (γᵢ ξᵢ + γ′ᵢ ξ′ᵢ)
γᵢ = C − αᵢ,    γ′ᵢ = C − α′ᵢ
SVR (regression)
• Dual problem
W(α, α′) = Σ_{i=1}^{P} yᵢ (αᵢ − α′ᵢ) − ε Σ_{i=1}^{P} (αᵢ + α′ᵢ)
           − ½ Σ_{i=1}^{P} Σ_{j=1}^{P} (αᵢ − α′ᵢ)(αⱼ − α′ⱼ) K(xᵢ, xⱼ)
constraints:  Σ_{i=1}^{P} (αᵢ − α′ᵢ) = 0,   0 ≤ αᵢ ≤ C,   0 ≤ α′ᵢ ≤ C
support vectors:  xᵢ with αᵢ ≠ α′ᵢ
solution
w* = Σ_{i=1}^{P} (αᵢ − α′ᵢ) φ(xᵢ)
y(x) = w*ᵀφ(x) = Σ_{i=1}^{P} (αᵢ − α′ᵢ) φᵀ(xᵢ) φ(x) = Σ_{i=1}^{P} (αᵢ − α′ᵢ) K(xᵢ, x)
SVR (regression example)
[Figure: regression fit with ε = 0.]
SVR (regression example)
[Figure: regression fit with ε = 0.1.]
SVR (regression)
[Figure: regression fit with ε = 0.1.]
Support vector machines
• Main advantages
– automatic model complexity (network size)
– relevant training data point selection
– allows tolerance (ε)
– high-dimensional feature space representation is not
used directly (kernel trick)
– upper limit of the generalization error (see soon)
• Main difficulties
– quadratic programming is needed to solve the dual problem
– hyperparameter (C, ε, σ) selection
– batch processing (there are on-line versions too)
SVM versions
• Classical Vapnik's SVM (drawbacks)
• LS-SVM
– classification:  Φ(w, e) = ½ wᵀw + C Σ_{i=1}^{P} eᵢ²
with equality constraints  yᵢ [wᵀxᵢ + b] = 1 − eᵢ,   i = 1, ..., P
– regression:  Φ(w, e) = ½ wᵀw + C Σ_{i=1}^{P} eᵢ²
with equality constraints  yᵢ = wᵀφ(xᵢ) + b + eᵢ,   i = 1, ..., P
• Ridge regression
similar to LS-SVM
LS-SVM
Lagrange function
L(w, b, e; α) = ½ wᵀw + ½ γ Σ_{k=1}^{P} e_k² − Σ_{k=1}^{P} α_k { wᵀφ(x_k) + b + e_k − y_k }
The conditions for optimality
∂L/∂w = 0    →   w = Σ_{k=1}^{P} α_k φ(x_k)
∂L/∂b = 0    →   Σ_{k=1}^{P} α_k = 0
∂L/∂e_k = 0  →   α_k = γ e_k,   k = 1, ..., P
∂L/∂α_k = 0  →   wᵀφ(x_k) + b + e_k − y_k = 0,   k = 1, ..., P
LS-SVM
Linear equation set
Regression:
[ 0   1ᵀ       ] [ b ]   [ 0 ]
[ 1   Ω + γ⁻¹I ] [ α ] = [ y ]
Classification:
[ 0   yᵀ       ] [ b ]   [ 0 ]
[ y   Ω + γ⁻¹I ] [ α ] = [ 1 ]
where
Ω_ij = K(xᵢ, xⱼ) = φᵀ(xᵢ)φ(xⱼ)  (regression),   Ω_ij = yᵢ yⱼ K(xᵢ, xⱼ)  (classification)
the response of the network is computed from the solution (b, α)
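The regression case of the linear equation set above can be solved directly (an illustrative sketch, not the lecture's code; the RBF width σ and the regularization γ are assumed hyperparameter values):

```python
import numpy as np

# LS-SVM regression: solve [[0, 1^T], [1, Omega + I/gamma]] [b; alpha] = [0; y]
# with Omega_ij = K(x_i, x_j), here an RBF kernel on scalar inputs.
def rbf(a, b, sigma=0.5):
    return np.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))

def lssvm_fit(x, y, gamma=1000.0, sigma=0.5):
    P = len(x)
    Omega = rbf(x[:, None], x[None, :], sigma)
    A = np.zeros((P + 1, P + 1))
    A[0, 1:] = 1.0                         # the row  [0, 1^T]
    A[1:, 0] = 1.0                         # the column [1, ...]
    A[1:, 1:] = Omega + np.eye(P) / gamma  # Omega + gamma^-1 I
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda t: rbf(t[:, None], x[None, :], sigma) @ alpha + b

x = np.linspace(-3.0, 3.0, 40)
y = np.sinc(x)
predict = lssvm_fit(x, y)
err = np.max(np.abs(predict(x) - y))   # small training-set fit error
```

Note that only one linear equation set is solved, with no quadratic programming — the main computational advantage of LS-SVM listed below.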
Main features of LS-SVM and ridge regression
• Benefits
– Easy to solve (no quadratic programming , only a
linear equation set)
– On-line, adaptive version (important in system
identification)
• Drawbacks
– Not sparse solution: all training points are used
(there are no "support vectors")
– No "tolerance parameter" (ε)
– No proved upper limit of the generalization error
– Large kernel matrix if many training points are
available
Improved LS Kernel machines
Kernel CMAC (an example)
• Goal
– General goal:
to show that additional constraints can be used in the
framework of LS-SVM
here: the additional constraint is a weight-smoothing
term
– Special goal:
to show that the kernel approach can be used for
improving the modelling capability of the CMAC
General goal
• Introducing new constraints
• General LS-SVM problem
Criterion function: two terms
weight minimization term + error term
Lagrange function
criterion function + Lagrange term
Extension
adding new constraint to the criterion function
Extended Lagrange function
new criterion function (with the new constraint) +
Lagrange term
Special goal: improving the capability of the
CMAC
• Difficulties with the multivariate CMAC:
– too many basis functions (too large weight memory)
– poor modelling and generalization capability
• Improved generalization: regularization
• Improved modelling capability:
– more basis functions:
difficulties with the implementation →
kernel trick, kernel CMAC
[Figure: approximation example without and with regularization.]
Kernel CMAC
• Classical Albus CMAC: analytical solution
y_dk = wᵀa_k,  k = 1, ..., P;    y_d = Aw
w* = A†y_d,    A† = Aᵀ(AAᵀ)⁻¹
y(x) = aᵀ(x)w* = aᵀ(x)Aᵀ(AAᵀ)⁻¹y_d
• Kernel version
criterion function (LS):  min_w J(w, e) = ½ wᵀw + ½ γ Σ_{k=1}^{P} e_k²
constraint:  y_dk = wᵀa_k + e_k
Lagrangian:  L(w, e, α) = J(w, e) − Σ_{k=1}^{P} α_k ( wᵀa_k + e_k − y_dk )
Kernel CMAC (ridge regression)
[ K + γ⁻¹I ] α = y_d,    K = AAᵀ
y(x) = aᵀ(x)w = aᵀ(x) Σ_{k=1}^{P} α_k a_k = Σ_{k=1}^{P} α_k K(x, x_k) = Kᵀ(x)α
Kernel CMAC with regularization
Extended criterion function:
min_w J(w, e) = ½ wᵀw + ½ γ Σ_{k=1}^{P} e_k² + ½ λ Σ_{k=1}^{P} Σ_i ( y_dk/C − w_k(i) )²
(the weight-smoothing term forces each of the C weights selected for the
k-th sample toward y_dk/C)
Lagrange function: the extended criterion function plus the Lagrange term
− Σ_{k=1}^{P} α_k ( wᵀa_k + e_k − y_dk );  expanding the smoothing term
yields additional terms containing diag(a_k).
Output:
y(x) = aᵀ(x)(I + λD)⁻¹Aᵀ[ α + (λ/C) y_d ],    where D = Σ_{k=1}^{P} diag(a_k)
Kernel CMAC with regularization
• Kernel function for the two-dimensional kernel CMAC
[Figure: the induced two-dimensional kernel function.]
Regularized kernel CMAC (example)
• 2D sinc
[Figure: 2D sinc approximation without and with regularization.]
References and further readings
Haykin, S.: "Neural Networks. A Comprehensive Foundation" Prentice Hall, N. J.1999.
Saunders, C.- Gammerman, A. and Vovk, V. "Ridge Regression Learning Algorithm in Dual
Variables. Machine Learning", Proc. of the Fifteenth International Conference on Machine
Learning, pp. 515-521, 1998.
Schölkopf, B. and Smola, A. "Learning with Kernels. Support Vector Machines, Regularization,
Optimization and Beyond" MIT Press, Cambridge, MA, 2002.
Suykens, J.A.K., Van Gestel, T., De Brabanter, J., De Moor, B. and Vandewalle, J. "Least
Squares Support Vector Machines", World Scientific, Singapore, 2002.
Vapnik, V. "Statistical Learning Theory", Wiley, New York, 1998.
Horváth, G. "CMAC: Reconsidering an Old Neural Network" Proc. of the Intelligent Control
Systems and Signal Processing, ICONS 2003, Faro, Portugal. pp. 173-178, 2003.
Horváth, G. "Kernel CMAC with Improved Capability" Proc. of the International Joint Conference
on Neural Networks, IJCNN’2004, Budapest, Hungary. 2004.
Lane, S.H. - Handelman, D.A. and Gelfand, J.J. "Theory and Development of Higher-Order
CMAC Neural Networks", IEEE Control Systems, Apr. pp. 23-30, 1992.
Szabó, T. and Horváth, G. "Improving the Generalization Capability of the Binary CMAC", Proc.
of the International Joint Conference on Neural Networks, IJCNN'2000. Como, Italy, Vol. 3,
pp. 85-90, 2000.
Statistical learning theory
Statistical learning theory
• Empirical risk
R_emp(w) = (1/P) Σ_{l=1}^{P} [ y_l − f(x_l, w) ]²
R_emp(w*_P): the optimal value, obtained by
minimizing the empirical risk
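The empirical risk above can be computed directly (an illustrative sketch, not the lecture's code; the scalar linear model is an assumption for clarity):

```python
import numpy as np

# Empirical risk R_emp(w) = (1/P) * sum_l [y_l - f(x_l, w)]^2
# for a simple scalar linear model f(x, w) = w * x.
def empirical_risk(w, x, y):
    return np.mean((y - w * x) ** 2)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])
# minimizing R_emp over w gives the least-squares estimate w*_P
w_star = (x @ y) / (x @ x)
print(empirical_risk(w_star, x, y) <= empirical_risk(1.5, x, y))  # True
```

By construction, no other value of w achieves a smaller empirical risk on this sample, which is exactly the ERM choice w*_P.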
Statistical learning theory: ERM principle
• Asymptotic consistency of the empirical risk
R_emp(w*_P) → R(w⁰)  when  P → ∞,    where  min_w R(w) = R(w⁰)
[Figure: R(w*_P) and R_emp(w*_P) converge to R(w⁰) as the number of
samples P grows.]
Statistical learning theory
• Condition of consistency of the ERM principle
Necessary and sufficient condition: finite VC dimension
Also: this is a sufficient condition of fast convergence
• VC (Vapnik-Chervonenkis) dimension:
A set of functions has VC dimension h if there exist h
samples that can be shattered (separated into two classes
in all 2^h possible ways) by this set of functions, but there
do not exist h+1 samples that can be shattered by the same
set of functions.
Model complexity, VC dimension
• VC dimension of a set of indicator functions
– definition
The VC dimension is the maximum number of samples for
which all possible binary labellings can be induced by the
set of functions
– illustration
[Figure: illustration of shattering sample points with the given
function set.]
VC dimension
• Based on the VC dimension, upper bounds of the
risk can be obtained
• Calculating the VC dimension
– general case: rather difficult
e.g. for an MLP the VC dimension can be infinite
– special cases: e.g. linear function set
• VC dimension of a set of linear functions
(linear separating task)
h = N + 1   (N: input space dimension)
An important statement: it can be proved that
the VC dimension can be less than N + 1
Generalization error
• Bound on generalization
– Classification: with probability of at least 1−η (confidence
level; η is a given value within the additional term) the
guaranteed risk bounds the generalization error:
generalization error ≤ training error + confidence term
depending on the VC dimension
Constructing a learning algorithm
SVM
• Support vector machines are learning
machines that minimize the length of the weight
vector
References and further readings
Haykin, S.: "Neural Networks. A Comprehensive Foundation" Prentice Hall, N.
J.1999.
Vapnik, V. "Statistical Learning Theory", Wiley, New York, 1998.
Cherkassky, V. - Mulier, F. "Learning from Data: Concepts, Theory, and Methods",
John Wiley and Sons, 1998.
Modular network architectures
Modular solution
• A set of networks: competition/cooperation
– all networks solve the same problem (competition/cooperation)
– the whole problem is decomposed: the different networks solve
different parts of the whole problem (cooperation)
• Ensemble of networks
– linear combination of networks
• Mixture of experts
– using the same paradigm (e.g. neural networks)
– using different paradigms (e.g. neural networks + symbolic
systems, neural networks + fuzzy systems)
Cooperative networks
Ensemble of cooperating networks
(classification/regression)
• The motivation
– Heuristic explanation
• Different experts together can solve a problem better
• Complementary knowledge
– Mathematical justification
• Accurate and diverse modules
Linear combination of networks
[Figure: M networks NN1 ... NNM receive the common input x; their outputs
y₁ ... y_M, together with a bias term y₀ = 1, are combined linearly with
coefficients α₀ ... α_M.]
y(x, α) = Σ_{j=0}^{M} αⱼ yⱼ(x)
Ensemble of networks
• Mathematical justification
– Ensemble output:   y(x, α) = Σ_{j=0}^{M} αⱼ yⱼ(x)
– Ambiguity (diversity):   aⱼ(x) = [ yⱼ(x) − y(x, α) ]²
– Constraint:   Σ_{j=1}^{M} αⱼ = 1
Ensemble of networks
• Mathematical justification (cont'd)
– Weighted error:   ε(x, α) = Σ_{j=0}^{M} αⱼ εⱼ(x)
– Weighted diversity:   a(x, α) = Σ_{j=0}^{M} αⱼ aⱼ(x)
E = Ē − Ā
(the ensemble error equals the weighted average error of the members
minus the weighted ambiguity)
Solution: an ensemble of accurate and diverse networks
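The decomposition E = Ē − Ā can be verified numerically at a single input point (an illustrative sketch, not the lecture's code; the member outputs are randomly generated assumptions):

```python
import numpy as np

# Ambiguity decomposition for an ensemble y = sum_j alpha_j y_j,
# with sum_j alpha_j = 1 (Krogh & Vedelsby style check).
rng = np.random.default_rng(3)
d = 1.0                                    # target value at some input x
y_members = rng.normal(1.0, 0.3, size=5)   # member outputs at x
alpha = np.full(5, 0.2)                    # uniform weights, sum to 1

y_ens = alpha @ y_members
E = (d - y_ens) ** 2                           # ensemble error
E_bar = alpha @ (d - y_members) ** 2           # weighted member error
A_bar = alpha @ (y_members - y_ens) ** 2       # weighted ambiguity
print(np.isclose(E, E_bar - A_bar))            # True: E = E_bar - A_bar
```

Since Ā ≥ 0, the ensemble is never worse than the weighted average of its members; large diversity (ambiguity) directly reduces the ensemble error.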
Ensemble of networks
• How to get accurate and diverse networks
– different structures: more than one network structure (e.g.
MLP, RBF, CCN, etc.)
– different size, different complexity networks (number of
hidden units, number of layers, nonlinear function, etc.)
– different learning strategies (BP, CG, random search, etc.)
batch learning, sequential learning
– different training algorithms, sample order, learning samples
– different training parameters
– different initial parameter values
– different stopping criteria
Linear combination of networks
• Computation of optimal (fixed) coefficients
– α_k = 1/M, k = 1 ... M   →   simple average
– α_k = 1, αⱼ = 0 for j ≠ k, where k depends on the input:
for different input domains a different network (alone)
gives the output
– optimal values using the constraint Σ_{k=1}^{M} α_k = 1,
based on the correlations
R_y = E[ y(x) y(x)ᵀ ],    P = E[ y(x) d(x) ]
References and further readings
Haykin, S.: "Neural Networks. A Comprehensive Foundation" Prentice Hall, N.
J.1999.
Hashem, S. “Optimal Linear Combination of Neural Networks” Neural Networks, Vol. 10. No. 4.
pp. 599-614, 1997.
Krogh, A. - Vedelsby, J. "Neural Network Ensembles, Cross Validation and Active Learning" in
Tesauro, G. - Touretzky, D. - Leen, T. (eds.) Advances in Neural Information Processing
Systems 7, Cambridge, MA. MIT Press pp. 231-238.
Auda, G. - Kamel, M. "Modular Neural Networks: A Survey", Pattern Analysis and Machine
Intelligence Lab, Systems Design Engineering Department, University of Waterloo, Canada.
Mixture of Experts (MOE)
[Figure: M expert networks produce outputs μ₁ ... μ_M; a gating network
computes the weights g₁ ... g_M, and the overall output is μ = Σᵢ gᵢ μᵢ.]
Mixture of Experts (MOE)
• The output is the weighted sum of the outputs of the
experts
μ = Σ_{i=1}^{M} gᵢ μᵢ,    μᵢ = f(x, Θᵢ),    Σ_{i=1}^{M} gᵢ = 1,   gᵢ ≥ 0 ∀i
• The gating weights are computed with a softmax function:
gᵢ = exp(vᵢᵀx) / Σ_{j=1}^{M} exp(vⱼᵀx)
• vᵢ is the parameter of the gating network
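The forward pass of this model can be sketched as follows (an illustrative sketch, not the lecture's code; the linear experts μᵢ = Θᵢᵀx are an assumption for simplicity):

```python
import numpy as np

# Mixture of experts forward pass: linear experts mu_i = Theta_i^T x and a
# softmax gating network g_i = exp(v_i^T x) / sum_j exp(v_j^T x).
def moe_forward(x, Theta, V):
    mu = Theta @ x                       # expert outputs, shape (M,)
    scores = V @ x
    g = np.exp(scores - scores.max())    # softmax, numerically stable
    g /= g.sum()
    return g @ mu, g                     # weighted sum and gating weights

rng = np.random.default_rng(4)
M, N = 3, 2
Theta = rng.normal(size=(M, N))          # expert parameters
V = rng.normal(size=(M, N))              # gating parameters
x = np.array([0.5, -1.0])
y, g = moe_forward(x, Theta, V)
print(np.isclose(g.sum(), 1.0) and np.all(g >= 0))   # valid gating weights
```

The softmax guarantees the constraints Σᵢ gᵢ = 1 and gᵢ ≥ 0 by construction, so the output is always a convex combination of the expert outputs.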
Mixture of Experts (MOE)
• Probabilistic interpretation
μ i = E [ y | x, Θ i ] g i = P (i | x, v i )
Mixture of Experts (MOE)
• Training
– Training data: X = { (x^(l), y^(l)) }, l = 1 … P
– Likelihood of one sample:
  P(y^(l) | x^(l), Θ) = Σ_i P(i | x^(l), v_i) P(y^(l) | x^(l), Θ_i)
– Likelihood of the training set:
  P(y | x, Θ) = Π_{l=1}^{P} P(y^(l) | x^(l), Θ) = Π_{l=1}^{P} [ Σ_i P(i | x^(l), v_i) P(y^(l) | x^(l), Θ_i) ]
Mixture of Experts (MOE)
• Training (cont’d)
– Gradient method
  ∂L(x, Θ)/∂Θ_i = 0   and   ∂L(x, Θ)/∂v_i = 0
– The parameters of the experts (posterior-weighted gradient of the log-likelihood):
  Θ_i(k+1) = Θ_i(k) + η Σ_{l=1}^{P} h_i^(l) ∂ ln P(y^(l) | x^(l), Θ_i)/∂Θ_i
– The parameter of the gating network:
  v_i(k+1) = v_i(k) + η Σ_{l=1}^{P} ( h_i^(l) − g_i^(l) ) x^(l)
Mixture of Experts (MOE)
• Training (cont’d)
– A priori probability
  g_i^(l) = g_i(x^(l), v_i) = P(i | x^(l), v_i)
– A posteriori probability
  h_i^(l) = g_i^(l) P(y^(l) | x^(l), Θ_i) / Σ_j g_j^(l) P(y^(l) | x^(l), Θ_j)
Mixture of Experts (MOE)
• Training (cont’d)
– EM (Expectation Maximization) algorithm
A general iterative technique for maximum likelihood
estimation
• Introducing hidden variables
• Defining a log-likelihood function
– Two steps:
• Expectation of the hidden variables
• Maximization of the log-likelihood function
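The two EM steps, together with the earlier gradient rule for the gating network, can be sketched for a two-expert mixture with linear experts and Gaussian output likelihoods. Everything here (the piecewise-linear data, the assumed noise variance, the initialization, the learning rate) is an illustrative assumption, not the industrial setting of the case study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical piecewise-linear data: one linear regime per expert
N = 400
x = rng.uniform(-1, 1, N)
y = np.where(x < 0, -2 * x - 1, 3 * x + 0.5) + 0.05 * rng.normal(size=N)
X = np.stack([x, np.ones(N)], axis=1)          # inputs with a bias term

sigma2 = 0.1                                   # assumed noise variance for the E-step
theta = np.array([[0.0, -1.0], [0.0, 2.0]])    # expert parameters (two linear experts)
v = np.zeros((2, 2))                           # gating parameters v_i

def gate(X, v):
    """Softmax gating g_i = P(i | x, v_i)."""
    z = X @ v.T
    g = np.exp(z - z.max(axis=1, keepdims=True))
    return g / g.sum(axis=1, keepdims=True)

for _ in range(100):
    g = gate(X, v)
    mu = X @ theta.T                           # expert outputs mu_i = f(x, Theta_i)
    # E-step: posterior h_i proportional to g_i * P(y | x, Theta_i)
    h = g * np.exp(-(y[:, None] - mu) ** 2 / (2 * sigma2)) + 1e-12
    h /= h.sum(axis=1, keepdims=True)
    # M-step for the experts: posterior-weighted least squares
    for i in range(2):
        W = h[:, i]
        theta[i] = np.linalg.solve((X.T * W) @ X + 1e-6 * np.eye(2),
                                   X.T @ (W * y))
    # Gating update from the slides: v <- v + eta * sum_l (h - g) x
    v += 0.05 * (h - g).T @ X

pred = np.sum(gate(X, v) * (X @ theta.T), axis=1)   # weighted sum of experts
assert np.mean((y - pred) ** 2) < 0.1
```

After training, each expert has specialized to one linear regime and the gating network has learned to switch between them as a function of the input.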
Hierarchical Mixture of Experts (HMOE)
HMOE: more layers of gating
networks, groups of experts
Mixture of Experts (MOE)
• MOE construction
• Cross-validation can be used to find the proper
architecture
• CART (Classification And Regression Trees) for the initial
hierarchical MOE (HMOE) architecture and for the initial
expert and gating network parameters
• MOE based on SVMs: different SVMs with different
hyperparameters
References and further readings
Haykin, S.: "Neural Networks. A Comprehensive Foundation", Prentice Hall, N.J., 1999.
Jordan, M. I., Jacobs, R. A.: "Hierarchical Mixtures of Experts and the EM Algorithm", Neural Computation, Vol. 6, pp. 181-214, 1994.
Bilmes, J. A.: "A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models", 1998.
Dempster, A. P. et al.: "Maximum Likelihood from Incomplete Data via the EM Algorithm", 1977.
Moon, T. K.: "The Expectation-Maximization Algorithm", IEEE Trans. Signal Processing, 1996.
Application:
modeling an industrial plant
(steel converter)
Overview
• Introduction
• Modeling approaches
• Building neural models
• Data base construction
• Model selection
• Modular approach
• Hybrid approach
• Information system
• Experiences with the advisory system
• Conclusions
Introduction to the problem
• Task
– to develop an advisory system for a Linz-Donawitz
steel converter
– to propose component composition
– to support the factory staff in supervising the
steelmaking process
• A model of the process is required: first a system
modeling task has to be solved
LD Converter modeling
The Linz-Donawitz
converter in Hungary
(Dunaferr Co.)
Linz-Donawitz converter
Phases of steelmaking
Main features of the process
• Nonlinear input-output relation between many
inputs and two outputs
• Input parameters (~50 different parameters)
– certain features "measured" during the process
• The main output parameters (measured values
of the produced steel)
– temperature (1640-1700 °C, tolerance -10 … +15 °C)
– carbon content (0.03 - 0.70 %)
• More than 5000 records of data
Modeling task
• The difficulties of model building
– High complexity nonlinear input-output relationship
– No (or unsatisfactory) physical insight
– Relatively few measurement data
– There are unmeasurable parameters of the process
– Noisy, imprecise, unreliable data
– Classical approach (heat balance, mass balance) gives
no acceptable results
Modeling approaches
• Theoretical model - based on chemical and
physical equations
• Input - output behavioral model
– Neural model - based on the measured process data
– Rule-based system - based on the experience-based
knowledge of the factory staff
– Combined neural - rule based system: a hybrid model
The modeling task
(Figure: the forward and the inverse modeling setup. Forward model: the system maps the components (parameters) and the oxygen amount to the steel temperature; the model's predicted temperature is compared with the measured temperature, giving the error ε. Inverse model: a copy of the model is used to predict the oxygen needed for a given temperature; the prediction is compared with the measured oxygen, giving the error ε.)
Overview
• Introduction
• Modeling approaches
• Building neural models
• Data base construction
• Model selection
• Modular approach
• Hybrid approach
• Information system
• Experiences with the advisory system
• Conclusions
"Neural" solution
• The steps of solving a practical problem:
  raw input data → preprocessing → neural network → postprocessing → results
Building neural models
• Creating a reliable database
– the problem of noisy data
– the problem of missing data
– the problem of uneven data distribution
• Selecting a proper neural architecture
– static network (size of the network)
– dynamic network
• size of the network: nonlinear mapping
• regressor selection + model order selection
• Training and validating the model
Creating a reliable database
• Input components
– measure of importance
• physical insight
• sensitivity analysis (importance of the input variables)
• mathematical methods: dimension reduction (e.g. PCA)
• Normalization
– input normalization
– output normalization
• Missing data
– artificially generated data
• Noisy data
– preprocessing, filtering,
– errors-in-variables criterion function, etc.
Building database
• Selecting input components by sensitivity analysis, an iterative procedure:
1. Train a neural network on the initial database
2. Perform sensitivity analysis on the trained network
3. Is there an input parameter with small effect on the output?
   – yes: cancel that input parameter, build a new database, and repeat from step 1
   – no: stop; the remaining inputs form the final database
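The pruning loop can be sketched with a linear least-squares model standing in for the neural network (training a full MLP is beside the point here); the data, the sensitivity measure, and the 10% significance threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical database: 5 candidate inputs, only the first two matter
N = 500
X = rng.normal(size=(N, 5))
d = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.05 * rng.normal(size=N)

def train(Xs, d):
    """Stand-in for neural network training: a least-squares linear model."""
    w, *_ = np.linalg.lstsq(Xs, d, rcond=None)
    return w

def sensitivity(w, Xs):
    """Effect of each input on the output: |dy/dx_i| scaled by std(x_i)."""
    return np.abs(w) * Xs.std(axis=0)

# Iterative input cancellation (the loop on the slide)
keep = list(range(X.shape[1]))
while True:
    w = train(X[:, keep], d)
    s = sensitivity(w, X[:, keep])
    weakest = int(np.argmin(s))
    if s[weakest] > 0.1 * s.max():   # all remaining inputs are significant
        break
    del keep[weakest]                # cancel the input, build a new database

assert keep == [0, 1]
```

One input is cancelled per iteration, so the sensitivities are always re-estimated on a freshly trained model, exactly as the flowchart prescribes.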
Building database
• Dimension reduction: mathematical methods
– PCA
– Non-linear PCA, Kernel PCA
– ICA
• Combined methods
The effect of data distribution
• Typical data distributions
(Figure: two example histograms of process variables: one strongly skewed distribution concentrated at small values, and the roughly bell-shaped distribution of the measured temperature between 1600 and 1740 °C.)
Normalization
• Zero mean, unit standard deviation:
  x̄_i = (1/P) Σ_{p=1}^{P} x_i^(p),   σ_i² = (1/(P−1)) Σ_{p=1}^{P} ( x_i^(p) − x̄_i )²,   x̃_i^(p) = ( x_i^(p) − x̄_i ) / σ_i
• Decorrelation + normalization:
  Σ = (1/(P−1)) Σ_{p=1}^{P} ( x^(p) − x̄ )( x^(p) − x̄ )^T,   Σ φ_j = λ_j φ_j,   Λ = diag(λ_1 … λ_N)
  x̃^(p) = Λ^(−1/2) Φ^T ( x^(p) − x̄ ),   Φ = [ φ_1 φ_2 … φ_N ]
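The whitening transformation can be written directly from these formulas; a short NumPy sketch on hypothetical correlated data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical correlated 2-D data x^(p), shape (P, N)
A = np.array([[2.0, 0.0], [1.5, 0.5]])
X = rng.normal(size=(1000, 2)) @ A.T

# Sample mean and covariance (the Sigma on the slide)
xbar = X.mean(axis=0)
Sigma = (X - xbar).T @ (X - xbar) / (len(X) - 1)

# Eigendecomposition: Sigma phi_j = lambda_j phi_j
lam, Phi = np.linalg.eigh(Sigma)              # columns of Phi are phi_j

# Whitening: x~ = Lambda^(-1/2) Phi^T (x - xbar)
Xw = (X - xbar) @ Phi @ np.diag(lam ** -0.5)

# The whitened data has zero mean and identity sample covariance
Sw = (Xw - Xw.mean(axis=0)).T @ (Xw - Xw.mean(axis=0)) / (len(Xw) - 1)
assert np.allclose(Sw, np.eye(2), atol=1e-8)
```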
Normalization
• Decorrelation + normalization = whitening transformation
(Figure: scatter plots of the original, correlated data and of the whitened data.)
Missing or few data
– using correlation
Overview
• Introduction
• Modeling approaches
• Building neural models
• Data base construction
• Model selection
• Modular approach
• Hybrid approach
• Information system
• Experiences with the advisory system
• Conclusions
Missing or few data
• Filling in the missing values based on:
– the correlation coefficient between x_i and x_j:   C̃(i, j) = C(i, j) / √( C(i, i) C(j, j) )
– previous (other) values:   x̂_i = x̄_i + σ_i ξ
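A minimal sketch of correlation-based filling: the missing entry is replaced by a regression estimate from the most correlated variable, plus a noise term scaled by the residual standard deviation. The data and the single missing entry are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical records with one missing value in column 1
N = 200
x0 = rng.normal(size=N)
x1 = 0.9 * x0 + 0.1 * rng.normal(size=N)    # strongly correlated with x0
X = np.stack([x0, x1], axis=1)
X[0, 1] = np.nan                            # the missing entry

ok = ~np.isnan(X[:, 1])
C = np.corrcoef(X[ok].T)                    # normalized correlation C~(i, j)

# Fill with the regression estimate from the correlated variable,
# plus noise scaled by the residual std (x^ = xbar + sigma * xi)
m0, m1 = X[ok, 0].mean(), X[ok, 1].mean()
s0, s1 = X[ok, 0].std(), X[ok, 1].std()
r = C[0, 1]
resid_std = s1 * np.sqrt(1 - r ** 2)
X[0, 1] = m1 + r * s1 / s0 * (X[0, 0] - m0) + resid_std * rng.normal()

assert np.isfinite(X).all()
assert abs(X[0, 1] - 0.9 * X[0, 0]) < 1.0
```

The noise term keeps the filled-in values from being artificially "too clean", so the completed database retains a realistic spread.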
Few data
• Artificial data generation
– using realistic transformations
– using sensitivity values: data generation around various
working points (a good example: ALVINN)
(ALVINN = Autonomous Land Vehicle In a Neural Network, an on-road
neural network navigation system developed at CMU)
Noisy data
• Inherent noise suppression
– classical neural nets have a noise-suppression property
(inherent regularization)
– regularization (smoothing regularization)
– averaging (modular approach)
• SVM
– ε-insensitive criterion function
• EIV
– input and output noise are taken into consideration
– modified criterion function
Reducing the effect of output noise
Reducing the effect of input and output noise
• Errors in variables (EIV)
(Figure: the system maps the true input x_k* to the true output y_k*; process noise n_{p,k}^[i] is added at the output, and the measurement noises n_{m,x,k}^[i] and n_{m,y,k}^[i] corrupt the observed input x_k^[i] and output y_k^[i].)
• Averages of the M repeated measurements:
  x̄_k = (1/M) Σ_{i=1}^{M} x_k^[i],   ȳ_k = (1/M) Σ_{i=1}^{M} y_k^[i]
• Noise variance estimates:
  σ²_{x,k} = (1/(M−1)) Σ_{i=1}^{M} ( x_k^[i] − x̄_k )²,   σ²_{y,k} = (1/(M−1)) Σ_{i=1}^{M} ( y_k^[i] − ȳ_k )²
EIV
• LS vs. EIV criterion function:
  C_LS = (1/P) Σ_{k=1}^{P} ( y_k* − f_NN(x_k*, W) )²
  C_EIV = (1/P) Σ_{k=1}^{P} [ ( y_k − f_NN(x_k, W) )² / σ²_{y,k} + ( x_k* − x_k )² / σ²_{x,k} ]
• EIV training:
  ΔW_j = η (M/2P) Σ_{k=1}^{P} ( e_{f,k} / σ²_{y,k} ) ∂f_NN(x_k, W)/∂W_j
  Δx_k = η (M/2) [ ( e_{f,k} / σ²_{y,k} ) ∂f_NN(x_k, W)/∂x_k + e_{x,k} / σ²_{x,k} ]
  e_{f,k} = y_k − f_NN(x_k, W)
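The EIV update rules can be sketched for a stand-in linear model with repeated measurements; the data, the interpretation e_{x,k} = x̄_k − x_k, the learning rate, and the iteration count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical repeated measurements of a linear system y = 2x + 1
P_PTS, M_REP = 40, 8
xs = np.linspace(-1, 1, P_PTS)                     # true inputs x_k*
Xm = xs + 0.1 * rng.normal(size=(M_REP, P_PTS))    # x_k^[i]
Ym = (2 * xs + 1) + 0.2 * rng.normal(size=(M_REP, P_PTS))
xbar, ybar = Xm.mean(axis=0), Ym.mean(axis=0)
s2x = Xm.var(axis=0, ddof=1)                       # sigma^2_{x,k}
s2y = Ym.var(axis=0, ddof=1)                       # sigma^2_{y,k}

def c_eiv(x, w):
    """EIV criterion: noise-weighted fit error plus input-correction error."""
    return np.mean((ybar - (w[0] * x + w[1])) ** 2 / s2y
                   + (xbar - x) ** 2 / s2x)

w = np.array([0.0, 0.0])       # model parameters W (stand-in model: a line)
x = xbar.copy()                # input estimates x_k, initialized at the means
eta = 2e-4
c0 = c_eiv(x, w)
for _ in range(3000):
    ef = ybar - (w[0] * x + w[1])                  # e_{f,k}
    # Delta W_j = eta * (M/2P) * sum_k (e_{f,k}/s2y_k) * df/dW_j
    w += eta * M_REP / (2 * P_PTS) * np.array([np.sum(ef / s2y * x),
                                               np.sum(ef / s2y)])
    # Delta x_k = eta * (M/2) * (e_{f,k}/s2y_k * df/dx_k + e_{x,k}/s2x_k)
    x += eta * M_REP / 2 * (ef / s2y * w[0] + (xbar - x) / s2x)

assert c_eiv(x, w) < c0
```

Note that, unlike LS, the training adjusts the input estimates x_k together with the weights, which is what makes the criterion robust to input noise.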
Noisy data
Overview
• Introduction
• Modeling approaches
• Building neural models
• Data base construction
• Model selection
• Modular approach
• Hybrid approach
• Information system
• Experiences with the advisory system
• Conclusions
Model selection
• Static or dynamic
– why a dynamic model is better
Model selection
• NFIR
• NARX
• NOE
• NARMAX
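All of these model classes share the same regressor-building step; for NARX the regressor is [x(k), …, x(k−n), y(k−1), …, y(k−m)]. A small helper as a sketch (the toy series is hypothetical):

```python
import numpy as np

def narx_regressors(x, y, n, m):
    """Build NARX regressor rows [x(k),...,x(k-n), y(k-1),...,y(k-m)]."""
    start = max(n, m)
    rows, targets = [], []
    for k in range(start, len(y)):
        rows.append(np.concatenate([x[k - n:k + 1][::-1],   # x(k)..x(k-n)
                                    y[k - m:k][::-1]]))     # y(k-1)..y(k-m)
        targets.append(y[k])
    return np.array(rows), np.array(targets)

x = np.arange(10.0)
y = 2.0 * np.arange(10.0)
R, t = narx_regressors(x, y, n=2, m=2)
assert R.shape == (8, 5)       # 8 usable samples, n+1+m = 5 regressors
assert t[0] == y[2]
```

NFIR drops the output lags (m = 0), while NOE and NARMAX replace or augment the measured output lags with model outputs and residuals during training.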
Model order selection
• AIC, MDL, NIC, Lipschitz number
  y(k) = f[ x(k), x(k−1), x(k−2), …, x(k−n), y(k−1), y(k−2), …, y(k−m) ]
(Figure: the Lipschitz index q(l) plotted against the number of regressors: in the noiseless case the curve flattens at the optimal model order; in the noisy case there is no definite break point.)
• Bounded derivatives:   f_i' = ∂f/∂x_i ≤ M,   i = 1, 2, …, n
• Lipschitz quotient:   q_ij = | y_i − y_j | / ‖ x_i − x_j ‖,   i ≠ j,   0 ≤ q_ij ≤ L
• Sensitivity analysis:
  Δy = (∂f/∂x_1) Δx_1 + (∂f/∂x_2) Δx_2 + … + (∂f/∂x_n) Δx_n = f_1' Δx_1 + f_2' Δx_2 + … + f_n' Δx_n
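The Lipschitz-quotient idea can be sketched directly: with a missing regressor the quotients become unbounded, while a sufficient regressor set keeps them below the Lipschitz constant. A simplified version, using the maximum quotient rather than He and Asada's averaged index, on a hypothetical NARX-type series:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical series: y(k) depends on x(k) and y(k-1)
K = 300
x = rng.uniform(-1, 1, K)
y = np.zeros(K)
for k in range(1, K):
    y[k] = 0.6 * y[k - 1] + np.tanh(x[k])

def lipschitz_index(regressors, target):
    """Largest Lipschitz quotient q_ij = |y_i - y_j| / ||x_i - x_j||."""
    q_max = 0.0
    for i in range(len(target)):
        d = np.linalg.norm(regressors - regressors[i], axis=1)
        d[i] = np.inf                      # skip the i = j pair
        q_max = max(q_max, (np.abs(target - target[i]) / d).max())
    return q_max

r1 = np.stack([x[1:]], axis=1)             # x(k) only: order too low
r2 = np.stack([x[1:], y[:-1]], axis=1)     # x(k), y(k-1): sufficient order
q1 = lipschitz_index(r1, y[1:])
q2 = lipschitz_index(r2, y[1:])

# Too-low order -> huge quotients; sufficient order -> bounded quotients
assert q2 < q1
assert q2 < 2.0
```

With the full regressor set the quotient is bounded by the gradient norm of f (here below 2), while the incomplete set produces quotients orders of magnitude larger.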
Model order selection
• Noisy case: combined method of Lipschitz + EIV
(Flowchart: (1) initial order estimation with the Lipschitz algorithm on the measured data (x, d); (2) model creation: an untrained neural network; (3) training with BP + LS gives a trained network; (4) training with BP + EIV gives a trained network and corrected input estimates x*; (5) new order estimation with the Lipschitz algorithm on (x*, d); the results of the two estimations are compared.)
Correlation-based model order selection
References and further readings
Berényi, P., Horváth, G., Pataki, B., Strausz, Gy.: "Hybrid-Neural Modeling of a Complex Industrial Process", Proc. of the IEEE Instrumentation and Measurement Technology Conference, IMTC/2001, Budapest, May 21-23, 2001, Vol. III, pp. 1424-1429.
Horváth, G., Pataki, B., Strausz, Gy.: "Neural Modeling of a Linz-Donawitz Steel Converter: Difficulties and Solutions", Proc. of EUFIT'98, 6th European Congress on Intelligent Techniques and Soft Computing, Aachen, Germany, Sept. 1998, pp. 1516-1521.
Horváth, G., Pataki, B., Strausz, Gy.: "Black Box Modeling of a Complex Industrial Process", Proc. of the 1999 IEEE Conference and Workshop on Engineering of Computer Based Systems, Nashville, TN, USA, 1999, pp. 60-66.
Pataki, B., Horváth, G., Strausz, Gy., Talata, Zs.: "Inverse Neural Modeling of a Linz-Donawitz Steel Converter", e&i Elektrotechnik und Informationstechnik, Vol. 117, No. 1, 2000.
Strausz, Gy., Horváth, G., Pataki, B.: "Experiences from the Results of Neural Modelling of an Industrial Process", Proc. of Engineering Applications of Neural Networks, EANN'98, Gibraltar, 1998, pp. 213-220.
Akaike, H.: "A New Look at the Statistical Model Identification", IEEE Transactions on Automatic Control, Vol. AC-19, No. 6, 1974, pp. 716-723.
Rissanen, J.: "Estimation of Structure by Minimum Description Length", Circuits, Systems and Signal Processing, special issue on Rational Approximations, Vol. 1, No. 3-4, 1982, pp. 395-406.
Akaike, H.: "Statistical Predictor Identification", Annals of the Institute of Statistical Mathematics, Vol. 22, 1970, pp. 203-217.
He, X., Asada, H.: "A New Method for Identifying Orders of Input-Output Models for Nonlinear Dynamic Systems", Proc. of the American Control Conference, San Francisco, CA, June 1993, pp. 2520-2523.
Overview
• Introduction
• Modeling approaches
• Building neural models
• Data base construction
• Model selection
• Modular approach
• Hybrid approach
• Information system
• Experiences with the advisory system
• Conclusions
Modular solution
Hybrid solution
• Utilization of different forms of information
– measurement, experimental data
– symbolic rules
The hybrid information system
• Solution:
– integration of measurement information and experience-based
knowledge about the process
• Realization:
– development system: supports the design and testing of
different hybrid models
– advisory system: runs the hybrid models using the current
process state and input information; the experiences collected
with the rule-based system can be used to update the model
The hybrid-neural system
(Diagram: input data → information processing → output expert system, which either gives an oxygen prediction with an explanation or gives no prediction.)
The hybrid-neural system
Data preprocessing and correction
(Diagram: input data → data preprocessing → neural model.)
The hybrid-neural system
Conditional network running
(Diagram: an expert system selects one of the neural models NN_1 … NN_k for the current input data; the selected model produces its output O_i.)
The hybrid-neural system
Parallel network running and postprocessing
(Diagram: the neural models NN_1 … NN_k run in parallel on the input data, producing the outputs O_1 … O_k; an expert selects among the models, and an output expert performs the postprocessing that yields the oxygen prediction.)
The hybrid-neural system
Iterative network running
(Flowchart: run the neural network and make a prediction; if the result is satisfactory, stop; otherwise modify the input parameters and run the network again.)
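The iterative loop can be sketched with a hypothetical monotone model and bisection as the input-modification strategy; the model function, the target, and the tolerance are all illustrative assumptions:

```python
def predict_temp(oxygen):
    """Hypothetical neural model output: monotone in the oxygen amount."""
    return 1500.0 + 40.0 * oxygen ** 0.5

target, tol = 1660.0, 1.0
lo, hi = 0.0, 100.0                    # assumed feasible oxygen range
for _ in range(60):                    # iterative network running
    ox = (lo + hi) / 2
    t = predict_temp(ox)               # run the network, make a prediction
    if abs(t - target) <= tol:         # result satisfactory?
        break
    if t < target:                     # modify the input parameter
        lo = ox
    else:
        hi = ox

assert abs(predict_temp(ox) - target) <= tol
```

In the real advisory system the "modification" step is guided by the expert rules rather than a simple bisection, but the control flow is the same.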
Validation
• Model selection
– iterative process
– utilization of domain knowledge
• Cross validation
– fresh data
– on-site testing
Experiences
• The hit rate increased by about 10%
• Most of the special cases can be handled
• Further rules for handling special cases should
be obtained
• The accuracy of measured data should be
increased
Conclusions
• For complex industrial problems all available information
has to be used
• Neural networks should not be thought of as universal
modeling devices alone
• Physical insight is important
• The importance of preprocessing and post-processing
• Modular approach:
– decomposition of the problem
– cooperation and competition
– “experts” using different paradigms
• The hybrid approach to the problem provided better
results
Summary
• Main questions
• Open questions
• Final conclusions
Main questions
• Neural modeling: black-box or not?
• When to apply neural approach?
• How to use neural networks?
• The role of prior knowledge
• How to use prior knowledge?
• How to validate the results?
Open (partly open) questions
• Model class selection
• Model order selection
• Validation, generalization capability
• Sample size, training set, test set, validation
set
• Missing data, noisy data, few data
• Data consistency
Final conclusions
• Neural networks are especially important and well-suited architectures
for (nonlinear) system modeling
• General solutions: NNs and fuzzy-neural systems are universal
modeling devices (universal approximators)
• The importance of theoretical results and theoretical background
• The difficulty of applying theoretical results in practice
• The role of the database
• The importance of prior information and physical insight
• The importance of preprocessing and post-processing
• Modular approach:
– decomposition of the problem
– cooperation and competition
– “experts” using different paradigms
• Hybrid solutions: combination of rule based, fuzzy, neural,
mathematical
References and further readings
Batavia, P. H., Pomerleau, D. A., Thorpe, C. E.: "Applying Advanced Learning Algorithms to ALVINN", Technical Report CMU-RI-TR-96-31, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA.
Berényi, P., Horváth, G., Pataki, B., Strausz, Gy.: "Hybrid-Neural Modeling of a Complex Industrial Process", Proc. of the IEEE Instrumentation and Measurement Technology Conference, IMTC/2001, Budapest, May 21-23, 2001, Vol. III, pp. 1424-1429.
Berényi, P., Valyon, J., Horváth, G.: "Neural Modeling of an Industrial Process with Noisy Data", IEA/AIE-2001, 14th International Conference on Industrial & Engineering Applications of Artificial Intelligence & Expert Systems, Budapest, June 4-7, 2001, in Lecture Notes in Computer Science, Springer, 2001, pp. 269-280.
Bishop, C. M.: "Neural Networks for Pattern Recognition", Clarendon Press, Oxford, 1995.
Horváth, G., Pataki, B., Strausz, Gy.: "Neural Modeling of a Linz-Donawitz Steel Converter: Difficulties and Solutions", Proc. of EUFIT'98, 6th European Congress on Intelligent Techniques and Soft Computing, Aachen, Germany, Sept. 1998, pp. 1516-1521.
Horváth, G., Pataki, B., Strausz, Gy.: "Black Box Modeling of a Complex Industrial Process", Proc. of the 1999 IEEE Conference and Workshop on Engineering of Computer Based Systems, Nashville, TN, USA, 1999, pp. 60-66.
Pataki, B., Horváth, G., Strausz, Gy., Talata, Zs.: "Inverse Neural Modeling of a Linz-Donawitz Steel Converter", e&i Elektrotechnik und Informationstechnik, Vol. 117, No. 1, 2000.
References and further readings
Strausz, Gy., Horváth, G., Pataki, B.: "Experiences from the Results of Neural Modelling of an Industrial Process", Proc. of Engineering Applications of Neural Networks, EANN'98, Gibraltar, 1998, pp. 213-220.
Strausz, Gy., Horváth, G., Pataki, B.: "Effects of Database Characteristics on the Neural Modeling of an Industrial Process", Proc. of the International ICSC/IFAC Symposium on Neural Computation, NC'98, Vienna, Sept. 1998, pp. 834-840.
Bishop, C. M.: "Neural Networks for Pattern Recognition", Clarendon Press, Oxford, 1995.
Horváth, G. (ed.): "Neural Networks and Their Applications", Publishing House of the Budapest University of Technology and Economics, Budapest, 1998 (in Hungarian).
Jang, J.-S. R., Sun, C.-T., Mizutani, E.: "Neuro-Fuzzy and Soft Computing. A Computational Approach to Learning and Machine Intelligence", Prentice Hall, 1997.
Jang, J.-S. R.: "ANFIS: Adaptive-Network-Based Fuzzy Inference System", IEEE Trans. on Systems, Man, and Cybernetics, Vol. 23, No. 3, pp. 665-685, 1993.
Nguyen, D., Widrow, B.: "The Truck Backer-Upper: An Example of Self-Learning in Neural Networks", Proc. of the International Joint Conference on Neural Networks, Washington, DC, 1989, Vol. II, pp. 357-362.
Rumelhart, D. E., Hinton, G. E., Williams, R. J.: "Learning Internal Representations by Error Propagation", in Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vol. I, MIT Press, Cambridge, MA, 1986.