
Functional Link Artificial Neural Network (FLANN)

By
Prof. G. Panda, FNAE, FNASc.
IIT Bhubaneswar

FLANN

The functional link ANN, or Pao-network, was originally proposed by Pao.
Single-layer ANN structure
The need for hidden layers is removed
Capable of forming arbitrarily complex decision regions by generating nonlinear decision boundaries
Offers lower computational complexity and higher convergence speed than the MLP
A minimal sketch of the idea follows this list.
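Below is a minimal sketch of this single-layer principle (illustrative Python, not the author's code): any functional expansion maps each input into several terms, and a single adaptive linear combiner replaces the hidden layers of an MLP.

```python
# Minimal FLANN sketch: functional expansion + single linear combiner.
import numpy as np

def flann_forward(x, expand, weights, bias_weight):
    """y = w . phi(x) + wb, where phi is any functional expansion."""
    phi = np.concatenate([expand(xi) for xi in np.atleast_1d(x)])
    return float(phi @ weights + bias_weight)

# Example with a trivial polynomial expansion of each scalar input.
expand = lambda xi: np.array([xi, xi**2, xi**3])
w = np.zeros(9)                        # 3 inputs x 3 expansion terms
print(flann_forward([0.1, 0.2, 0.3], expand, w, bias_weight=0.0))
```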

Structure of FLANN

[Block diagram: the inputs x1(k), x1(k-1) and x1(k-2) pass through functional expansion blocks FE1, FE2 and FE3, each producing five expanded signals X1(1)...X3(5). These 15 signals are multiplied by weights W1(k)...W15(k), summed together with a bias weight, and give the output y(k). The error e(k) = d(k) - y(k) drives the adaptive learning algorithm that updates the weights.]

Types of Functional Expansions

Trigonometric
Legendre
Exponential
Chebyshev
Polynomial

Trigonometric Expansion

Each input $x_1(k)$ is expanded into the terms $\sin(\pi x_1(k))$, $\cos(\pi x_1(k))$, $\sin(3\pi x_1(k))$, $\cos(3\pi x_1(k))$, and so on.

In general, the expansion of an input $x$ is

$\{x,\ \sin(\pi x),\ \cos(\pi x),\ \sin(3\pi x),\ \cos(3\pi x),\ \sin(5\pi x),\ \cos(5\pi x),\ \ldots,\ \sin((2n-1)\pi x),\ \cos((2n-1)\pi x)\}$
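A short sketch of this expansion (illustrative; the number of terms n is a free parameter):

```python
# Trigonometric expansion: x -> {x, sin(pi x), cos(pi x), ..., sin((2n-1) pi x), cos((2n-1) pi x)}.
import numpy as np

def trig_expand(x, n=2):
    feats = [x]
    for i in range(1, n + 1):
        k = (2 * i - 1) * np.pi            # odd multiples of pi
        feats += [np.sin(k * x), np.cos(k * x)]
    return np.array(feats)

print(trig_expand(0.5, n=2))               # 5 terms: x plus two sin/cos pairs
```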

Legendre Expansion

The input $x_1(k)$ is expanded into the Legendre polynomials $L_1(x), L_2(x), \ldots, L_5(x)$.

The first few Legendre polynomials are

$L_1(x) = x$
$L_2(x) = \frac{1}{2}(3x^2 - 1)$
$L_3(x) = \frac{1}{2}(5x^3 - 3x)$
$L_4(x) = \frac{1}{8}(35x^4 - 30x^2 + 3)$
$L_5(x) = \frac{1}{8}(63x^5 - 70x^3 + 15x)$
$L_6(x) = \frac{1}{16}(231x^6 - 315x^4 + 105x^2 - 5)$
$L_7(x) = \frac{1}{16}(429x^7 - 693x^5 + 315x^3 - 35x)$

In general,

$L_{n+1}(x) = \frac{1}{n+1}\{(2n+1)\,x\,L_n(x) - n\,L_{n-1}(x)\}$
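A sketch of the Legendre expansion computed directly from the recurrence quoted above:

```python
# Legendre expansion via L_{n+1}(x) = ((2n+1) x L_n(x) - n L_{n-1}(x)) / (n+1).
def legendre_expand(x, order=5):
    L = [1.0, x]                           # L0 = 1, L1 = x
    for n in range(1, order):
        L.append(((2 * n + 1) * x * L[n] - n * L[n - 1]) / (n + 1))
    return L[1:order + 1]                  # use L1..L_order as features

print(legendre_expand(0.5, order=5))       # e.g. L2(0.5) = -0.125
```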

Chebyshev Expansion

$T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x)$

The first few Chebyshev polynomials are

$T_0(x) = 1$
$T_1(x) = x$
$T_2(x) = 2x^2 - 1$
$T_3(x) = 4x^3 - 3x$
$T_4(x) = 8x^4 - 8x^2 + 1$
$T_5(x) = 16x^5 - 20x^3 + 5x$
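The Chebyshev expansion follows the same pattern, using the recurrence stated above:

```python
# Chebyshev expansion via T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x).
def chebyshev_expand(x, order=5):
    T = [1.0, x]                           # T0 = 1, T1 = x
    for n in range(1, order):
        T.append(2 * x * T[n] - T[n - 1])
    return T[1:order + 1]                  # T1..T_order as features

print(chebyshev_expand(0.5, order=5))      # [0.5, -0.5, -1.0, -0.5, 0.5]
```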

Polynomial Expansion

For two inputs $x_1$ and $x_2$, the polynomial expansion contains the inputs together with their powers and cross-products, e.g. $\{x_1,\ x_2,\ x_1^2,\ x_2^2,\ x_1 x_2,\ \ldots\}$, in addition to a constant term $x^0 = 1$.

Learning Rule

The update of the m-th weight at the l-th experiment is

$w_m(l+1) = w_m(l) + \Delta w_m(l)$

The change in the m-th weight at the l-th experiment is given by

$\Delta w_m(l) = \frac{2\mu}{K} \sum_{n=1}^{K} x_m(n,l)\, e(n,l)$

where
$\mu$ = convergence coefficient
$e(n,l)$ = error at the n-th instant
$x_m(n,l)$ = m-th element of the expanded input vector at the n-th instant

The error at the n-th instant is given by

$e(n,l) = d(n,l) - y(n,l)$

where $d(n,l)$ is the desired output and $y(n,l)$ is the estimated output.

The percentage error is calculated as

$\%\,\text{error} = \left|\frac{e(n,l)}{d(n,l)}\right| \times 100\%$

and the Mean Absolute Percentage Error (MAPE) over P patterns is

$\text{MAPE} = \frac{1}{P} \sum_{n=1}^{P} \left|\frac{d(n,l) - y(n,l)}{d(n,l)}\right| \times 100$
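A minimal sketch of this update, assuming an averaged LMS step over the K patterns of one experiment (the names phi, mu and K are illustrative):

```python
# Averaged LMS update for one experiment over K patterns.
import numpy as np

def lms_experiment(phi, d, w, mu=0.1):
    """phi: (K, M) expanded inputs, d: (K,) desired outputs.
    Returns updated weights and the per-pattern errors."""
    K = phi.shape[0]
    e = d - phi @ w                        # e(n) = d(n) - y(n)
    w = w + (2 * mu / K) * (phi.T @ e)     # averaged gradient step
    return w, e

rng = np.random.default_rng(1)
phi = rng.uniform(-1, 1, size=(100, 15))   # 100 patterns, 15 expanded inputs
d = rng.uniform(-1, 1, size=100)
w, e = lms_experiment(phi, d, np.zeros(15))
print(np.mean(e**2))                       # MSE for this experiment
```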

FLANN Structure with Sigmoid

[Block diagram: identical to the earlier FLANN structure — inputs x1(k), x1(k-1), x1(k-2) expanded by FE1-FE3 into 15 signals weighted by W1(k)...W15(k) plus a bias — except that the weighted sum now passes through a tanh nonlinearity to produce y(k). The error e(k) = d(k) - y(k) drives the adaptive algorithm.]

Delta Learning

The error is given by

$e(k) = d(k) - y(k)$

Then the delta is calculated as

$\delta(k) = (1 - y(k)^2)\, e(k)$

Now the weight update equation becomes

$W(k+1) = W(k) + \Delta W(k)$

where

$\Delta W(k) = \mu\, \delta(k)\, [X(k)]^T$
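A sketch of one step of this delta rule for the tanh output unit (mu and the pattern shapes are illustrative):

```python
# Delta rule for a tanh output: delta(k) = (1 - y(k)^2) e(k).
import numpy as np

def delta_step(X, d, W, mu=0.1):
    y = np.tanh(X @ W)                     # tanh output nonlinearity
    e = d - y                              # e(k) = d(k) - y(k)
    delta = (1.0 - y**2) * e               # tanh derivative times error
    return W + mu * delta * X, e

rng = np.random.default_rng(2)
W = np.zeros(15)
X = rng.uniform(-1, 1, 15)                 # one expanded input pattern
W, e = delta_step(X, d=0.3, W=W)
print(e)
```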

Epoch-Based Learning

Application of all N patterns constitutes one experiment.

At the end of each experiment, N sets of $\Delta w(i)$ are obtained.

The average change of weight is then computed as

$\Delta \bar{w}(i) = \frac{1}{N} \sum_{k=1}^{N} \Delta w(k)$

The weights of the FLANN model are then updated according to the relation

$w(i+1) = w(i) + \Delta \bar{w}(i)$

Similarly, the bias weight is updated using

$w_b(i+1) = w_b(i) + \Delta \bar{w}_b$
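A sketch of one such epoch, assuming the per-pattern changes come from the delta rule above (variable names are illustrative):

```python
# Epoch-based learning: average the N per-pattern weight changes, then update.
import numpy as np

def epoch_update(patterns, targets, w, wb, mu=0.1):
    dws, dbs = [], []
    for X, d in zip(patterns, targets):
        y = np.tanh(X @ w + wb)
        delta = (1.0 - y**2) * (d - y)
        dws.append(mu * delta * X)         # per-pattern weight change
        dbs.append(mu * delta)             # per-pattern bias change
    return w + np.mean(dws, axis=0), wb + np.mean(dbs)

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(50, 15))
d = rng.uniform(-1, 1, 50)
w, wb = epoch_update(X, d, np.zeros(15), 0.0)
print(w[:3], wb)
```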

Function Approximation

The two examples are

$f_1(x) = x^3 + 0.3x^2 - 0.4x$
$f_2(x) = 0.6\sin(\pi x) + 0.3\sin(3\pi x) + 0.1\sin(5\pi x)$

In both cases the input pattern is expanded using the trigonometric expansion.
Fifteen input nodes, including a bias input, are used.
The associated nonlinearity is the tanh() function.
The convergence coefficient is set to 0.1.
Training of the weights of the FLANN model is carried out using a uniformly distributed random signal over the interval [-1, 1] as input.

Function Approximation

During testing, the input to the identified model is given by

$x(k) = \begin{cases} \sin\!\left(\dfrac{2\pi k}{250}\right) & \text{for } k \le 250 \\[4pt] 0.8\sin\!\left(\dfrac{2\pi k}{250}\right) + 0.2\sin\!\left(\dfrac{2\pi k}{25}\right) & \text{for } k > 250 \end{cases}$
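A sketch of this test signal and of the Example-1 plant, using the constants quoted on the slides:

```python
# Piecewise test input and the Example-1 plant f1(x).
import numpy as np

def test_input(K=500):
    k = np.arange(1, K + 1)
    return np.where(k <= 250,
                    np.sin(2 * np.pi * k / 250),
                    0.8 * np.sin(2 * np.pi * k / 250)
                    + 0.2 * np.sin(2 * np.pi * k / 25))

f1 = lambda x: x**3 + 0.3 * x**2 - 0.4 * x   # Example-1 plant
x = test_input()
print(f1(x)[:5])                              # plant output on the test input
```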

[Plots: plant and model outputs versus discrete time for Example-1 and Example-2, showing close agreement between the identified FLANN model and the plant.]

Applications

To predict the output of nonlinearly related input-output systems
To predict currency exchange rates
To predict machinery noise in opencast mines
To forecast stock markets
To predict medical diagnoses, assisting doctors
To forecast the chance of churning in the telecom sector
Active control of nonlinear noise processes
System identification
Channel equalization

Exchange Rate of Currency: Feature Extraction from Raw Data

Data available:
1. Average of daily figures (rupees/pounds/yen per unit dollar) of one month, $x_m$, $m = 1, 2, \ldots, M$
2. $M_1$ = number of months for which data is available for training
3. $M_2$ = number of months for which data is available for testing
4. $M = M_1 + M_2$ = number of months for which total data is available

Feature Extraction from Raw Data

Each $x_m$ is normalized using the maximum value of $x_m$ to obtain $x_{nm}$ (0 to 1).
Input is taken from the 12th month onwards for the purpose of training, so that features from the previous 11 months' data can be extracted.
For the $(m+11)$-th month $(1 \le m \le M_1 - 11)$, mean and variance values are computed as

$\bar{x}_{n,m+11}$ = mean of $x_{nm}$ to $x_{n,m+11}$
$\sigma^2_{n,m+11}$ = variance of $x_{nm}$ to $x_{n,m+11}$

The inputs to the model for any $(m+11)$-th month are

$x_{n,m+11}$, $\bar{x}_{n,m+11}$ and $\sigma^2_{n,m+11}$, with $1 \le m \le M_1 - 11$

The estimated output of the model is $\hat{x}_{n,m+12}$, as the model predicts for the $(m+12)$-th month.

After the model is developed, testing is carried out using the data from month $M_1 + 11$ to month $M$.

Features are separately computed from the time series of rupees, yen and pound.
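A sketch of this feature computation (function and variable names are illustrative, not from the paper): for each month m+11, the inputs are the normalized rate plus the mean and variance of the trailing 12 normalized values.

```python
# Per-month features: normalized rate, trailing mean, trailing variance.
import numpy as np

def monthly_features(x):
    xn = np.asarray(x, dtype=float) / np.max(x)   # normalize to (0, 1]
    feats = []
    for m in range(len(xn) - 11):
        window = xn[m:m + 12]                     # months m .. m+11
        feats.append([xn[m + 11], window.mean(), window.var()])
    return np.array(feats)

rates = 40 + np.cumsum(np.random.default_rng(4).normal(0, 0.3, 60))
print(monthly_features(rates)[:2])
```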

DEVELOPMENT OF THE FLANN FORECASTING MODEL:

Let each element of the input pattern before expansion be represented as $z(i)$, $1 \le i \le I$. Each element $z(i)$ is functionally expanded as $z_n(i)$, $1 \le n \le N$, where N = number of terms into which each input element is expanded. In this work N = 5 and I = 3 are used.

Thus the functional expansion of each element is

$x_1(i) = z(i)$
$x_2(i) = \sin\{\pi z(i)\}$
$x_3(i) = \sin\{2\pi z(i)\}$
$x_4(i) = \cos\{\pi z(i)\}$
$x_5(i) = \cos\{2\pi z(i)\}$

where

$z(1) = x_{n,m+11}$
$z(2) = \bar{x}_{n,m+11}$
$z(3) = \sigma^2_{n,m+11}$

The total number of expanded (nonlinear) inputs is therefore 15.


The change in weight for any input pattern p is given by

$\Delta w_j^p(k) = \mu\, xf_j(k)\, e(k)$

where $xf_j(k)$ is the j-th functionally expanded input at the k-th iteration. The average change over all P patterns is

$\Delta w_j(k) = \frac{1}{P} \sum_{p=1}^{P} \Delta w_j^p(k)$

The weight update equation is given by

$w_j(k+1) = w_j(k) + \Delta w_j(k)$

where
$w_j(k)$ = weight at the k-th iteration
$\mu$ = convergence coefficient; its value lies between 0 and 1
$1 \le j \le J$, with $J = N \times I$ (= 15 here)

The error and the estimated output are

$e_{n,m+12}(k) = x_{m+12}(k) - \hat{x}_{m+12}(k)$

$\hat{x}_{m+12}(k) = \sum_{j=1}^{J} xf_j(k)\, w_j(k)$

with $w_j(0) = 0$ (initial weight at k = 0).

Proposed Nonlinear Model for Forecasting

Training of the proposed model: the forecasting model of Fig. 3 above is simulated.
[Plots: convergence characteristics (mean square error versus number of iterations) of the proposed forecasting models for Rupees, Pound and Yen.]

Simulation Results

Comparisons of actual and predicted values are plotted in Figs. 7, 8 and 9 for Rupees, Pound and Yen respectively.
[Fig. 7: Comparison of actual and predicted value (equivalent rupees for 1 US$) for one month ahead with the test data set. Fig. 8: Comparison of actual and predicted value (equivalent pound for 1 US$) for one month ahead with the test data set.]

[Fig. 9: Comparison of actual and predicted value (equivalent yen for 1 US$) for one month ahead with the test data set.]

Table 1. Test results: one month ahead prediction of equivalent rupees of 1 US$

Actual   Predicted   % of Error
46.05    46.158      0.2339
45.74    45.8276     0.1911
45.03    45.0858     0.1237
43.85    43.8666     0.0378
43.62    43.637      0.0389
43.58    43.6031     0.0529
43.59    43.6193     0.0671
43.64    43.6714     0.0719
43.41    43.4432     0.0764
43.52    43.5686     0.1115
43.43    43.4956     0.1508
43.55    43.6442     0.2158
43.85    43.9766     0.28788

Table 2. Test results: one month ahead prediction of equivalent pound of 1 US$

Actual   Predicted   % of Error
109.43   111         1.4347
109.49   110         0.5208
110.23   110         0.5208
110.09   110         0.5208
108.78   109         0.3929
104.70   107         2.2205
103.81   106         3.1344
103.34   106         3.1344
104.94   107         2.2205
105.25   107         2.2205
107.19   108         1.3067
106.60   107         2.2205

Table 3. Test results: one month ahead prediction of equivalent yen of 1 US$

Actual   Predicted   % of Error
109.43   111         1.4347
109.49   110         0.5208
110.23   110         0.5208
110.09   110         0.5208
108.78   109         0.3929
104.70   107         2.2205
103.81   106         3.1344
103.34   106         3.1344
104.94   107         2.2205
105.25   107         2.2205
107.19   108         1.3067
106.60   107         2.2205

Development of an FLANN Model for Stock Market Prediction

Steps involved:
Collection of historical stock index data.
Calculation of technical indicators (data preprocessing).
Finalizing model parameters such as inputs, expansion and activation functions, evaluation criteria, convergence coefficient, etc.
Training the network.
Testing and performance evaluation.

DATA

Data is the single most important quantity in forecasting, so data integrity is a must.
Historical stock price data is collected for three stock indices, namely the Dow Jones Industrial Average (DJIA), USA, the Standard & Poor's 500 (S&P 500), USA, and the Bombay Stock Exchange (BSE), India.
The time series data of the S&P 500 and DJIA stock indices were collected from 3rd January 1994 to 23rd October 2006. Thus there were 3228 data patterns in total, out of which the training and testing sets were created.

[Figure: raw historical stock index data.]

Creating the Actual Inputs to the Model

Raw data is preprocessed to create technical indicators, which are then used as inputs to the model.
Data preprocessing refers to analyzing and transforming the input and output variables to minimize noise, highlight important relationships, detect trends, and flatten the distribution of the variable to assist the neural network in learning the relevant patterns.
So what are technical indicators?

Technical Indicators

Various technical indicators used as inputs in this work are listed below (a sketch of two of them follows the list):
Simple moving averages (SMA)
Exponential moving averages (EMA)
Accumulation/Distribution Oscillator (ADO)
Stochastic Oscillator
On Balance Volume
Williams %R
Relative Strength Index
Price Rate of Change
Closing Price & High Price Acceleration
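As an illustration, here is a sketch of two of the listed indicators, computed from a closing-price series (these are the common textbook definitions, not necessarily the authors' exact parameterization):

```python
# Two common technical indicators from a closing-price series.
import numpy as np

def sma(prices, n):
    """Simple moving average over the last n closes."""
    p = np.asarray(prices, dtype=float)
    return np.convolve(p, np.ones(n) / n, mode="valid")

def ema(prices, n):
    """Exponential moving average with smoothing factor 2/(n+1)."""
    alpha = 2.0 / (n + 1)
    out = [float(prices[0])]
    for price in prices[1:]:
        out.append(alpha * price + (1 - alpha) * out[-1])
    return np.array(out)

closes = [100, 101, 103, 102, 105, 107, 106, 108]
print(sma(closes, 3))
print(ema(closes, 3))
```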

Model Parameters for FLANN

Trigonometric functions are used to expand the inputs.
Each pattern is expanded into five inputs and fed into the network.
The expansion of the inputs provides the necessary nonlinearity to the model.
The convergence coefficient is fixed at 0.1.
The whole data set is divided into training and testing sets: 2510 patterns are used for training and the rest are set aside for testing.

FLANN-Based Forecasting Model

[Block diagram: same structure as the FLANN shown earlier — inputs x1(k), x1(k-1), x1(k-2) expanded by FE1-FE3 into 15 signals weighted by W1(k)...W15(k) plus a bias, summed to give y(k), with the error e(k) = d(k) - y(k) driving the adaptive algorithm.]

Training of the FLANN Model

A simple LMS/RLS algorithm is used for adaptively updating the weights, and the mean square error is plotted for each iteration.
The process continues until the mean square error becomes flat, and then the weights are frozen.
The experiment is carried out for these kinds of prediction:
One, three, five, seven and fifteen days in advance.
One month in advance.
Two months in advance.

[Plot: desired and network output during training of the network for S&P 500 one-month-advance prediction.]

The network is trained accordingly. (A sketch of the RLS update follows.)
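The slides name RLS but give no equations; below is a sketch of the standard exponentially-weighted RLS recursion, assuming the slide's "RLS initialization constant" corresponds to delta in P(0) = delta * I (the forgetting factor lam is illustrative):

```python
# Standard RLS update for the FLANN weight vector.
import numpy as np

def rls_step(P, w, x, d, lam=0.99):
    """One recursive-least-squares update for expanded input x."""
    Px = P @ x
    k = Px / (lam + x @ Px)                # gain vector
    e = d - w @ x                          # a priori error
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam        # inverse-correlation update
    return P, w, e

n = 15
P = 1000.0 * np.eye(n)                     # initialization constant 1000
w = np.zeros(n)
rng = np.random.default_rng(5)
for _ in range(100):
    x = rng.uniform(-1, 1, n)
    P, w, e = rls_step(P, w, x, d=0.1 * x.sum())
print(abs(e))
```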

Testing of the Model

The model developed after freezing the weights is tested for performance evaluation on the testing set.
The Mean Absolute Percentage Error (MAPE) is used as the evaluation criterion:

$\text{MAPE} = \frac{1}{N} \sum_{j=1}^{N} \left|\frac{y_j - \hat{y}_j}{y_j}\right| \times 100$

where
$y_j$ = actual stock market index at the j-th test pattern
$\hat{y}_j$ = estimated stock market index at the j-th test pattern
N = number of test patterns available for validation

Results for One Day Advance Prediction of the FLANN-LMS Model

Stock Index   Input Variables to FLANN-LMS Model                                            MAPE
DJIA          EMA10, EMA30, ADO, CPACC, HPACC, STO, RSI9, PROC12, PROC27                    0.64%
DJIA          EMA10, EMA20, EMA30, ADO, CPACC, HPACC, RSI9, RSI14, PROC12, PROC27, WILLIAMS 0.74%
S&P 500       EMA10, EMA30, ADO, CPACC, HPACC, STO, RSI9, PROC12, PROC27                    0.61%
S&P 500       EMA10, EMA30, ADO, CPACC, HPACC, STO, RSI9, PROC12, PROC27                    0.65%

Results for One Month Advance Prediction of the FLANN-RLS Model

All rows use the technical indicators EMA10, EMA20, EMA30, ADO, CPACC, HPACC, RSI9, PROC12 and PROC27 as inputs, with an RLS initialization constant of 1000.

Stock Index   Fundamental Factors (additional inputs)     MAPE using RLS
DJIA          Interest rate                               2.19%
DJIA          Oil price                                   2.20%
DJIA          GDP growth (quarterly)                      2.20%
DJIA          CPI rate                                    2.19%
DJIA          Corporate dividend rate                     2.19%
DJIA          Interest rate, oil price                    2.20%
DJIA          Dividend, interest rate, GDP growth         2.19%

Comparison of Prediction Performance between FLANN-LMS and FLANN-RLS Models

Stock Index   Input Variables to FLANN                                                      Days in Advance   MAPE (LMS)   MAPE (RLS)
DJIA          EMA20, EMA30, ADO, CPACC, RSI9, RSI14, OBV, PROC27, WILLIAMS                  60 days           2.25%        2.45%
DJIA          EMA10, EMA20, EMA30, ADO, CPACC, HPACC, RSI9, WILLIAMS                        30 days           2.33%        2.54%
DJIA          EMA10, EMA30, ADO, CPACC, HPACC, STO, RSI9, PROC12, PROC27                    1 day             0.64%        0.58%
DJIA          EMA10, EMA20, EMA30, ADO, CPACC, HPACC, RSI9, RSI14, PROC12, PROC27, WILLIAMS 1 day             0.74%        0.61%

In Medical Diagnosis

Applied to chronic periodontal diseases. The block diagram of the model is shown below.

Design of Data

Provisional diagnosis (output): severity of disease per tooth (FDI notation), on the scale
1. Healthy
2. Mild gingivitis
3. Moderate gingivitis
4. Severe gingivitis
5. Slight periodontitis
6. Moderate periodontitis
7. Severe periodontitis

Clinical parameters (inputs), each scored on its own ordinal scale (lower bound 1, upper bound between 2 and 5 depending on the parameter):
1. Local deposits
2. Probing depth
3. CAL
4. GI
5. Mobility
6. Recession
7. Furcation
8. Bone loss
9. Pain
10. Gingival enlargement

Normalization is required to bring these parameters to a uniform scale, i.e. 0-1.

The generalized estimated output equations for the trigonometric, Legendre and Chebyshev expansions follow the same single-layer form as before. [Equations shown as images in the original slides; not recovered.]

The model is trained until the mean square error plot attains steady state. [Figure: MSE plots for the trigonometric, Legendre and Chebyshev expansions with 7 expansion terms.]

Results of Medical Diagnosis

Table 5. Comparison of matching efficiency for various functional link expansions

Polynomial Type   Expansions   Training Match (80% data, N=446)   Efficiency (%)   Testing Match (20% data, N=112)   Efficiency (%)
Linear Combiner   -            381                                85.43            92                                82.14
Trigonometric     3            405                                90.81            103                               91.96
Trigonometric     5            418                                93.72            103                               91.96
Trigonometric     7            419                                93.95            104                               92.86
Legendre          3            362                                81.17            83                                74.11
Legendre          5            403                                90.36            97                                86.61
Legendre          7            415                                93.05            103                               91.96
Chebyshev         3            390                                87.44            91                                81.25
Chebyshev         5            411                                92.15            98                                87.5
Chebyshev         7            412                                92.38            103                               91.96

Prediction of Customer Churn Value: Models

Model     Inputs (type)     Output
Model-1   DSF, DLF, FC      Prediction of churning value
Model-2   DLF, FC
Model-3   DSF, FC
Model-4   DSV, DLV, FC
Model-5   FC
Model-6   DSV, FC
Model-7   DLV, FC

where DSF = dissatisfaction factors, DLF = disloyalty factors, FC = churning factors, DSV = dissatisfaction values, DLV = disloyalty values.

Block Diagram of Model-1

[Block diagram: inputs DSF(p,i) and DLF(p,i) feed Model-1, whose output C(p,i) is compared with the churning factor FC(p,i); the resulting error e(p,i) drives the learning algorithm.]

Representation of the FLANN Model Using Trigonometric Expansion

[Block diagram: each input x1(p,i)...xn(p,i) is expanded into the terms sin(x), cos(x), sin(3x), cos(3x), sin(5x) and cos(5x), weighted by w1(p,i)...w35(p,i); together with a bias input (+1, weight w36(p,i)) they are summed to give the output C(p,i), and the error e(p,i) drives the learning algorithm.]

The maximum percentage error of 23.7% was observed in Model-5 with the linear combiner.
The minimum percentage error of 5.8% was observed in Model-1 with the trigonometric expansion.
Based on these observations it is concluded that Model-1, whose input parameters are the dissatisfaction factors, disloyalty factors and churning factors, performs best: the prediction efficiency increases when both the satisfaction and loyalty factors are fed along with the churning factors.

Grouping of the Customers Based on the Churning Values

S.No.   Churning level range   Grouping
1       Less than 0.33         A
2       0.34 - 0.66            B
3       0.67 - 0.99            C

A - Less likelihood of churning
B - Mediocre churners
C - High likelihood of churning

Clustering of Customers Based on the Value of Churning

[Table: number of customers falling into groups A, B and C for each of Models 1-7 under the trigonometric, Legendre, Chebyshev and linear-combiner expansions, out of roughly 60 customers per model.]

Observations:
For the customers in group A (0 ≤ churning value ≤ 0.33), no action is needed to retain them.
For the customers in group B (0.34 ≤ churning value ≤ 0.66), action is needed to retain them, e.g. making promotional offers.
For the customers in group C (0.67 ≤ churning value ≤ 0.99), more concentrated effort is needed to retain them.

System Identification

[Block diagram for the system identification model: the input x(k) drives the unknown system followed by the nonlinearity NL, with random noise added to form the reference output; the adaptive model produces y(k), and the error e(k) between the two drives the adaptive algorithm.]

Simulation Study

Input signal: uniformly distributed random signal over [-0.5, 0.5]
SNR: 30 dB
$H(z) = 0.260 + 0.930z^{-1} + 0.260z^{-2}$
Learning parameter: 0.02
Iterations: 2000
Averaging: 50 runs
For channel equalization, delay: 3

Nonlinearity (see the data-generation sketch below):
NL = 0: b(k) = a(k) (linear)
NL = 1: y = tanh(x)
NL = 2: y = x + 0.2x^2 - 0.1x^3
NL = 3: y = x - 0.9x^3
NL = 4: y = x + 0.2x^2 - 0.1x^3 + 0.5cos(pi*x)
A linear channel is modeled by NL = 0.
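A sketch of the data generation these settings imply (the channel taps and NL definitions come from the slide; the noise scaling for 30 dB SNR is the standard one and is assumed):

```python
# Generate noisy outputs of the nonlinear system for identification.
import numpy as np

NL = {
    0: lambda x: x,
    1: np.tanh,
    2: lambda x: x + 0.2 * x**2 - 0.1 * x**3,
    3: lambda x: x - 0.9 * x**3,
    4: lambda x: x + 0.2 * x**2 - 0.1 * x**3 + 0.5 * np.cos(np.pi * x),
}

rng = np.random.default_rng(6)
x = rng.uniform(-0.5, 0.5, 2000)                 # input signal
h = np.array([0.260, 0.930, 0.260])              # H(z) taps
s = NL[2](np.convolve(x, h, mode="same"))        # nonlinear system output
noise_var = np.var(s) / 10**(30 / 10)            # 30 dB SNR
y = s + rng.normal(0, np.sqrt(noise_var), s.size)
print(y[:3])
```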

System Identification: y = x

[Fig. 6(a): MSE in dB versus number of iterations for MLP, FLANN and CFLANN; Fig. 6(b): response matching of the desired output against the CFLANN, FLANN and MLP outputs.]

System Identification: y = tanh(x)

[Fig. 6(c): MSE in dB versus number of iterations for MLP, CFLANN and FLANN; Fig. 6(d): response matching of the desired output against the CFLANN, FLANN and MLP outputs.]

System Identification: y = x + 0.2x^2 - 0.1x^3

[Fig. 6(e): MSE in dB versus number of iterations; Fig. 6(f): response matching of the desired output against the CFLANN, FLANN and MLP outputs.]

System Identification: y = x - 0.9x^3

[Fig. 6(g): MSE in dB versus number of iterations; Fig. 6(h): response matching of the desired output against the CFLANN, FLANN and MLP outputs.]

System Identification: y = x + 0.2x^2 - 0.1x^3 + 0.5cos(pi*x)

[Fig. 6(i): MSE in dB versus number of iterations; Fig. 6(j): response matching of the desired output against the CFLANN, FLANN and MLP outputs.]

Block Diagram for Channel Equalization

[Block diagram: a random binary input x(k) passes through the digital channel H(z); channel noise is added to form the equalizer input y(k). The equalizer output is compared with the desired signal d(k), obtained by delaying the input through z^-m, and the error e(k) drives the adaptive algorithm.]

Channel Equalization

[Fig. 7(a): BER versus SNR in dB for y = x; Fig. 7(b): BER versus SNR in dB for y = tanh(x). Each plot compares LMS, MLP, FLANN and CFLANN equalizers.]

Channel Equalization

[Fig. 7(c): BER versus SNR in dB for y = x + 0.2x^2 - 0.1x^3; Fig. 7(d): BER versus SNR in dB for y = x - 0.9x^3. Each plot compares LMS, MLP, FLANN and CFLANN equalizers.]

Channel Equalization

[Fig. 7(e): BER versus SNR in dB for y = x + 0.2x^2 - 0.1x^3 + 0.5cos(pi*x), comparing LMS, MLP, FLANN and CFLANN equalizers.]

Conclusion(s):
The Chebyshev FLANN (CFLANN) structure is a better candidate than other nonlinear structures such as the MLP and FLANN in terms of
better and faster convergence,
lower mathematical complexity, and
fewer input samples required.
Unlike other algorithms, the CFLANN structure is observed to exhibit learning-while-functioning instead of learning-then-functioning, and is hence suitable for on-line identification.

Conclusion (contd.)
For medical diagnosis, the trigonometric and Chebyshev expansions give almost the same results, but the amount of computation involved is less in the case of the trigonometric expansion.
Unlike the MLANN, here we can study the internal behavior of the system by evaluating the weighting of the input variables.
Hidden layers are eliminated altogether by using FLANN.
The applications of FLANN are not limited to those shown here; recently ecologists have applied this model to predict the autocorrelations between environmental (input) variables.

References:

Uğuz H., "A biomedical system based on artificial neural network and principal component analysis for diagnosis of the heart valve diseases," J Med Syst, 36:61-72, 2012.

S. K. Nanda and D. P. Tripathy, "Application of functional link artificial neural network for prediction of machinery noise in opencast mines."

R. Majhi, G. Panda and G. Sahoo, "Development and performance evaluation of FLANN based model for forecasting of stock markets."

B. Majhi and G. Panda, "Robust identification of nonlinear complex systems using low complexity ANN."

Y. H. Pao, Adaptive Pattern Recognition and Neural Networks, Addison-Wesley, 1989.

Kositbowornchai S., Plermkamon S. and Tangkosol T., "Performance of an artificial neural network for vertical root fracture detection: an ex vivo study," Dental Traumatology, 29(2):151-5, Apr 2013.

"Artificial neural networks for the electrocardiographic diagnosis of healed myocardial infarction" (PMID: 8017306).

THANK YOU
