
Neurocomputing 83 (2012) 12-21

Contents lists available at SciVerse ScienceDirect

Neurocomputing
journal homepage: www.elsevier.com/locate/neucom

Fluctuation prediction of stock market index by Legendre neural network with random time strength function

Fajiang Liu, Jun Wang*

College of Science, Key Laboratory of Communication and Information System, Beijing Jiaotong University, Beijing 100044, China

ARTICLE INFO

Article history:
Received 23 February 2011
Received in revised form 20 July 2011
Accepted 21 September 2011
Available online 5 January 2012
Communicated by P. Zhang

Keywords:
Prediction
Legendre neural network
Stock market index
Random time strength function
Computer simulation

ABSTRACT

Stock market forecasting has long been a focus of financial time series prediction. In this paper, we investigate and forecast the price fluctuation by an improved Legendre neural network. In the predictive modeling, we assume that investors decide their investing positions by analyzing the historical data on the stock market, so that the historical data can affect the volatility of the current stock market, and a random time strength function is introduced in the forecasting model to give a weight to each historical data point. The impact strength of the historical data on the market is developed by a random process, where a tendency function and a random Brownian volatility function are applied to describe the behavior of the time strength; here Brownian motion gives the model the effect of random movement while maintaining the original fluctuation. Further, empirical research tests the predictive effect of SAI, SBI, DJI and IXIC in the established model, and the corresponding statistical comparisons of the above market indexes are also exhibited.

(c) 2011 Elsevier B.V. All rights reserved.

1. Introduction

Recently, statistical behaviors and forecasting of price fluctuations for stock markets have been investigated by various artificial intelligence methods, for example see [1,2,4-6,9,11,14,16,19,20,22-24]. According to the randomness and the nonlinear nature of stock price returns, the artificial neural network is introduced to train and forecast the fluctuations of the stock market: an artificial neural network, as a computing system containing many simple nonlinear computing units or nodes interconnected by links, can handle disorderly comprehensive information without requiring strong model assumptions, and also has good nonlinear approximation capability and strong self-learning and self-adaptive abilities. The neural network is therefore a well-tested method for financial analysis on the stock market, pattern recognition and optimization, see [4,7,13,15,21,25-27]. Financial forecasting is the process of making projections about future performance based on existing historical data. Because of the uncertainties involved in the volatility of the stock market, many factors have an impact on the market fluctuations, including the estimate of the stock price, positive or negative news, trends, political events and economic policy, etc. Stock price time series data are characterized by nonlinearities, discontinuities, and high-frequency multi-polynomial components.

The financial time series models described by financial theories have been the basis for forecasting a series of data in the financial market. The development of the multi-layer concept allows ANN (artificial neural networks) to be considered as one of the prediction methodologies for the actual financial market. Various models have been applied to forecast market time series by using ANN. Gooijer and Hyndman reviewed the papers on time series forecasting from the years 1982-2005, see [10]. The review summarized the statistical analysis and the simulative methods for time series prediction, which included exponential smoothing, autoregressive integrated moving average (ARIMA), seasonality, state space and structural models, nonlinear models, long memory models, and autoregressive conditional heteroskedasticity and generalized autoregressive conditional heteroskedasticity (ARCH-GARCH), etc. They also compiled the advantages and the disadvantages of each methodology and pointed out potential future research fields. Recently, some related research work has focused on improving ANN prediction performance and developing new artificial neural network architectures. Zhang and Wan [29] developed a new ANN architecture (statistical fuzzy interval neural network) to predict JPY/USD and GBP/USD exchange rate time series. Majhi et al. [17] discussed the efficient prediction of exchange rates by the functional link artificial neural network (FLANN), the cascaded functional link artificial neural network (CFLANN) and the least mean square (LMS) model. The article exhibited that the CFLANN model performed the best, followed by the FLANN and the LMS models. Hassan et al. [12] used a hybrid model, which includes a Hidden Markov Model, an ANN and a Genetic Algorithm, to forecast the stock market fluctuation. And they displayed the

* Corresponding author.
E-mail addresses: jun6211wang@yahoo.com.cn, wangjun@bjtu.edu.cn (J. Wang).

0925-2312/$ - see front matter (c) 2011 Elsevier B.V. All rights reserved.
doi:10.1016/j.neucom.2011.09.033

advantage of the hybrid forecasting model. Egrioglu et al. [8] introduced a new method based on a feed forward artificial neural network to analyze multivariate high order fuzzy time series volatility. Menezes and Nikolaev [18] presented a polynomial neural network forecasting system which has three features: polynomial block reformulation, local ridge regression for weight estimation, and regularized weight subset selection. Further, the relative performance of this system against other established forecasting procedures was investigated and illustrated by three empirical studies.

In the real financial markets, the investing environment and the fluctuation behavior of the markets may change greatly, for example see the Chinese stock markets in 2007. As a result, the impact of the historical data in the training set should be time-variant, which can properly reflect the different behavior patterns of the markets at different times. If all the data are used to train the network equivalently, the network system may not be consistent with the movement of the real stock market. Especially in the current Chinese stock markets, stock market trading rules and management systems are changing rapidly, for example, the daily price limit (now 10%), shareholding reformation, the direct investment of the Hong Kong stock markets, the reorganization of A shares, B shares and H shares, the establishment of financial derivatives such as futures and options, and the internationalization of the Shanghai Stock Exchange. Therefore, it is difficult for the historical data to reflect the development of the current Chinese stock markets. However, if only the recent data are selected, a lot of useful information held by the early data will be lost. In the financial model of the present paper, considering the above mentioned financial situation, a promising data mining technique in machine learning is proposed; this predictive approach is based on a Legendre neural network and a random time strength function (LNNRT), or an improved Legendre neural network. According to the polynomial approximation theory, a three-layer multi-input Legendre neural network model with a random time strength function applies the Legendre orthogonal polynomial as the transfer function of the hidden layer neural cells and uses the weighted sum as its output. And the nonlinear identification model on the time series is proposed to predict the change of stock by introducing the well-known back propagation algorithm with a random time strength function, training the data of former samples and adjusting the weights of the network. In the present paper, we consider the time series of the Shanghai Stock Exchange (SSE) A Share Index (SAI) and B Share Index (SBI); the database is from the Shanghai Stock Exchange (www.sse.com.cn), and the data of each trading day in the 10-year period from January 4, 2000 to April 30, 2010 are selected for SAI and SBI respectively. Further, we test the predictive effect of SAI, SBI, the Dow Jones Industrial Average Index (DJI) and the Nasdaq Composite Index (IXIC) by using the LNNRT model, and the corresponding comparisons of price forecasting for the above indexes are displayed.

This paper presents an improved Legendre neural network model, that is, a random time strength function is introduced in the forecasting neural network. The impact strength of the historical data on the market is developed by a random process, where a tendency function and a random Brownian volatility function are applied to describe the behavior of the time strength, and Brownian motion gives the model the effect of random movement while maintaining the original fluctuation. We hope that this study can make some contribution to ANN research and its application.

2. Methodology

2.1. Legendre neural network with a random time strength function

Artificial neural network is one of the technologies that have made great progress in studying the volatility of the stock market. The Legendre neural network with the random time strength function is a parallel distributed processor made up of processing units, which have a natural propensity for storing experiential knowledge and making that knowledge available for use. Major benefits of LNNRT include nonlinearity and adaptability, since LNNRT is capable of detecting and extracting nonlinear relationships and interactions among predictor variables, and LNNRT's inferred patterns and associated estimates of precision do not depend on any assumptions about the distribution of the variables. In the present paper, according to the polynomial approximation theory and the established random time strength function, we develop an improved three-layer multi-input Legendre neural network model, which applies the Legendre orthogonal polynomial as the transfer function of the hidden layer neural cells and uses the weighted sum as its output. The nonlinear identification model on the time series is proposed to predict the change of stock by introducing the well-known back propagation algorithm with the random time strength function, training the data of former samples and adjusting the weights of the network, see [4].

In Fig. 1, we describe a three-layer feed forward LNNRT model. The corresponding structure is p × q × 1, where p is the number of inputs, q is the number of neurons in the hidden layer, and there is one output unit. The network includes the input vector X_t = (x_{1t}, x_{2t}, ..., x_{pt}), t = 1, 2, ..., N, and the output y_{t+1}, and each arrow in the figure symbolizes a parameter in the network. The network is divided into layers. The input layer consists only of the inputs to the network. It is followed by a hidden layer, which consists of some number of neurons, or hidden units, placed in parallel. Each neuron performs a weighted summation of the inputs, which then passes through the transfer function L_n(x), a Legendre polynomial. The input of hidden neuron j is described by

net_j = \sum_{i=1}^{p} w_{ij} x_{it} - θ_j.

The output of hidden neuron j is given by

L_j(net_j) = L_j( \sum_{i=1}^{p} w_{ij} x_{it} - θ_j ).

The network output is formed by another weighted summation of the outputs of the neurons in the hidden layer. This summation on the output is called the output layer. In Fig. 1, there is only one output in the output layer, since this work is a single-output problem. Generally, the number of output neurons equals the number of outputs of the approximation problem. So the output of the Legendre neural network with the random time strength function is given by

y_{t+1} = \sum_{j=0}^{q} c_j L_j( \sum_{i=1}^{p} w_{ij} x_{it} - θ_j ),

where w_{ij} is the weight that connects node i in the input layer to node j in the hidden layer; c_j is the weight that

Fig. 1. Three-layer feed forward network with one output.
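As a concrete illustration, the forward pass of the network in Fig. 1 (hidden unit j applying the order-j Legendre polynomial to its weighted net input, followed by a weighted output sum) can be sketched in Python. The function names and array shapes below are our own illustrative choices, not from the paper:

```python
import numpy as np

def legendre(n, x):
    """L_n(x) via the three-term recursion
    L_{k+1}(x) = ((2k+1) x L_k(x) - k L_{k-1}(x)) / (k+1)."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x          # L_0 and L_1
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def lnnrt_forward(x, W, theta, c):
    """p x q x 1 forward pass: net_j = sum_i w_ij x_i - theta_j,
    hidden output L_j(net_j), network output y = sum_j c_j L_j(net_j)."""
    net = W.T @ x - theta       # one net value per hidden unit j = 0..q
    hidden = np.array([legendre(j, net[j]) for j in range(len(net))])
    return float(c @ hidden)
```

Here W would have shape (p, q+1) and theta and c length q+1, so that hidden unit j uses the order-j polynomial, matching the summation from j = 0 to q above.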

connects node j in the hidden layer to the node in the output layer; and θ_j is the threshold of neuron j.

The forecasting model of the present paper uses the Legendre polynomials, which are based on the Legendre orthogonal basis functions [3,26]. The important properties of Legendre polynomials include: (i) they are orthogonal polynomials; (ii) they arise in numerous problems, especially those involving spheres or spherical coordinates or exhibiting spherical symmetry; (iii) in spherical polar coordinates, the angular dependence is always best handled by spherical harmonics, which are defined in terms of Legendre functions. The Legendre polynomials are denoted by L_n(x), where n is the order and x (-1 < x < 1) is the argument of the polynomial. They constitute a set of orthogonal polynomials as solutions to the differential equation

\frac{d}{dx}[ (1-x^2) \frac{dy}{dx} ] + n(n+1) y = 0.

The zeroth and first order Legendre polynomials are given by L_0(x) = 1 and L_1(x) = x respectively. The higher order polynomials are given as follows:

L_2(x) = (1/2)(3x^2 - 1),  L_3(x) = (1/2)(5x^3 - 3x),  L_4(x) = (1/8)(35x^4 - 30x^2 + 3).

The recursive formula to generate higher order Legendre polynomials is

L_{n+1}(x) = \frac{1}{n+1}[ (2n+1) x L_n(x) - n L_{n-1}(x) ].

And the Rodrigues formula for the Legendre polynomials is

L_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n}(x^2 - 1)^n.

2.2. Learning with the random time strength function

In order to minimize a given cost function, the learning process involves updating the weights of LNNRT. A gradient descent algorithm is used for learning, where the gradient of a cost function with respect to the weights is determined and the weights are incremented by a fraction of the negative gradient on each repetition. The well-known back propagation algorithm is used to update the weights of LNNRT. Considering LNNRT with a single output node, the objective of the learning algorithm is to minimize E_k. We assume that the error of the output is e_{t_n} = d_{t_n} - y_{t_n}, and then the error of sample n is defined as

E(t_n) = (1/2) φ(t_n) (d_{t_n} - y_{t_n})^2,

where d_{t_n} is the actual value, y_{t_n} is the output value at time t_n, t_n is the time of sample n, and φ(t_n) is the random time strength function. Here we assume that investors decide their investment positions by analyzing the historical data on the stock market, that the historical data affect the volatility of the current stock market, and that the historical data carry efficient information depending on their time. We also introduce Brownian motion in order to give the model the effect of random movement while maintaining the original fluctuation. Now we define φ(t_n) as follows:

φ(t_n) = (1/τ) exp{ \int_{t_0}^{t_n} μ(t) dt + \int_{t_0}^{t_n} σ(t) dB(t) },   (1)

where τ > 0 is the time strength coefficient, t_0 is the current time or the time of the newest data in the data set, and t_n is an arbitrary time point in the data set. μ(t) is the drift function (or the trend term), σ(t) is the volatility function, and B(t) is the standard Brownian motion [28]. A Brownian motion is a real-valued, continuous stochastic process {X(t), t ≥ 0} on a probability space (Ω, A, P), with independent and stationary increments. In detail: (a) continuity: the map s ↦ X_s is continuous P-a.s.; (b) independent increments: if s ≤ t, X_t - X_s is independent of F_s = σ(X_u, u ≤ s); (c) stationary increments: if s ≤ t, X_t - X_s and X_{t-s} - X_0 have the same probability law. From this definition we have (see [28]): if {X(t), t ≥ 0} is a Brownian motion, then X_t - X_0 is a normal random variable with mean rt and variance σ^2 t, where r and σ are constant real numbers. A Brownian motion is standard (we denote it by B(t)) if B(0) = 0 P-a.s., E[B(t)] = 0 and E[B(t)^2] = t. The corresponding probability density function is given by f_t(x) = (1/\sqrt{2πt}) e^{-x^2/2t}. The above random time strength function implies that the impact of the historical data on the stock market is a time-variable function. Then the corresponding total error of all the data training sets in the output layer at the k-th repeated network training is defined as

E_k = \sum_{n=1}^{N} E(t_n) = \frac{1}{2} \sum_{n=1}^{N} \frac{1}{τ} exp{ \int_{t_0}^{t_n} μ(t) dt + \int_{t_0}^{t_n} σ(t) dB(t) } (d_{t_n} - y_{t_n})^2.   (2)

On each repetition, an input pattern is applied, the output of the LNNRT is computed and the error E(t_n) is obtained. The error value is used in the back propagation algorithm to minimize the cost function until it reaches a pre-defined minimum value. The goal of the learning algorithm is to minimize the cost function E_k. The gradient of the cost function is given by ΔE_k = ∂E_k/∂W. For the weight nodes in the input layer, the gradient of the connective weight w_{ij} is given by

Δw_{ij} = -η ∂E_k/∂w_{ij} = η e_{t_n} c_j φ(t_n) L'_j(net_j) x_{it_n},

and for the weight nodes in the hidden layer, the gradient of the connective weight c_j is given by

Δc_j = -η ∂E_k/∂c_j = η e_{t_n} φ(t_n) L_j(net_j),

where η is the learning parameter, which is set between 0 and 1, and L'_j(net_j) is the derivative of the activation function of the Legendre neural network with the random time strength function. So the update rules for the weights w_{ij} and c_j are given by

w_{ij}(k+1) = w_{ij}(k) + Δw_{ij}(k) = w_{ij}(k) + η e_{t_n} c_j φ(t_n) L'_j(net_j) x_{it_n},
c_j(k+1) = c_j(k) + Δc_j(k) = c_j(k) + η e_{t_n} φ(t_n) L_j(net_j).

2.3. The training algorithm procedure in the LNNRT model

Note that the training objective of LNNRT is to modify the network weights so as to minimize the error between the network's prediction and the actual target. In Fig. 2, the training algorithm procedures of the improved Legendre neural network are shown, as follows:

Step 1: Train the Legendre neural network with the random time strength function by choosing five kinds of stock prices as the input values in the input layer: daily opening price, daily closing price, daily highest price, daily lowest price and daily trade volume, and one price in the output layer: the closing price of the next trading day. Then choose the p × q × 1 three-layer network model structure, set the network learning parameter η, which is between 0 and 1, and the number of training cycles k.

Step 2: Set the connective weights, and input the training data sets. At the beginning of data processing, set the connective weights w_{ij}(0) and c_j(0) following the uniform distribution on [-1, 1], and let the neural threshold θ_j be 0.

Step 3: Introduce the random time strength function φ(t_n) into the error function E_k. Choose the drift function μ(t) and the

volatility function σ(t); give the transfer function from the input layer to the hidden layer.

Step 4: Establish an error-acceptance model and pre-set the minimum training error ξ. Based on the network training objective E_k = \sum_{n=1}^{N} E(t_n), if E_k is below the pre-set minimum error, go to Step 6; otherwise go to Step 5.

Step 5: Modify the network connective weights: calculate the gradient of the connective weights w_{ij}, Δw_{ij}(k), and the gradient of the connective weights c_j, Δc_j(k). Then modify the weights from each layer to the previous layer, w_{ij}(k+1) or c_j(k+1).

Step 6: In training the network, its parameters are adjusted incrementally until the training data satisfy the desired mapping as well as possible; that is, until y_{t+1} matches the desired value d_{t+1} as closely as possible, up to a maximum number of iterations. Then output the predictive value

y_{t+1} = \sum_{j=0}^{q} c_j L_j( \sum_{i=1}^{p} w_{ij} x_{it} - θ_j ).

Fig. 2. The procedure of the improved Legendre neural network model.

3. Experiment analysis

3.1. Selection and preprocessing of the data

In this paper, four stock index time series have been used for experiments. We select the data of each trading day in a 10-year period (from January 4, 2000 to April 30, 2010) for the Shanghai Stock Exchange (SSE) A Share Index and B Share Index. This period amounts to 2495 trading days, and we also choose the corresponding data of DJI and IXIC for comparison. Recently, some research work has applied neural networks to forecast financial market time series, for example see Azoff [4]. In previous work, a neural network is usually applied to predict the closing stock price of the next trading day from the history data. The stock data of the past trading days, including daily opening price, daily closing price, daily highest price, daily lowest

price and daily trade volume, are very important indicators and contain a large amount of information. So in the present paper, we apply the history indicators as the input of the neural network and the closing price of the next trading day as the output to predict the stock price. Next we study the statistical properties of the returns of the stock indexes. Fig. 3 presents the plots of the time series of log returns of SAI, SBI, DJI and IXIC. We denote the price sequences of SAI, SBI, DJI and IXIC at time t by s(t) (t = 0, 1, 2, ...); then the corresponding formula for the stock logarithmic return is given by

R(t) = ln s(t+1) - ln s(t).

In Fig. 3, we can see that the prices of the indexes fluctuate wildly, and this shows that there is a very high level of noise in the data, which causes difficulty in forecasting. In the pretreatment stage, the collected data should be properly adjusted and normalized in order to reduce the impact of noise in the stock markets. First, the three-sigma rule of mathematical statistics is applied for data preprocessing before forecasting; that is, sample data lying more than 3 standard deviations from the mean are deleted. Second, the input data are pre-processed (or normalized) to between 0 and 1, and passed to the improved Legendre neural network as non-stationary data. The data are normalized as follows:

s'(t) = \frac{s(t) - \min s(t)}{\max s(t) - \min s(t)}.

Similarly, the normalized values of the above-mentioned five kinds of data can also be given.

3.2. Training the improved Legendre neural network

Data sets are divided into two parts, the training set and the testing set. We collect the data of SAI from 2000.1 to 2009.3 as the training set and the data of SAI from 2009.4 to 2010.4 as the testing set. According to the procedures of the three-layer network introduced in Section 2, we chose the 5 × 15 × 1 neural network structure, in which five is the number of neural nodes in the input layer, 15 is the number of neural nodes in the hidden layer, and one is the number of neural nodes in the output layer. The maximum number of training cycles and the learning rate are set to 120 and η = 0.01 respectively, and the pre-set minimum error accuracy is set to 0.001. When using the Legendre neural network with the random time strength function to predict the daily closing price of a stock index, we assume μ(t) (the drift function) and σ(t)

Fig. 3. The plots of log price returns of SAI (a), SBI (b), DJI (c) and IXIC (d).
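The pretreatment described in Section 3.1 (the three-sigma rule, min-max normalization, and the log return R(t)) can be sketched as follows; the function names are our own illustrative choices:

```python
import numpy as np

def three_sigma_filter(s):
    """Drop samples lying more than 3 standard deviations from the mean."""
    s = np.asarray(s, dtype=float)
    return s[np.abs(s - s.mean()) <= 3 * s.std()]

def min_max_normalize(s):
    """s'(t) = (s(t) - min s) / (max s - min s), mapping the series into [0, 1]."""
    s = np.asarray(s, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

def log_returns(s):
    """R(t) = ln s(t+1) - ln s(t)."""
    s = np.asarray(s, dtype=float)
    return np.diff(np.log(s))
```

Each of the five input series (open, close, high, low, volume) would be filtered and normalized this way before being fed to the network.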

(the volatility function) as follows:

μ(t) = \frac{1}{t + a},  σ(t) = \sqrt{ \frac{1}{N-1} \sum_{t=1}^{N} (x_t - \bar{x})^2 },   (3)

where a is the predictive parameter and \bar{x} is the mean of the sample data. The random time strength function is then given by (see (1))

φ(t_n) = \frac{1}{τ} exp{ \int_{t_0}^{t_n} \frac{1}{t+a} dt + \int_{t_0}^{t_n} \sqrt{ \frac{1}{N-1} \sum_{t=1}^{N} (x_t - \bar{x})^2 } dB(t) }.

The cost function of network training is given by (see (2))

E_k = \frac{1}{2} \sum_{n=1}^{N} \frac{1}{τ} exp{ \int_{t_0}^{t_n} \frac{1}{t+a} dt + \int_{t_0}^{t_n} \sqrt{ \frac{1}{N-1} \sum_{t=1}^{N} (x_t - \bar{x})^2 } dB(t) } (d_{t_n} - y_{t_n})^2.

In the following, we consider the LNNRT model by comparing different pair values of (μ, σ), which are selected as (μ(t), σ(t)), (μ(t), 0), (0, σ(t)) and (0, 0) respectively. In Table 1, the training errors on different trading dates for (a) SAI, (b) SBI, (c) DJI and (d) IXIC with the LNNRT model (μ(t), σ(t)) are given. Taking the relative error of SAI for example, it can be seen that the relative errors in the years 2000 and 2007 are larger than those in the years 2008 and 2009, and the errors become smaller as time goes on; this shows the effect of the random time strength function. Furthermore, the gap between the relative error of SAI

Table 1
Comparison of training errors of different data for SAI, SBI, DJI and IXIC with the model LNNRT.

Time          Actual       Predictive    Error
(a) SAI
2000/1/14     1497.260     1574.2643     -0.0514
2002/1/16     1539.820     1580.3856     -0.0263
2007/1/16     2963.864     2883.8650     0.0270
2008/1/16     5552.460     5577.5392     -0.0045
2009/1/15     2015.950     2016.4100     -0.0002
(b) SBI
2000/1/19     38.500       44.3055       -0.1508
2002/1/21     142.020      148.5087      -0.0457
2006/1/20     85.462       84.0018       0.0171
2008/1/18     354.496      351.3777      0.0088
2009/1/19     121.067      121.3296      -0.0022
(c) DJI
2001/11/13    9750.950     9521.8256     0.0235
2002/11/14    8542.130     8374.8577     0.0196
2006/11/14    12218.010    12062.7718    0.0127
2007/11/14    13231.010    13078.2572    0.0115
2008/11/14    8497.310     8444.4129     0.0062
(d) IXIC
2001/9/25     1501.640     1555.9305     -0.0362
2003/9/24     1843.700     1883.8808     -0.0218
2005/9/23     2116.840     2082.6815     0.0161
2007/9/24     2667.950     2644.8227     0.0087
2008/9/24     2155.680     2154.9914     0.0003

Fig. 4. Relative errors of SAI (a), SBI (b), DJI (c) and IXIC (d) with the parameter (μ(t), σ(t)).
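The time-weighted learning that produces these relative-error curves, namely the update rules of Section 2.2 combined with a discretized version of the time strength φ(t_n) with μ(t) = 1/(t+a) and a constant sample volatility as in (3), can be sketched below. This is only an illustrative sketch under our own discretization assumptions (unit time step, one simulated Brownian increment per step, hypothetical names), not the authors' implementation:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def time_strength(t_n, t0, a, sigma, tau=1.0, rng=None):
    """phi(t_n) ~ (1/tau) exp( int_{t0}^{t_n} dt/(t+a) + int_{t0}^{t_n} sigma dB(t) ),
    discretized with unit time steps; each dB increment is drawn as N(0, 1),
    so every call samples one path of the Brownian term (assumes a > 0)."""
    rng = np.random.default_rng() if rng is None else rng
    steps = np.arange(min(t0, t_n), max(t0, t_n))
    sign = 1.0 if t_n >= t0 else -1.0         # integral runs from t0 to t_n
    drift = sign * np.sum(1.0 / (steps + a))
    diffusion = sign * sigma * np.sum(rng.normal(0.0, 1.0, size=len(steps)))
    return np.exp(drift + diffusion) / tau

def train_step(x, d, W, theta, c, phi, eta=0.01):
    """One back-propagation step weighted by phi:
    Delta w_ij = eta e phi c_j L'_j(net_j) x_i,  Delta c_j = eta e phi L_j(net_j).
    Updates W and c in place and returns the weighted error E(t_n)."""
    q = len(c) - 1
    net = W.T @ x - theta
    Lj = np.array([leg.legval(net[j], [0.0] * j + [1.0]) for j in range(q + 1)])
    dLj = np.array([leg.legval(net[j], leg.legder([0.0] * j + [1.0]))
                    for j in range(q + 1)])
    e = d - c @ Lj                            # e_{t_n} = d_{t_n} - y_{t_n}
    W += eta * e * phi * np.outer(x, c * dLj)
    c += eta * e * phi * Lj
    return 0.5 * phi * e ** 2                 # E(t_n)
```

A full run would loop train_step over the training samples, computing phi = time_strength(...) for each sample's date, until E_k falls below the pre-set minimum error, as in the Steps 1-6 procedure.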

and SBI is much larger than the relative errors of DJI and IXIC. So we can conclude that the value of the historical data in the foreign stock markets is greater than that in the Chinese stock markets; this means that the Chinese stock markets fluctuate more sharply, while the foreign stock markets are more stable.

Fig. 4 shows the fluctuation behaviors of the time sequences of relative errors during the years 2000-2009 for the prices of SAI, SBI, DJI and IXIC. In these plots, time zero represents the data farthest from the current date, and a larger t (date) represents a date nearer to the current date. Fig. 4 also indicates that the improved Legendre neural network model can be realized by assigning different weights to the data of different times. The time sequences of relative errors of (a) SAI and (d) IXIC in Fig. 4 also reflect the randomness of the model through the effect of the Brownian motion.

Table 2
Average relative errors of SAI, SBI, DJI and IXIC with different values (μ(t), σ(t)).

(μ(t), σ(t))       SAI      SBI      DJI      IXIC
(a) (μ(t), σ(t))
I                  0.0221   0.0122   0.0064   0.0142
II                 0.0034   0.0016   0.0043   0.0046
III                0.0158   0.0064   0.0058   0.0121
IV                 0.0328   0.0183   0.0127   0.0225
(b) (0, σ(t))
I                  0.0350   0.0274   0.0104   0.0213
II                 0.0203   0.0239   0.0273   0.0162
III                0.0228   0.0258   0.0174   0.0129
IV                 0.0219   0.0299   0.0108   0.0325
(c) (μ(t), 0)
I                  0.0332   0.0257   0.0107   0.0220
II                 0.0204   0.0240   0.0276   0.0167
III                0.0214   0.0247   0.0187   0.0138
IV                 0.0218   0.0284   0.0108   0.0326
(d) (0, 0)
I                  0.0367   0.0276   0.0107   0.0227
II                 0.0203   0.0239   0.0277   0.0167
III                0.0244   0.0262   0.0187   0.0138
IV                 0.0230   0.0301   0.0108   0.0346

In Table 2, the symbol I stands for the training average relative error, II stands for the average relative error of the latest 100 days in the training data sets, III stands for that of the latest 500 days in the training data sets, and IV stands for the average relative error of the first 1000 days in the training data sets. Taking the relative error of SAI with (μ(t), σ(t)) for example, the average relative error is 2.21%, the average relative error of the first 1000 days is 3.28%, the average relative error of the latest 100 days is 0.34%, and the average relative error of the latest 500 days is 1.58%. So from the above

Fig. 5. Comparisons of the predictive values and the actual values for SAI, SBI, DJI and IXIC.
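Comparisons like those plotted in Fig. 5 are scored in Section 3.4 by per-sample relative errors and the Mean Absolute Percentage Error; a minimal sketch, with illustrative names:

```python
import numpy as np

def relative_errors(actual, predicted):
    """Per-sample relative error |d_i - y_i| / d_i (as plotted in Fig. 4)."""
    d = np.asarray(actual, dtype=float)
    y = np.asarray(predicted, dtype=float)
    return np.abs(d - y) / d

def mape(actual, predicted):
    """MAPE = (100/M) * sum_i |d_i - Y_i| / d_i."""
    return 100.0 * relative_errors(actual, predicted).mean()
```

Averaging relative_errors over the latest 100 or 500 days, or the first 1000 days, gives the quantities labelled II, III and IV in Table 2.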



computations, it is exhibited that the latest data are more valuable than the historical data of the past in stock market prediction. In Table 2, the tests of the four market indexes' errors for the different values (μ(t), σ(t)) are also given. When the pair value is (μ(t), σ(t)), the errors are the smallest values respectively; for the pair value (0, 0), the errors are the largest values respectively. Taking the relative errors of SAI with (μ(t), σ(t)) for example, the average relative error is 2.21% by (3); for the pair value (0, σ(t)), the average relative error is 3.35%; for the pair value (μ(t), 0), the average relative error is 3.32%; and for the pair value (0, 0), the average relative error is 3.67%. This shows that the drift function and the volatility function (μ(t), σ(t)) defined in (3) can reduce the predictive errors of the market indexes, so the timely effectiveness and the randomness of the random time strength function φ(t_n) are beneficial for the prediction of the improved Legendre neural network.

3.3. Predictive results

In this part, according to the introduced improved Legendre neural network, the comparisons between the predictive values and the actual values of SAI, SBI, DJI and IXIC are displayed for the pair value (μ(t), σ(t)) in Fig. 5.

In order to test the effect of the random time strength function, we take the pair of the drift parameter and the volatility parameter to be (μ(t), σ(t)), (μ(t), 0), (0, σ(t)) and (0, 0) (see (3)). If σ(t) = 0, the random time strength function is

φ(t_n) = \frac{1}{τ} exp{ \int_{t_0}^{t_n} \frac{1}{t+a} dt },

and the model has no effect of randomness but only the effect of timely effectiveness. If μ(t) = 0, the random time strength function of the model becomes

φ(t_n) = \frac{1}{τ} exp{ \int_{t_0}^{t_n} \sqrt{ \frac{1}{N-1} \sum_{t=1}^{N} (x_t - \bar{x})^2 } dB(t) },

and the model has no effect of timely effectiveness but only the effect of randomness. In Table 3, the predictive values and the relative errors of SAI for a week are given for the different values μ(t) and σ(t). It shows that the relative error is the smallest when the pair value is (μ(t), σ(t)) (almost below 1%), and the relative error is the largest for the pair value (0, 0) (from 1% to 2%). So this shows that the random time strength function of the financial predictive model is propitious for increasing the precision of prediction.

Table 3
Predictive values and errors of SAI for different values (μ(t), σ(t)).

Time         Actual      Predictive   Error
(a) (μ(t), σ(t))
2010/4/12    3280.900    3276.9066    0.0012
2010/4/13    3314.608    3273.6627    0.0124
2010/4/14    3319.775    3297.5000    0.0067
2010/4/15    3318.486    3300.6055    0.0054
2010/4/16    3282.011    3302.1388    -0.0061
(b) (0, σ(t))
2010/4/12    3280.900    3272.9811    0.0024
2010/4/13    3314.608    3270.3739    0.0133
2010/4/14    3319.775    3294.5387    0.0076
2010/4/15    3318.486    3296.5602    0.0066
2010/4/16    3282.011    3298.1478    -0.0049
(c) (μ(t), 0)
2010/4/12    3280.900    3268.7991    0.0037
2010/4/13    3314.608    3267.0836    0.0143
2010/4/14    3319.775    3291.4691    0.0085
2010/4/15    3318.486    3292.3202    0.0079
2010/4/16    3282.011    3293.8971    -0.0036
(d) (0, 0)
2010/4/12    3280.900    3256.0333    0.0076
2010/4/13    3314.608    3255.9392    0.0177
2010/4/14    3319.775    3280.9166    0.0117
2010/4/15    3318.486    3279.2383    0.0118
2010/4/16    3282.011    3280.8920    0.0003

3.4. Performance analysis of LNNRT

For the purpose of evaluating LNNRT forecasting accuracy, we compare the outputs of this method with different values of μ(t) and σ(t) for the indexes SAI, SBI, DJI and IXIC. We perform this task with a common evaluation statistic called the Mean Absolute Percentage Error (MAPE):

MAPE = \frac{100}{M} \sum_{i=1}^{M} \frac{|d_i - Y_i|}{d_i},

where d_i is the actual value, Y_i is the forecasted value of the i-th test datum obtained from LNNRT, and M is the number of test data. The comparisons of the LNNRT evaluation with different Legendre neural networks are shown in Table 4.

Table 4
LNNRT evaluation vs. different Legendre neural networks based on MAPE values.

Method                SAI      SBI      DJI      IXIC
LNNRT (μ(t), σ(t))    1.2721   1.5441   1.6852   2.2754
LNNR (0, σ(t))        1.3828   1.6462   1.8431   2.3777
LNNT (μ(t), 0)        1.4661   1.5051   1.8688   2.4068
LNN (0, 0)            1.6007   1.7146   1.8885   2.4968

Regarding Table 4, our proposed approach has improved the forecasting accuracy of the market index prices for all four cases. Namely, the LNNRT model outperforms the LNN by the MAPE evaluation, so this shows that LNNRT can be considered as a promising alternative for stock price prediction problems.

Next, using linear regression analysis, we compare the predictive values of the improved Legendre neural network model with the actual values of SAI, SBI, DJI and IXIC in Fig. 6. It is known that linear regression can be used to fit a predictive model to an observed data set of Y and X values. Through the regression analysis, the linear equations for SAI (a), SBI (b), DJI (c) and IXIC (d) are obtained. The linear equation for SAI (a) is

Y_{predictive value} = 0.9540 X + 121.6026

with correlation coefficient R = 0.9984. The linear equation for SBI (b) is

Y_{predictive value} = 0.9662 X + 4.7104

with correlation coefficient R = 0.9985. The linear equation for DJI (c) is

Y_{predictive value} = 0.9671 X + 317.0764

with correlation coefficient R = 0.9946. The linear equation for IXIC (d) is

Y_{predictive value} = 0.9410 X + 123.7288

with correlation coefficient R = 0.9957.

4. Conclusion and future research

In the present paper, according to the polynomial approximation theory, we model a three-layer multi-input Legendre neural network with a random time strength function (LNNRT), which
20 F. Liu, J. Wang / Neurocomputing 83 (2012) 1221

Predictive value vs. Actual value, R=0.9984 Predictive value vs. Actual value, R=0.9985
7000 400

Predictive value points


350 Predictive value points
Best Linear Fit
6000 Best Linear Fit

Predictive value Y, Best Linear Fit:


Predictive valueY, Best Linear Fit:

300
Y=(0.9540)X+(121.6026)

Y=(0.9662)X+(4.7104)
5000
250

4000 200

150
3000

100
2000
50

1000 0
1000 2000 3000 4000 5000 6000 7000 0 50 100 150 200 250 300 350 400
Actual value of SAI Actual value of SBI

x 104 Predictive value vs. Actual value, R=0.9946 Predictive value vs. Actual value, R=0.9957
1.5 5000
Predictive value points
Predictive value points
1.4
Best Linear Fit 4500 Best Linear Fit
Predictive value Y, Best Linear Fit:

Predictive value Y, Best Linear Fit:


1.3
4000
Y=(0.9410)X+(123.7288)
Y=(0.9671)X+(317.0764)

1.2
3500
1.1
3000
1
2500
0.9

2000
0.8

0.7 1500

0.6 1000
0.6 0.7 0.8 0.9 1 1.1 1.2 1.3 1.4 1.5 1000 1500 2000 2500 3000 3500 4000 4500 5000 5500
Actual value of DJI x 104 Actual value of IXIC

Fig. 6. Regression of the predictive values and the actual values.

has the timely effectiveness of mt and the randomness of B(t), Acknowledgments


and is introduced in the algorithm of Legendre neural network to
modify the networks weights. In this model, we think that the The authors were supported in part by National Natural
data in the data training set should be time-variant, reecting the Science Foundation of China Grant No. 70771006 and No.
different behavior patterns of stock market at different times. If 10971010, BJTU Foundation No. S11M00010.
all the data are used to train the network equivalently, the
network system may not be consistent with the behavior of stock
References
market. So this work may be a useful approach to improve
Legendre neural network and nancial market forecasting.
[1] A. Abhyankar, L.S. Copeland, W. Wang, Uncovering nonlinear structure in real
Further, we test the predictive results of market indexes SAI, time stock market indexes: the SP500, the DAX, the Nikkei 225, and the FTSE-
SBI, DJI and IXIC by using LNNRT model, discuss the impact of the 100, J. Bus. Econ. Statist. 15 (1997) 114.
random time strength function for different pair values mt, st, [2] A. Abraham, B. Nath, P.K. Mahanti, Hybrid intelligent systems for stock
market analysis, in: V.N. Alexandrov, J. Dongarra, B.A. Julianno, R.S. Renner,
and forecast the accuracy of the model by MAPE and the linear C.J.K. Tan (Eds.), Computational Science, Springer-Verlag, Germany, 2001,
regression methods. The empirical research shows that the pp. 337345.
proposed random time strength function in this nancial pre- [3] R.E. Attar, Special Functions and Orthogonal Polynomials, Lulu Press,
Morrisvelle, NC, 2006.
dictive model is propitious for increasing the precision of predic- [4] E.M. Azoff, Neural Network Time Series Forecasting of Financial Market,
tion, and the volatility of the nancial model much approaches to Wiley, New York, 1994.
the stock market movement. In the future research, the random [5] F. Castiglione, Forecasting price increments using an articial neural network,
Adv. Complex Syst. 4 (2001) 4556.
time strength function which is introduced in this work should be
[6] T. Chenoweth, Z. Obradovic, A multi-component nonlinear prediction system
further improved for different nancial markets, for example we for the S&P500 Index, Neurocomputing 10 (1996) 275290.
hope that Levy random process can be considered as the random [7] H. Demuth, M. Beale, Neural Network Toolbox: For Use with MATLAB, 5th ed.,
time strength function. Different selections of the drift function The Math Works, Inc, Natick, MA, 1998.
[8] E. Egrioglu, C.H. Aladag, U. Yolcu, V.R. Uslu, M.A. Basaran, A new approach
mt and the volatility function st are also interesting in our based on articial neural networks for high order multivariate fuzzy time
future work. series, Expert Syst. Appl. 36 (2009) 1058910594.
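For readers unfamiliar with the functional expansion that underlies a Legendre neural network, the following minimal sketch (our illustration, not the authors' code) expands a scaled input into Legendre polynomial terms, which serve as the network's expanded input features:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_expand(x, order):
    """Expand each input value (scaled into [-1, 1]) into the Legendre
    polynomial terms P_0(x), ..., P_order(x) - the functional expansion
    used by Legendre-type neural networks. Illustrative sketch only."""
    x = np.asarray(x, dtype=float)
    # legvander returns the pseudo-Vandermonde matrix whose columns
    # are P_0(x), P_1(x), ..., P_order(x).
    return legendre.legvander(x, order)

# P_0(x) = 1, P_1(x) = x, P_2(x) = (3x^2 - 1)/2, ...
features = legendre_expand([0.0, 0.5, 1.0], 2)
print(features)
```

Feeding these expanded terms to a single linear output layer gives the low-complexity structure that makes Legendre networks attractive for time series forecasting.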
Fajiang Liu is a Researcher in the Institute of Financial Mathematics and Financial Engineering at Beijing Jiaotong University, China. He received his B.Sc. degree in Mathematics at Beijing Jiaotong University in 2010. He is currently working in the Department of Mathematics at Beijing Jiaotong University as a graduate student. His research interests include Stochastic Systems, Stochastic Processes, Artificial Intelligence, Neural Networks, Modeling and Computer Simulation, Probability Theory and Statistics, Financial Mathematics, and Financial Engineering.

Jun Wang is a Full Professor in the College of Science, and Director of the Institute of Financial Mathematics and Financial Engineering at Beijing Jiaotong University, China. He received his B.Sc. degree in Mathematics at Beijing Normal University in 1984, and his Ph.D. degree in Probability and Statistics at Kobe University, Japan, in 2000. His research interests include Large Scale Interacting Systems, Stochastic Systems, Stochastic Processes, Artificial Intelligence, Neural Networks, Modeling and Computer Simulation, Probability Theory and Statistics, Financial Mathematics, and Financial Engineering.
