Abstract—Obtaining an accurate prediction of a stock index significantly helps decision makers take correct actions to develop a better economy. The inability to predict fluctuations of the stock market might cause serious profit losses. The challenge is that we always deal with a dynamic market which is influenced by many factors, including political, financial and reserve occasions. Thus, stable, robust and adaptive approaches which can provide models with the capability to accurately predict a stock index are urgently needed. In this paper, we explore the use of Artificial Neural Networks (ANNs) and Support Vector Machines (SVM) to build prediction models for the S&P 500 stock index. We also show how traditional models such as multiple linear regression (MLR) behave in this case. The developed models are evaluated and compared based on a number of evaluation criteria.

Index Terms—Stock Market Prediction; S&P 500; Regression; Artificial Neural Networks; Support Vector Machines.

I. INTRODUCTION

Understanding the nature of the relationships between financial markets and a country's economy is one of the major components of any financial decision-making system [1]-[3]. In the past few decades, stock market prediction became one of the major fields of research due to its wide domain of financial applications. The stock market research field developed to be dynamic, non-linear, complicated, non-parametric, and chaotic in nature [4]. Much research focuses on improving the quality of index prediction using many traditional and innovative techniques, since it was found that significant profit can be achieved even with a slight improvement in the prediction, the volume of trading in stock markets being always huge. Thus, financial time series forecasting was explored heavily in the past. Such series have shown many characteristics which make them hard to forecast, owing to the need for traditional statistical methods to solve the parameter estimation problems. According to the research developed in this field, we can classify the techniques used to solve stock market prediction problems into two folds:

Econometric Models: These are statistically based approaches such as linear regression, auto-regression and auto-regression moving average (ARMA) [5], [6]. There are a number of assumptions that need to be considered while using these models, such as linearity and stationarity of the financial time-series data. Such non-realistic assumptions can degrade the quality of the prediction results [7], [8].

Soft Computing based Models: Soft computing is a term that covers artificial intelligence techniques which mimic biological processes. These techniques include Artificial Neural Networks (ANN) [9], [10], Fuzzy Logic (FL) [11], Support Vector Machines (SVM) [12], Particle Swarm Optimization (PSO) [13] and many others.

ANNs are known to be one of the most successfully developed methods, widely used in solving many prediction problems in a diversity of applications [14]-[18]. ANNs were used to solve a variety of problems in financial time series forecasting. For example, prediction of stock price movement was explored in [19], where the authors provided two models for the daily Istanbul Stock Exchange (ISE) National 100 Index using ANN and SVM. Another type of ANN, the radial basis function (RBF) neural network, was used to forecast the stock index of the Shanghai Stock Exchange [20]. In [21], ANNs were trained with stock data from the NASDAQ, DJIA and STI indices. The reported results indicated that augmenting ANN models with trading volumes can improve forecasting performance in both medium- and long-term horizons. A comparison between SVM and backpropagation (BP) ANN in forecasting six major Asian stock markets was reported in [22]. Other soft computing techniques such as Fuzzy Logic (FL) have been used to solve many stock market forecasting problems [23], [24].

Evolutionary computation was also explored to solve the prediction problem for the S&P 500 stock index. Genetic Algorithms (GAs) were used to simultaneously optimize all of a radial basis function (RBF) network's parameters such that an efficient time-series model is designed and used for business forecasting applications [25]. In [26], the authors provided a new prediction model for the S&P 500 using Multigene Symbolic Regression Genetic Programming (GP); Multigene GP showed more robust results than ANN, especially in the validation/testing case.
(IJARAI) International Journal of Advanced Research in Artificial Intelligence, Vol. 4, No. 7, 2015
www.ijarai.thesai.org
Equation 5 is called the least squares (LS) normal equations. The solution of these normal equations produces the least squares estimates for a and b.

B. Multiple Linear Regression

The simple linear model of Equation 2 can be expanded to a multivariate system of equations as follows:

    y = a_1 x_1 + a_2 x_2 + \dots + a_n x_n    (6)

where x_j is the j-th independent variable. In this case, we need to use LS estimation to compute the optimal values of the parameters a_1, \dots, a_n. Thus, we have to minimize the optimization function L, which in this case can be presented as:

    L = \sum_{i=1}^{n} \epsilon_i^2 = \sum_{i=1}^{n} (y_i - a_1 x_1 - a_2 x_2 - \dots - a_n x_n)^2    (7)

To get the optimal values of the parameters a_1, \dots, a_n, we have to differentiate L with respect to each parameter and set the derivatives to zero:

    \frac{\partial L}{\partial a_1} = \frac{\partial L}{\partial a_2} = \dots = \frac{\partial L}{\partial a_n} = 0    (8)

Solving the set of Equations 8, we can produce the optimal values of the model parameters and solve the multiple regression problem. This solution is likely to be biased by the available measurements: with a large number of observations, the computed parameter estimates will be more robust, while the technique provides poor results when the observations are few in number.

IV. ARTIFICIAL NEURAL NETWORKS

ANNs are mathematical models inspired by the understanding of some ideas and aspects of biological neural systems such as the human brain. An ANN may be considered a data-processing technique that maps, or relates, some type of input stream of information to an output stream of processing. Variations of ANNs can be used to perform classification, pattern recognition and predictive tasks [15], [19], [20], [22], [30].

Neural networks have become a very important method for stock market prediction because of their ability to deal with uncertainty and insufficient data sets which change rapidly in a very short period of time. In the Feedforward (FF) Multilayer Perceptron (MLP), one of the most common ANN systems, neurons are organized in layers. Each layer consists of a number of processing elements called neurons, each of which contains a summation function and an activation function. The summation function is given by Equation 9, and the activation function can be a type of sigmoid function as given in Equation 10.

    S = \sum_{i=0}^{n} w_i x_i    (9)

    \phi(S) = \frac{1}{1 + e^{-S}}    (10)

Training examples are fed to the network via the input layer, which is connected to one or more hidden layers. Information processing takes place in the hidden layers via the connection weights. The hidden layers are connected to an output layer whose neurons most likely have a linear sigmoid function. A learning algorithm such as BP is used to adjust the ANN weights so as to minimize the error difference between the actual (i.e. desired) output and the ANN output [32]-[34].

There are a number of tuning parameters that should be designated before an ANN can learn a problem. They include the number of hidden layers, the type of sigmoid function for the neurons, and the adopted learning algorithm.

V. SUPPORT VECTOR MACHINES

The support vector machine is a powerful supervised learning model for prediction and classification. SVM was first introduced by Vladimir Vapnik and his co-workers at AT&T Bell Laboratories [35]. The basic idea of SVM is to map the training data into a higher-dimensional space using a nonlinear mapping function and then perform linear regression in the higher-dimensional space in order to separate the data [36]. Data mapping is performed using a predetermined kernel function, and data separation is done by finding the optimal hyperplane (supported by the vectors with the maximum margin from the separated classes). Figure 2 illustrates the idea of the optimal hyperplane in SVM that separates two classes: in the left part of the figure, lines separate the data but with small margins, while on the right an optimal line separates the data with the maximum margin.

A. Learning Process in SVM

Training an SVM can be described as follows. Suppose we have a data set \{x_i, y_i\}_{i=1,\dots,n}, where the input vector x_i \in \mathbb{R}^d and y_i \in \mathbb{R} is the actual value. The modeling objective of SVM is to find the linear decision function represented in the following equation:

    f(x) = \langle w, \phi(x) \rangle + b    (11)

where w and b are the weight vector and a constant, respectively, which have to be estimated from the data set, and \phi is a nonlinear mapping function. This regression problem can be formulated as minimizing the following regularized risk function:

    R(C) = \frac{C}{n} \sum_{i=1}^{n} L_\epsilon(f(x_i), y_i) + \frac{1}{2} \|w\|^2    (12)

where L_\epsilon(f(x_i), y_i) is known as the \epsilon-insensitive loss function, given by the following equation:

    L_\epsilon(f(x), y) = \begin{cases} |f(x) - y| - \epsilon & |f(x) - y| \ge \epsilon \\ 0 & \text{otherwise} \end{cases}    (13)

To measure the degree of misclassification and achieve an acceptable degree of error, we use slack variables \xi_i and \xi_i^*, as shown in Figure 3.
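As a minimal sketch (our own Python, not the paper's code), the \epsilon-insensitive loss of Equation 13 can be written directly: deviations inside the \epsilon-tube cost nothing, and larger deviations cost only the excess beyond \epsilon.

```python
# Sketch of the epsilon-insensitive loss in Equation 13.
# Deviations with |f(x) - y| < eps cost nothing; otherwise the
# loss is the excess |f(x) - y| - eps. Names here are ours.
def eps_insensitive_loss(f_x, y, eps):
    """L_eps(f(x), y) = |f(x) - y| - eps if |f(x) - y| >= eps, else 0."""
    d = abs(f_x - y)
    return d - eps if d >= eps else 0.0

print(eps_insensitive_loss(1.05, 1.0, 0.1))  # 0.0 -- inside the tube
print(eps_insensitive_loss(1.30, 1.0, 0.1))  # ~0.2 -- excess beyond eps
```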
This addition makes the problem a constrained minimum optimization problem (see Equation 14).

    \min \; R(w, \xi_i) = \frac{1}{2} \|w\|^2 + C \sum_{i=1}^{n} (\xi_i + \xi_i^*)    (14)

Subject to:

    y_i - \langle w, x_i \rangle - b \le \epsilon + \xi_i
    \langle w, x_i \rangle + b - y_i \le \epsilon + \xi_i^*    (15)
    \xi_i, \xi_i^* \ge 0

where C is a regularization constant greater than zero; it thus performs a balance between the training error and model flatness. C represents a penalty for prediction errors that are greater than \epsilon. \xi_i and \xi_i^* are slack variables that form the distance from the actual values to the corresponding boundary values of \epsilon. The objective of SVM is to minimize \xi_i, \xi_i^* and \|w\|^2.

The above optimization with constraints can be converted by means of Lagrangian multipliers to a quadratic programming problem. Therefore, the form of the solution can be given by the following equation:

    f(x) = \sum_{i=1}^{n} (\alpha_i - \alpha_i^*) K(x_i, x) + b    (16)

    0 \le \alpha_i \le C, \quad 0 \le \alpha_i^* \le C, \quad i = 1, \dots, n    (17)

K(\cdot) is the kernel function, and its value is an inner product of two vectors x_i and x_j in the feature space \phi(x_i) and \phi(x_j), satisfying Mercer's condition. Therefore,

    K(x_i, x_j) = \phi(x_i) \cdot \phi(x_j)    (18)

Some of the most common kernel functions used in the literature are shown in Table I.

TABLE I. COMMON SVM KERNEL FUNCTIONS

In general, SVMs have many advantages over classical classification approaches such as artificial neural networks, decision trees and others. These advantages include good performance in high-dimensional spaces, and the fact that the support vectors rely on a small subset of the training data, which gives SVM a great computational advantage.

VI. EVALUATION CRITERIA

In order to assess the performance of the developed stock market prediction models, a number of evaluation criteria are used. These criteria are applied to measure how close the real values are to the values predicted by the developed models. They include the Mean Absolute Error (MAE), the Root Mean Square Error (RMSE) and the correlation coefficient R, given in Equations 19, 20 and 21, respectively.
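To make Equation 18 concrete, here is a small sketch (ours, not the paper's code) of one of the common Table I kernels, the RBF kernel K(x, z) = exp(-gamma * ||x - z||^2), together with the symmetry property a Mercer kernel must satisfy.

```python
# Sketch of an RBF kernel evaluation (one of the common kernels in
# Table I); gamma is a shape parameter, names are ours.
import math

def rbf_kernel(x, z, gamma=0.01):
    """K(x, z) = exp(-gamma * ||x - z||^2)."""
    sq_dist = sum((xi - zi) ** 2 for xi, zi in zip(x, z))
    return math.exp(-gamma * sq_dist)

x, z = [1.0, 2.0], [2.0, 0.0]
# A valid kernel is symmetric: K(x, z) == K(z, x).
print(rbf_kernel(x, z), rbf_kernel(z, x))
print(rbf_kernel(x, x))  # 1.0 -- zero distance gives the maximum
```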
    MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|    (19)

    RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2}    (20)

    R = \frac{\sum_{i=1}^{n} (y_i - \bar{y})(\hat{y}_i - \bar{\hat{y}})}{\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2 \; \sum_{i=1}^{n} (\hat{y}_i - \bar{\hat{y}})^2}}    (21)

where \hat{y}_i is the predicted value, \bar{y} and \bar{\hat{y}} are the means of the actual and predicted values, and n is the number of measurements.

The developed models use a set of 27 potential financial and economic features to predict the S&P 500 index in the next week. Some of these features were used in previous studies [30]. The list, the description, and the sources of the potential features are given in Table III. The developed multiple regression model relates these features to the index value as:

    y = a_0 + \sum_{i=1}^{27} a_i x_i    (22)
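The LS machinery of Section III can be applied to fit a model of this linear form. As an illustrative sketch (our own Python, not the paper's code), here is LS estimation for a two-feature instance of the model y = a_1 x_1 + a_2 x_2 of Equation 6: form the normal equations X^T X a = X^T y and solve the resulting 2x2 system by Cramer's rule.

```python
# Sketch of least-squares estimation for a two-feature linear model
# y = a1*x1 + a2*x2 (a small instance of Equation 6); names are ours.
def fit_two_feature_ls(X, y):
    """Solve the 2x2 normal equations (X^T X) a = X^T y by Cramer's rule."""
    s11 = sum(r[0] * r[0] for r in X)
    s12 = sum(r[0] * r[1] for r in X)
    s22 = sum(r[1] * r[1] for r in X)
    c1 = sum(r[0] * t for r, t in zip(X, y))
    c2 = sum(r[1] * t for r, t in zip(X, y))
    det = s11 * s22 - s12 * s12
    a1 = (c1 * s22 - c2 * s12) / det
    a2 = (s11 * c2 - s12 * c1) / det
    return a1, a2

# Data generated from y = 2*x1 + 3*x2; the LS estimates recover (2, 3).
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
y = [2 * x1 + 3 * x2 for x1, x2 in X]
print(fit_two_feature_ls(X, y))  # (2.0, 3.0)
```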
TABLE II. A REGRESSION MODEL WITH INPUTS: x_1, \dots, x_{27}

TABLE III. THE 27 POTENTIAL INFLUENTIAL FEATURES OF THE S&P 500 INDEX [30]

C. Developed ANN Model

The proposed architecture of the MLP network consists of three layers with a single hidden layer. The input layer of our neural network model has 27 input nodes, while the output layer consists of only one node that gives the predicted next-week value. Empirically, we found that 20 neurons in the hidden layer achieved the best performance. The BP algorithm is used to train the MLP and update its weights. Table IV shows the settings used for the MLP.

TABLE IV. THE SETTINGS OF THE MLP
Maximum number of epochs: 500
Number of hidden layers: 1
Number of neurons in hidden layer: 20
Learning rate: 0.5
Momentum: 0.2

Figure 7 and Figure 8 show the actual and predicted stock prices for the training and testing cases of the developed ANN. The scatter plot for the developed ANN model is shown in Figure 9.

D. Developed SVM Model

An SVM with an RBF kernel is used to develop the S&P 500 index model. The RBF kernel has many advantages, such as the ability to map the training data non-linearly and the ease of implementation [37]-[39]. The values of the parameter C and the RBF kernel parameter have a high influence on the accuracy of the SVM model; therefore, we used grid search to obtain these values. It was found that the best performance is obtained with C = 100 and a kernel parameter value of 0.01. Figure 10 and Figure 11 show the actual and predicted stock prices for the training and testing cases of the developed SVM model. The scatter plot for the developed SVM model is shown in Figure 12.
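The grid search described above can be sketched as an exhaustive sweep over candidate (C, gamma) pairs, keeping the pair with the lowest validation error. The sketch below is ours: `evaluate` is a hypothetical stand-in for training an SVM at the given setting and measuring its validation error, and the toy objective is only there to exercise the loop.

```python
# Hypothetical sketch of hyper-parameter grid search for the SVM's C
# and RBF kernel parameter (called gamma here); `evaluate` stands in
# for training a model and scoring it on validation data.
from itertools import product

def grid_search(evaluate, C_grid, gamma_grid):
    """Return the (C, gamma) pair with the lowest validation error."""
    best_params, best_err = None, float("inf")
    for C, gamma in product(C_grid, gamma_grid):
        err = evaluate(C, gamma)
        if err < best_err:
            best_params, best_err = (C, gamma), err
    return best_params, best_err

# Toy objective whose minimum sits at C=100, gamma=0.01 -- the values
# reported for the developed model.
toy = lambda C, g: abs(C - 100) / 100 + abs(g - 0.01)
params, err = grid_search(toy, [1, 10, 100, 1000], [0.001, 0.01, 0.1, 1.0])
print(params)  # (100, 0.01)
```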
[Figures 7, 8, 10 and 11 plot the actual and predicted S&P 500 index values and the corresponding error differences over the training period (100 days) and the testing period (45 days).]

Fig. 7. ANN: Actual and Estimated S&P 500 Index values in Training Case
Fig. 8. ANN: Actual and Estimated S&P 500 Index values in Testing Case
Fig. 10. SVM: Actual and Estimated S&P 500 Index values in Training Case
Fig. 11. SVM: Actual and Estimated S&P 500 Index values in Testing Case

[Scatter plots of predicted versus actual values: ANN model, R = 0.9989 (training) and R = 0.99012 (testing); SVM model, R = 0.99952 (training) and R = 0.99283 (testing).]
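The reported correlation values are instances of Equation 21. As a minimal sketch (our own Python, not the paper's code), the three criteria of Section VI can be computed as follows; the toy actual/predicted series here is purely illustrative.

```python
# Sketch of the evaluation criteria of Equations 19-21: MAE, RMSE and
# the correlation coefficient R. Function names and data are ours.
import math

def mae(y, yhat):
    return sum(abs(a - p) for a, p in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(y, yhat)) / len(y))

def corr(y, yhat):
    my, mp = sum(y) / len(y), sum(yhat) / len(yhat)
    num = sum((a - my) * (p - mp) for a, p in zip(y, yhat))
    den = math.sqrt(sum((a - my) ** 2 for a in y)
                    * sum((p - mp) ** 2 for p in yhat))
    return num / den

y = [100.0, 110.0, 120.0, 130.0]       # illustrative actual values
yhat = [101.0, 108.0, 121.0, 131.0]    # illustrative predictions
print(mae(y, yhat), rmse(y, yhat), corr(y, yhat))
```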
E. Comments on the Results

The calculated evaluation criteria of the regression, MLP and SVM models for the training and testing cases are shown in Table V. Based on these results, it can be noticed that SVM outperformed the MLP and MLR models in both the training and testing cases. SVMs have many advantages, such as the use of various kernels, which allows the algorithm to suit many classification problems, and SVMs are more likely to avoid the problem of falling into a local minimum.

VIII. CONCLUSIONS AND FUTURE WORK

In this paper, we explored the use of MLP and SVM to develop models for predicting the S&P 500 stock market index. 27 potential financial and economic variables which impact the stock movement were adopted to build a relationship between the stock index and these variables. The basis for choosing these variables was their substantial impact on the course of the S&P 500 index. The data set was sampled on a weekly basis. The developed SVM model with an RBF kernel provided good prediction capabilities with respect to the regression and ANN models. The results were validated using a number of evaluation criteria. Future research shall focus on exploring other soft computing techniques to solve stock market prediction problems.

REFERENCES

[1] S. Hoti, M. McAleer, and L. L. Pauwels, "Multivariate volatility in environmental finance," Math. Comput. Simul., vol. 78, no. 2-3, pp. 189-199, Jul. 2008.
[2] Q. Wen, Z. Yang, Y. Song, and P. Jia, "Automatic stock decision support system based on box theory and SVM algorithm," Expert Syst. Appl., vol. 37, no. 2, pp. 1015-1022, Mar. 2010.
[3] M. Kampouridis, A. Alsheddy, and E. Tsang, "On the investigation of hyper-heuristics on a financial forecasting problem," Annals of Mathematics and Artificial Intelligence, vol. 68, no. 4, pp. 225-246, Aug. 2013.
[4] T. Z. Tan, C. Quek, and G. S. Ng, "Brain-inspired genetic complementary learning for stock market prediction," in Proc. IEEE Congress on Evolutionary Computation (CEC 2005), Edinburgh, UK, 2005, pp. 2653-2660.
[5] A. C. Harvey and P. H. J. Todd, "Forecasting economic time series with structural and Box-Jenkins models: A case study," Journal of Business & Economic Statistics, vol. 1, no. 4, pp. 299-307, 1983.
[6] Y. B. Wijaya, S. Kom, and T. A. Napitupulu, "Stock price prediction: Comparison of ARIMA and artificial neural network methods - an Indonesia stocks case," in Proc. 2nd Int. Conf. on Advances in Computing, Control, and Telecommunication Technologies (ACT '10), Washington, DC, USA: IEEE Computer Society, 2010, pp. 176-179.
[7] L. Yu, S. Wang, and K. K. Lai, "A neural-network-based nonlinear metamodeling approach to financial time series forecasting," Appl. Soft Comput., vol. 9, no. 2, pp. 563-574, Mar. 2009.
[8] S. Walczak, "An empirical analysis of data requirements for financial forecasting with neural networks," J. Manage. Inf. Syst., vol. 17, no. 4, pp. 203-222, Mar. 2001.
[9] M.-D. Cubiles-de-la-Vega, R. Pino-Mejías, A. Pascual-Acosta, and J. Muñoz-García, "Building neural network forecasting models from time series ARIMA models: A procedure and a comparative analysis," Intell. Data Anal., vol. 6, no. 1, pp. 53-65, Jan. 2002.
[10] R. Majhi, G. Panda, and G. Sahoo, "Efficient prediction of exchange rates with low complexity artificial neural network models," Expert Syst. Appl., vol. 36, no. 1, pp. 181-189, Jan. 2009.
[11] M. R. Hassan, "A combination of hidden Markov model and fuzzy model for stock market forecasting," Neurocomputing, vol. 72, no. 16-18, pp. 3439-3446, 2009.
[12] W. Huang, Y. Nakamori, and S. Wang, "Forecasting stock market movement direction with support vector machine," Computers & Operations Research, vol. 32, no. 10, pp. 2513-2522, 2005.
[13] R. Majhi, G. Panda, G. Sahoo, A. Panda, and A. Choubey, "Prediction of S&P 500 and DJIA stock indices using particle swarm optimization technique," in Proc. IEEE World Congress on Computational Intelligence, Congress on Evolutionary Computation (CEC 2008), 2008, pp. 1276-1282.
[14] Y. Zhang and L. Wu, "Stock market prediction of S&P 500 via combination of improved BCO approach and BP neural network," Expert Syst. Appl., vol. 36, no. 5, pp. 8849-8854, 2009.
[15] S. O. Olatunji, M. Al-Ahmadi, M. Elshafei, and Y. A. Fallatah, "Saudi Arabia stock prices forecasting using artificial neural networks," pp. 81-86, 2011.
[16] T.-S. Chang, "A comparative study of artificial neural networks, and decision trees for digital game content stocks price prediction," Expert Syst. Appl., vol. 38, no. 12, pp. 14846-14851, Nov. 2011.
[17] E. Guresen, G. Kayakutlu, and T. U. Daim, "Using artificial neural network models in stock market index prediction," Expert Syst. Appl., vol. 38, no. 8, pp. 10389-10397, Aug. 2011.
[18] P.-C. Chang, D.-D. Wang, and C.-L. Zhou, "A novel model by evolving partially connected neural network for stock price trend forecasting," Expert Syst. Appl., vol. 39, no. 1, pp. 611-620, Jan. 2012.
[19] Y. Kara, M. Acar Boyacioglu, and O. K. Baykan, "Predicting direction of stock price index movement using artificial neural networks and support vector machines: The sample of the Istanbul Stock Exchange," Expert Syst. Appl., vol. 38, no. 5, pp. 5311-5319, May 2011.
[20] W. Shen, X. Guo, C. Wu, and D. Wu, "Forecasting stock indices using radial basis function neural networks optimized by artificial fish swarm algorithm," Knowl.-Based Syst., vol. 24, no. 3, pp. 378-385, Apr. 2011.
[21] X. Zhu, H. Wang, L. Xu, and H. Li, "Predicting stock index increments by neural networks: The role of trading volume under different horizons," Expert Syst. Appl., vol. 34, no. 4, pp. 3043-3054, May 2008.
[22] W.-H. Chen, J.-Y. Shih, and S. Wu, "Comparison of support-vector machines and back propagation neural networks in forecasting the six major Asian stock markets," International Journal of Electronic Finance, vol. 1, no. 1, pp. 49-67, Jan. 2006.
[23] P.-C. Chang, C.-Y. Fan, and J.-L. Lin, "Trend discovery in financial time series data using a case based fuzzy decision tree," Expert Syst. Appl., vol. 38, no. 5, pp. 6070-6080, May 2011.
[24] M. R. Hassan, K. Ramamohanarao, J. Kamruzzaman, M. Rahman, and M. Maruf Hossain, "A HMM-based adaptive fuzzy inference system for stock market forecasting," Neurocomputing, vol. 104, pp. 10-25, Mar. 2013.
[25] A. F. Sheta and K. De Jong, "Time-series forecasting using GA-tuned radial basis functions," Inf. Sci., vol. 133, no. 3-4, pp. 221-228, Apr. 2001.
[26] A. Sheta, S. E. M. Ahmed, and H. Faris, "Evolving stock market prediction models using multi-gene symbolic regression genetic programming," Artificial Intelligence and Machine Learning (AIML), vol. 15, pp. 11-20, Jun. 2015.
TABLE V. EVALUATION CRITERIA FOR THE DEVELOPED MODELS

[27] D. D. Thomakos, T. Wang, and J. Wu, "Market timing and cap rotation," Math. Comput. Model., vol. 46, no. 1-2, pp. 278-291, Jul. 2007.
[28] S. Lahmiri, "Multi-scaling analysis of the S&P500 under different regimes in wavelet domain," Int. J. Strateg. Decis. Sci., vol. 5, no. 2, pp. 43-55, Apr. 2014.
[29] S. Lahmiri, M. Boukadoum, and S. Chartier, "Exploring information categories and artificial neural networks numerical algorithms in S&P500 trend prediction: A comparative study," Int. J. Strateg. Decis. Sci., vol. 5, no. 1, pp. 76-94, Jan. 2014.
[30] S. T. A. Niaki and S. Hoseinzade, "Forecasting S&P500 index using artificial neural networks and design of experiments," Journal of Industrial Engineering International, vol. 9, no. 1, pp. 1-9, 2013.
[31] A. Sheta, H. Faris, and M. Alkasassbeh, "A genetic programming model for S&P 500 stock market prediction," International Journal of Control and Automation, vol. 6, pp. 303-314, 2013.
[32] A. Nigrin, Neural Networks for Pattern Recognition, ser. A Bradford Book. Cambridge, MA: The MIT Press, 1993.
[33] J. Leonard and M. A. Kramer, "Improvement of the backpropagation algorithm for training neural networks," Computers & Chemical Engineering, vol. 14, pp. 337-343, 1990.
[34] A. K. Jain, J. Mao, and K. Mohiuddin, "Artificial neural networks: A tutorial," IEEE Computer, vol. 29, pp. 31-44, 1996.
[35] V. Vapnik, The Nature of Statistical Learning Theory. New York: Springer, 1995.
[36] V. Vapnik, "An overview of statistical learning theory," IEEE Transactions on Neural Networks, vol. 10, no. 5, pp. 988-999, 1999.
[37] Y. B. Dibike, S. Velickov, D. Solomatine, and M. B. Abbott, "Model induction with support vector machines: Introduction and applications," Journal of Computing in Civil Engineering, 2001.
[38] R. Noori, M. Abdoli, A. A. Ghasrodashti, and M. Jalili Ghazizade, "Prediction of municipal solid waste generation with combination of support vector machine and principal component analysis: A case study of Mashhad," Environmental Progress & Sustainable Energy, vol. 28, no. 2, pp. 249-258, 2009.
[39] W. Wen-chuan, X. Dong-mei, C. Kwok-wing, and C. Shouyu, "Improved annual rainfall-runoff forecasting using PSO-SVM model based on EEMD," 2013.