
Refereed

Comparing ANN Based Models with ARIMA for Prediction of Forex Rates

Joarder Kamruzzaman (a) and Ruhul A Sarker (b)

Abstract

In the dynamic global economy, accuracy in forecasting foreign currency exchange (Forex) rates, or at least predicting their trend correctly, is of crucial importance for any future investment. The use of computational intelligence based techniques for forecasting has proved extremely successful in recent times. In this paper, we developed and investigated three Artificial Neural Network (ANN) based forecasting models, using Standard Backpropagation (SBP), Scaled Conjugate Gradient (SCG) and Backpropagation with Bayesian Regularization (BPR), to predict six different currencies against the Australian dollar. Five moving average technical indicators are used to build the models. These models were evaluated using three performance metrics, and a comparison was made with the best known conventional forecasting model, ARIMA. All the ANN based models outperform the ARIMA model. The SCG based model performs best when measured on the two most commonly used metrics, and shows competitive results against the BPR based model on the third indicator. Experimental results demonstrate that ANN based models can closely forecast the Forex market.

Introduction

The foreign exchange market has experienced unprecedented growth over the last few decades. Exchange rates play an important role in controlling the dynamics of the exchange market. As a result, appropriate prediction of exchange rates is a crucial factor for the success of many businesses and fund managers. Although the market is well known for its unpredictability and volatility, a number of groups (such as banks and agencies) predict exchange rates using numerous techniques.

Exchange rate prediction is one of the most demanding applications of modern time series forecasting. The rates are inherently noisy, non-stationary and deterministically chaotic [3, 22]. These characteristics suggest that no complete information can be obtained from the past behaviour of such markets to fully capture the dependency between future rates and those of the past. One general assumption made in such cases is that the historical data incorporate all such behaviour; as a result, the historical data are the major input to the prediction process. However, it is not clear how good these predictions are. The purpose of this paper is to investigate and compare two well-known prediction techniques, under different parameter settings, for several different exchange rates.

For more than two decades, Box and Jenkins' Auto-Regressive Integrated Moving Average (ARIMA) technique [1] has been widely used for time series forecasting. Because of its popularity, the ARIMA model has been used as a benchmark to evaluate many new modelling approaches [8]. However, ARIMA is a general univariate model, developed on the assumption that the time series being forecast is linear and stationary [2].

(a) Gippsland School of Computing and IT, Monash University, Churchill, VIC 3842
(b) School of CS, UNSW@ADFA, Canberra, ACT 2600

2 ASOR BULLETIN, Volume 22 Number 2, June 2003
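As a concrete illustration of the kind of univariate linear model ARIMA represents, the sketch below fits the simplest autoregressive member of the family, an AR(1) model (i.e. ARIMA(1,0,0)), by least squares to a synthetic series and produces a one-step-ahead forecast. This is a toy Python sketch with our own function names and synthetic data, not the ARIMA implementation used in this paper's experiments.

```python
import random

def fit_ar1(series):
    """Estimate m and phi for x[t] - m = phi * (x[t-1] - m) + e[t] by least squares."""
    m = sum(series) / len(series)
    c = [v - m for v in series]                      # mean-centered series
    num = sum(c[t - 1] * c[t] for t in range(1, len(c)))
    den = sum(c[t - 1] ** 2 for t in range(1, len(c)))
    return m, num / den

def forecast_ar1(series, m, phi):
    """One-step-ahead forecast from the last observed value."""
    return m + phi * (series[-1] - m)

# Synthetic weekly 'rate' series with known persistence phi = 0.8 and mean 1.0.
random.seed(1)
x = [1.0]
for _ in range(499):
    x.append(0.8 * x[-1] + 0.2 + random.gauss(0.0, 0.05))

m, phi = fit_ar1(x)          # phi should come out close to 0.8
xhat = forecast_ar1(x, m, phi)
```

The Box-Jenkins methodology would then diagnose the residuals of such a fitted model and, if unsatisfactory, move to a richer candidate such as the ARIMA(1,0,1) used later in this paper.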


In addition, ANN is more effective in describing the dynamics of non-stationary time series due to its non-parametric, assumption-free, noise-tolerant and adaptive properties. ANNs are universal function approximators that can map any nonlinear function without a priori assumptions about the data [2].

In several applications, Tang and Fishwick [17], Jhee and Lee [10], Wang and Leu [18], Hill et al. [7], and many other researchers have shown that ANNs perform better than ARIMA models, specifically for more irregular series and for multiple-period-ahead forecasting. Kaastra and Boyd [11] provided a general introduction to how a neural network model should be developed to model financial and economic time series, presenting many useful practical considerations. Zhang and Hu [23] analysed backpropagation neural networks' ability to forecast an exchange rate. Wang [19] cautioned against the dangers of one-shot analysis, since the inherent nature of the data could vary. Klein and Rossin [12] showed that the quality of the data also affects the predictive accuracy of a model. More recently, Yao et al. [20] evaluated the capability of a backpropagation neural network model as an option price forecasting tool. They also recognised that neural network models are context sensitive, and that studies of this type should be as comprehensive as possible across different markets and different neural network models.

In this paper, we apply ARIMA and ANNs to predict the exchange rates of the Australian dollar against six other currencies: the US Dollar (USD), Great British Pound (GBP), Japanese Yen (JPY), Singapore Dollar (SGD), New Zealand Dollar (NZD) and Swiss Franc (CHF). A total of 500 weeks of data (closing rate of the week) are used to build the models and 65 weeks of data to evaluate them. Under ANNs, three models using standard backpropagation, scaled conjugate gradient and Bayesian regularization were developed. The outcomes of all these models were compared with ARIMA based on three different error indicators. The results show that the ANN models perform much better than the ARIMA model. The scaled conjugate gradient and Bayesian regularization models show competitive results, and both forecast more accurately than standard backpropagation, which has been studied considerably in other work.

After the introduction, ARIMA, the ANN based forecasting models and the performance metrics are briefly introduced. In the following two sections, data collection and experimental results are presented. Finally, conclusions are drawn.

ARIMA: An Introduction

The Box-Jenkins method [1, 2] of forecasting is different from most conventional optimization based methods. This technique does not assume any particular pattern in the historical data of the series to be forecast. It uses an iterative approach of identifying a possibly useful model from a general class of models. The chosen model is then checked against the historical data to see whether it accurately describes the series. If the specified model is not satisfactory, the process is repeated using another model designed to improve on the original one. This process is repeated until a satisfactory model is found.

A general class of Box-Jenkins models for a stationary time series is the ARIMA, or autoregressive integrated moving average, models. This group includes the AR models with only autoregressive terms, the MA models with only moving average terms, and the ARIMA models with both autoregressive and moving average terms. The Box-Jenkins methodology allows the analyst to select the model that best fits the data. The details of AR, MA and ARIMA models can be found in [6, 9].

Artificial Neural Network: An Introduction

In this section we first briefly present artificial neural networks, and then the learning algorithms used in this study to train them.

Artificial Neuron

In the quest to build an intelligent machine, in the hope of achieving human-like performance in fields such as speech and pattern recognition, natural language processing and decision making in fuzzy situations, we have but one naturally occurring model: the human brain itself, a



highly powerful computing device. It follows that one natural idea is to simulate the functioning of the brain directly on a computer. The general conjecture is that thinking about computation in terms of the brain metaphor, rather than the conventional computer, will lead to insights into the nature of intelligent behaviour. This conjecture is strongly supported by the very unique structure of the human brain.

Digital computers can perform complex calculations extremely fast without errors and are capable of storing vast amounts of information. Human beings cannot approach these capabilities. On the other hand, humans routinely perform tasks like common sense reasoning, talking, walking, and interpreting a visual scene in real time, effortlessly. The human brain consists of hundreds of billions of neurons, each neuron being an independent biological information processing unit. On average, each neuron is connected to ten thousand surrounding neurons, all acting in parallel to form a massively parallel architecture. What we do in about a hundred computational steps, computers cannot do in a million steps. The underlying reason is that, even though each neuron is an extremely slow device compared to state-of-the-art digital components, the massive parallelism gives the human brain the vast computational power necessary to carry out complex tasks. The human brain is also highly fault tolerant: we continue to function even though neurons are constantly dying. We are also better at dealing with fuzzy situations, by finding the closest matches of a new problem to old ones. Inexact matching is something brain-style models seem to be good at, because of the diffuse and fluid way in which knowledge is represented. All these serve as strong motivation for the idea of building an intelligent machine modelled after biological neurons, now known as artificial neural networks.

Artificial neural network models are very simplified versions of our understanding of the biological neuron, which is yet far from complete. Each neuron's input fibres, called dendrites, receive excitatory signals from thousands of surrounding neurons' output fibres, called axons. When the total summation of excitatory signals becomes sufficient, it causes the neuron to fire, sending an excitatory signal to other neurons connected to it. Figure 1 shows a basic artificial neuron model. Each neuron receives an input xj from another neuron j, which is multiplied by the connection strength, called the weight ωj (the synaptic strength in a biological neuron), to produce the total net input as the weighted sum of all inputs, as shown below:

net = Σj ωj xj

The output of the neuron is produced by passing the net input through an activation function. The commonly used activation functions are the hard limiter, sigmoidal and Gaussian activation functions.

Fig. 1. An artificial neuron: inputs x1, x2, ..., xn are weighted and summed into net, and the output is y = f(net).

Neural Network Architecture

Neural networks can be very useful for realizing an input-output mapping when the exact relationship between input and output is unknown, or too complex to determine mathematically. Because of this ability to learn complex mappings, they have recently been used for modelling nonlinear economic relationships. By iteratively presenting a data set of input-output pairs, a neural network can be trained to determine a set of weights that approximates the mapping.

The multilayer feedforward network, as shown in Fig. 2, is one of the most commonly used neural network architectures. It consists of an input layer, an output layer and one or more intermediate layers called hidden layers. All the nodes at each layer are connected to each node at the upper layer by interconnection strengths called weights. The xi's are the inputs, the ω's are the weights, and the yk's are the outputs produced by the network. All the interconnecting weights between layers are initialized to small random values at the beginning. During training, inputs are presented at the input layer and the associated target outputs at the output layer. A training algorithm is used to attain a set of weights that minimizes the difference between the target output and the actual output produced by the network.
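The neuron computation just described (a weighted net input passed through an activation function), and its extension to a single-hidden-layer feedforward network, can be sketched in a few lines. This Python fragment is illustrative only, with our own function names, and uses the logistic sigmoid as the activation f:

```python
import math

def sigmoid(net):
    """Sigmoidal activation f(net), mapping any net input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

def neuron(inputs, weights):
    """Total net input as the weighted sum of inputs, passed through f."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(net)

def feedforward(x, hidden_weights, output_weights):
    """One forward pass: input layer -> one hidden layer -> one output unit."""
    h = [neuron(x, w) for w in hidden_weights]   # hidden layer activations hj
    return neuron(h, output_weights)             # network output yk

# Two inputs, two hidden units, one output unit; weights chosen arbitrarily.
y = feedforward([0.5, 0.6], [[0.1, -0.4], [0.3, 0.2]], [0.7, -0.2])
```

Training then amounts to adjusting the weight lists so that outputs like `y` approach the target values, which is what the algorithms in the next section do.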


Fig. 2. A multilayer feedforward ANN structure: inputs xi feed hidden units hj through weights ωji, and hidden units feed outputs yk through weights ωkj.

There are many different neural network learning algorithms in the literature. No study has been reported that analytically determines the generalization performance of each algorithm. In this study we experimented with three different neural network learning algorithms, namely standard Backpropagation (BP), the Scaled Conjugate Gradient algorithm (SCG) and Backpropagation with regularization (BPR), in order to evaluate which algorithm predicts the exchange rate of the Australian dollar most accurately. In the following we describe the three algorithms briefly.

Training Algorithms

Standard BP: BP [16] uses the steepest gradient descent technique to minimize the sum-of-squared error E over all training data. During training, each desired output dj is compared with the actual output yj, and E is calculated as the sum of squared errors at the output layer. The weight ωj is updated in the n-th training cycle according to the following equation:

Δωj(n) = −η ∂E/∂ωj + α Δωj(n−1)

The parameters η and α are the learning rate and the momentum factor, respectively. The learning rate controls the step size in each iteration. For a large-scale problem, Backpropagation learns very slowly, and its convergence largely depends on the user choosing suitable values of η and α.

SCG: In conjugate gradient (CG) methods, a search is performed along conjugate directions, which generally produces faster convergence than steepest descent [5]. In steepest descent search, each new direction is perpendicular to the old one; the approach to the minimum is a zigzag path, and one step can be mostly undone by the next. In a CG method, a new search direction spoils as little as possible the minimization achieved by the previous one, and the step size is adjusted in each iteration. The general procedure for determining the new search direction is to combine the new steepest descent direction with the previous search direction, so that the current and previous search directions are conjugate, as governed by the following equations:

ωk+1 = ωk + αk pk,
pk = −E′(ωk) + βk pk−1

where ωk is the weight vector in the k-th iteration, and pk and pk−1 are the conjugate directions in successive iterations. αk and βk are calculated in each iteration. An important drawback of the CG algorithm is the requirement of a line search in each iteration, which is computationally expensive. Moller [15] introduced SCG to avoid the time-consuming line search of conventional CG. SCG needs the product of the Hessian matrix with the search direction, which is approximated by

E″(ωk) pk ≈ [E′(ωk + σk pk) − E′(ωk)] / σk + λk pk

where E′ and E″ are the first and second derivatives of E. Here pk, σk and λk are the search direction, a parameter controlling the second derivative approximation, and a parameter regulating the indefiniteness of the Hessian matrix, respectively. Considering machine precision, the value of σ should be as small as possible (≤ 10^-4). A detailed description of the algorithm can be found in [15].

BPR: A desired neural network model should produce small error not only on the training data but also on out-of-sample data. To produce a network with better generalization ability, MacKay [14] proposed a method to constrain the size of the network parameters by regularization. Regularization forces the network to settle to a set of weights and biases with smaller values. This causes the network response to be smoother and less likely to overfit [5] and capture noise. In the regularization technique, the cost function F is defined as


F = γE + (1 − γ)(1/n) Σj=1..n ωj²

where E is the sum-squared error and γ (< 1.0) is the performance ratio parameter, whose magnitude dictates the emphasis of the training. A large γ will drive the error E small, whereas a small γ will emphasize parameter size reduction at the expense of error and yield a smoother network response. The optimum value of γ can be determined using Bayesian regularization in combination with the Levenberg-Marquardt algorithm [4].

Neural Network Forecasting Model

Technical and fundamental analyses are the two major financial forecasting methodologies. In recent times, technical analysis has drawn particular academic interest due to increasing evidence that markets are less efficient than was originally thought [13]. Like many other economic time series, exchange rates exhibit their own trend, cycle, season and irregularity. In this study, we used time delay moving averages as technical data. The advantage of the moving average is its tendency to smooth out some of the irregularity that exists between market days [21]. In our model, we used moving average values of past weeks as inputs to the neural network to predict the following week's rate. The indicators are MA5, MA10, MA20, MA60, MA120 and Xi, namely the moving averages over one week, two weeks, one month, one quarter and half a year, and last week's closing rate, respectively. The predicted value is Xi+1. The neural network model therefore has six inputs for the six indicators, one hidden layer and one output unit to predict the exchange rate. Historical data are used to train the model; once trained, the model is used for forecasting.

Experimental Results

In this section, we present the data collection procedure and the results of the experiments.

Data Collection

The data used in this study are the foreign exchange rates of six different currencies against the Australian dollar from January 1991 to July 2002, made available by the Reserve Bank of Australia. We considered the exchange rates of the US Dollar, British Pound, Japanese Yen, Singapore Dollar, New Zealand Dollar and Swiss Franc. As outlined in an earlier section, 565 weekly data points were considered, of which the first 500 were used for training and the remaining 65 for evaluating the models. The plots of historical rates for USD, GBP, SGD, NZD and CHF are shown in Figure 3, and for the Japanese Yen (JPY) in Figure 4.

Figure 3. Historical rates for USD, GBP, SGD, NZD and CHF over 565 weeks.

Performance Metrics

The forecasting performance of the above models is evaluated against three widely used statistical metrics, namely the Normalized Mean Square Error (NMSE), Mean Absolute Error (MAE) and Directional Symmetry (DS). These criteria are defined in Table 1. NMSE and MAE measure the deviation between actual and forecast values; smaller values of these metrics indicate higher forecasting accuracy. DS measures the correctness of the predicted directions; a higher value indicates better trend prediction.

Figure 4. Historical rates for the Japanese Yen over 565 weeks.

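The indicator construction described in the forecasting model section can be made concrete with a short sketch. The Python fragment below (function names our own, data synthetic) builds the six network inputs, MA5, MA10, MA20, MA60, MA120 and the last closing rate Xi, from a list of past closing rates; the corresponding target would be the next close X(i+1):

```python
def moving_average(closes, k):
    """Mean of the last k closing rates."""
    return sum(closes[-k:]) / k

def build_inputs(closes):
    """Six inputs for the network: MA5, MA10, MA20, MA60, MA120 and the
    last closing rate Xi. Requires at least 120 past rates."""
    if len(closes) < 120:
        raise ValueError("need at least 120 past closing rates")
    mas = [moving_average(closes, k) for k in (5, 10, 20, 60, 120)]
    return mas + [closes[-1]]

# Synthetic closing-rate history, gently trending upward.
closes = [1.0 + 0.001 * t for t in range(150)]
features = build_inputs(closes)   # one 6-element input vector for the ANN
```

Sliding this construction along the 500 training weeks yields one (inputs, next-week-close) pair per week, which is the training set the three algorithms are fitted on.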


Table 1: Performance metrics used in the experiment.

NMSE = Σk (xk − x̂k)² / Σk (xk − x̄)² = (1 / (σ² N)) Σk (xk − x̂k)²

MAE = (1/N) Σk |xk − x̂k|

DS = (100/N) Σk dk, where dk = 1 if (xk − xk−1)(x̂k − x̂k−1) ≥ 0, and dk = 0 otherwise

Simulation Results

Simulations were performed with the different neural networks and the ARIMA model. The performance of a neural network depends on a number of factors, e.g., the initial weights chosen, the different learning parameters used during training (described in the previous section) and the number of hidden units. For each algorithm, we trained 30 different networks with different initial weights and learning parameters. The number of hidden units was varied between 3 and 7, and training was terminated at between 5000 and 10000 iterations. The simulation was done in MATLAB using the SBP, SCG and BPR modules from the Neural Network Toolbox. The best results obtained by each algorithm are presented below. The ARIMA model (with parameter setting (1,0,1)) was run in Minitab on an IBM PC.

After a model is built, the exchange rate is forecast for each currency over the test data. Prediction performance is measured in terms of NMSE, MAE and DS over 35 weeks and 65 weeks by comparing the forecast and actual exchange rates. Figures 5(a)-(c) and 6(a)-(c) present the performance metrics graphically over 35 and 65 weeks, respectively.

Fig. 5(a) NMSE, (b) MAE and (c) DS over 35 weeks: bar charts comparing SBP, SCG, BPR and ARIMA for USD, GBP, JPY, SGD, NZD and CHF.

Fig. 6(a) NMSE, (b) MAE and (c) DS over 65 weeks: bar charts comparing SBP, SCG, BPR and ARIMA for USD, GBP, JPY, SGD, NZD and CHF.

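The three metrics defined in Table 1 translate directly into code. The following Python sketch (helper names are our own) computes NMSE, MAE and DS for a pair of actual and forecast series:

```python
def nmse(actual, pred):
    """Squared forecast error normalized by the variance of the actual series."""
    mean = sum(actual) / len(actual)
    num = sum((x - p) ** 2 for x, p in zip(actual, pred))
    den = sum((x - mean) ** 2 for x in actual)
    return num / den

def mae(actual, pred):
    """Mean absolute deviation between actual and forecast values."""
    return sum(abs(x - p) for x, p in zip(actual, pred)) / len(actual)

def ds(actual, pred):
    """Percentage of periods where the forecast direction matches the actual one."""
    hits = sum(
        1 for k in range(1, len(actual))
        if (actual[k] - actual[k - 1]) * (pred[k] - pred[k - 1]) >= 0
    )
    return 100.0 * hits / (len(actual) - 1)

actual = [0.52, 0.53, 0.51, 0.54]
forecast = [0.52, 0.54, 0.50, 0.49]
```

A perfect forecast gives NMSE = 0, MAE = 0 and DS = 100; in the example above the forecast tracks two of the three weekly moves, so DS is about 66.7.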


From Figures 5, 6 and 8, it is clear that the quality of the ARIMA forecast deteriorates as the number of periods in the forecasting (testing) phase increases. In other words, ARIMA appears more suitable for shorter term forecasting than for longer term. However, the results show that the neural network models outperform the conventional ARIMA model for both shorter and longer term forecasting, which suggests that ANNs are more suitable for financial modelling.

As we can see in Figures 5 and 6, both the SCG and BPR forecasts are better than SBP in terms of all metrics. In our experiments this was observed consistently across all currencies. In terms of the most commonly used criteria, i.e., NMSE and MAE, SCG performs better than BPR for all currencies except the Japanese Yen. In terms of the indicator DS, SCG yields slightly better performance for the Swiss Franc, BPR is slightly better for the US Dollar and British Pound, and both perform equally for the Japanese Yen, Singapore Dollar and New Zealand Dollar. Although we report only the best predictions in this paper, sample outputs for the best and worst predictions (based on the error indicator NMSE) produced by SBP for the British Pound are shown in Figure 7. The actual and forecast time series of the six currency rates using the ARIMA and SCG models are shown in Figures 8 and 9, respectively. From Figures 5 and 6, the superiority of the ANN based models over ARIMA is readily apparent.

Figure 7. Sample worst and best predictions by SBP for the British Pound over 65 weeks, plotted against the actual rate.

Conclusion

In this study, we investigated three ANN based forecasting models for predicting six foreign currencies against the Australian dollar, using historical data and moving average technical indicators, and compared them with the traditional ARIMA model. All the ANN based models outperformed the ARIMA model as measured on three different performance metrics. The results demonstrate that ANN based models can forecast the Forex rates closely. Among the three ANN based models, the SCG based model yields the best results on the two most popular metrics, and shows results comparable to the BPR based model on the indicator DS.

References

[1] G. E. P. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, CA.

[2] L. Cao and F. Tay, "Financial Forecasting Using Support Vector Machines," Neural Computing & Applications, vol. 10, pp. 184-192, 2001.

[3] G. Deboeck, Trading on the Edge: Neural, Genetic and Fuzzy Systems for Chaotic Financial Markets, Wiley, New York, 1994.

[4] F. D. Foresee and M. T. Hagan, "Gauss-Newton approximation to Bayesian regularization," Proc. IJCNN 1997, pp. 1930-1935.

[5] M. T. Hagan, H. B. Demuth and M. H. Beale, Neural Network Design, PWS Publishing, Boston, MA, 1996.

[6] J. Hanke and A. Reitsch, Business Forecasting, Allyn and Bacon, Boston, 1986.

[7] T. Hill, M. O'Connor and W. Remus, "Neural Network Models for Time Series Forecasts," Management Science, vol. 42, pp. 1082-1092, 1996.

[8] H. B. Hwarng and H. T. Ang, "A Simple Neural Network for ARMA(p,q) Time Series," OMEGA: Int. Journal of Management Science, vol. 29, pp. 319-333, 2002.

[9] J. Jarrett, Business Forecasting Methods, Basil Blackwell, Oxford, 1991.

[10] W. C. Jhee and J. K. Lee, "Performance of Neural Networks in Managerial Forecasting," Intelligent Systems in Accounting, Finance and Management, vol. 2, pp. 55-71, 1993.



[11] I. Kaastra and M. Boyd, "Designing a Neural Network for Forecasting Financial and Economic Time-Series," Neurocomputing, vol. 10, pp. 215-236, 1996.

[12] B. D. Klein and D. F. Rossin, "Data Quality in Neural Network Models: Effect of Error Rate and Magnitude of Error on Predictive Accuracy," OMEGA: Int. Journal of Management Science, vol. 27, pp. 569-582, 1999.

[13] B. LeBaron, "Technical trading rule profitability and foreign exchange intervention," Journal of Int. Economics, vol. 49, pp. 124-143, 1999.

[14] D. J. C. MacKay, "Bayesian interpolation," Neural Computation, vol. 4, pp. 415-447, 1992.

[15] M. F. Moller, "A scaled conjugate gradient algorithm for fast supervised learning," Neural Networks, vol. 6, pp. 525-533, 1993.

[16] D. E. Rumelhart, J. L. McClelland and the PDP Research Group, Parallel Distributed Processing, vol. 1, MIT Press, 1986.

[17] Z. Tang and P. A. Fishwick, "Back-Propagation Neural Nets as Models for Time Series Forecasting," ORSA Journal on Computing, vol. 5, no. 4, pp. 374-385, 1993.

[18] J. H. Wang and J. Y. Leu, "Stock Market Trend Prediction Using ARIMA-based Neural Networks," Proc. of IEEE Int. Conf. on Neural Networks, vol. 4, pp. 2160-2165, 1996.

[19] S. Wang, "An Insight into the Standard Back-Propagation Neural Network Model for Regression Analysis," OMEGA: Int. Journal of Management Science, vol. 26, pp. 133-140, 1998.

[20] J. Yao, Y. Li and C. L. Tan, "Option Price Forecasting Using Neural Networks," OMEGA: Int. Journal of Management Science, vol. 28, pp. 455-466, 2000.

[21] J. Yao and C. L. Tan, "A case study on using neural networks to perform technical forecasting of forex," Neurocomputing, vol. 34, pp. 79-98, 2000.

[22] S. Yaser and A. Atiya, "Introduction to Financial Forecasting," Applied Intelligence, vol. 6, pp. 205-213, 1996.

[23] G. Zhang and M. Y. Hu, "Neural Network Forecasting of the British Pound/US Dollar Exchange Rate," OMEGA: Int. Journal of Management Science, vol. 26, pp. 495-506, 1998.



Figure 8. Forecasting of different currencies by the ARIMA model over 65 weeks: (a) USD/AUD, (b) GBP/AUD, (c) JPY/AUD, (d) SGD/AUD, (e) NZD/AUD, (f) CHF/AUD. Each panel plots the actual and forecast rates.



Figure 9. Forecasting of different currencies by the SCG based neural network model over 65 weeks: (a) USD/AUD, (b) GBP/AUD, (c) JPY/AUD, (d) SGD/AUD, (e) NZD/AUD, (f) CHF/AUD. Each panel plots the actual and forecast rates.

