
Agric. sci. dev., Vol(3), No(7), July 2014, pp. 256-264
TI Journals
Agriculture Science Developments
ISSN: 2306-7527
www.tijournals.com
Copyright 2014. All rights reserved for TI Journals.

Evaluation of Artificial Intelligence Systems Performance in Precipitation Forecasting
Abazar Solgi 1*, Feridon Radmanesh 2, Amir Pourhaghi 3, Mohammad Bagherian Marzouni 4

1 MSc Student, Faculty of Water Sciences, Shahid Chamran University of Ahvaz, Iran.
2 Assistant Professor, Faculty of Water Sciences, Shahid Chamran University of Ahvaz, Iran.
3 PhD Student, Faculty of Water Sciences, Shahid Chamran University of Ahvaz, Iran.
4 MSc Student, Faculty of Water Sciences, Shahid Chamran University of Ahvaz, Iran.

* Corresponding author: Aboozar_Solgi@yahoo.com

Keywords
Artificial Intelligence Systems
Adaptive Neuro-Fuzzy Inference System
Artificial Neural Network
Monthly Precipitation Forecast

Abstract
Evaluating the factors that govern hydrological behavior is a problem of dynamical systems analysis with a high degree of nonlinearity. The advent of powerful theories such as fuzzy algorithms, neural networks, and state-space theory has created a major trend in the analysis of dynamic system behavior across the water engineering sciences. In the present study, an Artificial Neural Network was used to predict precipitation at the Vrayneh station in Nahavand, Hamedan, Iran, and the results were compared with an Adaptive Neuro-Fuzzy Inference System model. Relative humidity and temperature were used as inputs in addition to precipitation, and the superior model structures showed that these variables improve the modeling results; it is therefore suggested that temperature and relative humidity be used alongside precipitation in precipitation forecasting studies. The results showed that the Adaptive Neuro-Fuzzy Inference System and the Artificial Neural Network have fairly similar performance. Training and transfer functions that affect precipitation forecasts are also presented as recommendations for similar work.

1. Introduction

The mismatch between the spatial and temporal scales of hydrological processes, together with inaccuracies in estimating some of the parameters related to these processes, causes problems for estimation and prediction in hydrology. Because rainfall-runoff processes are nonlinear and random in both time and space, they are not easily described by simple models. This is why nonlinear networks, as a class of intelligent systems, are now widely used to predict complex nonlinear phenomena. Among the significant methods based on artificial intelligence, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and Artificial Neural Networks (ANN) can be mentioned. In recent years, the use of these methods for hydrological processes, including precipitation and rainfall-runoff modeling, has attracted the attention of researchers.
Hamzaçebi (2008) examined improvements in the performance of artificial neural networks (ANN) for seasonal time series forecasting, proposing several ANN structures and testing them on four complete time series; comparison with current statistical models and other network structures showed that the proposed network has lower prediction error. Rainfall-runoff modeling of the Susurluk catchment with a neural network and a fuzzy system showed that the two models have almost similar performance (Dorum et al. 2010). An ANFIS was applied to daily and monthly rainfall-runoff modeling of the Ligvan Chai catchment (Tabriz, Iran), and the results were compared with those obtained by linear regression and the Auto-Regressive Integrated Moving Average (ARIMA) model; since the rainfall-runoff parameters used in the modeling are assumed to carry error and uncertainty, fuzzy logic is a useful tool for modeling such systems (Nourani et al. 2010). Long-term forecasts of Zayandeh-Rood River runoff were carried out using fuzzy inference systems and artificial neural networks, and the results indicate that the combined use of the two models predicts flow with acceptable accuracy (Karamouz and Araghinejad 2011). Two hybrid artificial intelligence methods for precipitation-runoff modeling of two watersheds in Azerbaijan, Iran, were presented; both are reliable in capturing the periodicity features of the process. The first is a SARIMAX-ANN model (Seasonal Auto-Regressive Integrated Moving Average with exogenous input combined with an ANN) and the second a wavelet-ANFIS model. The results showed that although both models can predict short- and long-term runoff discharges while accounting for seasonality, the second model is relatively more appropriate because it uses the multi-scale time series of precipitation and runoff data in the ANFIS input layer (Nourani et al. 2011). Rainfall-runoff modeling using an artificial neural network and singular spectrum analysis was carried out in China; the results showed that the neural network performs better than singular spectrum analysis (Wu and Chau 2011). Runoff forecasting was performed with a Takagi-Sugeno neuro-fuzzy model with online learning, in which a locally learning Neuro-Fuzzy System (NFS) was used for rainfall-runoff modeling. The results showed that the local-learning model outperforms physical models such as the kinematic wave model (KWM), the Storm Water Management Model (SWMM), and the HBV (Hydrologiska Byråns Vattenbalansavdelning) model, and that running the local-learning model in real time, without the need for retraining, gives better results than the run time of a conventional neuro-fuzzy system (Talei et al. 2013).


Methods to avoid overfitting in neural network training were compared for runoff modeling of the Annapolis River catchment; extensive calculations showed that an evolutionary algorithm based on simple computations performs better than the ANN (Piotrowski and Napiorkowski 2013). A new combination of neural networks for precipitation-runoff modeling of the Aq Chay basin, Iran, was presented; the model combined data pre-processing methods, genetic algorithms, and the Levenberg-Marquardt algorithm for training the neural network. The results showed that this method predicts runoff more accurately than artificial neural networks and the Adaptive Neuro-Fuzzy Inference System (Asadi et al. 2013).
Given the widespread use of artificial intelligence systems in various disciplines, especially the water-related sciences, these two types of models (the Adaptive Neuro-Fuzzy Inference System and the Artificial Neural Network) were evaluated here for precipitation prediction. In most of the limited previous research, the only parameter used for predicting precipitation was rainfall itself; in this study, temperature and relative humidity were used in addition to rainfall, to examine their influence on precipitation prediction.

2. Material and Methods


2.1 Study Area
The Vrayneh rain gauge station, in the city of Nahavand, is located at 48°24′15″ E longitude and 34°04′32″ N latitude. The station was established in 1969, lies 1795 m above sea level, and has a long-term average annual precipitation of 521 mm. In this study, precipitation, temperature, and relative humidity records for a 43-year period (1969-2012) were obtained from the Vrayneh station (Table 1). To check the homogeneity of the data, the Vesej station was used as an auxiliary station and a double-mass curve was constructed; the result confirmed the homogeneity of the data.
Table 1. Some climatic variables of the Vrayneh station.

Climatic variable              Average   Maximum   Minimum   Standard deviation   Variance   Coefficient of variation
Precipitation (mm)             43.8      266       0.0       48.8                 2385.4     1.1
Temperature (°C)               10        26.5      -9.1      9.2                  84.4       0.9
Relative humidity (percent)    68.1      87        20        11.2                 132.9      0.2

Fig. 1. Location of Nahavand in Hamedan province, Iran.

Because importing raw data reduces the accuracy and speed of the networks, a data normalization method is used; it prevents excessive shrinkage of the weights and avoids early saturation of the neurons. Normalization converts each value to a number between 0 and 1 so that it can be applied to the neural network functions (Riad et al. 2004). The following equations were examined for this purpose.


Y = 0.5 + 0.5 (x − x̄) / (x_max − x_min)    (1)
Y = (x − x_min) / (x_max − x_min)    (2)
Y = 0.05 + 0.95 (x − x̄) / (x_max − x_min)    (3)

where x is a data value, x̄ is the average of the data, x_max is the maximum value of the data, x_min is the minimum value of the data, and Y is the normalized (standardized) value.
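As an illustration, the three normalization relations can be written as small helper functions. The sketch below is a minimal Python example added for clarity (it is not part of the original paper), and the sample precipitation values are arbitrary.

```python
import numpy as np

def normalize_eq1(x):
    # Eq. (1): centers the data on 0.5 using the mean and the data range
    return 0.5 + 0.5 * (x - x.mean()) / (x.max() - x.min())

def normalize_eq2(x):
    # Eq. (2): classical min-max scaling to the interval [0, 1]
    return (x - x.min()) / (x.max() - x.min())

def normalize_eq3(x):
    # Eq. (3): like Eq. (1) but shifted and scaled into a narrower band
    return 0.05 + 0.95 * (x - x.mean()) / (x.max() - x.min())

# Arbitrary example of monthly precipitation values (mm)
precip = np.array([12.0, 0.0, 43.8, 88.5, 266.0, 5.2])
print(normalize_eq1(precip))
print(normalize_eq2(precip))
```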

Table 2. Comparison of the results of applying the normalization relations.

Normalization relation   R2 Train   R2 Simulate   RMSE Train   RMSE Simulate
1                        0.96       0.76          0.0283       0.0658
2                        0.96       0.74          0.0556       0.1319
3                        0.96       0.74          0.0558       0.1111

According to Table 2, normalization relation No. 1 has the lowest simulation error and the highest simulation coefficient of determination, so relation 1 was used for normalization in this study. Then 75% of the data were used for training and the remaining 25% were set aside for simulation (testing).
2.2 The Most Popular Types of Neural Networks
Neural networks are named after their structural components, their learning methods, or their inventors. The main types are:
1- Feed Forward networks
2- Radial Basis Function (RBF) networks
3- Hopfield networks
4- Self organizing feature map (SOFM) networks or Kohonen networks
5- Boltzman networks
6- Back Propagation networks.
2.3 Parameters Affecting Artificial Neural Network Modeling
1- Appropriate amount of training
2- The number of network layers
3- The number of neurons in the middle layers
4- Training rules
5- Transfer functions.
2.3.1 Amount of Training
An important criterion in network training is the number of passes or iterations (epochs) that the network performs during training. Correct determination of this number is very important. Generally, as the number of training iterations increases, the simulation (prediction) error of the network decreases; but when the number of iterations exceeds a certain value, the error of the test group increases. The best number of training iterations is the value at which the errors of both the training and test groups are minimized as far as possible. This can also be interpreted in another way. Memorization capacity in a neural network refers to how closely, and with what error, the network can reproduce the output for a particular input from the training set. Generalization ability, in contrast, is the ability to accurately estimate the output corresponding to an input that was not in the training set. The higher a network's memorization capacity, the lower its generalization ability tends to be. There is no general method or relationship for determining the appropriate amount of training and memorization capacity; the criterion is obtained by trial and error, specifically for each network (Karamouz and Araghinejad 2011).
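As a minimal illustration of this epoch-selection idea (a simplified sketch, not the procedure used in the paper), the following Python code trains a single linear neuron by gradient descent and records the epoch at which the error of a held-out test group is smallest; the input arrays are assumed to be normalized.

```python
import numpy as np

def train_with_epoch_selection(X_tr, y_tr, X_te, y_te, n_epochs=500, lr=0.01):
    """Return (best test RMSE, epoch at which it occurred) for a linear neuron."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X_tr.shape[1])
    b = 0.0
    best_rmse, best_epoch = np.inf, 0
    for epoch in range(1, n_epochs + 1):
        # one gradient-descent pass over the training data
        err = X_tr @ w + b - y_tr
        w -= lr * X_tr.T @ err / len(y_tr)
        b -= lr * err.mean()
        # track the test-group error; training past best_epoch starts to overfit
        test_rmse = np.sqrt(np.mean((X_te @ w + b - y_te) ** 2))
        if test_rmse < best_rmse:
            best_rmse, best_epoch = test_rmse, epoch
    return best_rmse, best_epoch
```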
2.3.2 The number of network layers
The number of network layers is one of the main criteria in the design of neural networks. Normally, a neural network has three types of layers: (1) an input layer, (2) one or more hidden layers, and (3) an output layer. The number of hidden layers is determined by trial and error; generally, using a small number of middle layers is recommended.
2.3.3 The number of neurons in the Hidden layers
The number of neurons in the input and output layers of a neural network is determined by the type of problem, but there is no specific rule for the number of hidden-layer neurons; they are determined by trial and error for each hidden layer.
2.3.4 Transfer functions
Neurons use a response (transfer) function to produce an output for a given input. The following figure shows an example of a neuron with its input and output.


Fig 2. Schematic of a neuron.

In the above figure, p is the input to the neuron, w is the weight of that input, b is the bias, f is the response (transfer) function, and a is the output of the neuron. Thus, the neuron output is defined by equation 4:

a = f(wp + b)    (4)

The response function f is defined for each neuron according to the type of learning algorithm; popular choices include linear and sigmoid functions (Karamouz and Araghinejad 2011). Table 3 lists the types of transfer functions used in this study, and a short numerical sketch of equation 4 is given after the table.
Table 3. Types of transfer functions used for the ANN.

Number   Transfer function              Transfer function in MATLAB
1        Hard Limit                     hardlim
2        Symmetrical Hard Limit         hardlims
3        Linear                         purelin
4        Saturating Linear              satlin
5        Symmetric Saturating Linear    satlins
6        Log-Sigmoid                    logsig
7        Hyperbolic Tangent Sigmoid     tansig
8        Positive Linear                poslin
9        Competitive                    compet
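As a small numerical illustration of equation 4, the following Python sketch (not taken from the paper; the weights and bias are arbitrary) evaluates one neuron with the log-sigmoid and hyperbolic tangent sigmoid transfer functions listed in Table 3.

```python
import numpy as np

def logsig(n):
    # Log-sigmoid transfer function: output in (0, 1)
    return 1.0 / (1.0 + np.exp(-n))

def tansig(n):
    # Hyperbolic tangent sigmoid transfer function: output in (-1, 1)
    return np.tanh(n)

def neuron_output(p, w, b, f):
    # Equation 4: a = f(w p + b)
    return f(np.dot(w, p) + b)

p = np.array([0.6, 0.3])    # normalized inputs, e.g. P(t) and P(t-1)
w = np.array([0.8, -0.4])   # arbitrary example weights
b = 0.1                     # arbitrary example bias
print(neuron_output(p, w, b, logsig))
print(neuron_output(p, w, b, tansig))
```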

2.3.5 Training Functions
Another key parameter in neural networks is the training (learning) function used for the feed-forward network. Table 4 shows the training functions studied in this research.
Table 4. Types of training functions used for the ANN.

Number   Training function                                Training function in MATLAB
1        Levenberg-Marquardt                              trainlm
2        BFGS Quasi-Newton                                trainbfg
3        Resilient Backpropagation                        trainrp
4        Scaled Conjugate Gradient                        trainscg
5        Conjugate Gradient with Powell/Beale Restarts    traincgb
6        Fletcher-Powell Conjugate Gradient               traincgf
7        Polak-Ribière Conjugate Gradient                 traincgp
8        One Step Secant                                  trainoss
9        Variable Learning Rate Gradient Descent          traingdx
10       Bayesian Regularization                          trainbr
11       Gradient Descent with Momentum                   traingdm
12       Gradient Descent                                 traingd

2.4 Adaptive Neuro-Fuzzy Inference System (ANFIS)


The leading theory for quantifying uncertainty in scientific models from the late nineteenth century until the late twentieth century was probability theory. However, the expression of uncertainty solely through probability theory was gradually challenged, first in 1937 by Max Black with his studies of vagueness, and then with the introduction of fuzzy sets by Zadeh in 1965. Zadeh's paper had a profound influence on thinking about uncertainty because it challenged not only probability theory as the sole representation of uncertainty but also the very foundations upon which probability theory was based: classical binary (two-valued) logic (Ross 1995).
Each fuzzy system contains three main parts: a fuzzifier, a fuzzy database, and a de-fuzzifier. The fuzzy database in turn contains two main parts: the fuzzy rule base and the inference engine. In the fuzzy rule base, the rules relating the fuzzy propositions are described (Jang et al. 1997). Thereafter, the analysis


operation is applied by the fuzzy inference engine. Several fuzzy inference engines can be employed for this purpose, of which the Sugeno and Mamdani engines are two of the best known (Lin et al. 2005). Neuro-fuzzy simulation refers to applying learning techniques developed in the neural network literature to fuzzy modeling or a fuzzy inference system (FIS) (Brown and Harris 1994). This is done by fuzzification of the input through membership functions (MFs), where a curved relationship maps the input value into the interval [0, 1]. The parameters associated with the input and output membership functions are trained using a technique such as back-propagation and/or least squares. Therefore, unlike the multi-layer perceptron (MLP), where weights are tuned, in ANFIS fuzzy language rules, or conditional (if-then) statements, are determined in order to train the model (Rajaee et al. 2009). The ANFIS is a universal approximator and, as such, is capable of approximating any real continuous function on a compact set to any degree of accuracy; it is functionally equivalent to a fuzzy inference system and, specifically, the ANFIS of interest here is functionally equivalent to the Sugeno first-order fuzzy model (Jang et al. 1997). The general construction of the ANFIS is presented in Fig. 3, which shows the fuzzy reasoning mechanism of the Sugeno model deriving an output f from a given input vector [x, y] and the corresponding equivalent ANFIS network. It is assumed that the FIS has two inputs, x and y, and one output, f. For the first-order Sugeno fuzzy model, a typical rule set with two fuzzy if-then rules can be expressed as (Aqil et al. 2007):
Rule 1: If x is A_1 and y is B_1, then f_1 = p_1 x + q_1 y + r_1.
Rule 2: If x is A_2 and y is B_2, then f_2 = p_2 x + q_2 y + r_2.
where A_1, A_2 and B_1, B_2 are the MFs for inputs x and y, respectively, and p_1, q_1, r_1 and p_2, q_2, r_2 are the parameters of the output functions. The functioning of the ANFIS is as follows.
Layer 1: Each node in this layer produces the membership grade of an input variable. The output of the ith node in layer k is denoted Q_i^k. Assuming a generalized bell function (gbellmf) as the membership function (MF), the output Q_i^1 can be computed as (Jang et al. 1995):

Q_i^1 = μ_{A_i}(x) = 1 / (1 + |(x − c_i)/a_i|^{2 b_i})    (5)

where {a_i, b_i, c_i} are adaptable variables known as premise parameters.
Layer 2: Every node in this layer multiplies the incoming signals:

Q_i^2 = w_i = μ_{A_i}(x) · μ_{B_i}(y),  i = 1, 2    (6)

Layer 3: The ith node of this layer calculates the normalized firing strength as:

Q_i^3 = w̄_i = w_i / (w_1 + w_2),  i = 1, 2    (7)

Layer 4: Node i in this layer calculates the contribution of the ith rule towards the model output, with the following node function (Jang et al. 1995):

Q_i^4 = w̄_i f_i = w̄_i (p_i x + q_i y + r_i)    (8)

where w̄_i is the output of layer 3 and {p_i, q_i, r_i} is the parameter set.
Layer 5: The single node in this layer calculates the overall output of the ANFIS as (Jang et al. 1995):

Q^5 = Σ_i w̄_i f_i = (Σ_i w_i f_i) / (Σ_i w_i)    (9)

The learning algorithm of the ANFIS is a hybrid algorithm combining gradient descent and the least-squares method (Aqil et al. 2007). The parameters to be optimized are the premise parameters {a_i, b_i, c_i} and the consequent parameters {p_i, q_i, r_i}. In the forward pass of the hybrid learning approach, node outputs go forward up to layer 4 and the consequent parameters are identified by the least-squares technique. In the backward pass, the error signals propagate backward and the premise parameters are updated by gradient descent. More information on ANFIS can be found in the related literature (Jang et al. 1995, Jang et al. 1997).
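To make the five layers concrete, the following Python sketch (an illustration with assumed parameter values, not the authors' implementation) carries out one forward pass of a two-input, two-rule first-order Sugeno ANFIS with generalized bell membership functions, following equations 5-9.

```python
import numpy as np

def gbellmf(x, a, b, c):
    # Generalized bell membership function, Eq. (5)
    return 1.0 / (1.0 + np.abs((x - c) / a) ** (2 * b))

def anfis_forward(x, y, premise, consequent):
    # Layer 1: membership grades of the two inputs
    mu_A = [gbellmf(x, *premise["A"][i]) for i in range(2)]
    mu_B = [gbellmf(y, *premise["B"][i]) for i in range(2)]
    # Layer 2: firing strength of each rule, Eq. (6)
    w = np.array([mu_A[i] * mu_B[i] for i in range(2)])
    # Layer 3: normalized firing strengths, Eq. (7)
    w_bar = w / w.sum()
    # Layer 4: weighted rule outputs f_i = p_i x + q_i y + r_i, Eq. (8)
    f = np.array([p * x + q * y + r for (p, q, r) in consequent])
    # Layer 5: overall output, Eq. (9)
    return np.sum(w_bar * f)

# Hypothetical premise parameters {a, b, c} and consequent parameters {p, q, r}
premise = {"A": [(2.0, 2.0, 1.0), (2.0, 2.0, 5.0)],
           "B": [(3.0, 2.0, 2.0), (3.0, 2.0, 8.0)]}
consequent = [(0.5, 0.2, 1.0), (-0.3, 0.8, 2.0)]
print(anfis_forward(3.0, 4.0, premise, consequent))
```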
The ANFIS uses the learning algorithms of neural networks together with fuzzy logic to design a nonlinear mapping between input and output. Because it combines the linguistic power of fuzzy systems with the numerical strength of neural networks, it is very powerful for modeling processes such as hydrological reservoir management and suspended sediment load estimation (Nayak et al. 2004, Kişi 2009). The adaptive neuro-fuzzy approach adjusts the values and ranges of the membership functions over successive iterations to reach the network with the minimum error. The Takagi-Sugeno inference method is used in the ANFIS model, and the neuro-fuzzy model is affected by the number and type of inputs and by the shape of the membership functions (Jang et al. 1997). Figure 3 shows the structure, interactions, and connections between layers in the adaptive neuro-fuzzy inference model.


Fig.3 Schematic diagram of the ANFIS model.

3. Model Evaluation Criteria


The aim of model evaluation is to determine the model's error with respect to the data used, based on various error criteria. In this study, the following criteria were used to evaluate the models:
1- Root mean square error (RMSE):

RMSE = sqrt( Σ_{i=1}^{n} (P_obs,i − P_pre,i)² / n )    (10)

where P_obs and P_pre are the observed and predicted precipitation, respectively, and n is the total number of observations.
2- Coefficient of determination (R²):

R² = 1 − Σ_{i=1}^{N} (P_obs,i − P_pre,i)² / Σ_{i=1}^{N} (P_pre,i − P̄)²    (11)

where P̄ is the average observed precipitation. R² shows the degree of co-linearity between the observed and simulated time series and ranges from 0.0 to 1.0, with higher values indicating a higher degree of co-linearity.
3- Nash-Sutcliffe coefficient of efficiency (CE):

CE = 1 − Σ (P_obs,i − P_pre,i)² / Σ (P_obs,i − P̄)²    (12)

where P̄ is the average observed precipitation. This measure, introduced by Nash and Sutcliffe (1970), ranges between 1 (perfect fit) and −∞; zero or negative CE values indicate that the mean of the observed time series would be a better predictor than the model (Talei et al. 2013).
4- Another index used in this research is the Akaike Information Criterion (AIC):

AIC = m ln(RMSE) + 2 N_par    (13)

A model with a lower AIC is preferable. In equation 13, m is the number of input data points and N_par is the number of trained parameters (Nourani and Komasi 2013).
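For reference, the four criteria can be computed as in the following Python sketch (an illustrative implementation, not the authors' code); observed and predicted precipitation are assumed to be NumPy arrays of equal length.

```python
import numpy as np

def rmse(obs, pre):
    # Root mean square error, Eq. (10)
    return np.sqrt(np.mean((obs - pre) ** 2))

def r_squared(obs, pre):
    # Coefficient of determination as written in Eq. (11)
    return 1.0 - np.sum((obs - pre) ** 2) / np.sum((pre - obs.mean()) ** 2)

def nash_sutcliffe(obs, pre):
    # Nash-Sutcliffe coefficient of efficiency, Eq. (12)
    return 1.0 - np.sum((obs - pre) ** 2) / np.sum((obs - obs.mean()) ** 2)

def aic(obs, pre, n_par):
    # Akaike Information Criterion, Eq. (13); m is the number of data points
    m = len(obs)
    return m * np.log(rmse(obs, pre)) + 2 * n_par

obs = np.array([10.0, 55.0, 0.0, 120.0])   # hypothetical observed precipitation (mm)
pre = np.array([12.0, 60.0, 5.0, 100.0])   # hypothetical predicted precipitation (mm)
print(rmse(obs, pre), nash_sutcliffe(obs, pre), aic(obs, pre, n_par=10))
```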

4. Results and Discussion


A feed-forward network was used in this study. Various training rules and transfer functions for the middle-layer neurons were examined by trial and error. The number of input-layer neurons equals the number of network input parameters, the number of hidden-layer neurons was varied by trial and error between 3 and 20, and the number of output-layer neurons was set to one. Another key point in network training is the number of iterations (epochs); determining the correct number of epochs is very important. In general, as the number of training iterations increases, the network prediction error decreases, but when the number of iterations exceeds a particular value, the test-group error increases. The optimal number of iterations must therefore be chosen so that model quality is acceptable for both training and testing. In this study, the optimal number of epochs for each structure was selected by monitoring the changes in network error in both the training and test stages. The structures and results of this research are shown in Table 5.
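Such a trial-and-error search over hidden-layer sizes and transfer functions can be sketched as follows. This is an illustrative Python/scikit-learn example (the study itself used the MATLAB neural network toolbox), and the arrays X_train, y_train, X_test, y_test are assumed to hold the normalized input variables and the target P(t+1).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def search_structures(X_train, y_train, X_test, y_test):
    """Try hidden-layer sizes 3-20 and two transfer functions; keep the lowest test RMSE."""
    best = None
    for n_hidden in range(3, 21):                  # hidden neurons, by trial and error
        for activation in ("logistic", "tanh"):    # roughly logsig / tansig
            net = MLPRegressor(hidden_layer_sizes=(n_hidden,),
                               activation=activation,
                               solver="lbfgs",     # a quasi-Newton trainer
                               max_iter=1000,
                               random_state=0)
            net.fit(X_train, y_train)
            rmse_test = np.sqrt(mean_squared_error(y_test, net.predict(X_test)))
            if best is None or rmse_test < best[0]:
                best = (rmse_test, n_hidden, activation)
    return best
```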


In this study, T(t), P(t), and N(t) denote the monthly relative humidity, precipitation, and temperature, respectively; T(t-1), P(t-1), and N(t-1) are the same variables with a one-month lag; and P(t+1) is the next month's precipitation.
Table 5. Results of the ANN model.

Structure   Transfer function   Training function     Network inputs                              Epoch   Network structure   R2 Train   R2 Simulate   RMSE Train   RMSE Simulate
1           logsig              BFGS Quasi-Newton     P(t), P(t-1)                                364     2-8-1               0.82       0.60          0.0414       0.0567
2           tansig              Levenberg-Marquardt   T(t), T(t-1), N(t), N(t-1)                  728     4-9-1               0.72       0.47          0.0518       0.0615
3           satlin              Levenberg-Marquardt   N(t), N(t-1), P(t-1), P(t)                  72      4-4-5-1             0.71       0.47          0.0527       0.0588
4           tansig              Levenberg-Marquardt   P(t-1), P(t), T(t), T(t-1)                  1000    4-7-1               0.88       0.61          0.0344       0.0575
5           satlin              BFGS Quasi-Newton     T(t), T(t-1), N(t), N(t-1), P(t-1), P(t)    102     6-5-5-1             0.82       0.67          0.0414       0.0530

Different structures of the adaptive neuro-fuzzy inference system model were examined, comparing various membership functions and numbers of epochs. To find the best model, the various indices were assessed, as given in Table 6.
Table 6. Results of the ANFIS model.

Structure   Network inputs                              Membership function   Epoch   R2 Train   R2 Simulate   RMSE Train   RMSE Simulate
1           P(t), P(t-1)                                pimf                  20      0.74       0.63          0.0483       0.0684
2           T(t), T(t-1), N(t), N(t-1)                  trimf                 10      0.47       0.29          0.0644       0.1002
3           N(t), N(t-1), P(t-1), P(t)                  trimf                 15      0.49       0.18          0.0653       0.1283
4           P(t-1), P(t), T(t), T(t-1)                  trapmf                20      0.61       0.41          0.0573       0.0897
5           T(t), T(t-1), N(t), N(t-1), P(t-1), P(t)    trimf                 10      0.98       0.68          0.0126       0.0713

Finally, the performance of the ANFIS was compared with that of the ANN; the results are presented in Table 7. The observed precipitation and the precipitation predicted by the two models are also shown in Figure 4.
Table 7. Comparison of the different precipitation models.

             Train stage                          Simulate stage
Model type   RMSE     R2     CE     AIC           RMSE     R2     CE     AIC
ANFIS        0.0126   0.98   0.85   760.30        0.0713   0.68   0.65   243.74
ANN          0.0414   0.82   0.82   759.26        0.0530   0.67   0.51   244.25

[Figure 4 plots observed precipitation (mm) against time (months), together with the corresponding ANFIS and ANN predictions.]

Figure 4. Comparison of different precipitation modeling.


A larger CE (Nash-Sutcliffe) index indicates a better model; according to the results given in Table 7, the ANFIS model has slightly better performance on this criterion, and the same holds for the coefficient of determination R². In contrast, lower AIC and RMSE values indicate a better model, and on these criteria the ANN model is better. From Figure 4 it can also be concluded that the ANFIS model is clearly better at estimating the minimum points, while the ANN model is fairly good at estimating the maximum points. Overall, however, the performance of the two models is similar.
This result agrees with Dorum et al. (2010), who likewise found similar performance for the two types of model.

5. Conclusions
In this study, the ANN model was used to predict precipitation at the Vrayneh station, and its results were compared with those of the ANFIS model. Examination of the different structures showed that increasing the number of hidden-layer neurons does not by itself improve the model results: in all of the superior structures in this study, the number of hidden-layer neurons was less than 10, which means the desired results can be obtained with a small number of neurons. Evaluation of the various training functions showed that trying all of them is not recommended, because it is time-consuming; instead, it is proposed that the Levenberg-Marquardt, BFGS Quasi-Newton, and Bayesian Regularization functions be used, because of their better performance. Likewise, evaluation of the various transfer functions suggests that the four functions tansig, logsig, satlin, and poslin be used, according to their better performance. Relative humidity and temperature were used in addition to rainfall, and in the superior model structures it was observed that these variables improve the modeling results; it is therefore suggested that temperature and relative humidity be used alongside rainfall in precipitation forecasting studies. Overall, according to the various structures and indices examined in this study, the neural network and ANFIS have similar performance, and both models can be used for precipitation prediction.

References
Aqil, M., I. Kita, A. Yano and S. Nishiyama (2007). "A comparative study of artificial neural networks and neuro-fuzzy in continuous modeling of the daily and hourly behaviour of runoff." Journal of Hydrology 337(1-2): 22-34.
Asadi, S., J. Shahrabi, P. Abbaszadeh and S. Tabanmehr (2013). "A New Hybrid Artificial Neural Networks for Rainfall-Runoff Process Modeling." Neurocomputing: 05-23.
Brown, M. and C. Harris (1994). Neuro-Fuzzy Adaptive Modeling and Control. Prentice-Hall International, New Jersey.
Dorum, A., A. Yarar, M. Faik Sevimli and M. Onyildiz (2010). "Modelling the rainfall-runoff data of Susurluk basin." Expert Systems with Applications 37(9): 6587-6593.
Hamzaçebi, C. (2008). "Improving artificial neural networks' performance in seasonal time series forecasting." Information Sciences 178(23): 4550-4559.
Jang, J. S. R., C. T. Sun and E. Mizutani (1995). Neuro-fuzzy and soft computing: A computational approach to learning and machine intelligence. Prentice-Hall Inc., New Jersey.
Jang, J. S. R., C. T. Sun and E. Mizutani (1997). Neuro-Fuzzy and Soft Computing: A Computational Approach to Learning and Machine Intelligence. Prentice-Hall International, New Jersey.
Karamouz, M. and S. Araghinejad (2011). Advanced Hydrology. Amirkabir University of Technology, Tehran.
Kişi, Ö. (2009). "Evolutionary fuzzy models for river suspended sediment concentration estimation." Journal of Hydrology 372(1-4): 68-79.
Lin, J. Y., C. T. Cheng, Y. G. Sun and K. Chau (2005). "Long-term prediction of discharges in Manwan hydropower using adaptive-network-based fuzzy inference system models." Lecture Notes in Computer Science 3612: 1152-1161.
Mandeville, A. N., P. E. O'Connell, J. V. Sutcliffe and J. E. Nash (1970). "River flow forecasting through conceptual models part III - The Ray catchment at Grendon Underwood." Journal of Hydrology 11(2): 109-128.
Nayak, P. C., K. P. Sudheer, D. M. Rangan and K. S. Ramasastri (2004). "A neuro-fuzzy computing technique for modeling hydrological time series." Journal of Hydrology 291(1-2): 52-66.
Nourani, V., M. A. Kynejad and L. Malekani (2010). "The use of adaptive neuro-fuzzy systems in modeling of rainfall-runoff." Journal of Civil and Environmental Engineering, University of Tabriz 39(4): 75-81.
Nourani, V., Ö. Kisi and M. Komasi (2011). "Two hybrid Artificial Intelligence approaches for modeling rainfall-runoff process." Journal of Hydrology 402: 41-59.
Nourani, V. and M. Komasi (2013). "A geomorphology-based ANFIS model for multi-station modeling of rainfall-runoff process." Journal of Hydrology 490: 41-55.
Piotrowski, A. P. and J. J. Napiorkowski (2013). "A comparison of methods to avoid overfitting in neural networks training in the case of catchment runoff modelling." Journal of Hydrology 476: 97-111.
Rajaee, T., S. A. Mirbagheri, M. Zounemat-Kermani and V. Nourani (2009). "Daily suspended sediment concentration simulation using ANN and neuro-fuzzy models." Science of the Total Environment 407: 4916-4927.
Riad, S., J. Mania, L. Bouchaou and Y. Najjar (2004). "Rainfall-runoff model using an artificial neural network approach." Mathematical and Computer Modelling 40(7-8): 839-846.
Ross, T. J. (1995). Fuzzy Logic with Engineering Applications. McGraw-Hill Inc., USA.


Talei, A., L. H. C. Chua, C. Quek and P.-E. Jansson (2013). "Runoff forecasting using a Takagi-Sugeno neuro-fuzzy model with online learning." Journal of Hydrology 488: 17-32.
Wu, C. L. and K. W. Chau (2011). "Rainfall-runoff modeling using artificial neural network coupled with singular spectrum analysis." Journal of Hydrology 399(3-4): 394-409.
