The output of the network is computed as

y_k = f( Σ_j w_jk · f_h( Σ_i w_ij · x_i ) )    -----------------------(1)

where x_i are the inputs, w_ij the weights from input unit i to hidden unit j, f_h the hidden-layer activation function, w_jk the weights from hidden unit j to output unit k, and f the output activation function.
This is carried out by adjusting the weights (W) of the interconnections according to
some learning algorithm. The MLP employs a supervised learning algorithm called back propagation.
Learning is guided by specifying the desired response of the network for each training input
pattern and comparing it with the actual output computed by the network, in order to adjust
the weights. These adjustments have the purpose of minimizing some energy function, normally the
squared difference between the desired and actual outputs. The derivatives of this function with
respect to the weights are employed to propagate the error backwards through the network, from the
output through the hidden layer(s), until it reaches the input layer. Each weight is modified according to
its particular contribution to the global error. The performance of the network is most often measured
in terms of the root mean square error. After a number of loops, when the benefits of further
optimization are regarded as small, the training process converges and stops. A typical MLP
architecture is shown below.
I nternational J ournal of Creative Mathematical Sciences & Technology (I J CMST) 2(1): 35-41, 2012
ISSN (P): 2319 7811, ISSN (O): 2319 782X
37
Corresponding Author: K. K. Pradhan, Vikash School of Business Management,
Bargarh, Odisha, India
Multilayer Perceptron Architecture
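As an illustration of equation (1) and the back-propagation procedure described above, the following sketch trains a one-hidden-layer perceptron by gradient descent on synthetic data. The data, network sizes and learning rate are illustrative assumptions, not the models fitted in this study:

```python
# Sketch: one hidden layer (sigmoid f_h), identity output f, trained by
# back-propagating the error as described in the text.  Data, sizes and
# learning rate are illustrative, not this study's fitted models.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 7 inputs -> 1 output, mirroring the X1..X7 -> Y setup.
X = rng.normal(size=(50, 7))
y = X @ rng.normal(size=7)                    # a learnable synthetic target

W1 = rng.normal(scale=0.1, size=(7, 5))       # input-to-hidden weights w_ij
W2 = rng.normal(scale=0.1, size=(5, 1))       # hidden-to-output weights w_jk
lr = 0.05

rmse0 = np.sqrt(np.mean((sigmoid(X @ W1) @ W2 - y[:, None]) ** 2))

for epoch in range(2000):
    h = sigmoid(X @ W1)                       # hidden-layer activations
    err = h @ W2 - y[:, None]                 # actual minus desired output
    # Derivatives of the half mean squared error, propagated backwards
    # from the output layer to the hidden layer:
    grad_W2 = h.T @ err / len(X)
    grad_h = (err @ W2.T) * h * (1.0 - h)     # sigmoid derivative h(1 - h)
    grad_W1 = X.T @ grad_h / len(X)
    W2 -= lr * grad_W2                        # each weight moves according
    W1 -= lr * grad_W1                        # to its error contribution

rmse = np.sqrt(np.mean((sigmoid(X @ W1) @ W2 - y[:, None]) ** 2))
```

Here f_h is the sigmoid and f the identity; the root mean square error falls as training proceeds, matching the stopping criterion described above.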
RADIAL BASIS FUNCTION (RBF)
A Radial Basis Function (RBF) is a real-valued function whose value depends only on
the distance from the origin, so that φ(x) = φ(‖x‖); or alternatively on the distance from
some other point c, called a center, so that φ(x, c) = φ(‖x − c‖). Any function that
satisfies the property φ(x) = φ(‖x‖) is a radial function. The norm is usually the Euclidean
distance, although other distance functions are also possible. According to Lukaszyk (2004), by
using the Lukaszyk-Karmowski metric it is possible to avoid problems with an ill-conditioned matrix for
some radial functions, and the coefficients w_i can still be determined, since the distance is always greater
than zero. Sums of radial basis functions are typically used to approximate given functions. This
approximation process can also be interpreted as a simple kind of neural network. Commonly
used types of radial basis functions include (writing r = ‖x − c_i‖):
- Gaussian: φ(r) = exp(−(εr)²) for some ε > 0.
- Multiquadric: φ(r) = √(r² + ε²) for some ε > 0.
- Polyharmonic spline: φ(r) = r^k for k = 1, 3, 5, ...; φ(r) = r^k ln(r) for k = 2, 4, 6, ...
- Thin plate spline (a special polyharmonic spline): φ(r) = r² ln(r).
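The radial functions listed above can be evaluated directly. The sketch below follows the definitions given, with eps as the shape parameter; the even-k polyharmonic spline is taken as 0 at r = 0, the usual convention:

```python
# Sketch of the radial functions above, with r = ||x - c|| (Euclidean)
# and eps the shape parameter; values only illustrate the definitions.
import numpy as np

def gaussian(r, eps=1.0):
    return np.exp(-(eps * r) ** 2)

def multiquadric(r, eps=1.0):
    return np.sqrt(r ** 2 + eps ** 2)

def polyharmonic(r, k):
    # r^k for odd k; r^k ln(r) for even k (taken as 0 at r = 0)
    r = np.asarray(r, dtype=float)
    if k % 2 == 1:
        return r ** k
    return np.where(r > 0, r ** k * np.log(np.where(r > 0, r, 1.0)), 0.0)

def thin_plate(r):
    return polyharmonic(r, 2)                 # r^2 ln(r)

r = np.linalg.norm(np.array([3.0, 4.0]))      # ||x - c|| = 5.0
```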
Radial Basis Functions are typically used to build up function approximations of the form

y(x) = Σ_{i=1}^{N} w_i φ(‖x − c_i‖)

where the approximating function y(x) is represented as a sum of N radial basis functions, each
associated with a different center c_i and weighted by an appropriate coefficient w_i. The weights
w_i can be estimated using the matrix methods of linear least squares, because the approximating
function is linear in the weights.
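Since y(x) is linear in the weights, the w_i can indeed be found by linear least squares. A minimal sketch follows, assuming a Gaussian basis; the centers, shape parameter and target function are illustrative choices:

```python
# Sketch: estimate the weights w_i of y(x) = sum_i w_i phi(||x - c_i||)
# by linear least squares.  The Gaussian basis, centers, shape parameter
# and target function are illustrative assumptions.
import numpy as np

def phi(r, eps=4.0):
    return np.exp(-(eps * r) ** 2)            # Gaussian radial basis

x = np.linspace(0.0, 1.0, 40)                 # 1-D sample points
target = np.sin(2.0 * np.pi * x)              # function to approximate
centers = np.linspace(0.0, 1.0, 10)           # N = 10 centers c_i

# Design matrix: Phi[j, i] = phi(|x_j - c_i|); y is linear in w.
Phi = phi(np.abs(x[:, None] - centers[None, :]))
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)

approx = Phi @ w
max_err = np.max(np.abs(approx - target))     # fit error at the samples
```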
The sum can also be interpreted as a rather simple single-layer type of artificial neural
network called a radial basis function network, with the radial basis functions taking on the role
of the activation functions of the network. It can be shown that any continuous function on a
compact interval can in principle be interpolated with arbitrary accuracy by a sum of this form, if
a sufficiently large number N of radial basis functions is used.
y(x) = Σ_{i=1}^{N} w_i φ(‖x − c_i‖)
The approximant y(x) is differentiable with respect to the weights w_i. The weights could
thus be learned by using any of the standard iterative methods for neural networks.
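As an alternative to the closed-form least-squares solution, the sketch below learns the weights iteratively by plain gradient descent on the squared error, exploiting the differentiability of y(x) in the w_i; the basis, centers and learning rate are illustrative assumptions:

```python
# Sketch: learn the RBF weights iteratively by gradient descent on the
# squared error, using the fact that y(x) is differentiable in the w_i.
# Basis, centers and learning rate are illustrative assumptions.
import numpy as np

def phi(r, eps=4.0):
    return np.exp(-(eps * r) ** 2)            # Gaussian radial basis

x = np.linspace(0.0, 1.0, 40)
target = np.sin(2.0 * np.pi * x)
centers = np.linspace(0.0, 1.0, 10)
Phi = phi(np.abs(x[:, None] - centers[None, :]))

w = np.zeros(10)                              # start from zero weights
lr = 0.1
for _ in range(5000):
    err = Phi @ w - target                    # y(x_j) minus desired value
    w -= lr * Phi.T @ err / len(x)            # gradient of half mean SE

mse = np.mean((Phi @ w - target) ** 2)
mse_start = np.mean(target ** 2)              # error of the zero-weight net
```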
METHODOLOGY, RESULTS AND CONCLUSION
The present study aims at using ANN as a tool to solve resource use optimization
problems. As an illustration, the methodology has been applied to the modeling and forecasting of
rice crop yield on the basis of seven variables. We have taken rice crop yield as the output
(Y) and total cost of bullock/machine labor per acre (X1), total cost of human labor per acre
(X2), cost of seeds per acre (X3), fertilizer cost per acre (X4), total irrigation cost (X5), cost of
pesticide (X6) and credit per acre (X7) as the inputs. The network information for the
RBF using SPSS 17.0 is shown in Table 1. The overall percent correctness during training
of the data is found to be 15.5%. The training time required was 0:00:05.657. The primary data have
been collected through the personal interview method, based on a pre-designed questionnaire, from 84
small farmers of a village situated in the Bargarh district of Odisha. The poor record-keeping
system existing among the farmers is one of the main constraints to this kind of research,
since the farmers have given data from their memory.
We trained the network, then changed our data set by assigning the value zero to X1, X2, X3,
X4, X5, X6 and X7 along with a targeted level of output, and ran the network to obtain the values of
Xi (i = 1, ..., 7). For example, we fixed the target to be 7106 Rupees and obtained the X1 to X7
values in Rupees shown in Figure 1 as well as in Table 3. The same procedure was repeated in
both the NeuroSolutions 5.0 and SPSS 17.0 software. Figure 1 shows the comparative results of
SPSS 17.0, NeuroSolutions 5.0 and the actual data gathered from the farmers. The network
information and the case processing summary for SPSS 17.0 using the MLP network are shown in
Tables 2 and 3. The cumulative mean square error in the case of NeuroSolutions 5.0 was found to
be 0.1075, and the number of epochs was 1000. The NeuroSolutions 5.0 neural network used a
Time-Lagged Feedforward Network (TLFN) with two hidden layers. To examine the difference between
the predictions, we used two cases where the target was Rs. 6847.6 but the primary data were
different. We found that SPSS 17.0 predicted the same values for both cases, while
NeuroSolutions 5.0 gave different values, as depicted in Figures 2 and 3 and also in
Table 3. Therefore, in our view, NeuroSolutions 5.0 gives the better result by taking the
primary data into account when predicting the values, while SPSS 17.0 considers only the given
target value, which is the input to the networks in this case. Evidently, the predicted and actual
values in both methods, i.e. RBF and MLP, are quite close.
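Conceptually, the procedure described above inverts a trained forward model: the target yield is fixed and input values that reach it are searched for. The sketch below illustrates this idea with a stand-in linear model g, NOT the networks trained in the study; the coefficients, starting point and learning rate are assumptions, and only the Rs. 7106 target mirrors the text:

```python
# Sketch of the inverse use described above: fix a target output and
# search for inputs that produce it.  The linear "trained model" g and
# its coefficients are stand-ins, NOT the study's fitted networks.
import numpy as np

rng = np.random.default_rng(2)
coef = rng.uniform(0.5, 2.0, size=7)          # stand-in model parameters

def g(x):
    return float(coef @ x)                    # predicted yield (Rs.)

target = 7106.0                               # targeted level of output
x = np.zeros(7)                               # X1..X7 initialised to zero
lr = 0.01
for _ in range(2000):
    err = g(x) - target
    x -= lr * err * coef                      # step on grad of (g - target)^2 / 2

recovered = g(x)                              # approaches the target
```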
The neural network results show that the total cost of human labor per acre in Rs. (X2) should be
reduced and the expenditure on irrigation facilities (X5) should be increased. This result seems
practical, since the labor cost is high and the farmers should invest more in irrigation for better
productivity. Thus, it can be concluded that the artificial neural network methodology is successful
in describing the given data and can be used as a reliable tool for resource use optimization
problems.
Table 1 Network Information using SPSS 17.0

Input Layer    Factors                1 Output_Y
               Number of Units        46
Hidden Layer   Number of Units        2 (a)
               Activation Function    Softmax
Output Layer   Dependent Variables    1 Bullock_X1
                                      2 Humane_X2
                                      3 Seeds_X3
                                      4 Fertilizer_X4
                                      5 Irrigation_X5
                                      6 Pesticide_X6
                                      7 Credit_X7
               Number of Units        271
               Activation Function    Identity
               Error Function         Sum of Squares

a. Determined by the Bayesian Information Criterion: the "best" number of
hidden units is the one that yields the smallest BIC in the training data.
Table 2 Case Processing Summary using SPSS 17.0

                     N      Percent
Sample   Training    60     100.0%
         Valid       60     100.0%
Excluded             44
Total                104
Table 3 Predicted Data by NeuroSolutions 5.0, SPSS 17.0 and the Original Data
Y X1 X2 X3 X4 X5 X6 X7
MLP Predicted Values (NeuroSolutions 5.0)
7106 572.65 1387.08 303.54 663.01 433.3 214.06 803.81
6847.6 580.15 1436.32 306.17 673.24 446.85 202.67 793.26
6847.6 581.74 1446.8 306.77 675.42 449.74 200.24 791.01
RBF Predicted Values (SPSS 17.0)
7106 530 1200 287.5 700 394.65 214.29 642.86
6847.6 540 940 250 580 263.1 250 750
6847.6 540 940 250 580 263.1 250 750
Original Data
7106 500 1600 270 670 131.55 260 666.67
6847.6 520 1600 248 760 131.55 280 500
6847.6 520 1680 260 720 131.55 240 400
Note: The values of the output (Y) and the inputs (X1 to X7) are measured in Rupee (Rs.) terms.
[Figure-1: For a target output of Rs. 7106, the input values X1 to X7 (in Rs.) predicted by NeuroSolutions and SPSS (RBF), together with the original data.]
[Figure-2: For a target output of Rs. 6847.6, the input values X1 to X7 (in Rs.) predicted by NeuroSolutions and SPSS (RBF), together with the original data.]
[Figure-3: For a target output of Rs. 6847.6 (second case), the input values X1 to X7 (in Rs.) predicted by NeuroSolutions and SPSS (RBF), together with the original data.]
REFERENCES
[1]. Buhmann, Martin D. (2003), Radial Basis Functions: Theory and Implementations,
Cambridge University Press, ISBN 978-0-521-63338-3.
[2]. Cheng, B. and Titterington, D. M. (1994). Neural networks: A review from a
statistical perspective. Statistical Science, 9: 2-54.
[3]. Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function.
Mathematics of Control, Signals and Systems, 2: 303-14.
[4]. Hertz, J., Krogh, A. and Palmer, R.G. (1991). Introduction to the Theory of Neural
Computation. Reading, MA: Addison-Wesley.
[5]. Lukaszyk, S. (2004). A new concept of probability metric and its applications in
approximation of scattered data sets. Computational Mechanics, 33: 299-304.
[6]. Singh, R. K., and Prajneshu (2008). Artificial Neural Network Methodology for
Modeling and Forecasting Maize Crop Yield, Agricultural Economics Research
Review, 21: 5-10.
[7]. Warner, B. and Misra, M. (1996). Understanding neural networks as statistical tools.
American Statistician, 50: 284-93.
[8]. Zhang, G. P. (2007). Avoiding pitfalls in neural network research. IEEE Transactions
on Systems, Man and Cybernetics Part C: Applications and Reviews, 37: 3-16.
____________________