ISSN 2278-6856
Computer Science and Engineering Department, Bundelkhand Institute of Engineering and Technology, Jhansi
Abstract
also to be investigated.
1. INTRODUCTION
2. PERFORMANCE METRICS
An Artificial Neural Network is a representation of the human brain that tries to learn and simulate its training input patterns from a predefined set of example patterns. The network is trained with particular specifications. The output obtained after training the network is compared with the desired target value, and the error is calculated from these values.
The performance function used is the Mean Squared Error with Regularization:

MSEREG = γ · MSE + (1 − γ) · MSW,  where MSW = (1/n) Σ Wi²

Where,
γ = Performance Ratio
Wi = Weight of the network
The network learns by adjusting weights. The process of adjusting the weights to make the neural network learn the relationship between the inputs and targets is known as learning or training. There are several techniques for training a network; the gradient descent method is the most common.
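As an illustration of gradient-descent weight adjustment (a minimal sketch for a single linear neuron; the toy pattern, target, and learning rate are assumed values, not the paper's network):

```python
import numpy as np

def gradient_descent_step(w, x, t, lr=0.1):
    """Adjust weights w to reduce the squared error between output and target t."""
    o = x @ w                # network output for pattern x
    e = t - o                # error = target minus obtained output
    grad = -2 * e * x        # gradient of (t - o)^2 with respect to w
    return w - lr * grad     # move the weights against the gradient

w = np.zeros(3)                        # initial weights
x = np.array([1.0, 0.5, -0.5])         # illustrative input pattern
t = 1.0                                # illustrative target value
for _ in range(100):
    w = gradient_descent_step(w, x, t)
print(abs(t - x @ w))                  # error shrinks toward zero
```

Each step moves the weights a small distance against the error gradient, so the output approaches the target over repeated iterations.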
The proposed variant replaces the raw error with its inverse tangent before squaring, giving the Arctan Mean Squared Error with Regularization:

AMSEREG = γ · (1/n) Σ (arctan(T − O))² + (1 − γ) · MSW

Where,
γ = Performance Ratio
Wi = Weight of the network
MSEREG Algorithm:
1. Initialize n input patterns (integer values)
2. Do
3. For each training pattern n, train the network:
4. O = neural_net_output(network, n)
5. T = neural_net_target(desired)
6. Compute the error e = (T − O) at the output units
7. Update the weights to reduce the MSEREG cost
8. While the stopping criterion is not met
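The cost evaluated in this loop can be sketched as follows, assuming the standard definition MSEREG = γ·MSE + (1 − γ)·MSW, with γ the performance ratio and MSW the mean of the squared network weights; the sample targets, outputs, and weights are illustrative:

```python
import numpy as np

def msereg(targets, outputs, weights, gamma=0.5):
    """MSEREG cost: gamma * mean squared error + (1 - gamma) * mean squared weights."""
    mse = np.mean((np.asarray(targets) - np.asarray(outputs)) ** 2)
    msw = np.mean(np.asarray(weights) ** 2)
    return gamma * mse + (1 - gamma) * msw

t = [1.0, 0.0, 1.0]      # desired targets
o = [0.8, 0.2, 0.6]      # obtained outputs
w = [0.5, -0.5]          # network weights
print(msereg(t, o, w))
```

The weight term penalizes large weights, which is what distinguishes MSEREG from a plain mean squared error.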
3. RESEARCH METHODOLOGY
An Artificial Neural Network (ANN) and its learning techniques have been used for predicting software effort on a dataset of software projects, in order to compare the performance results obtained from the various models.
3.1. Empirical Data Collection
The data we have used is collected from a binary image of 20 pixel values. The MSEREG is calculated for all 20 pixel values. The AMSEREG is then calculated by taking the trigonometric tangent inverse (arctan) of the error value, which is the actual target minus the obtained output. A threshold value is set (say 0.5) so as to count the number of training cases recognized: if the obtained output is more than the threshold value, then the neural network has recognized the training pattern.
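A minimal sketch of this calculation, assuming synthetic pixel data, γ = 0.5, and the 0.5 threshold mentioned above (the 20 pixel values and weights here are randomly generated placeholders, not the paper's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
targets = rng.integers(0, 2, size=20).astype(float)        # binary image, 20 pixels
outputs = np.clip(targets + rng.normal(0, 0.2, 20), 0, 1)  # obtained network outputs
weights = rng.normal(0, 0.5, 10)                           # network weights
gamma = 0.5                                                # performance ratio

# AMSEREG: the raw error (T - O) is passed through arctan before squaring
amsereg = gamma * np.mean(np.arctan(targets - outputs) ** 2) \
          + (1 - gamma) * np.mean(weights ** 2)

# Count training cases whose output exceeds the 0.5 threshold
recognized = np.sum(outputs > 0.5)
print(amsereg, recognized)
```

Because |arctan(e)| ≤ |e| for any error e, the arctan transform shrinks the squared-error term relative to the plain MSEREG.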
3.2. Diagrammatic Representation of Error Calculation
First, initialize the training patterns and set a particular target value for each input training pattern. Set the maximum number of iterations up to which the input data are to be iterated. Then train the input pattern using the Backpropagation training algorithm and compare the obtained output with the target value set. If it is near the target value, the network has recognized the training pattern; otherwise it has not.
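The procedure above can be sketched for a single sigmoid neuron (the pattern, target, learning rate, iteration limit, and "near the target" tolerance are assumed illustrative values):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

pattern = np.array([1.0, 0.0, 1.0])   # input training pattern
target = 0.9                          # target value set for this pattern
w = np.zeros(3)                       # initialized weights
max_iter = 5000                       # maximum number of iterations

for _ in range(max_iter):
    o = sigmoid(pattern @ w)          # obtained output
    e = target - o                    # error = target minus output
    # backpropagation for one neuron: delta rule on the squared error
    w += 0.5 * e * o * (1 - o) * pattern

# compare the obtained output with the target value set
recognized = abs(target - sigmoid(pattern @ w)) < 0.05
print(recognized)
```

If the final output falls within the tolerance of the target, the network is considered to have recognized the training pattern.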
4. PERFORMANCE COMPARISON
From Table 1.2 it is seen that the Mean Square Error with Regularization value for the inverse tangent function has the least error value, which supports our assumption with better results. The result is obtained from the 20 pixel values produced by binarization of the input data.
5. RESULT
As seen from Figure 1.2, the Mean Square Error with Regularization value is evaluated using various inverse functions. The Arctan Mean Square Error with Regularization value shows the least error, from which it can be concluded that Arctan MSEREG can be advantageous in increasing the accuracy of a large trained neural network.
[Figure 1.2: Comparison between the standard MSEREG and the Arctan, Arcsin, and Arccos MSEREG variants. Y-axis: Mean Squared Error with Regularization, ranging from 0 to 0.014; the Arctan MSEREG bar is the lowest.]

7. CONCLUSION
The Arctan Mean Square Error with Regularization value shows the least error, from which it can be further concluded that Arctan MSEREG can be advantageous in increasing the accuracy of a large trained neural network. Our proposed concept of reducing the cost error function has shown positive and effective results.
AUTHOR
Dr. R.N. Verma is currently working as an Associate Professor in the Computer Science and Engineering Department at Bundelkhand Institute of Engineering and Technology, Jhansi. He has great experience in the field of Computer Science, has a commendable and growing list of publications, and is a member of reputed professional societies.
AUTHOR
Dr. A. K. Solanki is currently working as a Professor in the Computer Science and Engineering Department at Bundelkhand Institute of Engineering and Technology, Jhansi. He has great experience in the field of Computer Science and also has a commendable and growing list of publications.
AUTHOR
Shobhit Kumar belongs to Lucknow, the capital of Uttar Pradesh. Mr. Shobhit Kumar received both his primary and professional education in Lucknow. He completed his High School and Intermediate from the UP Board. He received his Bachelor's and Master's in Technology degrees in Computer Science and Engineering from Dr. A. P. J. Abdul Kalam Technical University (formerly known as UPTU), and is pursuing a Ph.D. in Computer Science and Engineering from the same University.
He has worked as a Lecturer and Assistant Professor in many reputed Engineering colleges of Uttar Pradesh Technical University, and is currently working as an Assistant Professor at Rajkiya Engineering College, Ambedkar Nagar (a Government Engineering College), affiliated to Dr. A. P. J. Abdul Kalam Technical University (formerly known as UPTU and GBTU).