International Journal of Emerging Trends & Technology in Computer Science (IJETTCS)

Web Site: www.ijettcs.org Email: editor@ijettcs.org


Volume 5, Issue 5, September - October 2016

ISSN 2278-6856

Modified Mean Square Error with Regularization Algorithm for Efficient Classification of Patterns in Back-propagation Neural Network
Shobhit Kumar1, Raghu Nath Verma2 and Anil Kumar3

1 Information Technology Department, Rajkiya Engineering College, Ambedkar Nagar, India.
Computer Science and Engineering Department, Bundelkhand Institute of Engineering and Technology, Jhansi

Abstract

Neural networks use the learn-by-example concept, where each sample is fed to the network. Characters are used every day, in signatures, number plates, name plates, postal card addresses and many other settings, and all of these recognition tasks can be carried out using a neural network. This research investigates the suitability of a backpropagation neural network for the task of off-line handwritten character recognition. The paper applies a new mean square error with regularization function to a backpropagation neural network trained on handwritten characters. The modified MSEREG has shown strong results in convergence rate, training time, simulation time and overall performance. The data are processed using the MATLAB simulator.

Keywords: Mean square error with regularization, Backpropagation neural network, Character recognition, Sigmoid function.

1. INTRODUCTION

Handwritten characters are the most widely employed form of information, especially for signature verification in banks, postal address recognition on postal cards, number plate recognition of vehicles on streets and many more. However, for several reasons the task of verifying human-written characters cannot be considered a trivial pattern recognition problem. It is a difficult problem because character samples from the same person are similar but not identical. In addition, a person's writing often changes radically with time. It has therefore been suggested that variations in patterns can be broadly and efficiently classified using a neural network [2].

The goal of this research is to investigate the capacity of backpropagation neural networks for the task of off-line handwritten character recognition. The emphasis is on investigating the performance of the network when presented with raw character images rather than a vector of extracted features. Further, the capability of backpropagation networks to deal with each of the different classes of forgeries (casual, skilled and traced) is also to be investigated.

In this research, character recognition is off-line and only concerned with the recognition (classification) problem. Further, though off-line systems have to cope with different pen types, thicknesses and ink colours, in this research the process is simplified by using one pen to produce all sample signatures.

2. PERFORMANCE METRICS

An artificial neural network is a representation of the human brain that tries to learn and simulate its training input patterns from a predefined set of example patterns. The network is trained to particular specifications. The output obtained after training the network is compared with the desired target value, and the error is calculated from these values.

For training an input pattern and measuring its performance, a cost function must be defined. The functions commonly used in neural networks are:

2.1 Sum of Squared Error (SSE)

The first basic cost evaluation function. The sum of squared error is defined as

SSE = Σ (tpi − ypi)², summed over i = 1 to N

Where,
tpi = Predicted value for data point i;
ypi = Actual value for the data point i;
N = Total number of data points
2.2 Mean Squared Error (MSE) [1]

This is the most widely used and most effective performance function. The mean squared error is defined as

MSE = (1/N) Σ (tpi − ypi)², summed over i = 1 to N


Where,
tpi = Predicted value for data point i;
ypi = Actual value for the data point i;
N = Total number of data points
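As an illustrative sketch (not from the paper), the SSE and MSE definitions above can be computed as follows; the variable names and sample values are our own:

```python
# Illustrative sketch of the SSE and MSE cost functions defined above.
# t holds predicted values (tpi) and y holds actual values (ypi);
# the sample numbers are made up for demonstration.
t = [0.9, 0.2, 0.8, 0.1]
y = [1.0, 0.0, 1.0, 0.0]
N = len(t)

# SSE: sum of squared differences between predicted and actual values
sse = sum((tp - yp) ** 2 for tp, yp in zip(t, y))

# MSE: the SSE averaged over the N data points
mse = sse / N
```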


2.3 Relative Absolute Error (RAE)

The relative absolute error is defined as the sum of the absolute differences between the predicted and actual values for an individual dataset j, divided by the sum of the absolute differences between the actual values and their mean. The relative absolute error of an individual dataset j is defined as

RAEj = Σ |tij − yi| / Σ |yi − ym|, summed over i = 1 to N

Where,
tij = Predicted value by the individual dataset j for data point i;
yi = Actual value for the data point i;
N = Total number of data points;
ym = Mean of all yi
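The RAE definition above can be sketched in the same illustrative style (sample values are our own):

```python
# Illustrative sketch of the relative absolute error (RAE) defined above.
# t_j holds the predictions of one dataset j; y holds the actual values.
t_j = [0.9, 0.2, 0.8, 0.1]
y = [1.0, 0.0, 1.0, 0.0]

# ym: mean of the actual values
y_mean = sum(y) / len(y)

# RAE: total absolute prediction error, relative to the total
# absolute deviation of the actual values from their mean
rae = (sum(abs(tp - yp) for tp, yp in zip(t_j, y))
       / sum(abs(yp - y_mean) for yp in y))
```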


2.4 Mean Absolute Error (MAE) [4]

The mean absolute error measures how far the estimates are from the actual values. It can be applied to any two paired sets of numbers, where one set is actual and the other is an estimated prediction. It is defined as

MAE = (1/N) Σ |tpi − ypi|, summed over i = 1 to N

Where,
tpi = Predicted value for data point i;
ypi = Actual value for the data point i;
N = Total number of data points

The network learns by adjusting weights. The process of adjusting the weights to make the neural network learn the relationship between the inputs and the targets is known as learning or training. There are several techniques for training a network, of which the gradient descent method is the most common.
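The gradient descent technique mentioned here can be sketched minimally; the quadratic error function and the learning rate below are illustrative choices of our own, not the paper's network:

```python
# Minimal sketch of a gradient descent update, w <- w - lr * dE/dw,
# shown for the simple quadratic error E(w) = (w - 2)^2, whose
# minimum lies at w = 2. The learning rate lr is an illustrative value.
lr = 0.1
w = 0.0

for _ in range(100):
    grad = 2 * (w - 2)   # dE/dw for E(w) = (w - 2)^2
    w -= lr * grad       # gradient descent weight update

# after the loop, w has converged close to the minimum at 2
```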

2.5 Mean Squared Error with Regularization (MSEREG)

The mean squared error with regularization is defined as:

MSEREG = γ · (1/N) Σ (tpi − ypi)² + (1 − γ) · (1/n) Σ Wi²

Where,
tpi = Predicted value for data point i;
ypi = Actual value for the data point i;
N = Total number of data points;
n = Total number of weights;
γ = Performance ratio;
Wi = Weight i of the network

The above equation uses the output-node values tpi and ypi, which are the target and actual network outputs for the pth pattern, respectively.

3. PROPOSED MEAN SQUARE ERROR WITH REGULARIZATION ALGORITHM

The mean squared error with regularization (MSEREG) of a network is one of many ways to quantify the difference between the values implied by an estimator and the true values of the quantity being estimated. MSEREG is a performance function corresponding to the expected value of the squared error loss, or quadratic loss. MSEREG measures the average of the squares of the "errors", where the error is the amount by which the value implied by the estimator differs from the quantity to be estimated.

Minimizing the squared error increases the accuracy of a system with a defined number of input training samples. The decrease in the error value is evaluated using the mathematical term arctan, defined as the inverse tangent of a particular value.

The standard mean square error with regularization value can be evaluated by the formula

MSEREG = γ · (1/N) Σ (tpi − ypi)² + (1 − γ) · (1/n) Σ Wi²

Where,
tpi = Predicted value for data point i;
ypi = Actual value for the data point i;
N = Total number of data points;
n = Total number of weights;
γ = Performance ratio;
Wi = Weight i of the network
MSEREG Algorithm:
1. Initialize n, the number of input patterns (integer values)
2. Do
3. For each training pattern n, train the network
4. O = neural_net_output(network, n)
5. T = neural_net_target(desired)
6. Compute error e = (T − O) at the output units
7. Square the error e and square the weights W
8. Calculate the summation of the squared errors e and squared weights W over all input patterns
9. While (n != 0)
10. Divide the summations obtained by the number of patterns n
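The steps above can be sketched as a blend of the mean squared error and the mean squared weights, controlled by the performance ratio γ; the function name, γ value and sample data below are our own illustrative assumptions:

```python
# Illustrative sketch of the MSEREG computation described above:
# gamma * (mean squared error) + (1 - gamma) * (mean squared weights).
def msereg(t, y, w, gamma=0.5):
    """Mean squared error with regularization, blended by gamma."""
    mse = sum((tp - yp) ** 2 for tp, yp in zip(t, y)) / len(t)
    msw = sum(wi ** 2 for wi in w) / len(w)
    return gamma * mse + (1 - gamma) * msw

t = [0.9, 0.2, 0.8, 0.1]   # predicted values tpi
y = [1.0, 0.0, 1.0, 0.0]   # actual values ypi
w = [0.5, -0.3, 0.2]       # network weights Wi

cost = msereg(t, y, w, gamma=0.5)
```

A larger γ weights the output error more heavily; a smaller γ penalizes large weights more, which is what regularizes the network.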
The standard mean square error with regularization is advantageous in that it requires no prior knowledge of the class distributions. It is widely used because it yields the least error value compared with the other error functions. Minimizing MSEREG further results in a more accurate system for pattern recognition and other applications. The arctan mean square error with regularization (AMSEREG) achieves a further minimization of MSEREG. It is advantageous for a system with a large dataset, where thousands of values must be enumerated. It has proved useful in the calculation of error for a backpropagation neural network.
The arctan mean squared error with regularization can be estimated by the following formula

AMSEREG = γ · (1/N) Σ (arctan(tpi − ypi))² + (1 − γ) · (1/n) Σ (arctan Wi)²

Where,
tpi = Predicted value for data point i;
ypi = Actual value for the data point i;
N = Total number of data points;
n = Total number of weights;
γ = Performance ratio;
Wi = Weight i of the network
AMSEREG Algorithm:
1. Initialize n, the number of input patterns (integer values)
2. Do
3. For each training pattern n, train the network
4. O = neural_net_output(network, n)
5. T = neural_net_target(desired)
6. Compute error e = (T − O) at the output units
7. Compute the inverse tangent of the error e and the inverse tangent of the weights W
8. Square the transformed error e and weights W
9. Calculate the summation of the squared errors e and weights W over all input patterns
10. While (n != 0)
11. Divide the summations obtained by the number of patterns n

The proposed arctan mean square error with regularization algorithm describes how the mean squared error value is modified to improve its performance at a reduced error cost. Applying the trigonometric function, i.e. the inverse tangent of the MSEREG value, to the network shows a marked improvement in performance. Thus, our mathematical modification of the MSEREG formula gives better results, improving the recognition accuracy of the backpropagation network.

4. RESEARCH METHODOLOGY

The artificial neural network (ANN) and its learning techniques have been used for predicting software effort using a dataset of software projects, in order to compare the performance results obtained from the various models.

4.1 Empirical Data Collection

The data used are collected from a binary image of 20 pixel values. The MSEREG is calculated for all 20 pixel values. The AMSEREG is then calculated by taking the inverse tangent of the error value, which is the actual target minus the obtained output. A threshold value is set (say 0.5) to determine the number of training cases recognized: if the obtained output exceeds the threshold value, the neural network has recognized the training pattern.

4.2 Diagrammatic Representation of Error Calculation

First, initialize the training patterns and set a particular target value for each input training pattern. Set the maximum number of iterations for which the input data are to be iterated. Then train the input pattern using the backpropagation training algorithm. Compare the obtained output with the target value that was set: if it is near the target value, the network has recognized the training pattern; otherwise it has not.

Figure 1.1 Steps for Character Recognition
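The AMSEREG algorithm above, together with the 0.5 recognition threshold, can be sketched as follows; the function name, the γ blend and the sample values are our own illustrative assumptions:

```python
import math

# Illustrative sketch of AMSEREG: the inverse tangent is applied to
# each error and each weight before squaring, following the algorithm
# steps above. Names, gamma and sample values are our own assumptions.
def amsereg(t, y, w, gamma=0.5):
    mse = sum(math.atan(tp - yp) ** 2 for tp, yp in zip(t, y)) / len(t)
    msw = sum(math.atan(wi) ** 2 for wi in w) / len(w)
    return gamma * mse + (1 - gamma) * msw

t = [0.9, 0.2, 0.8, 0.1]   # obtained network outputs
y = [1.0, 0.0, 1.0, 0.0]   # target values
w = [0.5, -0.3, 0.2]       # network weights

cost = amsereg(t, y, w)

# Recognition check with the 0.5 threshold: an output above the
# threshold counts as a recognized training pattern.
threshold = 0.5
recognized = [out > threshold for out in t]
```

Since |arctan(x)| ≤ |x|, the AMSEREG cost never exceeds the corresponding MSEREG cost on the same data, which is consistent with the error reduction the paper reports.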


Page 27

International Journal of Emerging Trends & Technology in Computer Science (IJETTCS)


Web Site: www.ijettcs.org Email: editor@ijettcs.org
Volume 5, Issue 5, September - October 2016

ISSN 2278-6856

5. PERFORMANCE COMPARISON

The mean square error with regularization is calculated for the training input patterns. The mean square error with regularization value is then calculated for various trigonometric functions, to see whether the arctan value gives the minimum error value or whether some other function yields better results.

The calculated error values for the standard MSE, MSEREG and AMSEREG are shown in Table 1.2. From Table 1.2 it is seen that the mean square error with regularization value for the inverse tangent function has the least error value, which shows that our assumption gives better results. The result is obtained from the 20 pixel values produced by binarization of the input data.

The graph below depicts the mean square error with regularization value evaluated using the various inverse trigonometric functions, alongside the plain mean square error with regularization value. The different functions are evaluated to check the suitability of our proposed function.

6. RESULT

Table 1.1: Comparison between various error functions of MSEREG

As seen from Figure 1.2, the mean square error with regularization value is evaluated using various inverse trigonometric functions. The arctan mean square error with regularization value shows the least error, from which it can be concluded that arctan MSEREG can be advantageous in increasing the accuracy of a large trained neural network. Our proposed concept of reducing the error cost function has thus shown positive and effective results.

The graph below shows a comparison between the standard MSEREG values and their modifications respectively.

[Figure: bar chart of mean squared error with regularization values, comparing Arctan MSEREG, MSEREG, Arcsin MSEREG and Arccos MSEREG; the error axis runs from 0 to 0.014]

Figure 1.2: Comparative analysis of various MSEREG

Table 1.2: Comparison between various error functions

7. CONCLUSION

The backpropagation training algorithm is advantageous as it increases the accuracy of the simulated data. The paper thus proposes that the arctan mean square error with regularization (AMSEREG) reduces the error value and increases the accuracy of a particular network compared with the standard mean square error with regularization (MSEREG). Further work is in progress on increasing the accuracy and reducing the training time of a network using the backpropagation algorithm for handwritten character recognition.

REFERENCES
[1] Sapna Singh and Shobhit Kumar, "Modified Mean Square Error Algorithm with Reduced Cost of Training and Simulation Time for Character Recognition in Backpropagation Neural Network", International Conference on Frontiers in Intelligent Computing: Theory & Applications (FICTA), Springer AISC Proceedings, pp. 137-145, November 2013.
[2] Dhafer R. Zaghar, "Reduction of the Error in the Hardware Neural Network", Al-Khwarizmi Engineering Journal, Vol. 3, No. 2, pp. 1-7, 2007.
[3] Bogdan M. Wilamowski, Serdar Iplikci, Okyay Kaynak and M. Önder Efe, "An Algorithm for Fast Convergence in Training Neural Networks", IEEE, 2001.
[4] Hussein Rady, "Rényi's Entropy and Mean Square Error for Improving the Convergence of Multilayer Backpropagation Neural Networks: A Comparative Study", IJECS-IJENS, Vol. 11, No. 5, October 2005.
[5] Hossam Osman and Steven D. Blostein, "New Cost Function for Backpropagation Neural Networks with Application to SAR Imagery Classification".
[6] G. R. Finnie and G. E. Wittig, "A Comparison of Software Effort Estimation Techniques: Using Function Points with Neural Networks, Case-Based Reasoning and Regression Models", Journal of Systems and Software, Vol. 39, pp. 281-289, 1997.
[7] G. R. Finnie and G. E. Wittig, "AI Tools for Software Development Effort Estimation", Proceedings of the International Conference on Software Engineering: Education and Practice (SEEP '96).
[8] K. Srinivasan and D. Fisher, "Machine Learning Approaches to Estimating Software Development Effort", IEEE Transactions on Software Engineering, Vol. 21, February 1995.
[9] Srinivasa Kumar Devireddy and Settipalli Appa Rao, "Handwritten Character Recognition Using Backpropagation Network", Journal of Theoretical and Applied Information Technology, 2005-2009 JATIT.
[10] Madhu Shahi, Anil K. Ahlawat and B. N. Pandey, "Literature Survey on Offline Recognition of Handwritten Hindi Curve Script Using ANN Approach", International Journal of Scientific and Research Publications, Volume 2, Issue 5, May 2012, ISSN 2250-3153.


AUTHOR
Dr. R. N. Verma is currently working as Associate Professor in the Computer Science and Engineering Department at Bundelkhand Institute of Engineering and Technology, Jhansi. He has great experience in the field of Computer Science, has a commendable and growing list of publications, and is a member of reputed professional societies.

AUTHOR
Dr. A. K. Solanki is currently working as Professor in the Computer Science and Engineering Department at Bundelkhand Institute of Engineering and Technology, Jhansi. He has great experience in the field of Computer Science, has a commendable and growing list of publications, and is a member of reputed professional societies.

AUTHOR
Shobhit Kumar belongs to Lucknow, the capital of Uttar Pradesh. Mr. Shobhit Kumar received both his primary and professional education in Lucknow. He completed his high school and intermediate education under the UP Board. Mr. Shobhit Kumar received his Bachelor's and Master's in Technology degrees in Computer Science and Engineering from Dr. A. P. J. Abdul Kalam Technical University (formerly known as UPTU), and is pursuing a PhD in Computer Science and Engineering from the same university.
He has worked as a lecturer and assistant professor in many reputed engineering colleges of Uttar Pradesh Technical University, and is currently working as an Assistant Professor at Rajkiya Engineering College, Ambedkar Nagar (a government engineering college), affiliated to Dr. A. P. J. Abdul Kalam Technical University (formerly known as UPTU and GBTU).
