NUST-PNEC

Project Report

Currency Recognition

Group: Naveel Arshad
       Assad Mehmood
       Faizan Adil
Course Instructor: Cdr. Dr Rana Hammad

Contents

Introduction
Aim of the Project
Data Flow
Dataset
GUI
Process Flow
Creating Database-1
Training, Results, Test
Creating Database-2
Training, Results, Test
Creating Database-3
Training, Results, Test
Conclusion
References

Aim of the Project

To design an efficient currency recognition system that can recognize different paper currencies with high accuracy.

Introduction

Paper currency recognition (PCR) is an important area of pattern recognition. A paper currency recognition system is a kind of intelligent system, and one that modern automation systems increasingly need.

It has various potential applications, including electronic banking, currency monitoring systems, and money exchange machines.

Monetary transactions are a vital part of our everyday activities, so automated paper currency recognition with good accuracy and high processing speed is of great importance. Currency recognition technology aims to find and extract visible as well as hidden marks on paper currency for efficient classification.

A currency recognition system consists of the following modules: feature extraction, training, and testing. Feature extraction deals with extracting potential features of an image based on color, texture, and shape. The training and testing phases are implemented using a classifier.

There are many currencies in circulation around the world, and each looks entirely different: paper size, color, and pattern all vary. Even humans cannot easily recognize the currencies of unfamiliar countries.

Background Research

A coordinate-data extraction method applied to specific parts of Euro banknotes has been used. For recognition of the Italian Lira, Learning Vector Quantization (LVQ) has been applied. Methods that recognize a banknote's direction, size, and face value are widely used for currencies across the globe, and neural-network-based banknote recognition methods are in use as well.

Data Flow

The data flow of our project is shown below.

The data flow is divided into five main parts. First, an image is captured from the camera. Second, image pre-processing is performed. Third, features are extracted from the image. Fourth, a classifier is used to classify the image being recognized. Fifth, the result is displayed.
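The five steps can be sketched in MATLAB as follows. Note that extract_features and myNeuralNetworkFunction are stand-ins for the project's own feature-extraction code and the network function later generated by the NNPR Toolbox, so this is an outline under those assumptions rather than runnable project code.

```matlab
% Sketch of the five-step pipeline (extract_features and
% myNeuralNetworkFunction are illustrative placeholders).
im = imread('note.jpg');                 % 1. acquire the image (here: from file)
im = imresize(im, [128 128]);            % 2. pre-process: resize ...
for c = 1:3                              %    ... and denoise each channel
    im(:,:,c) = medfilt2(im(:,:,c));
end
fet = extract_features(im);              % 3. feature extraction (project code)
scores = myNeuralNetworkFunction(fet);   % 4. classifier (NNPR-generated function)
[~, class] = max(scores);                % 5. pick and display the winning class
disp(class)
```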

Data Set

Seven denominations of Pakistani banknotes make up the classes:

1. 10 PKR

2. 20 PKR

3. 50 PKR

4. 100 PKR

5. 500 PKR

6. 1000 PKR

7. 5000 PKR
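For training with the NNPR Toolbox, each of these seven classes is typically encoded as a one-hot column of the target matrix. A minimal sketch (the class ordering and example labels here are assumptions):

```matlab
denoms = [10 20 50 100 500 1000 5000];   % PKR denominations as classes 1..7
labels = [1 3 7 2];                      % example: class index of each image
targets = zeros(7, numel(labels));       % one column per training image
for k = 1:numel(labels)
    targets(labels(k), k) = 1;           % one-hot encoding of the class
end
```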

GUI

We built a simple GUI for our project for better understanding. The GUI was
developed in MATLAB using the GUIDE tool.
The GUI is shown on the next page. It consists of three buttons, each with a different function.
A step-by-step procedure will explain how to use this GUI to get the desired results.


Understanding the GUI


Step 1:
After running the GUI, the first step is to create a database of images. This is done by clicking the Create Database button on the GUI.
A pop-up window will then open, from which we select the images to build the database from.

Once the images are selected, they go through image pre-processing, edge detection, and feature extraction.

Image Pre-Processing


Image pre-processing is done using the code below. (Note that in the original listing the green and blue channel indices were swapped; channel 2 is green and channel 3 is blue.)

%preprocessing
%resize image

im=imresize(im,[128 128]);

%separate channels

r_channel=im(:,:,1);
g_channel=im(:,:,2);
b_channel=im(:,:,3);

%denoise each channel with a median filter

r_channel=medfilt2(r_channel);
g_channel=medfilt2(g_channel);
b_channel=medfilt2(b_channel);

%recombine the denoised channels

rgbim(:,:,1)=r_channel;
rgbim(:,:,2)=g_channel;
rgbim(:,:,3)=b_channel;

This is a three-step process: the image is resized, its channels are separated, and noise is removed from each channel with a median filter.

Feature Extraction

Features are then extracted from the image. We extract three features in our project: a color feature, an edge feature, and a texture feature.

The following code is used for feature extraction (color_luv and edgehist are helper functions defined in our project):

%color feature
fet1=color_luv(rgbim);
%edge feature
fet2=edgehist(rgbim);
%texture feature
%glcm-gray level co occurrence matrix
glcm=graycomatrix(rgb2gray(rgbim));
fet3=glcm(:);
fet=[fet1;fet2;fet3];
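The edgehist helper itself is not listed in this report. As an illustration only, one common way to build such a feature is an edge-orientation histogram from Sobel gradients; the bin count, threshold, and normalization below are assumptions, not the project's actual implementation:

```matlab
function h = edgehist_sketch(rgbim)
% Illustrative edge-orientation histogram feature (not the project's edgehist).
gray = rgb2gray(rgbim);
[gmag, gdir] = imgradient(gray, 'sobel');  % gradient magnitude and direction
mask = gmag > 0.1 * max(gmag(:));          % keep only strong edge pixels
h = histcounts(gdir(mask), -180:45:180)';  % 8 orientation bins over [-180,180]
h = h / max(sum(h), 1);                    % normalize so bins sum to 1
end
```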

Edge Detection

The Canny edge detector is an edge detection operator that uses a multi-stage
algorithm to detect a wide range of edges in images. John F. Canny developed it in 1986,
and he also produced a computational theory of edge detection explaining why the
technique works.

Canny edge detection is a technique to extract useful structural information from different
vision objects while dramatically reducing the amount of data to be processed. It has been
widely applied in various computer vision systems.

Code used for edge detection (the edge function requires a grayscale input, so the pre-processed image is converted first):

edim = edge(rgb2gray(rgbim), 'canny');

Figure 2: Original image
Figure 3: After applying Canny edge detection



Database

We have used three databases in our project. We trained our classifier on each database separately and compared the results.
The databases are described on the next page, with details of how the images were obtained.



Database-1
This database contains both good- and bad-condition currency notes. The images were taken with a mobile camera with no fixed alignment, so images were captured from different angles. The database contains a total of 279 images.







Database-2
This database also contains good- and bad-condition currency notes, again photographed with a mobile camera. The only difference is that the position of the camera was now fixed. The database contains a total of 279 images.



Database-3
This database also contains good- and bad-condition currency notes; it is a combination of the images from Database-1 and Database-2.

Test Case 1- Database-1


Let us first test Database-1. A step-by-step procedure is shown below for better understanding.

Once we have created a database and finished feature extraction, we can test it using the NNPR (Neural Network Pattern Recognition) Toolbox available in MATLAB. For this, we just click the corresponding GUI button, which automatically opens the NNPR Toolbox.

Next, we select the input matrix and the target matrix that were created in the first step.

After this, we select the training and testing data.



We tried three train/validation/test splits of the image database:

1. (75% Training, 15% Validation, 10% Testing)
2. (85% Training, 10% Validation, 5% Testing)
3. (90% Training, 5% Validation, 5% Testing)
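The same splits can also be set programmatically rather than through the NNPR GUI. A sketch using patternnet, where the hidden-layer size of 10 is an assumption and inputs/targets are the matrices created in the database step:

```matlab
net = patternnet(10);                  % pattern-recognition network, 10 hidden units (assumed)
net.divideParam.trainRatio = 0.75;     % split 1: 75% training
net.divideParam.valRatio   = 0.15;     % 15% validation
net.divideParam.testRatio  = 0.10;     % 10% testing
net = train(net, inputs, targets);     % inputs/targets from the created database
```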

Results of Test Case 1- Database-1

1. (75% Training, 15% Validation, 10% Testing)


[Figures: confusion matrix and error histogram]




[Figures: ROC curve and performance plot]

2. (85% Training, 10% Validation, 5% Testing)



3. (90% Training, 5% Validation, 5% Testing)



[Figures: ROC curve and performance plot]

Results of Test Case 2- Database-2



1. (75% Training, 15% Validation, 10% Testing)



2. (85% Training, 10% Validation, 5% Testing)

3. (90% Training, 5% Validation, 5% Testing)



Results of Test Case 3- Database-3



1. (75% Training, 15% Validation, 10% Testing)

2. (85% Training, 10% Validation, 5% Testing)



3. (90% Training, 5% Validation, 5% Testing)




Image Testing (Classifier Function Generated by the NNPR Toolbox)

Once training is complete, the toolbox also generates a neural-network function corresponding to the chosen split.
For example, below is a call to the function generated by the NNPR Toolbox for the 80-10-10 split:

Target = myNeuralNetworkFunction801010(fet); % generated by the NNPR Toolbox

Once we have this function, we can test images.
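Testing a note with such a generated function can be sketched as follows. The file name is hypothetical, extract_features stands in for the project's feature-extraction code, and the class-to-denomination ordering is an assumption:

```matlab
im = imread('test_note.jpg');                 % hypothetical test image
fet = extract_features(im);                   % project feature-extraction code
scores = myNeuralNetworkFunction801010(fet);  % 7x1 vector of class scores
[~, idx] = max(scores);                       % index of the winning class
denoms = [10 20 50 100 500 1000 5000];        % assumed class order (PKR)
fprintf('Recognized: %d PKR\n', denoms(idx));
```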



Procedure to Test an Image

Select the test image and wait for the result.



Display Result


Results and Comparison


Below is a summary of results for Test Case-1, in which we worked with Database-1.

Test Case-1 (Database-1)

Split (Train/Val/Test)    Total Test Images    Correctly Classified    Accuracy
75% / 15% / 10%           12                   5                       42%
85% / 10% / 5%            12                   7                       58%
90% / 5% / 5%             12                   9                       75%

Below is a summary of results for Test Case-2, in which we worked with Database-2.

Test Case-2 (Database-2)

Split (Train/Val/Test)    Total Test Images    Correctly Classified    Accuracy
75% / 15% / 10%           26                   25                      96%
85% / 10% / 5%            26                   25                      96%
90% / 5% / 5%             26                   26                      100%


Below is a summary of results for Test Case-3, in which we worked with Database-3.

Test Case-3 (Database-3)

Split (Train/Val/Test)    Total Test Images    Correctly Classified    Accuracy
75% / 15% / 10%           39                   30                      76%
85% / 10% / 5%            39                   31                      79%
90% / 5% / 5%             39                   33                      85%


Conclusion

From the work carried out, we can conclude that Database-2 gave better results than Database-1. On average, the correct classification rate was 58% with Database-1, while with Database-2 it was 97%. Combining the two databases (Database-3) also improved on Database-1's results, with an average correct classification rate of 80%.

Paper currency recognition is an important application of pattern recognition, and many studies have recognized currencies using neural networks. In this project, a method based on features extracted from the note images has been implemented, classified with a feed-forward neural network trained by backpropagation. The method is quite reasonable in terms of accuracy.

For better accuracy, images should be taken from a fixed camera position rather than captured at random angles.

References

Main paper used for implementing this project:
[1] P. P. S. Subhashini, M. Satya Sairam, and D. Srinivasa Rao, "Paper Currency Recognition Using Backpropagation Neural Network."

Other resources:
[2] Muhammad Sarfaraz, "An Intelligent Paper Currency Recognition System," Procedia Computer Science 65 (2015), pp. 538-545.
[3] Iyad Abu Doush, "Currency Recognition Using Smart Phone: Comparison between Color SIFT and Gray Scale SIFT Algorithms," Journal of King Saud University.
[4] Chinmay Bhurke, "Currency Recognition Using Image Processing," International Journal of Innovative Research in Computer and Communication Engineering, vol. 3, 2015.
[5] A. Vila, N. Ferrer, J. Mantecon, D. Breton, and J. F. Garcia, "Development of a fast and non-destructive procedure original and fake euro notes."
[6] E. H. Zhang, B. Jiang, J. H. Duan, and Z. Z. Bian, "Research on paper currency recognition by neural networks," in Proceedings of the Second International Conference on Machine Learning and Cybernetics, pp. 2193-2197; 200
[7] Q. Liu and L. Tang, "Study of Printing Identification Based on Multi-spectrum Imaging Analysis," Proceedings of the International Conference on Computer Science and Software Engineering, pp. 229-232, 2008.
S. Bhuvaneswari and T. S. Subashini, "Automatic Detection and Inpainting of Text Images," International Journal of Computer Applications (0975-8887), vol. 61, no. 7, 2013.
[8] Mark Nixon and Alberto Aguado, Feature Extraction & Image Processing, Academic Press, 2nd edition; 20