1. Title:
“License Plate Recognition System (LPR)”

2. Abstract:

Transportation is essential to the economy of any developed or developing country. The number of automobiles on the road is increasing rapidly, and in India one death occurs every four minutes due to road accidents, which makes it difficult to keep track of all these incidents. Parking areas are often densely packed, which leads to minor collisions and unnoticed damage to vehicles. Whether on the road or in a parking slot, in most cases the owner never learns which vehicle (or person) dented or damaged the car, and bears unnecessary expenditure for a mistake committed by someone unknown.

With a License Plate Recognition system, it is possible to identify the license plate of a car or any other vehicle involved in an accident or a traffic-rule violation, and to collect the details of the vehicle from the license number. In this project we focus mainly on license plate detection and character recognition in a given image using Computer Vision, image processing, and Optical Character Recognition (OCR).

License Plate Recognition systems are used today in a wide range of applications, such as highway tolling, smart parking, and prevention of vehicle theft and damage. LPR is at the heart of any Intelligent Transportation System (ITS) and is used by police forces and highway patrols for traffic monitoring and effective enforcement of traffic rules.

3. Keywords:

• License Plate Recognition System (LPRS)
• Machine Learning
• Computer Vision
• Image Processing
• Support Vector Machine
• Support Vector Classifier
• K-Nearest Neighbour
• Optical Character Recognition

4. Introduction:

The growth in population and its requirements has increased the number of vehicles on the road, and with it the number of accidents and traffic-rule violations. Monitoring vehicles for law enforcement and security purposes is a difficult problem because of the sheer number of automobiles on the road today. One example lies in border patrol: it is time-consuming for an officer to physically check the license plate of every car, and it is not feasible to employ large numbers of police officers as full-time license plate inspectors. Police patrols cannot simply drive around staring at the plates of other cars. There must be a way of detecting and identifying license plates without constant human intervention. As a solution, we have implemented a system that can extract the license plate number of a vehicle from an image, given a set of constraints.
A number plate is the unique identification of a vehicle. LPR is designed to locate the number plate, recognize the characters in it, and, from the obtained license number, collect the details needed to track the vehicle involved in an accident or any other violation of the law.

4.1 Objective of the project:

The main objectives of this project are:

1. To identify the license plate of a vehicle involved in an accident or in a traffic-rule violation.
2. To extract the characters present in the identified license plate, which will be helpful for further action.

4.2 Scope of the project:

To develop a License Plate Recognition (LPR) algorithm that locates the number plate and recognizes the characters in it.

5. Literature survey:

Real-time segmentation of dynamic regions or objects in images is often referred to as background subtraction or foreground segmentation, and it is a basic step in several computer vision applications.

The method proposed by Prabhakar et al. has four major steps: preprocessing of the captured image, extraction of the license plate region, segmentation, and character recognition. In preprocessing, the vehicle image is given as input to the model, its brightness is adjusted, noise is removed using filters, and the image is converted to grayscale. Extraction of the license plate region consists of finding the edges that locate the plate in the image and cropping it into a rectangular frame. Segmentation plays a vital role in vehicle license plate recognition; the accuracy of character recognition relies completely on the quality of the segmentation [1].
Another strategy uses Gabor filtering for character recognition in grayscale images. Features are extracted directly from grayscale character images by Gabor filters specially designed to capture the statistical structure of the characters. A template matching system is then used to find the sub-image of the target image that best matches a template image [2].

A robust technique has also been proposed for localization, segmentation, and recognition of the characters within the located plate. Images from still cameras or videos are converted into grayscale. Hough lines are determined using the Hough transform, the grayscale image is smoothed and segmented by edge detection to reduce the number of connected components, and the connected components are then computed. Finally, the individual characters within the registration number are detected. The authors show that, by optimizing several parameters, the proposed technique achieves a higher recognition rate than standard methods [3].

In the neural network approach, a perceptron is trained using a sample set and a few handcrafted rules. The problem with neural networks is that training a perceptron is quite difficult and requires large sample sets. If the network is not trained appropriately, it may not handle scale and orientation invariance, and training it with rules that solve these problems is even more difficult. Template matching, on the other hand, is a simpler technique than neural networks and does not require powerful hardware, but it is susceptible to problems of scale and orientation [4].

In the project proposed by Vinay Kumar V and Dr. R Srikant Swamy, a Sobel edge detector was used to locate edge points in the image, and intensity variation and a periodogram were used to identify license plates. An iterative backpropagation approach was used to construct high-resolution images from several low-resolution images in order to overcome motion blur, camera misfocus, and sensor aging. Character extraction was done by connected component analysis, and character recognition by feature extraction followed by SVM classification. Feature extraction was carried out with three methods, Principal Component Analysis, Linear Discriminant Analysis, and the HOG transform, and the results of all three methods were compared [5].


6. System overview:

6.1 License Plate Recognition (LPR) Algorithm using KNN and SVM:

Figure 1.0 shows the flow of the LPR algorithm. The captured image is first converted into a full-contrast grayscale image, then passed through a Gaussian filter for noise removal, and adaptive thresholding is performed for better output.

Figure 1.0: License Plate Recognition (LPR) functional algorithm

For plate extraction, the KNN-based detector looks for possible characters in the scene. When a possible character is found, it checks for adjacent characters and determines the plate length. The algorithm uses contours to identify candidate characters, taking into account their bounding rectangles and a particular aspect ratio. The extracted plate then goes through preprocessing and thresholding again.
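A minimal sketch of this preprocessing and character-candidate stage, assuming OpenCV; the threshold parameters, area limits, and aspect-ratio bounds below are illustrative choices, not the report's exact values:

```python
import cv2

def preprocess(image_bgr):
    """Grayscale conversion, contrast enhancement, noise removal and adaptive
    thresholding, roughly following the flow in Figure 1.0."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                      # full-contrast grayscale
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # Gaussian noise removal
    thresh = cv2.adaptiveThreshold(blurred, 255,
                                   cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 19, 9)
    return gray, thresh

def possible_characters(thresh):
    """Keep contours whose bounding rectangle looks like a character
    (area, height and aspect-ratio limits are assumptions)."""
    contours, _ = cv2.findContours(thresh, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h)
        if cv2.contourArea(c) > 80 and 0.2 < aspect < 1.0 and h > 15:
            candidates.append((x, y, w, h))
    return candidates
```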

Once the plate is extracted, the algorithm checks the extracted plate for a list of possible matching characters. Before that, basic mathematical operations are applied to relate the characters to each other: the Pythagorean theorem is used to calculate the distance between two characters, and trigonometric operations are used to calculate the angle between them. Character recognition is done by a K-Nearest Neighbors (KNN) classifier. It uses a large set of training samples to generate a classifications file and an image-values file, against which the input samples are compared. Each character is matched against the values present in the flattened-images file and the classifications file, and the output is obtained accordingly.
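The distance and angle checks mentioned above reduce to a small helper like the following sketch; the (x, y, w, h) bounding-box format is an assumption carried over from the contour step:

```python
import math

def distance_and_angle(char_a, char_b):
    """char_a, char_b are (x, y, w, h) bounding boxes. Returns the
    centre-to-centre distance (Pythagorean theorem) and the angle
    (trigonometry) used to decide whether two candidates belong to
    the same plate."""
    ax, ay = char_a[0] + char_a[2] / 2.0, char_a[1] + char_a[3] / 2.0
    bx, by = char_b[0] + char_b[2] / 2.0, char_b[1] + char_b[3] / 2.0
    dx, dy = abs(bx - ax), abs(by - ay)
    distance = math.hypot(dx, dy)               # sqrt(dx^2 + dy^2)
    angle = math.degrees(math.atan2(dy, dx))
    return distance, angle
```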

6.2 K-Nearest Neighbors (KNN) overview:

The image-values (flattened images) file and the classifications file are generated through the training operation shown in Figure 2.1; using a greater number of image samples leads to higher accuracy. When training is completed, the characters are matched against these files and the output is obtained, as shown in Figure 2.2. The different types of character samples taken for training are illustrated in Figure 2.3.

Figure 2.1: Training a KNN Model

Figure 2.2: Matching of the character

Figure 2.3: Input samples for training
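A minimal sketch of this train-then-match flow, assuming OpenCV's built-in cv2.ml.KNearest; the file names, the 20x30 character size, and the use of ASCII codes as class labels are assumptions about how such files might be stored:

```python
import cv2
import numpy as np

# Load the classifications file and the flattened-image-values file
# (file names are illustrative).
classifications = np.loadtxt("classifications.txt", np.float32).reshape(-1, 1)
flattened_images = np.loadtxt("flattened_images.txt", np.float32)

knn = cv2.ml.KNearest_create()
knn.train(flattened_images, cv2.ml.ROW_SAMPLE, classifications)

def recognise(char_image_20x30):
    """char_image_20x30: a thresholded character resized to the training size.
    Returns the predicted character, assuming labels are stored as ASCII codes."""
    sample = char_image_20x30.reshape(1, -1).astype(np.float32)
    _, results, _, _ = knn.findNearest(sample, k=3)
    return chr(int(results[0][0]))
```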

6.3 Support vector machines (SVMs) overview:


SVM is a powerful yet flexible supervised machine learning method used for classification, regression, and outlier detection. SVMs are effective in high-dimensional spaces and are generally used for classification problems. They are popular and memory-efficient because they use only a subset of the training points in the decision function.
The main goal of an SVM is to divide the dataset into classes by finding a maximum marginal hyperplane (MMH), which is done in the following two steps:
• The SVM first generates hyperplanes iteratively that separate the classes in the best way.
• It then chooses the hyperplane that segregates the classes correctly with the largest margin.
Some important concepts in SVM are as follows:
• Support Vectors − the data points closest to the hyperplane; the support vectors help decide the separating line.
• Hyperplane − the decision plane or surface that divides a set of objects belonging to different classes.
• Margin − the gap between the two lines through the closest data points of the different classes.
The following diagram gives an insight into these SVM concepts.

Figure 3.1: SVM in Scikit-learn supports both sparse and dense sample vectors as input.

Classification with SVM

Scikit-learn provides three classes, namely SVC, NuSVC, and LinearSVC, which can perform multiclass classification.

SVC

The objective of a Support Vector Classifier (SVC) is to fit the data you provide, returning a "best fit" hyperplane that divides or categorizes the data. After obtaining the hyperplane, you can feed features to the classifier to see what the predicted class is.
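As a small illustration of this fit-then-predict workflow (not the project's actual training script), the following scikit-learn sketch uses the bundled digits dataset as a stand-in for license plate characters:

```python
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_digits

# Toy data: 8x8 digit images stand in for plate characters.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

clf = LinearSVC(max_iter=10000)   # fits a "best fit" separating hyperplane
clf.fit(X_train, y_train)

print("predicted:", clf.predict(X_test[:5]))
print("accuracy :", clf.score(X_test, y_test))
```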

The algorithm
SVC uses the Support Vector Domain Description (SVDD) to delineate the region of data space where the input examples are concentrated. SVDD belongs to the general category of kernel-based learning. In its linear version, SVDD looks for the smallest sphere that encloses the data. When used in conjunction with a kernel function, it looks for the smallest enclosing sphere in the feature space defined by that kernel. While in feature space the data is described by a sphere, when mapped back to data space the sphere is transformed into a set of non-linear contours that enclose the data (see Figure 3.3). SVDD provides a decision function that tells whether a given input lies inside the feature-space sphere or not, i.e., whether a given point belongs to the support of the distribution. More specifically, it is the radius squared of the feature-space sphere minus the squared distance of the image of a data point x from the center of the sphere. This function, denoted f(x), returns a value greater than zero if x is inside the feature-space sphere and a negative value otherwise.
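Written out from the verbal description above (notation assumed here: R is the radius of the feature-space sphere, a its centre, and Φ the feature map induced by the kernel), the decision function is:

```latex
% SVDD decision function, reconstructed from the description in the text
f(x) \;=\; R^{2} \;-\; \lVert \Phi(x) - a \rVert^{2},
\qquad
\begin{cases}
f(x) > 0, & x \text{ inside the feature-space sphere (in the support)},\\
f(x) < 0, & x \text{ outside the sphere.}
\end{cases}
```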

The contours where f(x) = 0 are then interpreted as cluster boundaries; an example of such contours is shown in Figure 3.3. However, these boundaries define the clusters only implicitly, and an additional step is required to extract the cluster membership from the SVDD.

Figure 3.2: The line segment that connects points in different clusters has to go through a
low-density region in data space where the SVDD returns a negative value.

The key geometrical observation that enables us to infer clusters from the SVDD is that, given a pair of data points belonging to different components (clusters), the line segment connecting them must pass through a region of data space that lies in a "valley" of the probability density of the data, i.e., does not belong to the support of the distribution. Such a line must therefore leave the feature-space sphere, so part of the segment contains points for which the SVDD decision function is negative (see Figure 3.2). This observation leads to the definition of an adjacency matrix A between pairs of points in the dataset. For a given pair of points x_i and x_j,

A_ij = 1 if f(x) > 0 for every x on the line segment connecting x_i and x_j, and A_ij = 0 otherwise.

Clusters are then defined as the connected components of the graph induced by A. Checking the line segment is implemented by sampling several points along it (20 points were used in the numerical experiments).
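A small sketch of this adjacency test, assuming the points are NumPy arrays and the SVDD decision function f is available as a Python callable (all names are illustrative):

```python
import numpy as np

def same_cluster(x_i, x_j, decision_f, n_points=20):
    """A_ij = 1 iff f(x) > 0 for every sampled point on the segment x_i -> x_j.
    20 sample points follow the text above."""
    ts = np.linspace(0.0, 1.0, n_points)
    return all(decision_f(x_i + t * (x_j - x_i)) > 0 for t in ts)

def adjacency_matrix(points, decision_f):
    """Build A; clusters are the connected components of the induced graph."""
    n = len(points)
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            A[i, j] = A[j, i] = int(same_cluster(points[i], points[j], decision_f))
    return A
```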

Figure 3.3: Contours generated by SVDD as γ is increased.

7. Tests & Results:

7.1 License Plate Recognition (LPR) using KNN Algorithm:

The captured image is shown in Figure 4.1. It is converted into a grayscale image as shown in Figure 4.2, and adaptive thresholding is then performed as shown in Figure 4.3. After thresholding, all the contours are extracted, as illustrated in Figure 4.4.

Figure 4.1: Captured Image

Figure 4.2: Grayscale image generated from captured image

Figure 4.3: Thresholded image after performing adaptive thresholding on grayscale image

Figure 4.4: Contours extracted from thresholded image

Figure 4.5: Contours filtered from all extracted contours based on certain parameters

All the extracted contours are filtered so that only character-like contours remain. The plate length is then determined by looking for characters adjacent to each other, as shown in Figure 4.5, and the plate is extracted as shown in Figure 4.6. The plate again undergoes thresholding, and the characters are extracted, resized, and fed to the KNN classifier as vectors. The output of the classifier is a class label (A, B, C, 1, 2, ...), so in this way we obtain a string that is the license plate number of the vehicle.
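A compact sketch of this read-out step, assuming OpenCV and a trained cv2.ml.KNearest model as in section 6.2; the 20x30 character size, k=3, and the ASCII-code labels are illustrative assumptions:

```python
import cv2
import numpy as np

def read_plate(plate_thresh, char_boxes, knn):
    """plate_thresh: thresholded plate image; char_boxes: (x, y, w, h) boxes of
    the filtered character contours; knn: trained cv2.ml.KNearest model.
    Characters are read left to right and concatenated into the plate string."""
    plate_number = ""
    for (x, y, w, h) in sorted(char_boxes, key=lambda box: box[0]):
        char_img = cv2.resize(plate_thresh[y:y + h, x:x + w], (20, 30))
        sample = char_img.reshape(1, -1).astype(np.float32)
        _, results, _, _ = knn.findNearest(sample, k=3)
        plate_number += chr(int(results[0][0]))
    return plate_number
```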

Figure 4.6: Extracted license plate image

Figure 4.7: Final output after reading characters

7.2 License Plate Recognition (LPR) using SVM Algorithm:

The approach used to segment the images is Connected Component Analysis (CCA). A connected region implies that all of its connected pixels belong to the same object; two pixels are said to be connected if they have the same value and are adjacent to each other.

Car image -> Grayscale image -> Binary image -> Apply CCA to obtain connected regions -> Detect the license plate among all connected regions (assumptions: the width of the license plate region is between 15% and 40% of the full image width, and its height is between 8% and 20% of the full image height).
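A minimal sketch of this plate-detection step using scikit-image's connected component tools; the input file name and the use of Otsu thresholding are assumptions, while the size ratios are the ones stated above:

```python
from skimage import io, color, filters, measure

image = io.imread("car.jpg")                       # placeholder file name
gray = color.rgb2gray(image)
binary = gray > filters.threshold_otsu(gray)       # binary image

# Label connected regions; the plate usually shows up as a bright connected blob.
label_image = measure.label(binary)
img_h, img_w = gray.shape

plate_candidates = []
for region in measure.regionprops(label_image):
    min_r, min_c, max_r, max_c = region.bbox
    height, width = max_r - min_r, max_c - min_c
    # plate width 15-40% of image width, plate height 8-20% of image height
    if 0.15 * img_w <= width <= 0.40 * img_w and 0.08 * img_h <= height <= 0.20 * img_h:
        plate_candidates.append(gray[min_r:max_r, min_c:max_c])
```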

The output of the first step is the license plate image detected in the car image. This is provided as input to step 2, where CCA is applied again to bound the individual characters in the plate. Each character identified is appended to a list.

The model is trained using SVC with 4-fold cross-validation on the dataset present in the train20X20 directory, and is saved as finalized_model.sav. Once the characters of the plate have been obtained, the saved model is loaded to predict each character.
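A hedged sketch of how such a training step might look with scikit-learn; loading the flattened 20x20 character images from train20X20 into X and their labels into y is assumed and omitted here:

```python
import pickle
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def train_and_save(X, y, model_path="finalized_model.sav"):
    """X: flattened 20x20 character images, y: their labels (assumed inputs)."""
    clf = SVC(kernel="linear", probability=True)
    scores = cross_val_score(clf, X, y, cv=4)      # 4-fold cross-validation
    print("cross-validation accuracy:", scores.mean())
    clf.fit(X, y)
    with open(model_path, "wb") as f:
        pickle.dump(clf, f)                        # saved as finalized_model.sav
    return clf

# Later, the saved model is loaded to predict each segmented character:
# model = pickle.load(open("finalized_model.sav", "rb"))
# prediction = model.predict(character.reshape(1, -1))
```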

Figure 5.1: Car image to grayscale

Figure 5.2: Detecting a number plate

Figure 5.3: Recognizing the characters and producing the output

8.1 Challenges Faced:


In some cases the characters on the license plate are broken, as shown in Figure 6.1. Our algorithm is not affected by this, because the dilation and erosion operations performed on the extracted license plate image fill small holes in the characters, as illustrated in Figure 6.2. The algorithm also performs well in low light, as long as the characters are visible, because contrast enhancement is applied during the preprocessing stage; this is illustrated in Figure 6.3.
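The hole-filling step can be sketched as a dilation followed by an erosion (a morphological close); the 3x3 kernel and single iteration below are illustrative choices:

```python
import cv2
import numpy as np

def repair_plate(plate_thresh):
    """Dilation followed by erosion fills small holes in broken characters
    of the thresholded plate image."""
    kernel = np.ones((3, 3), np.uint8)
    dilated = cv2.dilate(plate_thresh, kernel, iterations=1)
    repaired = cv2.erode(dilated, kernel, iterations=1)
    return repaired
```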

Figure 6.1: License plate with broken Character

Figure 6.2: Thresholded image of License Plate

Figure 6.3: Image with low illumination near the license plate

8.2 Failure Analysis:

The license plate recognition algorithm does not give the expected results in the following cases:

• If the characters are not visible or the license plate is damaged, as shown in Figure 6.4.
• If the scene is too complex, many contours extracted from the thresholded image are misinterpreted as characters, as shown in Figure 6.5, so plate extraction becomes difficult.
• If the illumination is too low and the characters are not properly visible.

Figure 6.4: Damaged License Plate

Figure 6.5: Scene with high details

9. Conclusion:

In this project, a License Plate Recognition system based on the vehicle's license plate is presented. The system uses image processing techniques to recognize the vehicle and works satisfactorily over a wide variation of conditions and different types of number plates. It was implemented and executed in PyCharm, and its performance was tested on genuine images. The LPR system works quite well; however, the character recognition techniques still need improvement, because real-time implementation of LPR is a demanding task. The OCR method is sensitive to misalignment and to different character sizes, so different kinds of templates have to be created for different RTO specifications. At present there are limits on parameters such as the script on the number plate and skew in the image, which can be removed by further enhancing the algorithms.

10. References:

[1] P. Prabhakar, P. Anupama and S. Rasmi, "Automatic vehicle number plate detection and recognition," International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), pp. 185-190, 2014.

[2] B. Pechiammal and J. A. Renjith, “An efficient approach for automatic license plate
recognition system,” Third International Conference on Science Technology Engineering
Management (ICONSTEM), Chennai, pp. 121-129, 2017.

[3] P. Prabhakar, P. Anupama and S. R. Resmi, “Automatic vehicle number plate detection and
recognition,” International Conference on Control, Instrumentation, Communication and
Computational Technologies (ICCICCT), Kanyakumari, pp-185-190, 2014.

[4] H. Karwal and A. Girdhar, “Vehicle Number Plate Detection System for Indian Vehicles,”
IEEE International Conference on Computational Intelligence Communication Technology,
Ghaziabad, pp. 8-12, 2015.

[5] Vinay Kumar V and Dr. R Srikant Swamy, "Automatic License Plate Recognition using Histogram of Oriented Gradients for character recognition," 2014.
