
A PROJECT REPORT

ON
“Smart attendance system using facial recognition”

A project report submitted to MLVTEC Bhilwara (RTU Kota)


In partial fulfillment of the requirements for the award of the degree of

BACHELOR OF TECHNOLOGY
In
INFORMATION TECHNOLOGY

Submitted by:
Ayush Sharma (16EMBIT015)
Chirag Sharma (16EMBIT020)
Umesh Yadav (16EMBIT057)

Under the guidance of:
Mr. ARUN KUMAR
Assistant Professor, Dept. of IT

Submitted to:
Mr. Rohit Negi
Project In-charge

------------------------------------------------------------------------------------------------------------

MLV Textile and Engineering College


Bhilwara 311001, Rajasthan
DEPARTMENT OF INFORMATION TECHNOLOGY
Session-2019-20

CANDIDATE DECLARATION
We hereby declare that the work presented in this project titled “Smart Attendance
System”, submitted towards completion of the Major Project in the 8th semester of B.Tech
(IT) at M.L.V. Textile and Engineering College, Bhilwara, is an authentic record of our
original work pursued under the guidance of Mr. ARUN KUMAR, Assistant Professor,
Department of Information Technology, M.L.V. TEXTILE & ENGINEERING COLLEGE, BHILWARA.
We have not submitted the matter embodied in this project for the award of any other
degree.

Ayush Sharma:

Chirag Sharma:

Umesh Yadav:

Date:

M.L.V. TEXTILE & ENGINEERING COLLEGE
(An Autonomous Institute of Govt. Of Rajasthan)

Information Technology
BONAFIDE CERTIFICATE

This is to certify that the project work entitled “Smart Attendance System Using Facial
recognition” is carried out by:

Ayush Sharma (16EMBIT015)

Chirag Sharma (16EMBIT020)

Umesh Yadav (16EMBIT057)

under my supervision and guidance during the academic year 2019-20 and, to the best of
my knowledge, is original work.

Submitted for viva-voce examination held on Date:

Project Guide: Mr. ARUN KUMAR, Assistant Professor
External Examiner 1: Mr. Nitesh Chouhan, Head of Department
External Examiner 2: Mr. Amit Gupta, Assistant Professor

Acknowledgement

We take immense pleasure in thanking Dr. Rajiv Kumar Chaudhary, Principal,
M.L.V. Textile and Engineering College, Bhilwara, for permitting us to carry out this
project work.

We give our thanks to the HOD, Mr. Nitesh Chouhan, the Project In-charge, Mr. Rohit
Negi, and to our college, M.L.V. Textile and Engineering College, Bhilwara, for their
extreme co-operation.

We pay our thanks to Mr. Arun Kumar, Assistant Professor, Department of
Information Technology, for the encouragement and appreciation we have
received from him.

We are also thankful to all the faculty members for their suggestions and guidance.

Finally, yet importantly, we would like to express our heartfelt thanks to our beloved
parents for their blessings and to our friends and classmates for their help and wishes
for the successful completion of this project.

Ayush Sharma (16EMBIT015)

Chirag Sharma (16EMBIT020)

Umesh Yadav (16EMBIT057)

B.Tech. IV Year Discipline of Information Technology

Abstract

In this project we have developed a smart attendance system interface that reduces the
overall time a teacher spends taking class attendance and makes the chances of false
attendance negligible.

Our system uses facial recognition to detect and extract the faces of students.

First, a student needs to sign up in the system by filling in some personal details; the
face is then saved as encodings in a CSV file database.

Once the student's data has been saved, the system subsequently tries to match the
student's face with the saved faces in the database.

If the face matches, the system marks the attendance of the student; otherwise it does
not. Faces can be matched individually as well as within a group of students.

LIST OF FIGURES

Fig. No.   Figure Name                                 Page
1          Facial Features                             8
2          OpenCV Face Recognition Working Process     9
3          Face Detection Algorithm                    10
4          Face Detection Methods                      11
5          Block Diagram of Facial Detection           14
6          Image Processing                            15
7          System Architecture                         17
8          Student Registration                        18
9          Face Detection Flow Chart                   19
10         Facial Recognition Process                  21
11         Attendance System                           22
12         Attendance Flow                             23
13         Detection Methods                           25
14         Face Detection                              28
15         Gray Scaling                                30
16         Neural Network                              32
17         Neural Network Example                      32
18         Fundamental Steps in DIP                    35
19         Signup Page                                 36
20         Image Acquisition                           37
21         Image Processing                            38
22         Image Detection                             39
23         Image Recognition                           40
24         Multiple Face Recognition                   41
25         Mark the Attendance                         42

CONTENTS

• Candidate Declaration ………………………………………………… I
• Bonafide Certificate …………………………………………………… II
• Acknowledgement ……………………………………………………… III
• Abstract ………………………………………………………………… IV
• List of Figures ………………………………………………………… 6-7

1. INTRODUCTION …………………………………………………………………... 8
   1.1 Face Recognition
   1.2 Image Acquisition
   1.3 Face Detection
   1.4 Image Processing

2. SYSTEM INTERFACE ……………………………………………………….......... 18
   2.1 Student Registration
   2.2 Detection of a Face
   2.3 Recognition of a Face
   2.4 Attendance Management System

3. LITERATURE SURVEY …………………………………………………………... 25
   3.1 Feature-Based Approach
   3.2 PDM
   3.3 Low Level Analysis
   3.4 Neural Network

4. DIGITAL IMAGE PROCESSING ………………………………………………… 35
   4.1 DIP Methods
   4.2 Simple Image Model
   4.3 Types of Image Processing
   4.4 Fundamental Steps in DIP

5. PRACTICAL IMPLEMENTATION OF PROJECT …………………………........ 38

6. CONCLUSION ……………………………………………………………………… 46

   REFERENCES …………………………………………………………………….. 47

CHAPTER-1
INTRODUCTION

Face recognition is the task of identifying an already detected face as known or
unknown. The problem of face recognition is often confused with the problem of face
detection: face detection locates faces in an image, whereas face recognition decides
whether a detected face belongs to someone known or unknown by validating the input
face against a database of faces.

FACE RECOGNITION:

DIFFERENT APPROACHES OF FACE RECOGNITION:

There are two predominant approaches to the face recognition problem: geometric
(feature-based) and photometric (view-based). As researcher interest in face
recognition continued, many different algorithms were developed, three of which
have been well studied in the face recognition literature.

Popular recognition algorithms include:

1. Principal Component Analysis (PCA) using eigenfaces

2. Linear Discriminant Analysis (LDA) using Fisherfaces

3. Elastic Bunch Graph Matching (EBGM)

HOW FACIAL RECOGNITION WORKS:

FIG 1: Facial features (Haar features) of a face

Every face has about 80 distinguishable landmarks, called nodal points.

Here are a few of these nodal points:
- Distance between the eyes
- Width of the nose
- Depth of the eye sockets
- Structure of the cheekbones
- Length of the jaw line

Fig 2: OpenCV face recognition working process

In order to build our OpenCV face recognition pipeline, we’ll be applying deep
learning in two key steps:

1. To apply face detection, which detects the presence and location of a face in an
image, but does not identify it.

2. To extract the 128-d feature vectors (called “embeddings”) that quantify each face
in an image. A minimal sketch of this two-step pipeline is shown below.
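The following sketch illustrates the two steps, assuming for illustration the third-party
`face_recognition` package (a dlib wrapper); the report itself does not name this library,
and the image file name is a placeholder.

```python
# Hedged sketch of the two-step pipeline: detect face locations, then compute
# 128-d embeddings. Assumes the third-party face_recognition package; the file
# name is a placeholder, not a path from the project.
import face_recognition

image = face_recognition.load_image_file("student.jpg")

# Step 1: detect the presence and location of faces (no identity yet).
locations = face_recognition.face_locations(image)  # list of (top, right, bottom, left)

# Step 2: extract one 128-d embedding per detected face.
encodings = face_recognition.face_encodings(image, known_face_locations=locations)

print(f"Detected {len(locations)} face(s)")
for vector in encodings:
    print("Embedding dimensions:", len(vector))  # 128
```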

FACE DETECTION:

Face detection is a computer technology, used in a variety of applications, that
identifies human faces in digital images. Face detection also refers to the
psychological process by which humans locate and attend to faces in a visual scene.

Fig: 3 Face Detection Algorithm

FACE DETECTION METHODS:

Fig 4: Face Detection Methods

1.Knowledge-Based:-

The knowledge-based method depends on the set of rules, and it is based on human
knowledge to detect the faces. Ex- A face must have a nose, eyes, and mouth within
certain distances and positions with each other. The big problem with these methods
is the difficulty in building an appropriate set of rules. There could be many false

14 | P a g e
positive if the rules were too general or too detailed. This approach alone is
insufficient and unable to find many faces in multiple images.

2.Feature-Based:-

The feature-based method locates faces by extracting structural features of the
face. A classifier is first trained and then used to differentiate between facial and
non-facial regions. The idea is to overcome the limits of our instinctive knowledge of
faces. This approach is divided into several steps, and even on photos with many faces
a success rate of 94% has been reported.

3.Template Matching:-

The template-matching method uses predefined or parameterised face templates to locate
or detect faces via the correlation between the templates and input images. For example,
a human face can be divided into eyes, face contour, nose, and mouth; a face model can
also be built from edges alone by using an edge-detection method. This approach is simple
to implement but, on its own, inadequate for face detection; deformable templates have
been proposed to deal with these problems. A small correlation-based sketch follows.
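A minimal sketch of template matching with OpenCV is shown below; the file names and the
0.7 score threshold are illustrative assumptions, not values from the report.

```python
# Hedged sketch of correlation-based template matching with OpenCV.
import cv2

scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("face_template.png", cv2.IMREAD_GRAYSCALE)
h, w = template.shape

# Slide the template over the scene and record normalised correlation scores.
scores = cv2.matchTemplate(scene, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(scores)

if max_val > 0.7:
    x, y = max_loc
    cv2.rectangle(scene, (x, y), (x + w, y + h), 255, 2)
    print(f"Template matched at {max_loc} with score {max_val:.2f}")
else:
    print("No confident match found")
```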

FACE RECOGNITION DIFFICULTIES:

1. Identifying similar faces (inter-class similarity)

2. Accommodating intra-class variability due to:
   - head pose
   - illumination conditions
   - expressions
   - facial accessories
   - aging effects

3. Cartoon faces

IMAGE ACQUISITION:

• Facial-scan technology can acquire faces from almost any static camera or
video system that generates images of sufficient quality and resolution.

• High-quality enrolment is essential to eventual verification and identification:
enrolment images define the facial characteristics to be used in all future
authentication events.

FIG5: Block diagram of Facial detection

IMAGE PROCESSING:
• Images are cropped and color images are normally converted to black and
white in order to facilitate initial comparisons based on gray scale
characteristics.

• First, the presence of a face (or faces) in a scene must be detected. Once the face
is detected, it must be localized, and a normalization process may be required
to bring the dimensions of the live facial sample into alignment with the one on
the template.

FIG 6: Image processing

CHAPTER-2

SYSTEM INTERFACE

• SYSTEM COMPONENTS:

1. Student Registration

2. Face Detection

3. Face Recognition

o Feature Extraction
o Feature Classification

4. Attendance Management system

Attendance Management will handle:


o Automated attendance marking.

o Manual attendance marking.

o Attendance details of users

• SYSTEM ARCHITECTURE:

FIG 7: System Architecture

• Our system can be used by an administrator, in our case a teacher.

• The admin clicks a photograph of the whole class for attendance.

• The system checks all the faces against those stored in the existing database.

• Students whose faces match have their attendance marked automatically in the
system.

• Students whose faces are not detected have to contact the admin or register in the
database again.

STUDENT REGISTRATION:

FIG 8: Student Registration

• The student enters details such as:

Name
Email
Father's Name
Contact No.

FACE DETECTION:

FIG 9: Facial detection flow chart

• The image is taken through a webcam.

• The image acquisition process then starts.

• The image is resized.

• This resized image is converted into a grayscale image.

• We then use the Haar Cascade classifier algorithm to extract the facial features
of the face.

• Finally, a label is assigned. A minimal sketch of this pipeline is shown below.
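The sketch below follows the steps above (webcam, resize, grayscale, Haar cascade) using
OpenCV's bundled frontal-face cascade; the resize dimensions and detector parameters are
illustrative assumptions, not the project's actual settings.

```python
# Sketch of the webcam -> resize -> grayscale -> Haar cascade pipeline.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)        # image taken through the webcam
ok, frame = cap.read()
cap.release()

if ok:
    frame = cv2.resize(frame, (640, 480))               # resize the image
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)      # convert to grayscale
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                          # label each detected face
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    print(f"{len(faces)} face(s) detected")
```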

FACE RECOGNITION:

FIG 10: Facial recognition process

• First, the captured face image is extracted.

• It is then compared with the existing images.

• If the match succeeds, the face is recognized; otherwise it is not. A comparison
sketch is given below.
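The sketch below illustrates the comparison step, again assuming the `face_recognition`
package. The registration photos, names, and the 0.6 distance threshold are placeholders;
in the project the known encodings would instead be loaded from the CSV database.

```python
# Hedged sketch of comparing a captured face against known faces.
import face_recognition
import numpy as np

known_names = ["student_a", "student_b"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(f"{name}.jpg"))[0]
    for name in known_names
]

capture = face_recognition.load_image_file("capture.jpg")
for encoding in face_recognition.face_encodings(capture):
    distances = face_recognition.face_distance(known_encodings, encoding)
    best = int(np.argmin(distances))
    # A distance below ~0.6 is the library's usual "same person" threshold.
    name = known_names[best] if distances[best] < 0.6 else "Unknown"
    print("Recognized:", name)
```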

ATTENDANCE SYSTEM:

FIG 11: Attendance System

• First, the student has to appear in front of the camera, or as guided by the
administrator.

• The student's image then goes through the face detection and face recognition
phases.

• If there is a successful match with an existing image in the database, the student's
attendance is recorded in the system and saved alongside the details entered during
registration.

• If no match is found, the student is marked absent. A minimal CSV-marking sketch is
shown below.
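A minimal sketch of recording attendance in a CSV file after a successful match follows;
the file name, column layout, and date format are illustrative, not the project's actual
schema.

```python
# Append one attendance record per recognized student to a CSV file.
import csv
from datetime import datetime

def mark_attendance(name, path="attendance.csv"):
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([name, timestamp, "Present"])

# Called once per recognized student after the recognition phase.
mark_attendance("Ayush Sharma")
```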

FIG 12: Attendance flow

CHAPTER-3

LITERATURE SURVEY

Face detection is a computer technology that determines the location and size of human
faces in an arbitrary (digital) image. The facial features are detected, and any other
objects such as trees, buildings and bodies are ignored in the digital image.

It can be regarded as a specific case of object-class detection, where the task is finding
the locations and sizes of all objects in an image that belong to a given class. Face
detection can be regarded as a more general case of face localization.

In face localization, the task is to find the locations and sizes of a known number of
faces (usually one).

Basically, there are two types of approaches for detecting the facial part in a given
image: the feature-based and the image-based approach.

The feature-based approach tries to extract features of the image and match them against
knowledge of facial features, while the image-based approach tries to find the best match
between training and testing images.

Fig 13: Detection methods

FEATURE-BASED APPROACH:

Active Shape Model:
Active shape models focus on complex non-rigid features, such as the actual physical and
higher-level appearance of features. Active Shape Models (ASMs) are aimed at automatically
locating landmark points that define the shape of any statistically modelled object in an
image, for example facial features such as the eyes, lips, nose, mouth and eyebrows. The
training stage of an ASM involves building a statistical facial model from a training set
containing images with manually annotated landmarks.

ASMs are classified into three groups: snakes, PDM, and deformable templates.

1.1) Snakes: The first type uses a generic active contour called a snake, first
introduced by Kass et al. in 1987. Snakes are used to identify head boundaries
[8,9,10,11,12]. In order to achieve the task, a snake is first initialized in the
proximity of a head boundary.
It then locks onto nearby edges and subsequently assumes the shape of the head. The
evolution of a snake is achieved by minimizing an energy function Esnake (by analogy
with physical systems), denoted as

    Esnake = Einternal + Eexternal

where Einternal and Eexternal are the internal and external energy functions. The
internal energy depends on the intrinsic properties of the snake and defines its natural
evolution; the typical natural evolution in snakes is shrinking or expanding.

PDM (Point Distribution Model):


Independently of computerized image analysis, and before ASMs were developed,
researchers developed statistical models of shape.

The idea is that once you represent shapes as vectors, you can apply standard
statistical methods to them just like any other multivariate object.

These models learn allowable constellations of shape points from training examples
and use principal components to build what is called a Point Distribution Model.

These have been used in diverse ways, for example for categorizing Iron Age
brooches.

Ideal Point Distribution Models can only deform in ways that are characteristic of
the object.

Cootes and his colleagues were seeking models which do exactly that, so that if a beard,
say, covers the chin, the shape model can override the image to approximate the
position of the chin under the beard.

It was therefore natural (but perhaps only in retrospect) to adopt Point Distribution
Models. A small PCA-based sketch of this idea is given below.
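The toy sketch below shows the core of a Point Distribution Model: each training shape is
represented as a flattened vector of landmark coordinates and PCA extracts the allowable
modes of variation. The landmark data here are random stand-ins, not real annotations.

```python
# Toy Point Distribution Model: PCA over flattened landmark vectors.
import numpy as np
from sklearn.decomposition import PCA

n_shapes, n_landmarks = 50, 68
rng = np.random.default_rng(0)
shapes = rng.normal(size=(n_shapes, n_landmarks * 2))  # (x1, y1, ..., xN, yN) per shape

pca = PCA(n_components=5)            # keep the main modes of shape variation
pca.fit(shapes)

mean_shape = pca.mean_               # the average shape
# Any allowable shape is approximated as mean_shape + weights @ principal modes.
new_shape = mean_shape + np.array([1.5, 0.0, 0.0, 0.0, 0.0]) @ pca.components_
print("Variance explained by the first 5 modes:", pca.explained_variance_ratio_)
```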

LOW LEVEL ANALYSIS:
Low-level analysis is based on low-level visual features such as color, intensity, edges
and motion.

Skin Color Base:
Color is a vital feature of human faces. Using skin color as a feature for tracking a
face has several advantages. Color processing is much faster than processing other facial
features, and under certain lighting conditions color is orientation invariant.

This property makes motion estimation much easier, because only a translation model is
needed for motion estimation.

Tracking human faces using color as a feature also has several problems: the color
representation of a face obtained by a camera is influenced by many factors (ambient
light, object movement, etc.).

Fig 14: Face Detection

Three different face detection algorithms are mainly available, based on the RGB,
YCbCr, and HSI color space models. The implementation of these algorithms involves three
main steps (a minimal sketch follows the list):

(1) Classify the skin region in the color space.

(2) Apply a threshold to mask the skin region.

(3) Draw a bounding box to extract the face image.
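The sketch below walks through the three steps in the YCbCr color space (OpenCV's YCrCb
ordering); the image path and the Cr/Cb bounds are common illustrative values, not values
tuned for this project.

```python
# Skin-color based face region extraction: classify, threshold, bound.
import cv2
import numpy as np

image = cv2.imread("scene.jpg")
ycrcb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)

# (1) + (2): classify skin pixels and threshold them into a binary mask.
lower = np.array([0, 133, 77], dtype=np.uint8)    # Y, Cr, Cb lower bounds
upper = np.array([255, 173, 127], dtype=np.uint8)
mask = cv2.inRange(ycrcb, lower, upper)

# (3): draw a bounding box around the largest skin region.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```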

MOTION BASE:
When a video sequence is available, motion information can be used to locate moving
objects. Moving silhouettes, such as the face and body parts, can be extracted by simply
thresholding accumulated frame differences. Besides face regions, facial features can be
located by frame differences.

Gray Scale Base:

Gray information within a face can also be treated as an important feature. Facial
features such as eyebrows, pupils, and lips generally appear darker than their
surrounding facial regions.

Various recent feature extraction algorithms search for local gray minima within
segmented facial regions.

In these algorithms, the input images are first enhanced by contrast stretching and
gray-scale morphological routines to improve the quality of local dark patches and
thereby make detection easier.

The extraction of the dark patches is then achieved by low-level gray-scale thresholding.
A short sketch of these operations is shown below.
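The sketch below applies the operations just described: contrast stretching, a black-hat
morphological transform to emphasise dark patches (eyebrows, pupils, lips), and a
low-level threshold. The file name and parameter values are illustrative assumptions.

```python
# Gray-scale enhancement and dark-patch extraction.
import cv2

gray = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)

# Contrast stretching to the full 0-255 range.
stretched = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)

# Gray-scale morphology: black-hat highlights regions darker than their surroundings.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
dark_patches = cv2.morphologyEx(stretched, cv2.MORPH_BLACKHAT, kernel)

# Low-level thresholding extracts the dark patches as a binary mask.
_, mask = cv2.threshold(dark_patches, 30, 255, cv2.THRESH_BINARY)
```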

Fig 15: Gray Scaling of an image

Edge Base:

Face detection based on edges was introduced by Sakai et al. This work was based on
analyzing line drawings of faces from photographs, aiming to locate facial features.
Later, Craw et al. proposed a hierarchical framework based on Sakai's work to trace a
human head outline. Remarkable work was subsequently carried out by many researchers in
this specific area. The method suggested by Anila and Devarajan is very simple and fast.

Neural Network:
Neural networks are gaining much attention in many pattern recognition problems, such as
OCR, object recognition, and autonomous robot driving. Since face detection can be
treated as a two-class pattern recognition problem, various neural network algorithms
have been proposed.

The advantage of using neural networks for face detection is the feasibility of
training a system to capture the complex class conditional density of face
patterns.

However, one demerit is that the network architecture has to be extensively tuned
(number of layers, number of nodes, learning rates, etc.) to get exceptional
performance.

An early hierarchical neural network was proposed by Agui et al. [43].
The first stage has two parallel subnetworks in which the inputs are filtered
intensity values from the original image.

The inputs to the second-stage network consist of the outputs from the subnetworks and
extracted feature values; an output at the second stage indicates the presence of a face
in the input region. A toy two-class sketch is given below.
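The toy sketch below treats face detection as a two-class pattern recognition problem with
a small neural network. The "patches" and labels are random stand-ins, not the project's
training data, and the architecture numbers are exactly the knobs the text says must be
tuned extensively.

```python
# Toy two-class (face / non-face) classifier over flattened grayscale windows.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
patches = rng.random((200, 24 * 24))     # 200 flattened 24x24 grayscale windows
labels = rng.integers(0, 2, size=200)    # 1 = face, 0 = non-face (dummy labels)

clf = MLPClassifier(hidden_layer_sizes=(64, 16), learning_rate_init=1e-3,
                    max_iter=300, random_state=0)
clf.fit(patches, labels)

window = rng.random((1, 24 * 24))        # a new candidate window
print("Face" if clf.predict(window)[0] == 1 else "Non-face")
```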

Fig 16: Basic Neural Network

Fig 17: Neural Network Example

CHAPTER-4
DIGITAL IMAGE PROCESSING


Interest in digital image processing methods stems from two principal application
areas:

1. Improvement of pictorial information for human interpretation

2. Processing of scene data for autonomous machine perception

In this second application area, interest focuses on procedures for extracting image
information in a form suitable for computer processing.

Examples include automatic character recognition, industrial machine vision for
product assembly and inspection, military reconnaissance, automatic processing of
fingerprints, etc.

Image:

An image refers to a 2D light intensity function f(x, y), where (x, y) denotes spatial
coordinates and the value of f at any point (x, y) is proportional to the brightness or
gray level of the image at that point.

A digital image is an image f(x, y) that has been discretized both in spatial
coordinates and brightness. The elements of such a digital array are called image
elements or pixels.
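A small illustration of a digital image as a discrete array of pixels is given below,
assuming OpenCV and a placeholder file name.

```python
# Inspect a digital image f(x, y) as an array of gray levels.
import cv2

img = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)   # digital image f(x, y)
rows, cols = img.shape
print(f"Spatial resolution: {cols} x {rows} pixels")
print("Gray level f(10, 20):", img[20, 10])            # array is indexed [row, col]
```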

A simple image model:

To be suitable for computer processing, an image f(x, y) must be digitized both
spatially and in amplitude.

Digitization of the spatial coordinates (x, y) is called image sampling, and
amplitude digitization is called gray-level quantization.

The storage and processing requirements increase rapidly with the spatial
resolution and the number of gray levels.

Example: a 256 gray-level image of size 256x256 occupies 64 KB of memory (see the
short calculation below).
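The arithmetic behind this example: 256 gray levels need 8 bits (1 byte) per pixel, so a
256 x 256 image needs 256 * 256 bytes.

```python
# Quick check of the storage example above.
pixels = 256 * 256
bytes_per_pixel = 1                           # 8 bits for 256 gray levels
total = pixels * bytes_per_pixel
print(total, "bytes =", total // 1024, "KB")  # 65536 bytes = 64 KB
```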

Types of image processing

1. Low level processing

2. Medium level processing

3. High level processing

Fundamental steps in image processing are:

1. Image acquisition: to acquire a digital image.

2. Image pre-processing: to improve the image in ways that increase the chances of
success of the other processes.

3. Image segmentation: to partition an input image into its constituent parts or
objects.

4. Image representation: to convert the input data into a form suitable for computer
processing.

5. Image description: to extract the features that result in some quantitative
information of interest, or features that are basic for differentiating one class of
objects from another.

6. Image recognition: to assign a label to an object based on the information
provided by its description.

[Block diagram: Image acquisition → Pre-processing → Segmentation → Representation and
description → Recognition, with every stage interacting with the knowledge base.]

Fig 18: Fundamental steps in DIP

CHAPTER-5

PRACTICAL IMPLEMENTATION

• SIGNUP PAGE:

FIG 19: Signup Page

• IMAGE ACQUISITION:

FIG 20: Image Acquisition

• IMAGE PROCESSING:

FIG 21: Image processing

• FACE DETECTION:

FIG 22: Image detection

• IMAGE RECOGNITION:

FIG 23: Image Recognition

• MULTIPLE FACE RECOGNITION:

FIG 24: Multiple face recognition

• MARKING THE ATTENDANCE:

FIG 25: Marking the Attendance

CHAPTER-6

CONCLUSION

• Traditionally, students' attendance is taken manually using an attendance sheet
given by the faculty in class, which is a time-consuming exercise.

• Moreover, in a large classroom environment with distributed branches, it is very
difficult to verify each student one by one and to check whether the authenticated
students are actually responding or not.

• FACE RECOGNITION technology is gradually evolving into a universal biometric
solution, since it requires virtually zero effort from the user compared with other
biometric options. It is accurate and allows for high enrolment and verification
rates.

REFERENCES

[1] https://en.m.Wikipedia.org/FacialRecognitionsystem

[2] https://www.Encyclopedia.com/facialrecognition

[3] https://web.standard.edu/facedetectionproject

[4] MIT open courseware/Face detection Article

[5] Courseera OpenCV Docs.

[6] Brunelli, R. and Poggio, T. (1993). Face Recognition: Features versus Templates.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 15(10):1042-1052.

[7] Li, Stan Z. Handbook of Face Recognition.

