
THANTHAI PERIYAR GOVERNMENT INSTITUTE OF TECHNOLOGY

ECE DEPARTMENT

AUTOMATIC ATTENDANCE BASED ON FACIAL RECOGNITION


USING ARDUINO AND MATLAB

PROJECT GUIDE PROJECT MEMBERS

PROF.P.SAKTHIVEL DEVI PRIYA R

CHITRA M

ANISHA T

ELANGO
TABLE OF CONTENTS

CHAPTER NO TITLE PAGE NO

ABSTRACT

LIST OF FIGURES

1 INTRODUCTION 1

1.1 Introduction

1.2 Methodology

1.3 Existing System

1.4 Proposed System


Advantages of Proposed System
1.5 Block Diagram

1.6 Circuit Diagram

2 LITERATURE SURVEY 9

2.1 Convolutional neural network approach

2.2 Algorithm for efficient attendance

management

2.3 Face time-deep learning

2.4 Android based face recognition system


Using PCA
2.5 Classroom attendance system using facial recognition system

System Design

System Architecture

3 DIGITAL IMAGE PROCESSING 14


3.1 Introduction

3.2 Digital Image Processing

3.3 Fundamental Steps in Image Processing

3.4 Elements of Digital Image Processing

image processing fundamentals

image Enhancement

image Restoration

image Analysis

image Compression

image Synthesis

Applications

4 ARDUINO 21

4.1 Description

4.2 Features

4.3 Application

4.4

4.5 Architecture Diagram

4.6 Pin Description

5 PAN/TILT SERVO 29

5.1 Servomotor

5.2 Features

5.3 Description

6 FACE DETECTION 33

6.1 Face Detection in Image


6.2 Real time face detection

6.3 Face detection process

7 FACE RECOGNITION 37

7.1 Using geometric features

7.2 Using template matching

7.3 Face recognition difficulties

7.4 Understanding Eigen faces

7.5 Principal component analysis

8 SOFTWARE IMPLEMENTATION 40

9 PROGRAM 55

CONCLUSION 64

FUTURE

REFERENCES

AUTOMATED ATTENDANCE BASED ON FACIAL RECOGNITION


USING MATLAB AND ARDUINO
ABSTRACT:
Managing attendance can be a great burden on teachers if it is done by
hand. To resolve this problem, a smart, automatic attendance management
system is used. However, authentication is an important issue in such a
system. Smart attendance systems are generally implemented with the help of
biometrics, and face recognition is one of the biometric methods used to
improve them. Being a prime feature of biometric verification, facial
recognition is used extensively in applications such as video monitoring and
CCTV footage systems, human-computer interaction, indoor access systems and
network security. By utilizing this framework, the problem of proxies, where
students are marked present even though they are not physically present, can
easily be solved. The main implementation steps in this type of system are
face detection and recognition of the detected face. In this project we have
implemented an automated attendance system using MATLAB and Arduino. The
application includes face identification, which saves time and eliminates
the chance of proxy attendance through face authorization. Hence, this
system can be implemented in any field where attendance plays an important
role. The system is designed for face detection and tracking using MATLAB
software interfaced with an Arduino board. The aim of this project is to
develop a real-time application, such as a security system, that is needed
on several platforms. The real-time face detection and tracking is
implemented using hardware devices, a webcam and an Arduino board with a
microcontroller, as the input and output devices respectively. The face
detection algorithm, proposed by Paul Viola and Michael Jones, has been
developed on the MATLAB platform. Finally, the attendance is stored in a
database.

KEYWORDS: face detection; face tracking; Arduino; MATLAB

CHAPTER-1
INTRODUCTION

1.1 INTRODUCTION:

To verify the student attendance record, the personnel staff ought to have
an appropriate system for approving and maintaining the attendance record
consistently. By and large, there are two kinds of student attendance
framework, i.e. the Manual Attendance System and the Automated Attendance
System. In a manual system, the staff may experience difficulty in both
approving and keeping up every student's record in a classroom all the time.
In a classroom with a high student-to-teacher ratio, it turns into an
extremely dreary and tedious process to mark the attendance physically and
to compute the cumulative attendance of each student. Consequently, we can
execute a viable framework which will mark the attendance of students
automatically via face recognition. An Automated Attendance System may
decrease the managerial work of the staff. An attendance system which
embraces Human Face Recognition normally captures the students' facial
images at the time each student enters the classroom, or when everyone is
seated in the classroom, to mark the attendance. Generally, there are two
known methodologies to deal with Human Face Recognition: one is the
feature-based methodology and the other is the brightness-based methodology.

The feature-based methodology utilizes key point features present on the
face, called landmarks, such as the eyes, nose, mouth, edges or other unique
attributes. In this way, only some part of the previously extracted picture
is covered during the calculation process. The brightness-based methodology,
on the other hand, consolidates and computes all parts of the given picture.
It is also called the holistic-based or image-based methodology. Since the
overall picture must be considered, the brightness-based methodology takes a
longer handling time and is likewise more complicated.

There are different steps that are carried out during the process of this
face recognition framework, but the essential ones are face detection and
face recognition. Firstly, to mark the attendance, images of the students'
faces will be required. This image can be captured from the camera, which
will be installed in the classroom at a position from where the entire
classroom is visible. This image will be considered as an input to the
system. For efficient face identification, the picture should be enhanced by
utilizing image processing methods like grayscale conversion and histogram
equalization. After the image quality upgrade, the image will be passed on
to perform face detection. The face detection process is followed by the
face recognition process. There are different strategies accessible for face
recognition, like the Eigen face, PCA and LDA hybrid algorithms.
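The two pre-processing steps named here, grayscale conversion and histogram equalization, can be sketched concretely. The following is a minimal Python sketch (the project itself works in MATLAB; the function names and the toy image are ours, not from the project code) operating on images stored as nested lists of pixel values:

```python
# Sketch of the two pre-processing steps: grayscale conversion and
# histogram equalization. Images are plain nested lists of pixels.

def to_grayscale(rgb_image):
    """Convert an RGB image (rows of (r, g, b) tuples) to gray levels
    using the common luminosity weights."""
    return [[int(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def equalize_histogram(gray_image, levels=256):
    """Spread the gray levels over the full range via the cumulative
    distribution function (CDF) of the image histogram."""
    histogram = [0] * levels
    for row in gray_image:
        for p in row:
            histogram[p] += 1
    total = sum(histogram)
    # Cumulative distribution, then map each level to its equalized value.
    cdf, running = [], 0
    for count in histogram:
        running += count
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)
    lookup = [round((c - cdf_min) / max(total - cdf_min, 1) * (levels - 1))
              for c in cdf]
    return [[lookup[p] for p in row] for row in gray_image]

if __name__ == "__main__":
    # A tiny low-contrast 2x2 "image": all values bunched around 100.
    img = [[100, 101], [102, 103]]
    print(equalize_histogram(img))  # gray levels now span 0..255
```

After equalization the bunched gray levels are stretched across the whole 0..255 range, which is exactly why the step helps the later face detection stage.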

In the Eigen face approach, when faces are identified, they are trimmed from
the picture. With the assistance of the feature extractor, different facial
features are extracted. Utilizing these faces as Eigen features, the student
is recognized, and by matching against the face database, their attendance
is marked. Developing the face database is required for the purpose of this
comparison.
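The matching step can be sketched in miniature. A real eigenface system first projects mean-centred face vectors onto the principal components; the Python sketch below (all names and data are hypothetical, and the PCA projection is deliberately skipped) shows only the final nearest-neighbour match against the face database:

```python
# Simplified sketch of matching a detected face against the database.
# A real eigenface system would match in the PCA-projected "face
# space"; here we match raw vectors directly for illustration.
import math

def recognize(face_vector, database):
    """Nearest-neighbour match: return the enrolled student whose
    stored face vector is closest (Euclidean distance) to the input."""
    return min(database, key=lambda name: math.dist(face_vector, database[name]))

if __name__ == "__main__":
    # Toy 3-element "face vectors"; a real system would use flattened
    # face images projected onto the eigenfaces.
    db = {"DEVI": [10, 10, 10], "CHITRA": [200, 50, 90], "ANISHA": [90, 90, 200]}
    print(recognize([12, 9, 11], db))   # a noisy capture of DEVI's face
```

The student returned by the match is the one whose attendance gets marked.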


1.2 METHODOLOGY

In this project we have proposed an algorithm for face recognition using
image processing, and manipulation of the output pin state of an Arduino
board with an ATmega328P controller by tracking the face of a human. The
face recognition algorithm has been developed on the MATLAB platform by
combining several image processing algorithms. Using the theory of image
acquisition and the fundamentals of digital image processing, the face of a
user is detected in real time. Using face recognition and serial data
communication, the state of an Arduino board pin is controlled. The MATLAB
program implements a real-time computer vision system for face detection and
tracking, using a camera as the image acquisition hardware. The Arduino
program interfaces a hardware prototype with the control signals generated
by the real-time face detection and tracking. After the face has been
tracked, the attendance of the student is saved to the database.
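The last step of this pipeline, saving the recognized student's attendance, can be sketched with an in-memory dictionary standing in for the project's database (the record format and all names here are hypothetical, not taken from the project code):

```python
# Sketch of the attendance-saving step: record the first time each
# recognized student is seen on a given day.
from datetime import datetime

def mark_attendance(name, register, now=None):
    """Record the first sighting of each student per day. Later
    sightings of the same student do not overwrite the entry."""
    now = now or datetime.now()
    key = (name, now.date())
    if key not in register:
        register[key] = now.strftime("%H:%M:%S")
    return register

if __name__ == "__main__":
    register = {}
    t = datetime(2024, 1, 8, 9, 0, 5)
    mark_attendance("DEVI", register, now=t)
    mark_attendance("DEVI", register, now=t.replace(minute=30))  # ignored
    print(register)
```

Keeping only the first sighting per day mirrors how a register works: a student seen repeatedly by the camera is still marked present exactly once.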

1.3 EXISTING SYSTEM:

In the existing system, a fingerprint biometric attendance system is used,
and in many organizations a manual method is used to mark the attendance of
students.

The main application of an attendance system is seen in teaching
institutions, where the attendance of students has to be regularly monitored
on a daily basis. The method currently in use is insecure and time consuming
for recording attendance. The attendance system uses a fingerprint pattern
algorithm and a biometric device for marking the attendance in a database.
Sometimes the fingerprint may not be read properly, and the device may fail.

DISADVANTAGES:

 Classification is not efficient
 Feature extraction is not compact
 The fingerprint device may fail
 High cost of maintenance

1.4 PROPOSED SYSTEM:


In this project we have implemented the automated attendance system
using MATLAB and Arduino. We have projected our ideas to implement
“Automated Attendance System Based on Facial Tracking and Recognition”, in
which it imbibes large applications. The application includes face identification,
which saves time and eliminates chances of proxy attendance because of the
face authorization. Hence, this system can be implemented in a field where
attendance plays an important role.

The system is designed using MATLAB platform. The proposed system


uses Principal Component Analysis (PCA) algorithm which is based on eigen
face approach. This algorithm compares the test image and training image and
determines students who are present and absent. The attendance record is
maintained in database which is updated automatically in the system.

ADVANTAGES:

 Eliminates proxy attendance


 Faster updating of record in database

BLOCK DIAGRAM:

CIRCUIT DIAGRAM

Firstly, the MATLAB code detects a face in every frame of the live video
stream and inserts a bounding box around the region of interest, which in
this case is a face (detected using the Haar features present in human
faces). The project code follows the Viola-Jones algorithm for face
detection. The set of frames with bounding boxes makes up the live video
with a bounding box added around the face. While adding a bounding box, we
also calculate the coordinates of the centroid of the bounding box. These
coordinates are sent as a string from MATLAB to the Arduino UNO
microcontroller, where they are processed according to the code written in
the Arduino IDE for the movement of the motors. During processing, the
Arduino reads the positions of the pan and tilt servo motors (attached as
shown in the project image). The Arduino then checks whether the centroid
coordinates lie in the centre region of the screen. We move the camera in
such a way that the centroid lies at the centre of the frame. For this
reason the frame is divided into left and right halves and also top and
bottom halves. If the centroid falls in the left half, the camera is panned
right, and if it falls in the right half, the camera is panned left; the
same applies to the top and bottom halves and tilting.
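The Arduino-side decision just described can be sketched as follows (in Python for illustration; the frame size, the dead band around the centre, and the direction conventions are our assumptions, not taken from the project code):

```python
# Sketch of the centroid-vs-frame-halves decision that keeps the
# detected face centred under the pan/tilt camera.

FRAME_W, FRAME_H = 640, 480   # assumed webcam resolution
DEAD_BAND = 40                # pixels around the centre needing no movement

def track_commands(cx, cy):
    """Return (pan, tilt) commands for a bounding-box centroid at
    (cx, cy). Centroid in the left half -> pan right, and vice versa;
    the same idea applies to the top/bottom halves and tilting."""
    pan = tilt = "hold"
    if cx < FRAME_W / 2 - DEAD_BAND:
        pan = "right"
    elif cx > FRAME_W / 2 + DEAD_BAND:
        pan = "left"
    if cy < FRAME_H / 2 - DEAD_BAND:
        tilt = "up"
    elif cy > FRAME_H / 2 + DEAD_BAND:
        tilt = "down"
    return pan, tilt

if __name__ == "__main__":
    print(track_commands(100, 240))   # face on the left: ('right', 'hold')
    print(track_commands(320, 240))   # face centred: ('hold', 'hold')
```

The dead band stops the servos from jittering when the face is already close enough to the centre of the frame.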

REQUIREMENTS SPECIFICATION

HARDWARE REQUIREMENTS:

Processor : Dual-core 2.5 GHz

Ram : 1 GB SDRAM
Monitor : 15" color
Hard Disk : 80 GB
Keyboard : Standard 102 keys
Camera : USB camera
Arduino Uno : Standard Arduino board

SOFTWARE CONFIGURATION:

Operating System : Windows XP Professional, 7 or 10

Environment : MATLAB
MATLAB : Version R2018a or above

CHAPTER-2

LITERATURE SURVEY

2.1. TITLE: CONVOLUTIONAL NEURAL NETWORK APPROACH


FOR VISION BASED STUDENT RECOGNITION SYSTEM.
AUTHOR: N. M. Ara, N. S. Simul and M. S. Islam

DESCRIPTION:

Computers are now smart enough to interact with humans in different ways.
This interaction becomes more acceptable for both human and computer if it
is based on a recognition process. In this article, the authors' concern is
to integrate and develop a student recognition system using existing
algorithms. Among the various face recognition methods, the authors use a
deep learning based face recognition method. This method uses Convolutional
Neural Networks (CNNs) to generate a low-dimensional representation called
embeddings. Those embeddings are then used to classify the person's facial
image. Different types of applications, such as a student attendance system
or building security, can be developed on top of this system.

2.2 TITLE: ALGORITHM FOR EFFICIENT ATTENDANCE


MANAGEMENT: FACE RECOGNITION BASED APPROACH.

AUTHOR: Naveed Khan Balcoh, M. Haroon Yousaf, Waqar Ahmad and M.

Iram Baig

DESCRIPTION:

Taking students' attendance in the classroom is a very important task, and
if taken manually it wastes a lot of time. There are many automatic methods
available for this purpose, i.e. biometric attendance. These methods also
waste time because students have to queue to place their thumb on the
scanning device. This work describes an efficient algorithm that
automatically marks the attendance without human intervention. The
attendance is recorded using a camera attached in front of the classroom
that continuously captures images of students; the system detects the faces
in the images, compares the detected faces with the database and marks the
attendance. The paper reviews the related work in the field of attendance
systems and then describes the system architecture, software algorithm and
results.

2.3. TITLE: FACETIME—DEEP LEARNING BASED FACE


RECOGNITION ATTENDANCE SYSTEM.

AUTHOR: M. Arsenovic, S. Sladojevic and A. Anderla

DESCRIPTION:

In light of recent accomplishments in the development of deep

convolutional neural networks (CNNs) for face detection and recognition tasks,

a new deep learning based face recognition attendance system is proposed in

this paper. The entire process of developing a face recognition model is

described in detail. This model is composed of several essential steps developed

using today's most advanced techniques: CNN cascade for face detection and

CNN for generating face embeddings. The primary goal of this research was the

practical employment of these state-of-the-art deep learning approaches for face

recognition tasks. Due to the fact that CNNs achieve the best results for larger

datasets, which is not the case in production environment, the main challenge

was applying these methods on smaller datasets. A new approach for image

augmentation for face recognition tasks is proposed. The overall accuracy was

95.02% on a small dataset of the original face images of employees in the real-

time environment. The proposed face recognition model could be integrated in

another system with or without some minor alterations as a supporting or a

main component for monitoring purposes.

2.4. TITLE: ANDROID BASED FACE RECOGNITION SYSTEM USING


PCA.

AUTHOR: P. Wagh, S. Patil, J. Chaudhari and R. Thakare.

DESCRIPTION:

Face recognition is one of the pattern recognition approaches for personal
identification, alongside other biometric approaches such as fingerprint
recognition, handwritten signatures, eye recognition, etc. It is very common
these days for people to have a mobile phone with an integrated digital
camera. This provides a good opportunity to develop a face recognition
system on the mobile phone. In this paper, an Android based face recognition
system has been developed. The method used is PCA (Principal Component
Analysis), i.e., Eigen Face. System testing is done to see how fast the
mobile phone is capable of processing the system. This shows the mobile
phone's performance for an Android based face recognition system.

2.5. TITLE: CLASS ROOM ATTENDANCE SYSTEM USING FACIAL


RECOGNITION SYSTEM

AUTHOR: Abhishek Jha.

DESCRIPTION:

The face is the identity of a person. The methods to exploit this physical
feature have seen great change since the advent of image processing
techniques. The accurate recognition of a person is the sole aim of a face
recognition system, and this identification may be used for further
processing. Traditional face recognition systems employ methods to identify
a face from the given input, but the results are not usually as accurate and
precise as desired. The system described in this paper aims to deviate from
such traditional systems and introduce a new approach to identify a student
using a face recognition system, i.e. the generation of a 3D facial model.
This paper describes the working of the face recognition system that will be
deployed as an Automated Attendance System in a classroom environment.

SYSTEM DESIGN:

SYSTEM ARCHITECTURE:

CHAPTER 3
DIGITAL IMAGE PROCESSING
3.1 INTRODUCTION

Digital image processing refers to the processing of images in digital form.
Modern cameras may directly capture the image in digital form, but generally
images originate in optical form. They are captured by video cameras and
digitized. The digitization process includes sampling and quantization.
These images are then processed by at least one of the five fundamental
processes described below, though not necessarily all of them.

3.2 DIGITAL IMAGE PROCESSING

Interest in digital image processing methods stems from two principal


application areas:

1. Improvement of pictorial information for human interpretation

2. Processing of scene data for autonomous machine perception

In this second application area, interest focuses on procedures for extracting


image information in a form suitable for computer processing

Examples include automatic character recognition, industrial machine vision
for product assembly and inspection, military reconnaissance, automatic
processing of fingerprints, etc.

IMAGE:

An image refers to a 2D light intensity function f(x, y), where (x, y)
denotes spatial coordinates and the value of f at any point (x, y) is
proportional to the brightness or gray level of the image at that point. A
digital image is an image f(x, y) that has been discretized both in spatial
coordinates and in brightness. The elements of such a digital array are
called image elements or pixels.

A SIMPLE IMAGE MODEL:


To be suitable for computer processing, an image f(x, y) must be digitized
both spatially and in amplitude. Digitization of the spatial coordinates
(x, y) is called image sampling. Amplitude digitization is called gray-level
quantization.

The storage and processing requirements increase rapidly with the spatial
resolution and the number of gray levels.

Example: A 256 gray-level image of size 256x256 occupies 64 KB of memory.
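The storage figure can be checked directly: an uncompressed image needs rows x columns x bits-per-pixel / 8 bytes, where the bit depth is the base-2 logarithm of the number of gray levels. A small Python check (the helper is ours, for illustration):

```python
# Storage requirement of an uncompressed gray-level image.

def image_bytes(rows, cols, gray_levels):
    """Bytes needed for an image whose pixels take gray_levels values
    (bits per pixel = ceil(log2(gray_levels)))."""
    bits_per_pixel = (gray_levels - 1).bit_length()
    return rows * cols * bits_per_pixel // 8

if __name__ == "__main__":
    # 256 gray levels -> 8 bits/pixel; 256 x 256 pixels -> 65536 bytes.
    print(image_bytes(256, 256, 256) // 1024)  # 64 (KB), as stated above
```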

TYPES OF IMAGE PROCESSING

• Low level processing

• Medium level processing

• High level processing

3.3. FUNDAMENTAL STEPS IN IMAGE PROCESSING:


Fundamental steps in image processing are

1. Image acquisition: to acquire a digital image

2. Image pre-processing: to improve the image in ways that increases the


chances for success of the other processes.

3. Image segmentation: to partition an input image into its constituent
parts or objects.

4. Image representation: to convert the input data to a form suitable for
computer processing.

5. Image description: to extract the features that result in some
quantitative information of interest, or features that are basic for
differentiating one class of objects from another.

6. Image recognition: to assign a label to an object based on the information


provided by its description.

Fig 3.1 FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING

3.4 ELEMENTS OF DIGITAL IMAGE PROCESSING SYSTEM: IMAGE PROCESSING
FUNDAMENTALS

IMAGE PROCESSING TECHNIQUES

This section describes the various image processing techniques.

Image Enhancement

Image Restoration

Image Analysis

Image Compression

Image Synthesis

3.4.1 IMAGE ENHANCEMENT

Image enhancement operations improve the qualities of an image, for example
by improving its contrast and brightness characteristics, reducing its noise
content, or sharpening its details. Enhancement only reveals the same
information in a more understandable form; it does not add any information
to the image.
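The contrast and brightness adjustments mentioned here reduce to a per-pixel linear mapping. A small Python sketch (the gain and offset values are illustrative; the project itself works in MATLAB):

```python
# Sketch of simple image enhancement: per-pixel gain/offset with
# clipping to the valid 0..255 gray-level range.

def enhance(gray_image, gain=1.0, offset=0):
    """Apply out = gain * in + offset to every pixel, clipped to 0..255.
    gain > 1 raises contrast; offset > 0 raises brightness."""
    def clip(v):
        return max(0, min(255, int(round(v))))
    return [[clip(gain * p + offset) for p in row] for row in gray_image]

if __name__ == "__main__":
    img = [[90, 110], [130, 250]]
    print(enhance(img, gain=1.5, offset=-60))  # [[75, 105], [135, 255]]
```

Note how the last pixel saturates at 255: the operation stretches what is already there but adds no new information, exactly as the text says.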

3.4.2 IMAGE RESTORATION

Image restoration, like enhancement, improves the qualities of an image, but
all the operations are based on known or measured degradations of the
original image. Image restoration is used to restore images with problems
such as geometric distortion, improper focus, repetitive noise, and camera
motion, i.e. to correct images for known degradations.

3.4.3 IMAGE ANALYSIS

Image analysis operations produce numerical or graphical information based
on characteristics of the original image. They break the image into objects
and then classify them, relying on the image statistics. Common operations
are extraction and description of scene and image features, automated
measurements, and object classification. Image analysis is mainly used in
machine vision applications.

3.4.4 IMAGE COMPRESSION

Image compression and decompression reduce the data content necessary to
describe the image. Most images contain a lot of redundant information, and
compression removes these redundancies. Because the size is reduced by
compression, the image can be stored and transmitted efficiently. The
compressed image is decompressed when displayed. Lossless compression
preserves the exact data of the original image, whereas lossy compression
does not exactly represent the original image but provides excellent
compression ratios.

3.4.5 IMAGE SYNTHESIS

Image synthesis operations create images from other images or from
non-image data. They generally create images that are either physically
impossible or impractical to acquire.

3.5 APPLICATIONS OF DIGITAL IMAGE PROCESSING

Digital image processing has a broad spectrum of applications, such as


remote sensing via satellites and other spacecrafts, image transmission and
storage for business applications, medical processing, radar, sonar and acoustic
image processing, robotics and automated inspection of industrial parts.

3.5.1 MEDICAL APPLICATIONS

In medical applications, one is concerned with the processing of chest
X-rays, cineangiograms, projection images of transaxial tomography and other
medical images that occur in radiology, nuclear magnetic resonance (NMR) and
ultrasonic scanning. These images may be used for patient screening and
monitoring or for the detection of tumours or other disease in patients.

3.5.2 SATELLITE IMAGING

Images acquired by satellites are useful in tracking of earth resources;


geographical mapping; prediction of agricultural crops, urban growth and
weather; flood and fire control; and many other environmental applications.
Space image applications include recognition and analysis of objects contained
in image obtained from deep space-probe missions.

3.5.3 COMMUNICATION

Image transmission and storage applications occur in broadcast


television, teleconferencing, and transmission of facsimile images for office
automation, communication of computer networks, closed-circuit television
based security monitoring systems and in military communications.

3.5.4 RADAR IMAGING SYSTEMS


Radar and sonar images are used for detection and recognition of various
types of targets or in guidance and maneuvering of aircraft or missile systems.

3.5.5 DOCUMENT PROCESSING

Document processing is used in scanning and transmission, for converting
paper documents to a digital image form, compressing the image, and storing
it on magnetic tape. It is also used in document reading for automatically
detecting and recognizing printed characters.

3.5.6 DEFENSE/INTELLIGENCE

Digital image processing is used in reconnaissance photo-interpretation for
the automatic interpretation of earth satellite imagery, to look for
sensitive targets or military threats, and in target acquisition and
guidance for recognizing and tracking targets in real-time smart-bomb and
missile-guidance systems.

CHAPTER-4

ARDUINO

DESCRIPTION:

Arduino is an open-source computer hardware and software company, project,
and user community that designs and manufactures single-board
microcontrollers and microcontroller kits for building digital devices and
interactive objects that can sense and control objects in the physical
world.

Arduino is an open-source electronics platform based on easy-to-use hardware


and software. Arduino boards are able to read inputs - light on a sensor, a finger
on a button, or a Twitter message - and turn it into an output - activating a
motor, turning on an LED, publishing something online.

The Arduino Uno is a microcontroller board based on the ATmega328


(datasheet). It has 14 digital input/output pins (of which 6 can be used as PWM
outputs), 6 analog inputs, a 16 MHz crystal oscillator, a USB connection, a
power jack, an ICSP header, and a reset button.


ARDUINO UNO
Arduino is an open-source project that created microcontroller-based kits for
building digital devices and interactive objects that can sense and control
physical devices. The project is based on microcontroller board designs,
produced by several vendors, using various microcontrollers. These systems
provide sets of digital and analog input/output (I/O) pins that can interface to
various expansion boards (termed shields) and other circuits. The boards feature
serial communication interfaces, including Universal Serial Bus (USB) on some
models, for loading programs from personal computers. For programming the
microcontrollers, the Arduino project provides an integrated development
environment (IDE) based on a programming language named Processing, which
also supports the languages C and C++

FEATURES

 Microcontroller: ATmega328P
 Operating voltage: 5V
 Input voltage: 7-12V
 Flash memory: 32KB
 SRAM: 2KB
 EEPROM: 1KB

APPLICATIONS

 Real time biometrics


 Robotic applications
 Academic applications

ATmega328 IC


DESCRIPTION:
The ATmega328 is a single-chip microcontroller created by Atmel in the mega
AVR family. The Atmel 8-bit AVR RISC-based microcontroller combines
32 kB ISP flash memory with read-while-write capabilities, 1 kB EEPROM,
2 kB SRAM, 23 general purpose I/O lines, 32 general purpose
working registers, three flexible timer/counters with compare modes, internal
and external interrupts, serial programmable USART, a byte-oriented 2-wire
serial interface, SPI serial port, 6-channel 10-bit A/D converter (8-channels
in TQFP and QFN/MLF packages), programmable watchdog timer with
internal oscillator, and five software selectable power saving modes. The device
operates between 1.8-5.5 volts. The device achieves throughput approaching
1 MIPS per MHz.

ATmega328P IC

PIN DIAGRAM

High Performance, Low Power Atmel®AVR® 8-Bit Microcontroller Family

 Advanced RISC Architecture
 131 Powerful Instructions - Most Single Clock Cycle Execution
 32 x 8 General Purpose Working Registers
 Fully Static Operation
 Up to 20 MIPS Throughput at 20 MHz
 On-chip 2-cycle Multiplier
 High Endurance Non-volatile Memory Segments
 4/8/16/32 KBytes of In-System Self-Programmable Flash program memory
 256/512/512/1 KBytes EEPROM
 512/1K/1K/2 KBytes Internal SRAM
 Write/Erase Cycles: 10,000 Flash / 100,000 EEPROM
 Data Retention: 20 years at 85°C / 100 years at 25°C

ARCHITECTURE

ARCHITECTURE DIAGRAM

Special Microcontroller Features

 Power-on Reset and Programmable Brown-out Detection


 Internal Calibrated Oscillator
 External and Internal Interrupt Sources
 Six Sleep Modes: Idle, ADC Noise Reduction, Power-save,
Power-down, Standby, and Extended Standby

 I/O and Packages


 23 Programmable I/O Lines
 28-pin PDIP, 32-lead TQFP, 28-pad QFN/MLF and 32-pad
QFN/MLF

 Operating Voltage: 1.8 - 5.5V
 Temperature Range: -40°C to 85°C
 Speed Grade: 0 - 4 MHz @ 1.8 - 5.5V, 0 - 10 MHz @ 2.7 - 5.5V,
0 - 20 MHz @ 4.5 - 5.5V
 Power Consumption at 1 MHz, 1.8V, 25°C
 Active Mode: 0.2 mA
 Power-down Mode: 0.1 µA
 Power-save Mode: 0.75 µA (Including 32 kHz RTC)

PIN DESCRIPTION

VCC Digital supply voltage

GND Ground.

Port B (PB7:0) XTAL1/XTAL2/TOSC1/TOSC2

Port B is an 8-bit bi-directional I/O port with internal pull-up resistors


(selected for each bit). The Port B output buffers have symmetrical drive
characteristics with both high sink and source capability. As inputs, Port B pins that are
externally pulled low will source current if the pull-up resistors are activated.
The Port B pins are tri-stated when a reset condition becomes active, even if the
clock is not running. Depending on the clock selection fuse settings, PB6 can be
used as input to the inverting Oscillator amplifier and input to the internal clock
operating circuit. Depending on the clock selection fuse settings, PB7 can be
used as output from the inverting Oscillator amplifier. If the Internal Calibrated
RC Oscillator is used as chip clock source, PB7..6 is used as TOSC2..1 input for
the Asynchronous Timer/Counter2 if the AS2 bit in ASSR is set.

Port C (PC5:0)

Port C is a 7-bit bi-directional I/O port with internal pull-up resistors


(selected for each bit). The PC5..0 output buffers have symmetrical drive
characteristics with both high sink and source capability. As inputs, Port C pins
that are externally pulled low will source current if the pull-up resistors are
activated. The Port C pins are tri-stated when a reset condition becomes active,
even if the clock is not running.

PC6/RESET

If the RSTDISBL Fuse is programmed, PC6 is used as an I/O pin.


Note that the electrical characteristics of PC6 differ from those of the other pins
of Port C. If the RSTDISBL Fuse is unprogrammed, PC6 is used as a Reset
input. A low level on this pin for longer than the minimum pulse length will

generate a Reset, even if the clock is not running. The minimum pulse length is
given in Table 28-3 on page 308. Shorter pulses are not guaranteed to generate a
Reset.

Port D (PD7:0)

Port D is an 8-bit bi-directional I/O port with internal pull-up resistors


(selected for each bit). The Port D output buffers have symmetrical drive
characteristics with both high sink and source capability. As inputs, Port D pins
that are externally pulled low will source current if the pull-up resistors are activated.

The Port D pins are tri-stated when a reset condition becomes active, even if the
clock is not running.

AVCC

AVCC is the supply voltage pin for the A/D Converter, PC3:0, and ADC7:6. It
should be externally connected to VCC, even if the ADC is not used. If the
ADC is used, it should be connected to VCC through a low-pass filter. Note that
PC6..4 use digital supply voltage, VCC

AREF

AREF is the analog reference pin for the A/D Converter

ADC7:6 (TQFP and QFN/MLF Package Only)

In the TQFP and QFN/MLF package, ADC7:6 serve as analog inputs to the A/D
converter. These pins are powered from the analog supply and serve as 10-bit
ADC channels.

CHAPTER-5
PAN /TILT SERVO

5.1 SERVOMOTOR

A servomotor is an electrical device which can push or rotate an object with
great precision. If an object needs to be rotated through some specific
angle or distance, a servomotor is used. It is made up of a simple motor
driven through a servo mechanism.

Servo mechanism

It consists of three parts:

1. Controlled device

2. Output sensor

3. Feedback system

A servomotor is a rotary actuator or linear actuator that allows for precise
control of angular or linear position, velocity and acceleration. It consists of a
suitable motor coupled to a sensor for position feedback. It also requires a
relatively sophisticated controller, often a dedicated module designed
specifically for use with servomotors.

Servomotors are not a specific class of motor, although the term servomotor is
often used to refer to a motor suitable for use in a closed-loop control system.

Very high-torque servomotors are available in small, lightweight packages.

5.2 FEATURES:

- Two-degrees-of-freedom pan/tilt head with high-torque servos at low cost

- Provides two degrees of freedom of movement, in the horizontal and vertical
directions

- Easy to install a camera

- Supports video surveillance and image-recognition based location tracking

- Infrared sensors or ultrasonic distance sensors can be installed and combined
into an integrated detection device, so that the robot can sense surrounding
obstacles

- Enables the robot's obstacle-avoidance function

- Various other sensors can also be installed to carry out innovative interactive
work through the servo controller

5.3 DESCRIPTION:

Dimension: 22mm x 11.5mm x 22.5mm

Net Weight: 9 grams

Operating speed: 0.12second/ 60degree ( 4.8V no load)

Stall Torque (4.8V): 17.5oz /in (1kg/cm)

Temperature range: -30°C to +60°C

Dead band width: 7usec

Operating voltage: 3.0V~7.2V

Fits all kinds of R/C toys

Coreless motor

3-pole wire

All nylon gear

Dual ball bearing

Connector wire length 150mm

CHAPTER-6

FACE DETECTION

The problem of face recognition is all about face detection. This is a fact
that seems quite bizarre to new researchers in this area. However, before face
recognition is possible, one must be able to reliably find a face and its
landmarks. This is essentially a segmentation problem and in practical systems,
most of the effort goes into solving this task. In fact, the actual recognition
based on features extracted from these facial landmarks is only a minor last
step.

There are two types of face detection problems:

1) Face detection in images

2) Real-time face detection

6.1 FACE DETECTION IN IMAGES

Most face detection systems attempt to extract a fraction of the whole
face, thereby eliminating most of the background and other areas of an
individual's head, such as hair, that are not necessary for the face
recognition task. With static images, this is often done by running a window
across the image. The face detection system then judges whether a face is present
inside the window (Brunelli and Poggio, 1993). Unfortunately, with static
images there is a very large search space of possible locations of a face in
an image.
Most face detection systems use an example-based learning approach to
decide whether or not a face is present in the window at a given instant
(Sung and Poggio, 1994 and Sung, 1995). A neural network or some other
classifier is trained using supervised learning with 'face' and 'non-face'
examples, thereby enabling it to classify an image (the window in a face
detection system) as a 'face' or 'non-face'. Unfortunately, while it is
relatively easy to find face examples, how would one find a
representative sample of images which represent non-faces (Rowley et
al., 1996)? Therefore, face detection systems using example-based
learning need thousands of 'face' and 'non-face' images for effective
training. Rowley, Baluja, and Kanade (Rowley et al., 1996) used 1025
face images and 8000 non-face images for training. There is another
technique for determining whether there is a face inside the face detection
system's window - using Template Matching.

6.2 REAL-TIME FACE DETECTION

Real-time face detection involves detection of a face from a series of


frames from a video-capturing device. While the hardware requirements
for such a system are far more stringent, from a computer vision stand
point, real-time face detection is actually a far simpler process than
detecting a face in a static image. This is because unlike most of our
surrounding environment, people are continually moving. We walk
around, blink, fidget, wave our hands about, etc.
Since in real time face detection, the system is presented with a series of
frames in which to detect a face, by using spatio-temporal filtering exact
face locations can be easily identified by using a few simple rules, such
as,

1) The head is the small blob above a larger blob (the body)

2) Head motion must be reasonably slow and contiguous - heads won't
jump around erratically

6.3. FACE DETECTION PROCESS:

It is the process of identifying different parts of human faces, like the eyes,
nose, mouth, etc. This process can be achieved using MATLAB code. In this
project the author will attempt to detect faces in still images by using image
invariants. To do this it would be useful to study the grey-scale intensity
distribution of an average human face. The following 'average human face' was
constructed from a sample of 30 frontal-view human faces, of which 12 were
from females and 18 from males. A suitably scaled colormap has been used to
highlight grey-scale intensity differences.

The grey-scale differences, which are invariant across all the sample faces, are
strikingly apparent. The eye-eyebrow area seems to always contain dark intensity
(low) grey-levels while the nose, forehead and cheeks contain bright intensity (high)
grey-levels. After a great deal of experimentation, the researcher found that the
following areas of the human face were suitable for a face detection system
based on image invariants and a deformable template.
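The averaging step described above can be sketched in MATLAB as follows. The file names and the 100x100 image size are assumptions; the sample images are taken to be aligned, equally sized frontal views.

```matlab
% Build the 'average human face' from 30 aligned frontal-view samples.
sumImg = zeros(100, 100);
for k = 1:30
    img = imread(sprintf('face%02d.png', k));   % k-th sample face (hypothetical file)
    sumImg = sumImg + im2double(rgb2gray(img)); % accumulate grey-scale intensities
end
avgFace = sumImg / 30;                          % the average face
imagesc(avgFace), colorbar                      % scaled colormap highlights intensity differences
```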

6.4 FACE DETECTION ALGORITHM

CHAPTER-7
FACE RECOGNITION

Over the last few decades many techniques have been proposed for face
recognition. Many of the techniques proposed during the early stages of
computer vision cannot be considered successful, but almost all of the recent
approaches to the face recognition problem have been creditable. According to
the research by Brunelli and Poggio (1993) all approaches to human face
recognition can be divided into two strategies

(1) Geometrical features and


(2) Template matching
7.1. FACE RECOGNITION USING GEOMETRICAL FEATURES

This technique involves computation of a set of geometrical


features such as nose width and length, mouth position and chin shape,
etc. from the picture of the face we want to recognize. This set of features
is then matched with the features of known individuals. A suitable metric
such as Euclidean distance (finding the closest vector) can be used to find
the closest match. Most pioneering work in face recognition was done
using geometric features (Kanade, 1973), although Craw et al. (1987) did
relatively recent work in this area.
The advantage of using geometrical features as a basis for face
recognition is that recognition is possible even at very low resolutions
and with noisy images (images with many disorderly pixel intensities).
7.1.1. FACE RECOGNITION USING TEMPLATE MATCHING

This is similar to the template matching technique used in face detection,
except here we are not trying to classify an image as a 'face' or 'non-face'
but are trying to recognize a face.
Whole face, eyes, nose and mouth regions which could be used in a
template matching strategy. The basis of the template matching strategy
is to extract whole facial regions (matrix of pixels) and compare these
with the stored images of known individuals. Once again Euclidean
distance can be used to find the closest match. The simple technique of
comparing grey-scale intensity values for face recognition was used by
Baron (1981). However there are far more sophisticated methods of
template matching for face recognition. These involve extensive
preprocessing and transformation of the extracted grey-level intensity
values. For example, Turk and Pentland (1991a) used Principal
Component Analysis, sometimes known as the Eigenfaces approach, to
pre-process the grey-levels. An investigation of geometrical features
versus template matching for face recognition by Brunelli and Poggio
(1993) came to the conclusion that template-based techniques offer
superior recognition accuracy.
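As a sketch, the Euclidean-distance matching described above might look like this in MATLAB. The gallery here is random stand-in data rather than real face images.

```matlab
% Template matching by Euclidean distance over grey-level vectors.
% 'gallery' is stand-in data: 5 known faces, each a 100x100 image
% flattened into a 10000-element column vector.
gallery = rand(10000, 5);
probe = gallery(:, 3) + 0.01*randn(10000, 1);          % noisy copy of face 3
d = sqrt(sum((gallery - repmat(probe, 1, 5)).^2, 1));  % distance to each stored face
[~, bestMatch] = min(d);                               % index of the closest known individual
```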

7.2 FACE RECOGNITION DIFFICULTIES

1. Identify similar faces (inter-class similarity)

2. Accommodate intra-class variability due to variations in pose,
illumination, expression and so on

7.2.1 Inter-class similarity

Different persons may have very similar appearance

Face recognition and detection system is a pattern recognition approach
for personal identification purposes in addition to other biometric
approaches such as fingerprint recognition, signature, retina and so forth.

7.2.2 Intra-class variability

Faces with intra-subject variations in pose, illumination, expression,
accessories, color, occlusions, and brightness.

7.3 UNDERSTANDING EIGENFACES

Any grey-scale face image I(x,y) consisting of an NxN array of
intensity values may also be considered as a vector of dimension N^2. For
example, a typical 100x100 image used in this thesis has to be transformed
into a 10000-dimensional vector!

This vector can also be regarded as a point in 10000-dimensional
space. Therefore, all the images of subjects whose faces are to be
recognized can be regarded as points in 10000-dimensional space. Face
recognition using these images directly is doomed to failure because all human
face images are quite similar to one another, so all the associated vectors are
very close to each other in the 10000-dimensional space.
The transformation of a face from image space (I) to face space (f)
involves just a simple matrix multiplication. If the average face image is
A and U contains the (previously calculated) eigenfaces,

f = U * (I - A)

This is done to all the face images in the face database (the database
of known faces) and to the image (the face of the subject) which must be
recognized. The possible results when projecting a face into face space
are given in the following figure.
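A minimal MATLAB sketch of this projection, with random stand-in data in place of real training faces. Here the eigenfaces are stored as the columns of U, so the projection uses the transpose U'.

```matlab
% Project a face into face space, as in f = U * (I - A) above.
% X holds 20 training faces of 100x100 pixels, one per column (stand-in data).
X = rand(10000, 20);
A = mean(X, 2);                 % the average face
D = X - repmat(A, 1, 20);       % mean-centred faces
[U, S, V] = svd(D, 'econ');     % columns of U are the eigenfaces
U = U(:, 1:10);                 % keep the 10 most significant eigenfaces
I = X(:, 1);                    % a face to be recognised
f = U' * (I - A);               % its coordinates in face space
```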
7.4. PRINCIPAL COMPONENT ANALYSIS (PCA)

Principal Component Analysis (or Karhunen-Loeve expansion) is a
suitable strategy for face recognition because it identifies variability between
human faces, which may not be immediately obvious. Principal Component
Analysis (hereafter PCA) does not attempt to categorise faces using familiar
geometrical differences such as nose length or eyebrow width. Instead, a set of
human faces is analysed using PCA to determine which 'variables' account for
the variance of faces. In face recognition, these variables are called eigenfaces.

CHAPTER 8

SOFTWARE IMPLEMENTATION

The entire algorithm for face recognition is based on image processing. The
proposed system uses MATLAB as the platform on which the image processing
algorithm has been developed and tested. As the image acquisition device, a
camera is used. The camera can be the inbuilt camera of a laptop or it can be a USB
camera as well. To get the details of the hardware devices interfaced with the
computer, the imaqhwinfo command of MATLAB is used. The entire MATLAB
program for this algorithm can be divided into parts as follows.

8.1 INTERFACING MATLAB WITH ARDUINO

To interface MATLAB with Arduino, certain Arduino support libraries are downloaded
in the MATLAB IDE; this allows the Arduino to be controlled from the MATLAB
environment. Communication between the face recognition algorithm and the
Arduino board is done over the serial port.
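A minimal sketch of that serial link, mirroring the project code that follows. The port name 'COM3' is system-dependent and, like the byte sent, is only an assumption; the baud rate must match the Arduino sketch.

```matlab
% Open a serial link to the Arduino and send one coordinate byte.
arduino = serial('COM3', 'BaudRate', 9600);  % port name is system-dependent
fopen(arduino);
fprintf(arduino, '%s', char(90));            % e.g. the x-coordinate of the face centre
fclose(arduino);
delete(arduino);
```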

MODULES AND DESCRIPTION

 MATLAB
 Image Processing Toolbox
 Computer Vision Toolbox
 Image Acquisition Toolbox
 Spreadsheet Link

8.2 DESCRIPTIONS:

MATLAB:

MATLAB (matrix laboratory) is a multi-paradigm numerical computing


environment and fourth-generation programming language. A proprietary
programming language developed by MathWorks, MATLAB allows matrix
manipulations, plotting of functions and data, implementation of algorithms,
creation of user interfaces, and interfacing with programs written in other
languages, including C, C++, Java, Fortran and Python.

Image Processing Toolbox

The Image Processing Toolbox is a collection of functions that extend the


capability of the MATLAB® numeric computing environment. The toolbox
supports a wide range of image processing operations, including:

•Spatial image transformations

•Morphological operations

•Neighborhood and block operations

•Linear filtering and filter design

•Transforms

•Image analysis and enhancement

•Image registration

•Deblurring

•Region-of-interest operations

Computer Vision Toolbox:

The field is of importance for various applications such as autonomous
vehicles, navigating with the help of images captured by a mounted camera,
and high-precision measurements using images taken by calibrated cameras.
In this paper, we will present a number of numerical routines, implemented in
MATLAB, that are useful in a variety of computer vision applications. The
collection of routines will be called the Computer Vision Toolbox. One of the
main problems in Computer Vision is to calculate the 3D structure of the scene
and the motion of the camera from measurements in the images taken from
different viewpoints in the scene. This problem is called structure and motion,
referring to the fact that both the structure of the scene and the motion of the
camera are calculated from image measurements only. A number of different
sub-problems, arising from different knowledge of the intrinsic properties of the
camera, appear. Other important problems are to calculate the structure of the
3D scene given the motion of the camera, and to calculate the motion of the
camera given the structure of the scene. These problems are somewhat simpler
than the general structure and motion
problem, but are nevertheless important for navigation and obstacle avoidance.
A related problem is to calibrate a camera, i.e. calculate the focal distance, the
principal point etc.

Image Acquisition Toolbox:

The Image Acquisition Toolbox, as in figure 5-1, is a collection of
functions that extend the capability of the MATLAB® numeric computing
environment. The toolbox supports a wide range of image acquisition
operations, including:

 Acquiring images through many types of image acquisition devices, from
professional-grade frame grabbers to USB-based webcams

 Viewing a preview of the live video stream

 Triggering acquisitions (including external hardware triggers)

 Configuring callback functions that execute when certain events occur

 Bringing the image data into the MATLAB workspace
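For example, the acquisition hardware can be inspected with imaqhwinfo before the device is opened. The adaptor name and format shown are those used later in the project's program.

```matlab
% Query the installed image acquisition hardware.
info = imaqhwinfo;                % lists installed adaptors, e.g. 'winvideo'
dev = imaqhwinfo('winvideo', 1);  % details of the first camera on that adaptor
dev.SupportedFormats              % formats such as 'YUY2_320x240'
```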

8.3 Spreadsheet Link:

The Spreadsheet Link EX software Add-In integrates the Microsoft®
Excel® and MATLAB® products in a computing environment running
Microsoft® Windows®. It connects the Excel® interface to the MATLAB
workspace, enabling you to use Excel worksheet and macro programming tools
to leverage the numerical, computational, and graphical power of MATLAB.
You can use Spreadsheet Link EX functions in an Excel worksheet or macro to
exchange and synchronize data between Excel and MATLAB, without leaving
the Excel environment. With a
small number of functions to manage the link and manipulate data, the
Spreadsheet Link EX software is powerful in its simplicity. The Spreadsheet
Link EX software supports MATLAB two-dimensional numeric arrays, one-
dimensional character arrays (strings), and two-dimensional cell arrays. It does
not work with MATLAB multidimensional arrays and structures.
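As an illustration of the attendance-logging side, a hypothetical sketch: the file name, the roll list, and the use of MATLAB's own xlswrite function (rather than Spreadsheet Link EX functions, which run from the Excel side) are all assumptions.

```matlab
% Write a simple attendance sheet to an Excel file from MATLAB.
names  = {'Student A'; 'Student B'; 'Student C'};   % hypothetical roll list
status = {'Present';   'Absent';    'Present'};
sheet  = [{'Name', 'Status'}; [names, status]];
xlswrite('attendance.xls', sheet);                  % creates/overwrites attendance.xls
```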

Modifying Properties with the Property Editor

The tools that together make up Guide include:

•The Property Editor

•The Guide Control Panel

•The Callback Editor

•The Alignment Tool

8.4 SOFTWARE TOOLS AND METHODOLOGY

8.4.1 ABOUT MATLAB 

MATLAB is a software package for high performance numerical computation


and visualization. It provides an interactive environment with hundreds of built-
in functions for technical computation, graphics and animation. Best of all, it
provides easy extensibility with its own high-level programming language. The
name MATLAB stands for MATrixLABoratory. The basic building block of
MATLAB is the matrix. The fundamental data type is the array.

MATLAB's built-in functions provide excellent tools for linear algebra

computations, data analysis, signal processing, optimization, numerical
solutions of ODEs, quadrature and many other types of scientific computations.

Most of these functions use state-of-the-art algorithms. There are numerous
functions for 2-D and 3-D graphics; MATLAB even provides an external
interface to run external programs from within MATLAB. The user, however, is
not limited to the built-in functions; users can write their own functions in the
MATLAB language. Once written, these functions behave just like the built-in
functions. MATLAB's language is very easy to learn and to use.

8.4.2 MATLAB TOOLBOXES

There are several optional 'Toolboxes' available from the developers of
MATLAB. These toolboxes are collections of functions written for special
applications such as the Symbolic Computations Toolbox, Image Processing
Toolbox, Statistics Toolbox, Neural Networks Toolbox, Communications
Toolbox, Signal Processing Toolbox, Filter Design Toolbox, Fuzzy Logic
Toolbox, Wavelet Toolbox, Database Toolbox, Control System Toolbox,
Bioinformatics Toolbox, and Mapping Toolbox.

8.5 BASICS OF MATLAB

On all UNIX systems, Macs, and PCs, MATLAB works through three basic
windows. They are:

a. Command window:
This is the main window, characterized by the MATLAB command prompt
'>>'. When you launch the application program, MATLAB puts you in this
window. All commands, including those for running user-written programs,
are typed in this window at the MATLAB prompt.

b. Graphics window:

The output of all graphics commands typed in the command window
is flushed to the graphics or figure window, a separate gray window
with a (default) white background colour. The user can create as many figure
windows as the system memory will allow.

c. Edit window:

This is where you write, edit, create, and save your own programs in files
called M-files. You can use any text editor to carry out these tasks. On most
systems, such as PCs and Macs, MATLAB provides its built-in editor. On
other systems, you can invoke the edit window by typing the standard file
editing command that you normally use on your system. The command is
typed at the MATLAB prompt following the special character '!'.
After editing is completed, control is returned to MATLAB.

  On-Line Help

a. On-line documentation:

MATLAB provides on-line help for all its built-in functions and
programming language constructs. The commands lookfor, help, helpwin, and
helpdesk provide on-line help.

 b. Demo:

MATLAB has a demonstration program that shows many of its features.


The program includes a tutorial introduction that is worth trying. Type demo at
the MATLAB prompt to invoke the demonstration program, and follow the
instruction on the screen.

Input-Output

MATLAB supports interactive computation taking the input from the screen,
and flushing the output to the screen. In addition, it can read input files and
write output files. The following features hold for all forms of input-output.

 a. Data type

The fundamental data type in the MATLAB is the array. It encompasses


several distinct data objects-integers, doubles, matrices, character strings, and
cells. In most cases, however, we never have to worry about the data type or the
data object declarations. For example, there is no need to declare variables as
real or complex. When a real number is entered as the value of a variable,
MATLAB automatically sets the variable to be real.

 b. Dimensioning

Dimensioning is automatic in MATLAB. No dimensioning statements


are required for vectors or arrays. We can find the dimension of an existing
matrix or a vector with the size and length commands.

 C. Case sensitivity

MATLAB is case sensitive, i.e. it differentiates between lowercase
and uppercase letters. Thus A and a are different variables. Most MATLAB
commands and built-in function calls are typed in lowercase letters. We can turn
case sensitivity on and off with the casesen command.

d. Output display

The output of every command is displayed on the screen unless


MATLAB is directed otherwise. A semicolon at the end of a command
suppresses the screen output, except for graphics and on-line help command.

The following facilities are provided for controlling the screen output.

i. Paged output

To direct MATLAB to show one screen of output at a time, type more on
at the MATLAB prompt. Without it, MATLAB flushes the entire output at
once, without regard to the speed at which we read.

ii. Output format

Though computations inside MATLAB are performed using

double precision, the appearance of floating-point numbers on the screen is
controlled by the output format in use. There are several different screen output
formats. The following table shows the printed value of 10*pi in different
formats.

Format short 31.4159


Format short e 3.1416e+01
Format long 31.41592653589793
Format long e 3.141592653589793e+01
Format short g 31.416
Format long g 31.4159265358979
Format hex 403f6a7a2955385e
Format rat 3550/113
Format bank 31.42
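These formats can be tried directly at the prompt:

```matlab
% Reproducing the table above at the command prompt.
x = 10*pi;
format short
disp(x)        % 31.4159
format rat
disp(x)        % 3550/113
format long
disp(x)
format short   % restore the default
```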

e. Command History

MATLAB saves previously typed commands in a buffer. These


commands can be called with the up-arrow key. This helps in editing previous


commands. You can also recall a previous command by typing the first few
characters and then pressing the up-arrow key. On most UNIX systems,
MATLAB's command line editor also understands the standard emacs key
bindings.

 File Types

 MATLAB has three types of files for storing information

M-files: M-files are standard ASCII text files, with a .m extension to the file
name. There are two types of these files: script files and function files. Most
programs we write in MATLAB are saved as M-files. All built-in functions in
MATLAB are M-files, most of which reside on our computer in precompiled
format. Some built-in functions are provided with source code in readable
M-files so that they can be copied and modified.

Mat-files: Mat-files are binary data-files with a .mat extension to the file name.
Mat-files are created by MATLAB when we save data with the save command.
The data is written in a special format that only MATLAB can read. Mat-files
can be loaded into MATLAB with the load command.

Mex-files: Mex-files are MATLAB-callable Fortran and C programs, with

a .mex extension to the file name. Use of these files requires some experience
with MATLAB and a lot of patience.
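The interplay of M-files and Mat-files can be seen in a few lines (the file name is arbitrary):

```matlab
% Save a variable to a MAT-file and reload it later.
A = magic(4);              % some data
save('mydata.mat', 'A');   % writes the binary MAT-file mydata.mat
clear A                    % remove the variable from the workspace
load('mydata.mat');        % restores the variable A
```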

Platform independence

One of the best features of MATLAB is its platform-independence.


Programs written in the MATLAB language work exactly the same way on all
computers. The user interface however, varies from platform to platform. For


example, on PCs and Macs there are menu-driven commands for
opening, writing, editing, saving and printing files, whereas on UNIX machines
such as Sun workstations, these tasks are usually performed with UNIX
commands.

Images in MATLAB

The project has involved understanding data in MATLAB, so below is a


brief review of how images are handled. Indexed images are represented by two
matrices, a colormap matrix and an image matrix.

(i) The colormap is a matrix of values representing all the colours in the image.

(ii) The image matrix contains indexes corresponding to the colormap.

A colormap matrix is of size N*3, where N is the number of different colours in
the image. Each row represents the red, green and blue components for a colour.

E.g. the matrix

[r1 g1 b1; r2 g2 b2]

represents two colours, the first having components r1, g1, b1 and the second
having the components r2, g2, b2.

The wavelet toolbox only supports indexed images that have linear,
monotonic color maps. Often color images need to be pre-processed into a grey
scale image before using wavelet decomposition. The Wavelet Toolbox User’s
Guide provides some sample code to convert color images into grey scale. This
will be useful if it is needed to put any images into MATLAB.

 This chapter dealt with the introduction to the MATLAB software which we
are using for our project.

MATLAB
 Matlab is a high-performance language for technical computing.
 It integrates computation, programming and visualization in a user-
friendly environment where problems and solutions are expressed in an
easy-to-understand mathematical notation.
 Matlab is an interactive system whose basic data element is an array that
does not require dimensioning.
 This allows the user to solve many technical computing problems,
especially those with matrix and vector operations, in less time than it
would take to write a program in a scalar non-interactive language such
as C or FORTRAN.
 Matlab features a family of application-specific solutions which are
called toolboxes.
 It is very important to most users of Matlab that toolboxes allow them to
learn and apply specialized technology.
 These toolboxes are comprehensive collections of Matlab functions, so-
called M-files, that extend the Matlab environment to solve particular
classes of problems.
 Matlab is a matrix-based programming tool. Although matrices often
need not be dimensioned explicitly, the user always has to look carefully
for matrix dimensions.
 If it is not defined otherwise, the standard matrix exhibits two dimensions
n × m.
 Column vectors and row vectors are represented consistently by n × 1 and
1 × n matrices, respectively.
MATLAB OPERATIONS

 Matlab operations can be classified into the following types of operations:


 Arithmetic and logical operations,
 Mathematical functions,
 Graphical functions, and
 Input/output operations.
 In the following sections, individual elements of Matlab operations are
explained in detail.
EXPRESSIONS

 Like most other programming languages, Matlab provides mathematical
expressions, but unlike most programming languages, these expressions
involve entire matrices. The building blocks of expressions are:

 Variables
 Numbers
 Operators
 Functions
VARIABLES

 Matlab does not require any type declarations or dimension statements.


 When a new variable name is introduced, it automatically creates the
variable and allocates the appropriate amount of memory.
 If the variable already exists, Matlab changes its contents and, if
necessary, allocates new storage.
 For example
 >> books = 10
 It creates a 1-by-1 matrix named books and stores the value 10 in its
single element.
 In the expression above, >> constitutes the Matlab prompt, where the
commands can be entered.
 Variable names consist of a string, which start with a letter, followed by
any number of letters, digits, or underscores. Matlab is case sensitive; it
distinguishes between uppercase and lowercase letters. A and a are not
the same variable.
 To view the matrix assigned to any variable, simply enter the variable
name.
NUMBERS

 Matlab uses the conventional decimal notation.


 A decimal point and a leading plus or minus sign are optional. Scientific
notation uses the letter e to specify a power-of-ten scale factor.
 Imaginary numbers use either i or j as a suffix.
 Some examples of legal numbers are:
7 -55 0.0041 9.657838 6.10220e-10 7.03352e21 2i -2.71828j 2e3i 2.5+1.7j.

OPERATORS

Expressions use familiar arithmetic operators and precedence rules. Some


examples are:

 + Addition
 - Subtraction
 * Multiplication
 / Division
 ’ Complex conjugate transpose
 ( ) Brackets to specify the evaluation order.
FUNCTIONS

 Matlab provides a large number of standard elementary mathematical
functions, including sin, sqrt, exp and abs.
 Taking the square root or logarithm of a negative number does not lead to
an error; the appropriate complex result is produced automatically.
 Matlab also provides a lot of advanced mathematical functions, including
Bessel and Gamma functions. Most of these functions accept complex
arguments.
 For a list of the elementary mathematical functions, type
 >> help elfun
 Some of the functions, like sqrt and sin are built-in. They are a fixed part
of the Matlab core so they are very efficient.
 The drawback is that the computational details are not readily accessible.
Other functions, like gamma and sinh, are implemented in so called M-
files.
 You can see the code and even modify it if you want.

CODING

%%%%%%%%%%%%%%%%%%%%%%%%%%%% MATLAB CODE %%%%%%%%%%%% maina.m.txt


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

warning('off','vision:transition:usesOldCoordinates')

%warning control statement that enables you to indicate how you want MATLAB

%to act on certain warnings

clear all

clc

% fclose(instrfindall);

delete(instrfindall);

answer=1;

arduino=serial('COM3','BaudRate',9600);

fopen(arduino);

faceDetector = vision.CascadeObjectDetector();

%Detects objects using the Viola-Jones algorithm

%'Cascade' means each image window is passed through a succession of classifier stages

%vision.CascadeObjectDetector() is a function or a package to detect faces

obj =imaq.VideoDevice('winvideo', 1, 'YUY2_320x240','ROI', [1 1 320 240]);

%imaq.VideoDevice it allows MATLAB to use Video Device of the system

%It also acquires images from the Image Acquisition Device

%YUY2 is the format of the Camera supported by MATLAB

%ROI is Region Of Interest

%Get the input device using the Image Acquisition Toolbox; ROI resolution = 320x240 to improve performance

set(obj,'ReturnedColorSpace', 'rgb');

%form is set(obj,name,value);


%sets the named property to the specified value for the Object obj.
%ReturnedColorSpace is a property that specifies the colour space we want

%the toolbox to use when image data returns to the MATLAB workspace

figure('menubar','none','tag','webcam');

wait=0;

while (wait<600)

wait=wait+1;

frame=step(obj);

%STEP Acquires a single frame from image acquisition Device

%frame is the variable assigned to an image which is either RGB or

%GRAYSCALE

%Acquires a single frame from the VideoDevice System Object,obj.

bbox=step(faceDetector,frame);

wait

if(~isempty(bbox))

bbox

centx=bbox(1) + (bbox(3)/2) ;

centy=bbox(2) - (bbox(4)/2) ;

c1=(centx);

c2=(centy);

c1

c2

fprintf(arduino,'%s',char(centx));

fprintf(arduino,'%s',char(centy));

end

%BBOX=Bounding Box

%step returns a Matrix of M-by-4 where M is some Variable to bbox
%M defines bounding boxes containing the detected objects

%Each row in Matrix has 4 element Vector [x y width height] in pixels

%The objects are detected from Image Named as 'frame'

%detected objects are from face

boxInserter = vision.ShapeInserter('BorderColor','Custom',...

'CustomBorderColor',[255 0 255]);

%It inserts shapes according to matrix dimensions

%BorderColor is to specify the color of Shape by Default is Black

%Here We set it to 'Custom' so we can use 'CustomBorderColor' to specify

%the color of the border by vector representation

videoOut = step(boxInserter, frame,bbox);

%The Step function here returns an image

%Image consists of a Bounding box for the frame

%The boxInserter inserts a frame around the image

%Output image is set to variable 'VideoOut'

imshow(videoOut,'border','tight');

%imshow basically displays images

%the parameters 'border','tight' compel the image to be

%displayed without a border

f=findobj('tag','webcam');

if (isempty(f));

[hueChannel,~,~] = rgb2hsv(frame);

% Display the Hue Channel data and draw the bounding box around the face.

%%figure, imshow(hueChannel), title('Hue channel data');

rectangle('Position',bbox(1,:),'LineWidth',2,'EdgeColor',[1 1 0])
%Creates 2-D rectangle at Position of BBOX with width and Edgecolor

hold off

%Resets to default behaviour

%Clears existing graphs and resets axis properties to their Defaults

noseDetector = vision.CascadeObjectDetector('Nose');

%Detects nose properties from the video frame using Cascade package

%the properties are assigned to a variable noseDetector

faceImage = imcrop(frame,bbox);

%crops the Image 'Frame' with Bounding BOX

%%imshow(faceImage)

%Displays image

noseBBox = step(noseDetector,faceImage);

%Returns NoseBBOX Matrix

noseBBox(1:2) = noseBBox(1:2) + bbox(1:2); %shift the nose box from face-crop coordinates back to frame coordinates

videoInfo = info(obj);

ROI=get(obj,'ROI');

%returns the value of Specified property from the Obj image

VideoSize = [ROI(3) ROI(4)];

videoPlayer = vision.VideoPlayer('Position',[300 300 VideoSize+60]);

%Play video or display image with specified position

tracker = vision.HistogramBasedTracker;

initializeObject(tracker, hueChannel, bbox);

time=0;

while (time<600)

time=time+1;

% Extract the next video frame
frame = step(obj);

time

% RGB -> HSV

[hueChannel,~,~] = rgb2hsv(frame);

% Track using the Hue channel data

bbox = step(tracker, hueChannel);

% Insert a bounding box around the object being tracked

videoOut = step(boxInserter, frame, bbox);

%Insert text coordinates

% Display the annotated video frame using the video player object

step(videoPlayer, videoOut);

pause (.2)

end

time

% Release resources

release(obj);

release(videoPlayer);

%release(vidobj);

close(gcf)

break

end

pause(0.05)

end

fclose(arduino);

%release(vidobj);


release(obj);
%release(videoPlayer);
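The listing above hands tracking over to vision.HistogramBasedTracker, which follows the face by matching the hue histogram of the initial face box in later frames. As an illustrative aside, not part of the report and written in Python/NumPy with a synthetic image, the histogram back-projection at the heart of such trackers can be sketched as:

```python
import numpy as np

def backproject(hue, bbox, bins=16):
    """Score each pixel by how common its hue is inside the face box.

    hue: 2-D array of hue values in [0, 1); bbox: (x, y, w, h).
    Returns a map in [0, 1]; bright regions look like the tracked face.
    """
    x, y, w, h = bbox
    roi = hue[y:y+h, x:x+w]
    hist, _ = np.histogram(roi, bins=bins, range=(0.0, 1.0))
    hist = hist / hist.max()                      # normalise scores to [0, 1]
    idx = np.clip((hue * bins).astype(int), 0, bins - 1)
    return hist[idx]                              # look up each pixel's score

# Synthetic frame: background hue 0.9, a 'face' patch of hue 0.1 at (3, 3)
hue = np.full((10, 10), 0.9)
hue[3:7, 3:7] = 0.1
score = backproject(hue, (3, 3, 4, 4))
# Face pixels score 1.0, background pixels 0.0
```

A tracker such as CAMShift then shifts the search window toward the bright mass of this score map on every frame, which is what keeps the bounding box locked onto the face as it moves.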

ARDUINO

#include <Servo.h>

Servo servoVer; //Vertical Servo
Servo servoHor; //Horizontal Servo

int x;
int y;
int prevX;
int prevY;

void setup()
{
  Serial.begin(9600);
  servoVer.attach(5); //Attach Vertical Servo to Pin 5
  servoHor.attach(6); //Attach Horizontal Servo to Pin 6
  servoVer.write(90); //Start both servos at mid position
  servoHor.write(90);
}

void Pos()
{
  if(prevX != x || prevY != y)
  {
    //Map the image coordinates onto the usable servo angle ranges
    int servoX = map(x, 600, 0, 70, 179);
    int servoY = map(y, 450, 0, 179, 95);

    //Clamp the angles so the pan/tilt bracket never over-travels
    servoX = min(servoX, 179);
    servoX = max(servoX, 70);
    servoY = min(servoY, 179);
    servoY = max(servoY, 95);

    servoHor.write(servoX);
    servoVer.write(servoY);

    prevX = x; //Remember the last position so unchanged
    prevY = y; //coordinates do not re-drive the servos
  }
}

void loop()
{
  if(Serial.available() > 0)
  {
    if(Serial.read() == 'X')
    {
      x = Serial.parseInt();
      if(Serial.read() == 'Y')
      {
        y = Serial.parseInt();
        Pos();
      }
    }
    //Discard any bytes left over from this command
    while(Serial.available() > 0)
    {
      Serial.read();
    }
  }
}
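The sketch reads coordinate pairs framed as the letter 'X', an integer, the letter 'Y' and another integer. As an illustrative aside (written in Python rather than the report's MATLAB; the helper name is invented), the host side of this framing, deriving the face-box centre and building the command string, can be sketched as:

```python
def frame_command(bbox):
    """Build the 'X<x>Y<y>' command the Arduino sketch parses.

    bbox is a detector bounding box in MATLAB's [x, y, width, height]
    convention; the centre is the corner offset by half the size.
    """
    x, y, w, h = bbox
    cent_x = x + w // 2   # horizontal centre of the face box
    cent_y = y + h // 2   # vertical centre (y grows downward in images)
    return "X%dY%d" % (cent_x, cent_y)

# Example: a 200x100 face box whose top-left corner is at (100, 50)
print(frame_command([100, 50, 200, 100]))  # prints X200Y100
```

Sending the two letters before each integer lets Serial.parseInt on the Arduino resynchronise on every pair even if a byte is dropped on the wire.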

CONCLUSION:
In this project we have implemented an attendance system with which a
lecturer or teaching assistant can record students' attendance for a lecture,
section or laboratory. It saves time and effort, especially in lectures with a
large number of students. The automated attendance system has been envisioned
to reduce the drawbacks of the traditional (manual) system, and it demonstrates
the use of image processing techniques in the classroom. Beyond recording
attendance, such a system can also improve the goodwill of an institution. The
main aim of this prototype is to detect a face, track it, match it against the
stored Eigenfaces and accordingly set a digital pin of the Arduino board HIGH
or LOW. Using MATLAB, a face recognition algorithm has been developed with the
PCA technique: the Eigenfaces are stored first, a snapshot of the user's face
is taken in real time and matched against the stored faces, and the face
recognition is interfaced with the Arduino over serial communication.
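The PCA matching pipeline summarised above — store Eigenfaces, project a live snapshot into face space, pick the nearest stored face — can be outlined as follows. This is only an illustrative sketch, written in Python/NumPy rather than the report's MATLAB, with invented function names and tiny synthetic 16-pixel "images" standing in for the stored face photographs:

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_faces, n_pixels) matrix of flattened training images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Principal components of the face set via SVD; rows of vt span face space
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:k]                 # keep the top-k Eigenfaces
    weights = centered @ eigenfaces.T   # each stored face projected into face space
    return mean, eigenfaces, weights

def recognize(face, mean, eigenfaces, weights):
    """Project a probe face and return the index of the nearest stored face."""
    w = (face - mean) @ eigenfaces.T
    dists = np.linalg.norm(weights - w, axis=1)
    return int(np.argmin(dists))

# Synthetic demo: three 'face images' of 16 pixels each
rng = np.random.default_rng(0)
faces = rng.normal(size=(3, 16))
mean, eig, wts = train_eigenfaces(faces, k=2)
# A noisy live snapshot of face 1 should still match stored face 1
probe = faces[1] + 0.01 * rng.normal(size=16)
print(recognize(probe, mean, eig, wts))  # prints 1
```

In the prototype, the returned index would identify the student whose attendance is marked, and a mismatch beyond a distance threshold would leave the Arduino pin LOW.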

FUTURE ENHANCEMENT:

The future work is to improve the recognition rate of the system when the
faces of the students are half covered or only partially visible.


REFERENCES
[1] Panth Shah and Tithi Vyas, “Interfacing of MATLAB with Arduino for Object
Detection Algorithm Implementation using Serial Communication”, International
Journal of Engineering Research & Technology (IJERT), 2014.

[2] Chunming Li, Yanhua Diao and Hongtao Ma, “A Statistical PCA Method for
Face Recognition”, IEEE, 2009.

[3] V. Subburaman and S. Marcel, “Fast Bounding Box Estimation Based Face
Detection”, Workshop on Face Detection of the European Conference on Computer
Vision (ECCV), 2010.

[4] Raquib Buksh, Soumyajit Routh, Parthib Mitra, Subhajit Banik, Abhishek
Mallik and Sauvik Das Gupta, “Implementation of MATLAB based Object Detection
Technique on Arduino Board and iROBOT CREATE”, IJSRP, Vol. 4, Issue 1,
Jan 2014, ISSN: 2250-3153.

[5] Nikhil Sawake, “Intelligent Robotic Arm”, submitted to Innovation Cell,
IIT Bombay, July 2013.

[6] P. Jenifer Martina, P. Nagarajan and P. Karthikeyan, “Hand Gesture
Recognition Based Real-time Command System”, International Journal of
Computer Science and Mobile Computing (IJCSMC), 2013.

[7] Paul Viola and Michael J. Jones, “Rapid Object Detection Using a Boosted
Cascade of Simple Features”, IEEE CVPR, Vol. 1, No. 2, pp. 511-518, Dec. 2001.

[8] Gary Bradski, Adrian Kaehler and Vadim Pisarevsky, “Learning-Based
Computer Vision with Intel's Open Source Computer Vision Library”, Intel
Technology Journal, Vol. 9, Issue 2, pp. 119-131, 19 May 2005.
