ECE DEPARTMENT
CHITRA M
ANISHA T
ELANGO
TABLE OF CONTENTS
ABSTRACT
LIST OF FIGURES
1 INTRODUCTION
1.1 Introduction
1.2 Methodology
2 LITERATURE SURVEY
3 DIGITAL IMAGE PROCESSING
System Architecture
Image Enhancement
Image Restoration
Image Analysis
Image Compression
Image Synthesis
Applications
4 ARDUINO
4.1 Description
4.2 Features
4.3 Applications
5 PAN/TILT SERVO
5.1 Servomotor
5.2 Features
5.3 Description
6 FACE DETECTION
7 FACE RECOGNITION
8 SOFTWARE IMPLEMENTATION
9 PROGRAM
CONCLUSION
FUTURE ENHANCEMENT
REFERENCES
CHAPTER-1
INTRODUCTION
1.1 INTRODUCTION:
To verify the student attendance record, the staff have to maintain it consistently. Broadly, there are two kinds of student attendance framework: manual and automated. In the manual system, the staff may experience difficulty in both approving and maintaining each student's record, whereas an Automated Attendance System can decrease the managerial work of the staff. In particular, the proposed system uses the students' facial images captured at the time he/she is entering the classroom. Generally, there are two known methodologies to deal with human face recognition: the feature-based methodology and the brightness-based methodology.
The feature-based methodology utilizes key-point features present on the face, called landmarks, for example the eyes, nose, mouth, edges or other unique attributes. In this way, only some part of the previously extracted picture is covered during the calculation process. The brightness-based methodology, by contrast, must consider the overall picture, which makes it computationally more demanding.
There are different steps that are carried out during the process of this face recognition framework, but the essential ones are face detection and face recognition. Firstly, to mark the attendance, images of the students' faces are required. This image can be captured from the camera, positioned so that the whole classroom is visible, and is treated as the input to the system. For efficient recognition, the image quality is first enhanced. After image quality upgrade, the image is passed on to face detection. The face detection process is followed by the face recognition process. There are different strategies available for face recognition, such as the Eigenface and the PCA and LDA hybrid algorithms.
In the Eigenface approach, once faces are identified, they are cropped from the picture. With the assistance of the feature extractor, different face features are extracted. Using these faces as Eigen features, the student is recognized, and by matching against the face database, the attendance is marked. Developing the face database is required for this comparison.
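To make the Eigenface comparison step concrete, here is a minimal illustrative sketch in Python with NumPy (not the project's MATLAB code; the tiny 4×4 "face images" and the two-component PCA are stand-in assumptions):

```python
import numpy as np

def eigenface_model(train_faces, n_components=2):
    """Build an Eigenface model: mean face plus top principal components."""
    X = np.array([f.ravel() for f in train_faces], dtype=float)
    mean = X.mean(axis=0)
    # PCA via SVD of the mean-centred data
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    components = vt[:n_components]
    weights = (X - mean) @ components.T  # each known face as Eigen weights
    return mean, components, weights

def recognize(face, mean, components, weights):
    """Project a new face and return the index of the closest known face."""
    w = (face.ravel().astype(float) - mean) @ components.T
    dists = np.linalg.norm(weights - w, axis=1)
    return int(np.argmin(dists))

# Hypothetical 4x4 "face images" for two students
rng = np.random.default_rng(0)
student_a = rng.uniform(0, 255, (4, 4))
student_b = rng.uniform(0, 255, (4, 4))
mean, comps, wts = eigenface_model([student_a, student_b])

# A slightly noisy capture of student A should match index 0
probe = student_a + rng.normal(0, 1, (4, 4))
print(recognize(probe, mean, comps, wts))
```

In a real system the gallery would hold many cropped, normalized face images per student, but the projection-and-nearest-neighbour structure is the same.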
1.2 METHODOLOGY
In this project we have proposed an algorithm for face recognition using image processing, together with manipulation of the output pin state of the Arduino board driven by several image processing algorithms. Using the theory of image acquisition and the fundamentals of digital image processing, the face of a user is detected in real time. By using face recognition and serial data communication, the project develops a real-time computer vision system for face detection and tracking. After tracking the face, the attendance is marked.
The main application of an attendance system is seen in teaching institutions, where the attendance of students has to be regularly monitored on a daily basis. The existing method is insecure and time-consuming for recording attendance. Current attendance systems use a fingerprint-pattern algorithm and a biometric device for marking the attendance in the database; sometimes the fingerprint is not read properly, or the device fails.
DISADVANTAGES:
ADVANTAGES:
BLOCK DIAGRAM:
CIRCUIT DIAGRAM
Firstly, the MATLAB code detects a face in every frame of the live video stream and inserts a bounding box around the region of interest, which in this case is a face (by detecting certain Haar features present in human faces). The project code follows the Viola-Jones algorithm for face detection. The set of frames with bounding boxes makes up the live video with a bounding box added around the face. While adding a bounding box, we also calculate the coordinates of the centroid of the bounding box. These coordinates are sent as a string from MATLAB to the Arduino UNO microcontroller, where they are processed according to the code written in the Arduino IDE to drive the motors. During processing, the Arduino reads the positions of the pan and tilt servo motors (attached as shown in the project image). The Arduino then checks whether the centroid coordinates lie in the centre region of the screen. We are trying to move the camera in such a way that the centroid lies at the center of the frame. For this reason the frame is divided into left and right halves and also top and bottom halves. If the centroid falls in the left half, the camera is panned right; if it falls in the right half, the camera is panned left; and the same applies to the top and bottom halves for tilting.
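The pan/tilt decision described above can be sketched as a small routine (an illustrative Python sketch of the same logic, not the Arduino code itself; the frame size, the dead-zone width, and the tilt direction mapping are assumed values):

```python
FRAME_W, FRAME_H = 640, 480
DEAD_ZONE = 40  # assumed half-width of the central "no move" region

def pan_tilt_step(cx, cy):
    """Return (pan, tilt) commands that push the centroid toward the centre."""
    pan = tilt = "hold"
    if cx < FRAME_W / 2 - DEAD_ZONE:
        pan = "right"   # centroid in the left half -> pan right
    elif cx > FRAME_W / 2 + DEAD_ZONE:
        pan = "left"    # centroid in the right half -> pan left
    if cy < FRAME_H / 2 - DEAD_ZONE:
        tilt = "up"     # centroid in the top half -> tilt up (assumed mapping)
    elif cy > FRAME_H / 2 + DEAD_ZONE:
        tilt = "down"
    return pan, tilt

print(pan_tilt_step(100, 240))  # face at the far left, vertically centred
```

The dead zone keeps the servos from jittering when the face is already near the centre of the frame.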
REQUIREMENTS SPECIFICATION
HARDWARE REQUIREMENTS:
SOFTWARE CONFIGURATION:
CHAPTER-2
LITERATURE SURVEY
DESCRIPTION:
Computers are now smart enough to interact with humans in different ways, and this interaction should be convenient for both the human and the algorithm. Among various face recognition methods, the authors use a deep-learning-based face recognition method. This method uses a Convolutional Neural Network (CNN) to generate face embeddings, which are then used to classify the person's face. Applications such as an attendance system or building security can be developed after building the system.
Iram Baig
DESCRIPTION:
Marking attendance manually wastes a lot of time. There are many automatic methods available for this purpose, i.e. biometric attendance, but these also waste time because students have to make a queue to touch their thumb on the scanning device. This work describes an efficient algorithm that automatically marks the attendance: it captures images of the students, detects the faces in the images, compares the detected faces with the database, and marks the attendance. The paper also reviews the related work in the field of automated attendance and presents its experiments and results.
DESCRIPTION:
In this paper, a model using convolutional neural networks (CNNs) for face detection and recognition tasks is described in detail. The model is composed of several essential steps developed using today's most advanced techniques: a CNN cascade for face detection and a CNN for generating face embeddings. The primary goal of this research was applying these methods to face recognition tasks. Because CNNs achieve the best results for larger datasets, which is not the case in a production environment, the main challenge was applying these methods on smaller datasets. A new approach for image augmentation for face recognition tasks is proposed. The overall accuracy was 95.02% on a small dataset of the original face images of employees in a real-world environment.
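As a hedged illustration of what image augmentation for a small face dataset can look like (a generic sketch, not the paper's proposed method; the flip and brightness-shift choices are assumptions):

```python
import numpy as np

def augment(face):
    """Generate simple variants of one face image: a mirror and brightness shifts."""
    variants = [face, np.fliplr(face)]  # horizontal mirror
    for delta in (-20, 20):             # brightness jitter, clipped to 8-bit range
        shifted = np.clip(face.astype(int) + delta, 0, 255).astype(np.uint8)
        variants.append(shifted)
    return variants

# A tiny stand-in "face image"
face = np.arange(16, dtype=np.uint8).reshape(4, 4)
augmented = augment(face)
print(len(augmented))  # 1 original + 3 variants
```

Each original face yields several training samples, which is the point of augmentation on small datasets.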
DESCRIPTION:
It is common these days for people to have a mobile phone with an integrated camera, so a recognition system can be carried through the mobile phone. In this paper, an Android-based face recognition system is developed. The method used is PCA (Principal Component Analysis), i.e. Eigenface. System testing is done to see how fast the mobile phone is capable of processing the system, which shows that the mobile phone can run face recognition.
DESCRIPTION:
SYSTEM DESIGN:
SYSTEM ARCHITECTURE:
CHAPTER 3
DIGITAL IMAGE PROCESSING
3.1 INTRODUCTION
IMAGE:
An image refers to a 2D light-intensity function f(x, y), where (x, y) denotes spatial coordinates and the value of f at any point (x, y) is proportional to the brightness or gray level of the image at that point. A digital image is an image f(x, y) that has been discretized both in spatial coordinates and in brightness. The elements of such a digital array are called image elements or pixels.
The storage and processing requirements increase rapidly with the spatial
resolution and the number of gray levels.
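To make the definition concrete, a minimal NumPy sketch of a digital image as a discretized f(x, y) with 8-bit gray levels (the pixel values are arbitrary):

```python
import numpy as np

# A 4x4 digital image: f(x, y) sampled on a grid, quantized to 0..255
f = np.array([[  0,  64, 128, 255],
              [ 32,  96, 160, 224],
              [ 16,  80, 144, 208],
              [  8,  72, 136, 200]], dtype=np.uint8)

print(f.shape)       # spatial resolution (rows, cols)
print(f[1, 2])       # gray level of the pixel at row 1, column 2
print(int(f.max()))  # brightest gray level present
```

Doubling the spatial resolution quadruples the pixel count, and more gray levels mean more bits per pixel, which is why storage grows so quickly.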
Fig 3.1 FUNDAMENTAL STEPS IN DIGITAL IMAGE PROCESSING
Image Enhancement
Image Restoration
Image Analysis
Image Compression
Image Synthesis
3.4.1 IMAGE ENHANCEMENT
Compressed images are decompressed when displayed. Lossless compression preserves the exact data of the original image, whereas lossy compression does not exactly represent the original image but provides excellent compression ratios.
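As a toy sketch of the lossless/lossy distinction (run-length coding standing in for a lossless scheme and coarse quantization for a lossy one; neither is a method used in this project):

```python
def rle_encode(pixels):
    """Lossless run-length encoding: (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

def rle_decode(runs):
    """Exact reconstruction of the original pixel run."""
    return [v for v, n in runs for _ in range(n)]

def quantize(pixels, step=64):
    """Lossy: keep only a few gray levels; the original is not recoverable."""
    return [(p // step) * step for p in pixels]

row = [10, 10, 10, 200, 200, 55]
assert rle_decode(rle_encode(row)) == row  # lossless round trip
print(quantize(row))                       # fewer distinct levels, detail lost
```

The lossless round trip returns the row exactly; the quantized row cannot be turned back into the original.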
3.4.3 COMMUNICATION
3.4.6 DEFENSE/INTELLIGENCE
CHAPTER-4
ARDUINO
DESCRIPTION:
ARDUINO UNO
Arduino is an open-source project that created microcontroller-based kits for
building digital devices and interactive objects that can sense and control
physical devices. The project is based on microcontroller board designs,
produced by several vendors, using various microcontrollers. These systems
provide sets of digital and analog input/output (I/O) pins that can interface to
various expansion boards (termed shields) and other circuits. The boards feature
serial communication interfaces, including Universal Serial Bus (USB) on some
models, for loading programs from personal computers. For programming the
microcontrollers, the Arduino project provides an integrated development
environment (IDE) based on a programming language named Processing, which
also supports the languages C and C++.
FEATURES
Microcontroller: ATmega328P
Operating voltage: 5V
Input voltage: 7-12V
Flash memory: 32KB
SRAM: 2KB
EEPROM: 1KB
APPLICATIONS
ATmega328 IC
DESCRIPTION:
The ATmega328 is a single-chip microcontroller created by Atmel in the mega
AVR family. The Atmel 8-bit AVR RISC-based microcontroller combines
32 kB ISP flash memory with read-while-write capabilities, 1 kB EEPROM,
2 kB SRAM, 23 general purpose I/O lines, 32 general purpose
working registers, three flexible timer/counters with compare modes, internal
and external interrupts, serial programmable USART, a byte-oriented 2-wire
serial interface, SPI serial port, 6-channel 10-bit A/D converter (8-channels
in TQFP and QFN/MLF packages), programmable watchdog timer with
internal oscillator, and five software selectable power saving modes. The device
operates between 1.8-5.5 volts. The device achieves throughput approaching
1 MIPS per MHz.
ATmega328P IC
PIN DIAGRAM
ARCHITECTURE
ARCHITECTURE DIAGRAM
PIN DISCRIPTION
GND Ground.
Port B (PB7:0) XTAL1/XTAL2/TOSC1/TOSC2
Port B is an 8-bit bidirectional I/O port with internal pull-up resistors; PB7:6 can also serve as the crystal oscillator or timer oscillator pins.
Port C (PC5:0)
Port C is a 7-bit bidirectional I/O port with internal pull-up resistors.
PC6/RESET
If the RSTDISBL fuse is unprogrammed, PC6 is used as a reset input. A low level on this pin for longer than the minimum pulse length will generate a Reset, even if the clock is not running. The minimum pulse length is given in Table 28-3 on page 308 of the datasheet; shorter pulses are not guaranteed to generate a Reset.
Port D (PD7:0)
Port D is an 8-bit bidirectional I/O port with internal pull-up resistors. The Port D pins are tri-stated when a reset condition becomes active, even if the clock is not running.
AVCC
AVCC is the supply voltage pin for the A/D Converter, PC3:0, and ADC7:6. It
should be externally connected to VCC, even if the ADC is not used. If the
ADC is used, it should be connected to VCC through a low-pass filter. Note that
PC6..4 use digital supply voltage, VCC
AREF
AREF is the analog reference pin for the A/D Converter.
ADC7:6 (TQFP and QFN/MLF packages only)
In the TQFP and QFN/MLF packages, ADC7:6 serve as analog inputs to the A/D converter. These pins are powered from the analog supply and serve as 10-bit ADC channels.
CHAPTER-5
PAN/TILT SERVO
5.1 SERVOMOTOR
A servomechanism consists of three parts:
1. Controlled device
2. Output sensor
3. Feedback system
We can get a very high-torque servo motor in a small, lightweight package.
5.2 FEATURES:
- Two-degrees-of-freedom pan/tilt head with high-torque servos; a cost-effective small camera head
- Allows movement in two degrees of freedom, in the horizontal and vertical directions
5.3 DESCRIPTION:
Coreless motor
3-pole wire motor
CHAPTER-6
FACE DETECTION
The problem of face recognition is all about face detection. This is a fact
that seems quite bizarre to new researchers in this area. However, before face
recognition is possible, one must be able to reliably find a face and its
landmarks. This is essentially a segmentation problem and in practical systems,
most of the effort goes into solving this task. In fact, the actual recognition based on features extracted from these facial landmarks is only a minor last step.
There are two types of face detection problems:
1) the head is the small blob above a larger blob (the body)
6.3. FACE DETECTION PROCESS:
The grey-scale differences, which are invariant across all the sample faces, are strikingly apparent. The eye-eyebrow area always seems to contain dark (low-intensity) gray levels, while the nose, forehead and cheeks contain bright (high-intensity) grey levels. After a great deal of experimentation, the researcher found that the following areas of the human face were suitable for a face detection system based on image invariants and a deformable template.
6.4 FACE DETECTION ALGORITHM
CHAPTER-7
FACE RECOGNITION
Over the last few decades many techniques have been proposed for face
recognition. Many of the techniques proposed during the early stages of
computer vision cannot be considered successful, but almost all of the recent
approaches to the face recognition problem have been creditable. According to
the research by Brunelli and Poggio (1993) all approaches to human face
recognition can be divided into two strategies: geometrical features and template matching.
This is similar to the template-matching technique used in face detection, except that here we are not trying to classify an image as 'face' or 'non-face' but are trying to recognize a face.
The whole face, eyes, nose and mouth are regions that can be used in a template-matching strategy. The basis of the template-matching strategy is to extract whole facial regions (matrices of pixels) and compare these with the stored images of known individuals. Once again, Euclidean distance can be used to find the closest match. The simple technique of comparing grey-scale intensity values for face recognition was used by Baron (1981). However, there are far more sophisticated methods of template matching for face recognition, involving extensive preprocessing and transformation of the extracted grey-level intensity values. For example, Turk and Pentland (1991a) used Principal Component Analysis, sometimes known as the Eigenfaces approach, to pre-process the gray levels. An investigation of geometrical features versus template matching for face recognition by Brunelli and Poggio (1993) came to the conclusion that template-based techniques offer superior recognition accuracy.
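The template-matching comparison with Euclidean distance can be sketched as follows (illustrative Python; the tiny gray-level patches and names are made-up stand-ins for stored face regions):

```python
import numpy as np

def closest_match(probe, gallery):
    """Return the name of the stored template nearest in Euclidean distance."""
    best_name, best_dist = None, float("inf")
    for name, template in gallery.items():
        d = np.linalg.norm(probe.astype(float) - template.astype(float))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name

# Hypothetical 2x2 gray-level "face regions" for two known individuals
gallery = {
    "alice": np.array([[200, 180], [90, 95]], dtype=np.uint8),
    "bob":   np.array([[40, 60], [70, 30]], dtype=np.uint8),
}
probe = np.array([[198, 182], [88, 97]], dtype=np.uint8)
print(closest_match(probe, gallery))  # nearest stored face
```

Real systems compare much larger regions and normalize for lighting and alignment first, but the nearest-template idea is the same.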
A face recognition and detection system is a pattern recognition approach for personal identification purposes, in addition to other biometric approaches such as fingerprint recognition, signature, retina and so forth.
CHAPTER 8
SOFTWARE IMPLEMENTATION
The entire algorithm for face recognition is based on image processing. The proposed system uses MATLAB as the platform on which the image processing algorithm has been developed and tested. As the image acquisition device, a camera is used; this can be the inbuilt camera of a laptop or a USB camera. To get the details of the hardware devices interfaced with the computer, the imaqhwinfo command of MATLAB is used. The entire MATLAB program for this algorithm can be divided into the following parts.
MODULES AND DESCRIPTION
MATLAB
Image Processing Toolbox
Computer Vision Toolbox
Image Acquisition Toolbox
Spreadsheet Link
8.2 DESCRIPTIONS:
MATLAB:
• Morphological operations
• Linear filtering and filter design
• Transforms
• Image registration
• Deblurring
problem, but are nevertheless important for navigation and obstacle avoidance.
A related problem is to calibrate a camera, i.e. to calculate the focal distance, the principal point, etc.
Configuring callback functions that execute when certain events occur
environment. With a small number of functions to manage the link and manipulate data, the Spreadsheet Link EX software is powerful in its simplicity. The Spreadsheet Link EX software supports MATLAB two-dimensional numeric arrays, one-dimensional character arrays (strings), and two-dimensional cell arrays. It does not work with MATLAB multidimensional arrays and structures.
Most of these functions use state-of-the-art algorithms. There are numerous functions for 2-D and 3-D graphics, and MATLAB even provides an external interface to run external programs from within MATLAB. The user, however, is not limited to the built-in functions; he can write his own functions in the MATLAB language. Once written, these functions behave just like the built-in functions. MATLAB's language is very easy to learn and to use.
There are several optional 'Toolboxes' available from the developers of MATLAB. These toolboxes are collections of functions written for special applications such as the Symbolic Computation Toolbox, Image Processing Toolbox, Statistics Toolbox, Neural Networks Toolbox, Communications Toolbox, Signal Processing Toolbox, Filter Design Toolbox, Fuzzy Logic Toolbox, Wavelet Toolbox, Database Toolbox, Control System Toolbox, Bioinformatics Toolbox, and Mapping Toolbox.
On all UNIX systems, Macs, and PC, MATLAB works through three basic
windows. They are:
a. Command window:
This is the main window, characterized by the MATLAB command prompt '>>'. When you launch the application program, MATLAB puts you in this window. All commands, including those for running user-written programs, are typed in this window at the MATLAB prompt.
b. Graphics window: The output of all graphics commands typed in the command window is flushed to the graphics or figure window, a separate gray window with a (default) white background colour. The user can create as many figure windows as the system memory will allow.
c. Edit window:
This is where you write, edit, create, and save your own programs in files called M-files. You can use any text editor to carry out these tasks. On most systems, such as PCs and Macs, MATLAB provides its built-in editor. On other systems, you can invoke the edit window by typing the standard file-editing command that you normally use on your system; the command is typed at the MATLAB prompt following the special '!' character. After editing is completed, control is returned to MATLAB.
On-Line Help
a. On-line documentation:
MATLAB provides on-line help for all its built-in functions and programming-language constructs. The commands lookfor, help, helpwin, and helpdesk provide on-line help.
b. Demo:
Input-Output
MATLAB supports interactive computation taking the input from the screen,
and flushing the output to the screen. In addition, it can read input files and
write output files. The following features hold for all forms of input-output.
b. Dimensioning
d. Output display
The following facilities are provided for controlling the screen output.
i. Paged output
To direct MATLAB to show one screen of output at a time, type more on at the MATLAB prompt. Without it, MATLAB flushes the entire output at once, without regard to the speed at which we read.
e. Command History
commands. You can also recall a previous command by typing its first characters and then pressing the up-arrow key. On most Unix systems, MATLAB's command-line editor also understands the standard Emacs key bindings.
File Types
M-files: M-files are standard ASCII text files, with a .m extension to the file
name. There are two types of these files: script files and function files. Most
programs we write in MATLAB are saved as M-files. All built-in functions in
MATLAB are M-files, most of which reside on our computer in precompiled format. Some built-in functions are provided with source code in readable M-files so that they can be copied and modified.
Mat-files: Mat-files are binary data-files with a .mat extension to the file name.
Mat-files are created by MATLAB when we save data with the save command.
The data is written in a special format that only MATLAB can read. Mat-files
can be loaded into MATLAB with the load command.
Platform independence
example, on PCs and Macs there are menu-driven commands for opening, writing, editing, saving and printing files, whereas on Unix machines such as Sun workstations these tasks are usually performed with Unix commands.
Images in MATLAB
(i) The color map is a matrix of values representing all the colours in the image.
(ii) The image matrix contains indexes corresponding to the colour map.
A color map matrix is of size N×3, where N is the number of different colors in the image. Each row represents the red, green, and blue components of one colour; a map of two colours, for example, has a first row with components r1, g1, b1 and a second row with components r2, g2, b2.
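A minimal sketch of an indexed image and its N×3 color map (the colours and indices are arbitrary examples):

```python
import numpy as np

# Color map: each row is (R, G, B) for one colour; N = 2 colours here
cmap = np.array([[1.0, 0.0, 0.0],   # colour 0: red   (r1, g1, b1)
                 [0.0, 0.0, 1.0]])  # colour 1: blue  (r2, g2, b2)

# Indexed image: each pixel stores a row index into the colour map
indexed = np.array([[0, 1],
                    [1, 0]])

rgb = cmap[indexed]  # expand indices into a 2x2x3 true-colour image
print(rgb.shape)
print(rgb[0, 1])     # pixel (0,1) uses colour 1 -> blue
```

Expanding the indices through the map is exactly how an indexed image is turned into a displayable RGB image.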
The wavelet toolbox only supports indexed images that have linear,
monotonic color maps. Often color images need to be pre-processed into a grey
scale image before using wavelet decomposition. The Wavelet Toolbox User’s
Guide provides some sample code to convert color images into grey scale. This
will be useful if it is needed to put any images into MATLAB.
The 2-D wavelet analysis, the decomposition of an image into approximations and details, and the properties of the different types of wavelets we are using for our project will be discussed in the next chapter.
MATLAB
Matlab is a high-performance language for technical computing. It integrates computation, programming and visualization in a user-friendly environment where problems and solutions are expressed in an easy-to-understand mathematical notation.
Matlab is an interactive system whose basic data element is an array that does not require dimensioning. This allows the user to solve many technical computing problems, especially those with matrix and vector operations, in less time than it would take to write a program in a scalar non-interactive language such as C or FORTRAN.
Matlab features a family of application-specific solutions called toolboxes. These toolboxes, which allow users to learn and apply specialized technology, are comprehensive collections of Matlab functions, so-called M-files, that extend the Matlab environment to solve particular classes of problems.
Matlab is a matrix-based programming tool. Although matrices often need not be dimensioned explicitly, the user always has to look carefully at matrix dimensions. If it is not defined otherwise, the standard matrix has two dimensions, n × m. Column vectors and row vectors are represented consistently by n × 1 and 1 × n matrices, respectively.
MATLAB OPERATIONS
Variables
Numbers
Operators
Functions
VARIABLES
OPERATORS
+ Addition
- Subtraction
* Multiplication
/ Division
' Complex conjugate transpose
( ) Brackets to specify the evaluation order
FUNCTIONS
CODING
warning('off','vision:transition:usesOldCoordinates')
clear all
clc
% fclose(instrfindall);
delete(instrfindall);
answer=1;
arduino=serial('COM3','BaudRate',9600);
fopen(arduino);
faceDetector = vision.CascadeObjectDetector();
%Get the input device using image acquisition toolbox, resolution = 640x480 to improve performance
obj = videoinput('winvideo', 1); % adapter name and device ID may differ per machine
set(obj,'ReturnedColorSpace', 'rgb');
%form is set(obj,name,value);
%sets the named property to the specified value for the Object obj.
%ReturnedColorSpace is a property that specifies the Color Space we want to
figure('menubar','none','tag','webcam');
wait=0;
while (wait<600)
wait=wait+1;
frame=step(obj);
%GRAYSCALE
bbox=step(faceDetector,frame);
wait
if(~isempty(bbox))
bbox
centx=bbox(1) + (bbox(3)/2) ;
centy=bbox(2) + (bbox(4)/2) ; % bbox(2) is the top edge, so add half the height for the centroid
c1=(centx);
c2=(centy);
c1
c2
fprintf(arduino,'%s',char(centx));
fprintf(arduino,'%s',char(centy));
end
%BBOX=Bounding Box
%step returns a Matrix of M-by-4 where M is some Variable to bbox
%M defines bounding boxes containing the detected objects
boxInserter = vision.ShapeInserter('BorderColor','Custom',...
'CustomBorderColor',[255 0 255]);
videoOut = step(boxInserter, frame, int32(bbox)); % draw the detected box on the frame
imshow(videoOut,'border','tight');
f=findobj('tag','webcam');
if (isempty(f));
[hueChannel,~,~] = rgb2hsv(frame);
% Display the Hue Channel data and draw the bounding box around the face.
rectangle('Position',bbox(1,:),'LineWidth',2,'EdgeColor',[1 1 0])
%Creates 2-D rectangle at Position of BBOX with width and Edgecolor
hold off
noseDetector = vision.CascadeObjectDetector('Nose');
%Detects nose properties from the video frame using Cascade package
faceImage = imcrop(frame,bbox);
%%imshow(faceImage)
%Displays image
noseBBox = step(noseDetector,faceImage);
videoInfo = info(obj);
ROI=get(obj,'ROI');
videoPlayer = vision.VideoPlayer; % player used below to display the annotated frames
tracker = vision.HistogramBasedTracker;
time=0;
while (time<600)
time=time+1;
% Extract the next video frame
frame = step(obj);
time
[hueChannel,~,~] = rgb2hsv(frame);
% Display the annotated video frame using the video player object
step(videoPlayer, videoOut);
pause (.2)
end
time
% Release resources
release(obj);
release(videoPlayer);
%release(vidobj);
close(gcf)
break
end
pause(0.05)
end
fclose(arduino);
%release(vidobj);
release(obj);
%release(videoPlayer);
ARDUINO
#include <Servo.h>

Servo servoVer; // tilt servo
Servo servoHor; // pan servo
int x;
int y;
int prevX;
int prevY;

void setup()
{
  Serial.begin(9600);
  servoVer.attach(5); // servo pins are assumptions; use the pins wired on the board
  servoHor.attach(6);
  servoVer.write(90);
  servoHor.write(90);
}

void Pos()
{
  if(prevX != x || prevY != y)
  {
    // Map 640x480 frame coordinates to 0-180 degree servo angles
    int servoX = constrain(map(x, 0, 640, 180, 0), 0, 180);
    int servoY = constrain(map(y, 0, 480, 180, 0), 0, 180);
    servoHor.write(servoX);
    servoVer.write(servoY);
    prevX = x;
    prevY = y;
  }
}

void loop()
{
  if(Serial.available() > 0)
  {
    if(Serial.read() == 'X')
    {
      x = Serial.parseInt();
      if(Serial.read() == 'Y')
      {
        y = Serial.parseInt();
        Pos();
      }
    }
    while(Serial.available() > 0)
      Serial.read();
  }
}
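For reference, the coordinate message the Arduino code parses could be formed and checked as below (an illustrative Python sketch; the 'X<int>Y<int>' framing is an assumption made to match Serial.parseInt-based parsing, and in the real project the data is sent from MATLAB over the serial port):

```python
def frame_coords(x, y):
    """Pack centroid coordinates into the 'X<int>Y<int>' framing."""
    return f"X{x}Y{y}"

def parse_coords(msg):
    """Recover (x, y) from an 'X<int>Y<int>' message."""
    x_part, y_part = msg[1:].split("Y")
    return int(x_part), int(y_part)

msg = frame_coords(320, 240)
print(msg)                # X320Y240
print(parse_coords(msg))  # (320, 240)
```

A fixed framing like this lets the microcontroller resynchronize on the 'X' marker even if bytes are dropped.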
CONCLUSION:
In this project we have implemented an attendance system for a lecture, section or laboratory by which a lecturer or teaching assistant can record students' attendance. It saves time and effort, especially in a lecture with a huge number of students. The Automated Attendance System has been envisioned for the purpose of reducing the drawbacks of the traditional (manual) system. This attendance system demonstrates the use of image processing techniques in the classroom. The system can not merely help with attendance but can also improve the goodwill of an institution. The main aim of this prototype is to detect a face, track it, match it with the stored Eigenfaces, and accordingly set a digital pin of the Arduino board HIGH or LOW. Using MATLAB, the face recognition algorithm has been developed with the PCA technique. The Eigenfaces are stored first, and then a snapshot of the user's face is taken in real time. The user's face is matched against the stored faces, and the face recognition is interfaced with the Arduino using serial communication.
FUTURE ENHANCEMENT:
The future work is to improve the recognition rate of our system when the
faces of the students are half covered or when they are partially visible.
REFERENCES
[1] Panth Shah and Tithi Vyas, "Interfacing of MATLAB with Arduino for Object Detection Algorithm Implementation using Serial Communication", International Journal of Engineering Research & Technology (IJERT), 2014.
[2] Chunming Li, Yanhua Diao and Hongtao Ma, "A Statistical PCA Method for Face Recognition", IEEE, 2009.
[3] V. Subburaman and S. Marcel, "Fast Bounding Box Estimation Based Face Detection", Workshop on Face Detection of the European Conference on Computer Vision (ECCV), 2010.
[4] Raquib Buksh, Soumyajit Routh, Parthib Mitra, Subhajit Banik, Abhishek Mallik and Sauvik Das Gupta, "Implementation of MATLAB based Object Detection Technique on Arduino Board and iROBOT CREATE", IJSRP, Vol. 4, Issue 1, Jan 2014, ISSN: 2250-3153.
[7] Paul Viola and Michael J. Jones, "Rapid Object Detection Using a Boosted Cascade of Simple Features", IEEE CVPR, Vol. 1, pp. 511-518, Dec. 2001.
[8] Gary Bradski, Adrian Kaehler and Vadim Pisarevsky, "Learning-Based Computer Vision with Intel's Open Source Computer Vision Library", Intel Technology Journal, Vol. 9, Issue 2, pp. 119-131, 19 May 2005.