
CHAPTER THREE

3. RESEARCH METHODOLOGY
3.1. System Architecture

The system architecture will be designed for the recognition of isolated Amharic Sign Language in different backgrounds. An appropriate architecture will be selected, consisting of image acquisition, image processing, color segmentation, skin detection, image segmentation, image filtering, and classification, with the goal of improving recognition accuracy.

An efficient and accurate technique for Amharic Sign Language recognition is presented in this chapter. The proposed technique undergoes three stages to recognize isolated Amharic Sign Language in different backgrounds.

The first stage is pre-processing, where the sample images are processed to make them more suitable for further processing.

The second stage is feature extraction, which extracts the required feature vectors from the output of the first stage. Features such as solidity, eccentricity, perimeter, convex area, major axis length, minor axis length, and orientation are used to describe the shape.
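
A minimal sketch of this feature computation follows, assuming a binary hand mask is already available from pre-processing; the function name shape_features and treating the largest contour as the hand are our assumptions, not values fixed by the proposal.

```python
# A minimal sketch of the shape-feature stage, assuming a binary hand
# mask is available; shape_features() is a hypothetical helper and
# treating the largest contour as the hand is our assumption.
import cv2
import numpy as np

def shape_features(binary_mask):
    contours, _ = cv2.findContours(binary_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)   # hand = largest blob

    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    hull = cv2.convexHull(contour)
    convex_area = cv2.contourArea(hull)
    solidity = area / convex_area if convex_area > 0 else 0.0

    # fitEllipse needs at least 5 contour points
    _, axes, orientation = cv2.fitEllipse(contour)
    major_axis, minor_axis = max(axes), min(axes)
    eccentricity = np.sqrt(1.0 - (minor_axis / major_axis) ** 2)

    return np.array([solidity, eccentricity, perimeter, convex_area,
                     major_axis, minor_axis, orientation])
```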

The third stage is classification, where a Naive Bayes (NB) classifier, K-Nearest Neighbor (KNN), and a Proximal Support Vector Machine (PSVM) are used to recognize signs against a trained set of gestures. The best classifier is identified by comparing their performance. Fig. 3.1 gives an overview of the proposed work.
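
A hedged sketch of that comparison is given below, assuming the shape features are stacked in a matrix X with a label vector y; scikit-learn offers no Proximal SVM, so a standard linear SVC stands in for PSVM here.

```python
# A hedged sketch of the classifier comparison; X and y are assumed
# to hold the shape features and sign labels. scikit-learn has no
# Proximal SVM, so a standard linear SVC stands in for PSVM.
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def compare_classifiers(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    models = {
        "Naive Bayes": GaussianNB(),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "SVC (stand-in for PSVM)": SVC(kernel="linear"),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        print(f"{name}: accuracy = {model.score(X_te, y_te):.3f}")
```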

[Figure: two parallel pipelines, one for the training input video and one for the testing input video, each passing through frame extraction, noise/blur elimination, morphological operation, edge detection, and feature extraction; the training branch assigns/labels the Amharic text for each sign in a database, against which the testing branch detects the matching text.]

Figure 3.1: Proposed System Architecture (Source: [26])

A vision-based analysis is used in this work. Vision-based analysis draws on the way human beings perceive information about their surroundings, yet it is probably the most difficult approach to implement in a satisfactory way.

One method is to build a three-dimensional model of the human hand. The model is matched
to images of the hand and parameters corresponding to palm orientation and joint angles are
estimated. These parameters are then used to perform gesture classification.

The second method is to capture images with a camera, extract features from them, and feed those features to a classification algorithm. In this research work, the second method is used for modeling the system. Figure 3.2 shows the supervised learning pipeline for static Amharic gesture recognition.
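
A minimal sketch of the second method's capture loop follows, assuming a webcam at device index 0; shape_features() and classifier are the hypothetical pieces produced by the other stages.

```python
# A minimal sketch of the camera-based (second) method, assuming a
# webcam at device index 0.
import cv2

cap = cv2.VideoCapture(0)                  # open the webcam
while cap.isOpened():
    ok, frame = cap.read()                 # frame extraction
    if not ok:
        break
    # mask = ...pre-process frame to a binary hand mask...
    # label = classifier.predict([shape_features(mask)])
    cv2.imshow("input", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # quit on 'q'
        break
cap.release()
cv2.destroyAllWindows()
```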

3.2. Data Collection

The data will be gathered directly from the signers using capture devices (a mobile front-facing camera or a webcam).

3.3. Implementation

Tools

- OpenCV-Python – to develop the front-end component of the prototype/user interface


- XML or CSV files – to store the training and testing image data

3.4. Models

Different algorithms and models will be employed. A CNN algorithm will be used to extract features of the signs; the approach is adopted from [22].
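
A hedged sketch of such a CNN follows; [22] does not fix an architecture in this chapter, so every layer size below is an illustrative assumption, not the model this work will finally use.

```python
# A hedged sketch of a small CNN for sign feature extraction; all
# layer sizes are illustrative assumptions.
from tensorflow.keras import layers, models

def build_cnn(num_classes, input_shape=(64, 64, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),                      # learned feature vector
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```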

The above methodology will be realized through the following steps:

 Image processing (a sketch of this step follows the list)
o Image scaling of the captured image
o Skin segmentation (hand and face)
o Canny edge detection applied to the segmented result
o Training and testing image files will be kept separately.
o Haar cascade XML files will be used for hand and face detection to obtain the
correctly skin-segmented parts from an online database (optional)
o Real-time images will also be tested
 Feature extraction – based on contour analysis, we will extract the important
features (such as perimeter, width, and height) into a feature vector to ease the
classification process.

o Once the ROI (region of interest) is obtained, training and testing image files will
be produced in CSV or XML format
 Classification – to classify the extracted signs, we will employ K-Nearest Neighbours,
Logistic Regression, Support Vector Machine, and convolutional neural network
algorithms.
 Recognition – based on the classification, we will obtain correctly interpreted signs.
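
Below is the image-processing sketch referred to in the list above: scaling, skin segmentation, face removal, and Canny edge detection. The YCrCb skin bounds are common defaults, not values this proposal fixes, and OpenCV bundles no hand cascade, so only the face cascade (used here to suppress the face region) is shown.

```python
# A minimal sketch of the image-processing step; the YCrCb skin
# bounds are common defaults, not values fixed by this proposal.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess(image):
    image = cv2.resize(image, (400, 400))              # image scaling
    ycrcb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb,                          # skin segmentation
                       np.array([0, 133, 77], np.uint8),
                       np.array([255, 173, 127], np.uint8))
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        mask[y:y + h, x:x + w] = 0                     # drop the face
    edges = cv2.Canny(mask, 100, 200)                  # edge detection
    return mask, edges
```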

3.5. Result and Discussion

This section covers the details of the dataset to be used, the environmental conditions, and the results of the experiments, which will be compared with previous work in Sign Language Recognition.

Dataset

The proposed dataset may consist of:

- Six signers
- Colored digital images/videos only
- Images of not more than 400×400 pixels
- Some pictures taken against the same background and others against different
backgrounds
- During processing, the images should have the same resolution and capture distance
- 15 words per signer, each word signed 20 times, giving 1800 videos in total; 75% will
be used for training and 25% for testing (the figures multiply out as in the check
after this list)
- Accessible through a webcam
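
For concreteness, a quick sanity check of the dataset figures listed above:

```python
# Sanity check of the dataset figures given in the list above.
signers, words, repetitions = 6, 15, 20
total = signers * words * repetitions   # 6 * 15 * 20 = 1800 videos
train = int(total * 0.75)               # 1350 videos for training
test = total - train                    # 450 videos for testing
print(total, train, test)               # 1800 1350 450
```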

We will evaluate the performance of the entire Sign Language Recognition System in terms of
accuracy. The accuracy of the proposed system will be calculated as follows:

$$\mathrm{Accuracy} = \frac{\text{correctly classified signs}}{\text{total number of signs}}$$

We may also use the Weka software to calculate the confidence and accuracy of the model. This takes place after the CSV file has been generated by the OpenCV-Python implementation.
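
A minimal sketch of the accuracy measure defined above, assuming predicted and actual label sequences of equal length are available:

```python
# A minimal sketch of the accuracy measure defined above.
def accuracy(predicted, actual):
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)
```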

REFERENCES
[1] G. Tefera, “Recognition of Isolated Signs in Ethiopian Sign Language,” Addis Ababa, Ethiopia, 2014.

[2] M. Mohandes and M. Deriche, “Image based Arabic Sign Language recognition,” Proc. 8th Int. Symp. Signal Process. Appl. (ISSPA 2005), vol. 1, no. 3, pp. 86–89, 2005.

[3] T. Khan and H. Pathan, “Hand Gesture Recognition based on Digital Image Processing
using MATLAB,” Int. J. Sci. Eng. Res., vol. 6, no. 9, pp. 338–346, 2015.

[4] WHO, “Deafness and Hearing Loss,” World Health Organization, 2018. [Online].
Available: https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-
loss. [Accessed: 12-Jul-2018].

[5] O. Aran, “Vision Based Sign Language Recognition: Modeling and Recognizing Isolated Signs with Manual and Non-Manual Components,” 2008.

[6] D. A. S. Jalal, “Automatic Recognition of Dynamic Isolated Sign in Video For Indian
Sign Language,” 2015.

[7] Y. F. Admasu and K. Raimond, “Ethiopian sign language recognition using Artificial
Neural Network,” Proc. 2010 10th Int. Conf. Intell. Syst. Des. Appl. ISDA’10, pp. 995–
1000, 2010.

[8] A. S. Ghotkar and G. K. Kharate, “Study of vision based hand gesture recognition using Indian sign language,” Int. J. Smart Sens. Intell. Syst., vol. 7, no. 1, pp. 96–115, 2014.

[9] Dagnachew Feleke Wolde, “Machine Translation System for,” Addis Ababa, Ethiopia, 2011.

[10] M. Tesfaye, “Machine Translation Approach to Translate Amharic Text to Ethiopian Sign Language,” Addis Ababa, Ethiopia.

[11] Z. Z. Daniel, “Amharic Sentence to Ethiopian Sign Language Translator,” Addis Ababa, Ethiopia, 2014.

[12] Gebretinsae Beyene Eyob, “Vision based finger spelling recognition for Ethiopian sign language,” Addis Ababa, Ethiopia, 2012.

[13] T. A. Samuel, “Isolated Word-Level Ethiopian Sign Language Recognition,” Addis Ababa, Ethiopia, 2013.

[14] K. Daniel, “Computational Models for the Automatic Learning and Recognition of Irish Sign Language,” Maynooth, Co. Kildare, Ireland, 2010.

[15] Z. Legesse, “Ethiopian Finger Spelling Classification: A Study to Automate Ethiopian Sign Language,” Addis Ababa, Ethiopia, 2008.

[16] G. Anirudh, “Converting American Sign Language to Voice Using RBFNN,” 2012.

[17] T. E. Masresha, “Automatic Translation of Amharic Text to Ethiopian Sign Language,” Addis Ababa, Ethiopia, 2010.

[18] Aynie Belete, “School of Graduate Studies, College of Education and Behavioral Studies, Department of Special Needs Education,” Addis Ababa, Ethiopia, 2016.

[19] P. Trigueiros, F. Ribeiro, and L. P. Reis, “Vision-Based Portuguese Sign Language Recognition System,” Advances in Intelligent Systems and Computing, 2014.

[20] M. R. D. Kyatanavar and P. P. R. Futane, “Comparative Study of Sign Language Recognition Systems,” Int. J. Sci. Res. Publ., vol. 2, no. 6, pp. 1–3, 2012.

[21] B. S. Parton, “Sign Language Recognition and Translation: A Multidisciplined Approach From the Field of Artificial Intelligence,” Oxford Univ. Press, pp. 94–101, Sep. 2005.

[22] S. Harish Chandra Thuwal and Adhyan, “Real Time Sign Language Gesture Recognition from Video Sequences,” New Delhi, 2017.

[23] M. Dhiman, “Sign Language Recognition,” p. 23, 2017.

[24] M. I. U. Khan, “Hand Gesture Detection & Recognition System,” Master’s Thesis, Computer Eng., Dalarna Univ., Sweden, 2011.

[25] N. N. R. Priyadharsini, “Sign Language Recognition Using Convolutional Neural Networks,” Int. J. Recent Innov. Trends Comput. Commun., vol. 5, no. 6, pp. 625–628, 2017.

[26] R. B. Hiremath and R. M. Kagalkar, “Methodology for Sign Language Video Interpretation in Hindi Text Language,” Int. J. Innov. Res. Comput. Commun. Eng., vol. 4, no. 5, pp. 9891–9899, 2016.

