
[Downloaded free from http://www.jr.ietejournals.org on Friday, March 14, 2014, IP: 49.248.8.171] || Click here to download free Android application for this journal

Design and Development of Tamil Sign Alphabets using Image Processing with Right Hand Palm to Aid Deaf‑Dumb People

P. Subha Rajam and G. Balakrishnan¹

Department of IT, J.J. College of Engineering and Technology, ¹Department of CSE, Indra Ganesan College of Engineering, Trichy, Tamilnadu, India

ABSTRACT

Hand recognition is an active area of research in computer vision for the purpose of human-computer interaction. This paper concentrates on converting Tamil sign alphabets into speech, which can be helpful for deaf-dumb people. A set of 32 (2^5) combinations of binary number sign images is introduced to propose a system that recognizes Tamil sign alphabets; the Tamil alphabet considered has 12 vowels, 18 consonants, and one Aayutha Ezhuthu. The proposed system is based on four main stages: the pre-processing method, the training phase, sign detection, and the conversion of binary to voice. The binary sign images are loaded at run time or statically as 310 images, each sign taken ten times at different distances in the same position. The five fingertip positions, each representing '1' or '0', are identified using image processing techniques with the proposed angular-based analysis of the right hand palm. The binary values are then assigned to the corresponding Tamil letters and voice. The experiments were performed with ten different signers' palms, and the results demonstrated that the system could successfully recognize Tamil sign alphabets with an accuracy of 99.35% for static images and 98.36% for dynamic (runtime) images.

Keywords: Feature extraction point, Human-computer interaction, Image processing technique, Pattern recognition, Sign detection, Tamil sign alphabet recognition system.

1. INTRODUCTION

Normally, deaf-dumb people communicate with the help of lip reading, writing down words, finger spelling, sign language, etc. Sign language is the major technique for deaf-dumb communication. It comprises a number of signs, including numbers, alphabets, words, and sentences as phrases: a visual form of communication that combines hand shapes, movements and orientation of the hands, the body, and facial expressions instead of voice. Finger spelling is used for names and places, for which there are no signs; a word is physically spelled out by performing a different hand-shape position for every letter of the word. This language may not be understood by blind people, or even by most hearing people, so deaf-dumb people face difficulties in communication in areas such as banking, booking counters, and hospitals. To facilitate their communication, a system is needed to translate sign language into spoken language. In this case, human-computer interaction [1] methods are used to facilitate communication between deaf-dumb people and other people.

A sign language recognition system serves as a key point of communication between deaf-dumb and normal people [2]. It includes hardware, i.e. a single or multiple USB Web cameras used for image acquisition, to extract the features of the signing, and a decision-making system to recognize the sign language.

Most researchers use special data acquisition tools such as data gloves, microcontroller gloves, fingertip colour gloves, location sensors, or wearable cameras to extract the features of the signs. In some existing models, such as template matching, the time taken to identify a sign is considerably high. Researchers have used a variety of techniques, such as fuzzy logic [3], neural networks [3-5], the PCA method [6,7], and Hidden Markov Models (HMMs) [5,8-10], to recognize hand gestures. In contrast to these existing approaches, the proposed system recognizes Tamil sign alphabets using image processing techniques with an angular-based analysis of the right hand palm.

This paper presents a heuristics-based system that recognizes the Tamil sign alphabets for human-computer interaction. To aid people with such disabilities, the proposed methodology recognizes a set of 31 binary number signs of Tamil letters, consisting of 12 vowels, 18 consonants, and one Aayutha Ezhuthu, using right-hand-palm images.

IETE JOURNAL OF RESEARCH | VOL 59 | ISSUE 6 | NOV-DEC 2013


Rajam PS and Balakrishna G: Design and Development of Tamil Sign Alphabets using Image Processing with Right Hand Palm to aid Deaf‑Dumb People

2. EARLIER WORKS

Nadgeri, S.M. et al. [2] developed a computer-vision colour-tracking algorithm for tracking the human hand. The system represents the 26 alphabets A to Z of ASL as hand gestures. The feature vector used an orientation histogram of the image, which is robust against changes in skin colour, for gesture classification, and a simple perceptron neural network recognized a gesture within a short time. CAMSHIFT-based hand tracking removed the need for any special hand-tracking software in the gesture recognition system. The larger the number of images stored in the database, the more accurate the recognition.

Igorevich, R.R. et al. [11] implemented a tracking algorithm based on grayscale histogram values of images for detecting hand motion with a simple stereo camera. The hand was tracked and recognized at a chosen distance from the camera. The algorithm used a flexible threshold for the disparity map and enabled manipulating robots by hand movement.

Ruiduo Yang et al. [12] used four types of ASL data sets: simple background, complex background, signer wearing short sleeves, and across-signer data sets.

Didier Coquin et al. [13] computed a set of sample signatures for each gesture from a ten-gesture alphabet. The dynamic signature of a gesture was obtained by superposing the hand skeletons of each posture, providing a single image. The algorithm was performed on the ten-gesture alphabet with a recognition rate of 100%.

Salma Begum and Hasanuzzaman, MD. [7] used Principal Component Analysis (PCA) to recognize six Bengali vowels and ten numerals in a computer-vision-based Bangladesh sign language recognition system (BdSL).

Bhuyan, M.K. et al. [14] presented fingertip detection and hand-pose recognition under inaccurate colour segmentation. A skeletal hand model was constructed from the fingertip positions and MP joints to recognize the finger types of hand gestures. Feature extraction of the hand used distance and angle as 3D Gaussian distributions. Experiments with 20 users and a predefined set of 8 gestures gave a recognition rate of 93.25% over different gesture patterns.

The rest of the paper is arranged as follows. Section 3 describes the proposed methodology, with an illustration of the angular-based analysis of the right hand palm and its phases in detail. Section 4 discusses the experimental results, with illustrations of the 31 combinations of binary number signs and their corresponding Tamil letters, and describes the design and development of the Tamil alphabet recognition system with the final output. Section 5 briefly concludes the paper.

3. PROPOSED METHODOLOGY

In the proposed method, a set of 31 combinations of binary image signs is produced using right-hand-palm images. Each image has five fingers, each in a binary "UP" or "DOWN" position.

Each type of image is loaded statically or at run time through a USB Web camera at different distances in the same position [15]. An image captured at runtime or loaded statically is scanned to identify the straight fingertip positions of the five fingers, in the order little, ring, middle, index, and thumb, of the right hand palm. The straight fingertip positions are recognized by measuring the angle between the line from the fingertip to a reference point at the bottom of the palm and the horizontal line through that reference point. Each identified finger position is assigned '1' (straight finger) or '0' (bent finger), and the values are stored in an array. This array value is converted into a decimal value by the binary-to-decimal algorithm, and the decimal values are mapped to the 12 vowels, 18 consonants, and one Aayutha Ezhuthu of the Tamil letters. The proposed method broadly consists of seven phases, namely creating input images, data acquisition, pre-processing, sign detection, training, testing, and conversion of binary to Tamil letters, as shown in Figure 1. The proposed method can be operated in two modes:

a. Static Tamil sign alphabet recognition system

b. Dynamic Tamil sign alphabet recognition system.

In the static Tamil sign alphabet recognition system, each image is taken ten times; thus, we obtain 310 images, which are stored in a separate directory. These images are processed by the training phase algorithm, which finds the minimum and maximum angle values of each straight fingertip position and stores them in the angle threshold array variables ANGLE_MIN and ANGLE_MAX. The stored 310 images are then tested one by one using the pre-processing method and the sign detection algorithm. After the sign detection of each straight fingertip position as a binary number, this value is converted into a decimal value by the binary-to-decimal conversion algorithm and then assigned to the corresponding Tamil letter, which is displayed as a Tamil letter and voice.


In the dynamic Tamil sign alphabet recognition system, each image is tested five times; thus, we obtain 160 images. An image is captured and loaded at run time, i.e. dynamically, through the USB Web camera. After capture, the image is converted into an edge image using the pre-processing method. This edge image is scanned from left to right and from right to left to identify the straight fingertip positions and calculate the angle of each finger. Each angle value is compared with the angle threshold array variables ANGLE_MIN and ANGLE_MAX, which are obtained from the training phase of the static recognition system. After the sign detection of each straight fingertip position as a binary number, this value is converted into a decimal value by the binary-to-decimal conversion algorithm and then assigned to the corresponding Tamil letter, which is displayed as a Tamil letter and voice.

3.1 Creating Images

A total of 31 combinations of binary input image signs are developed using the right hand palm, as shown in Figure 2.

Figure 1: Block diagram of the proposed methodology with angular-based analysis of the right hand palm image.

3.2 Data Acquisition

A USB Web camera is used to capture the input image sign of the right hand palm, with a distance of about 0.5 m between the camera and the signer. A black background colour is maintained, and a black band is worn on the wrist before capturing the input image with the USB Web camera (an LG Smart Cam) connected to a Matlab program. Input images are taken under different lighting. For the training phase, each of the previously defined 31 signs is captured ten times; thus, we obtain a maximum of 310 images. For the testing phase, five images of each of the predefined 31 signs are loaded at run time; thus, we get a total of 160 images. Normally, the images are captured at a resolution of 640 × 480 pixels.

3.3 Pre-processing Method

The captured image of the right hand palm is converted into an edge image by the pre-processing method. The captured image is resized to a resolution of 128 × 128 pixels, as shown in Figure 3. After resizing, the original RGB colour image is converted into a grayscale image, which is in turn converted into a black-and-white (binary) image. The image is then processed with the Canny edge detection technique to extract the outline (edge) image of the palm, as shown in Figure 3.
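The pre-processing chain described above (resize, grayscale, binarize, edge detection) can be sketched as follows. This is a minimal NumPy illustration, not the authors' Matlab code; it uses nearest-neighbour resizing and substitutes a simple binary-gradient edge for the Canny detector named in the paper.

```python
import numpy as np

def preprocess(rgb, size=128, thresh=0.5):
    """Sketch of the paper's pre-processing: resize, grayscale,
    binarize, then a simple gradient edge (stand-in for Canny)."""
    h, w, _ = rgb.shape
    # Nearest-neighbour resize to size x size (the paper resizes to 128 x 128).
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    small = rgb[rows][:, cols]
    # RGB -> grayscale using luminance weights, scaled to [0, 1].
    gray = (0.299 * small[..., 0] + 0.587 * small[..., 1]
            + 0.114 * small[..., 2]) / 255.0
    # Grayscale -> black-and-white (binary) image.
    bw = (gray > thresh).astype(np.uint8)
    # Edge image: mark pixels where the binary image changes value.
    gy = np.abs(np.diff(bw, axis=0, prepend=bw[:1]))
    gx = np.abs(np.diff(bw, axis=1, prepend=bw[:, :1]))
    edge = ((gx | gy) > 0).astype(np.uint8)
    return bw, edge

# A synthetic "palm": a bright rectangle on a black background,
# at the 640 x 480 capture resolution mentioned in the paper.
img = np.zeros((480, 640, 3), dtype=np.uint8)
img[100:400, 200:500] = 255
bw, edge = preprocess(img)
```

The edge image produced this way is what the subsequent scanning stages operate on; a real implementation would use a proper Canny detector for cleaner outlines.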


It is easier to use these edge images for extracting the fingertip positions for further processing [16-18].

Figure 2: 31 combinations of binary number signs to Tamil letters using the right hand palm. (a) 12 Vowels. (b) 1 Aayutha Ezhuthu. (c) 18 Consonants.

Figure 3: Sample results of resized images and edge images using the right hand palm.

3.4 Sign Detection

Sign detection is an important part of the proposed methodology. This phase consists of two methods, namely the feature extraction point and the testing phase. In the feature extraction point, a scanning process counts the number of straight fingertip positions in the edge image from the left-to-right and right-to-left scans, and their corresponding minimum and maximum angle values are stored. The testing phase is used to classify each straight finger to the corresponding binary result '1'. According to the fingertip positions, the result is produced in binary format.

3.4.1 Feature Extraction Point

After completion of the pre-processing method, an edge image of size 128 × 128 is obtained. The scanning procedure is applied in two approaches:

Bottom-up approach: left-to-right scan.
Top-down approach: left-to-right scan and right-to-left scan.

The bottom-up approach is used to identify the bottom-most white pixel from the left-to-right scan as the reference point (x0, y0), marked in red in the resized image.

In the top-down approach, at most three fingertip positions are identified from the left-to-right scan as (x1, y1), (x2, y2), and (x3, y3), marked in green in the resized image; the remaining fingertip positions are identified from the right-to-left scan as (x4, y4) and (x5, y5), marked in blue. The left-to-right scan starts from the pixel (0, 0) at the left-most position and proceeds to (0, 127) at the right-most position. The top-most white pixel is identified as (x1, y1) and marked in green. After obtaining the top-most white pixel (x1, y1), we determine the angle between the line joining the fingertip to the reference point (x0, y0) and the horizontal line passing through the reference point at (xr, yr). This angle is obtained for each of the five fingers by Equation (2). Fingers to the left of the MIDDLE finger make obtuse angles, and fingers to its right make acute angles. The reference point (x0, y0) is, in almost all cases, fixed at the centre of the bottom-most scan line of the palm, irrespective of the signer.

Euclidean_Distance = sqrt((x1 − x0)^2 + (y1 − y0)^2)   (1)

Angle = 2 · tan^(−1) sqrt(((a + b − c) · (a − b + c)) / ((a + b + c) · (−a + b + c)))   (2)

where (x1, y1) is the first white-pixel fingertip position, (x0, y0) is the wrist position taken as the reference point, and (xr, yr) is a point on the horizontal line passing through the reference point. As Figure 4 shows, 'a' is the distance between (x0, y0) and (xr, yr), 'b' is the distance between (xr, yr) and (x1, y1), and 'c' is the distance between (x1, y1) and (x0, y0); the values of a, b, and c are calculated by the Euclidean distance measurement of Equation (1). This angle value forms the element q1 of the array Angle.

Figure 4: Angle between three points.

Once the angle q1 is determined, the right-most edge of the scanning width is advanced by a small distance approximately equal to the width of a middle finger, i.e. eight pixel points. The right-most edge of scanning is now (x1 + 1, y1 − 8), where y1 is the column corresponding to the angle q1.

Thereafter, the scanning proceeds in the same way as above to calculate the angles q2 and q3; these angle values form the next elements of the array Angle. After the left-to-right scanning process, the image pattern is subjected to a right-to-left scanning process, which starts from the point (x1 + 1, 127) at the right-most position and proceeds to the point (x1 + 1, y1 + 8) at the left-most position, with a margin of eight pixels from y1. This scan proceeds by the same procedure as the left-to-right scan to calculate the angles q4 and q5, which are also stored in the array Angle.

3.4.2 Testing Phase

An image sign is captured at runtime or loaded statically from the stored 310-image directory. After the image is captured or loaded, it is converted into an edge image by the pre-processing method and then passed to the feature extraction method. The angles of the straight fingertip positions in the edge image are found by the feature extraction method and stored in the array Angle. After the top-most white pixel is found, the angle between the line joining the fingertip (x1, y1) to the reference point (x0, y0) and the horizontal line passing through the reference point (xr, yr) is calculated, and q1 is stored in Angle(0). Similarly, the angles q2 and q3 corresponding to the second and third white pixels are stored in Angle(1) and Angle(2). During the left-to-right scanning process, the array Angle stores only the angles of straight fingertip positions, at most three of them; for example, if an edge image has only one fingertip position in the left-to-right scan, Angle(0) holds q1 and the other two values, Angle(1) and Angle(2), are set to zero. During the right-to-left scanning process, the remaining fingertip angles are found as q4 and q5 and stored in Angle(3) and Angle(4); this scan stores at most two fingertip positions.

In the testing phase, after the array Angle (Angle(0), Angle(1), Angle(2)) is found in the left-to-right scanning process, each value is compared with the minimum and maximum angle values ANGLE_MIN and ANGLE_MAX.

The proposed algorithm identifies only the straight finger positions from the left-to-right and right-to-left scans. If a fingertip position is wrongly identified in either scan, the problem is rectified by the sign detection algorithm: the green marker is changed to black in the resized image, the wrongly identified location on the bent finger is ignored, and the left-to-right scan continues. The real-time Tamil sign alphabet recognition system developed in this way was tested with the 310 stored images as static input and obtained 100% results.

3.5 Training Phase

The training phase is used to prepare the minimum and maximum angle values for each


fingertip position, as shown in Table 1. The range of minimum and maximum angle values for each finger is then calculated by the training phase algorithm; these ranges differ from finger to finger. The training images (310 signs) are classified into five categories, each formed by fixing the position of one of the five fingers. An example of the TYPE 3 category is shown in Table 1: the middle finger (M) is fixed in the "UP" position and the remaining S = 4 fingers are not fixed, so 2^S (2^4 = 16) pattern images are possible.
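The angular analysis of Equations (1) and (2), the angle at the reference point between a fingertip and the horizontal, can be sketched with the triangle side lengths a, b, and c as follows. This is an illustrative reimplementation, not the authors' code; it assumes, as the text describes, that the angle is taken at the reference point (x0, y0), and uses the triangle half-angle identity for that vertex.

```python
import math

def euclidean(p, q):
    # Equation (1): Euclidean distance between two points.
    return math.hypot(p[0] - q[0], p[1] - q[1])

def fingertip_angle(ref, horiz, tip):
    """Angle at the reference point 'ref' between the horizontal point
    'horiz' and the fingertip 'tip', via the half-angle identity
    (the 2 * arctan(sqrt(...)) form of the paper's Equation (2))."""
    a = euclidean(ref, horiz)   # reference -> point on horizontal line
    b = euclidean(horiz, tip)   # horizontal point -> fingertip
    c = euclidean(tip, ref)     # fingertip -> reference
    # Half-angle identity for the angle opposite side b (vertex at ref).
    t = ((-a + b + c) * (a + b - c)) / ((a + b + c) * (a - b + c))
    return math.degrees(2 * math.atan(math.sqrt(t)))

# A fingertip straight above the reference point makes a 90 degree angle;
# one up-and-to-the-left makes an obtuse angle, as the paper notes for
# fingers to the left of the middle finger.
print(fingertip_angle((0, 0), (1, 0), (0, 1)))    # 90.0
print(fingertip_angle((0, 0), (1, 0), (-1, 1)))   # ~135.0 (obtuse)
```

In the system, this angle would be computed once per detected fingertip and compared against the per-finger ANGLE_MIN/ANGLE_MAX thresholds.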

3.5.1 Training Phase Algorithm

Find the minimum and maximum angle values for each finger position and store them in the threshold array variables ANGLE_MIN and ANGLE_MAX:

ANGLE_MIN = [T1, I1, M1, R1, L1] and ANGLE_MAX = [T2, I2, M2, R2, L2]

Step 1: // Initialize the variables. Min = 100; Max = 0.

Step 2: Create folders T1, I1, M1, R1, and L1, the elements of ANGLE_MIN, in the current working directory. Each folder consists of the corresponding sign images selected from the stored 310-image directory.

Step 3: Display a dialog box enabling the user to browse the current directory structure and select a directory; set dir_name to the string containing the path of the selected directory. If the user presses the cancel button or closes the dialog window, dir_name is returned as the number zero.

Step 4: Find the total number of filenames in the currently selected directory; this value is stored in contents.

Step 5: For I = 3 to contents:
a. Read the Ith filename (read an image from the selected directory).
b. Convert the image into an edge image using the pre-processing method.
c. Find the reference point (x0, y0) of the edge image using the bottom-up scan approach.
d. Start the left-to-right scan. After finding the first white pixel, calculate the angle between the three points ((xi, yj), (x0, y0), (xr, yr)) and store this angle value in the variable Angle.
e. IF Angle is greater than Max THEN Max = Angle; END. IF Angle is less than Min THEN Min = Angle; END.
f. Display the image.
END

Step 6: Display the Min and Max values and assign them to the corresponding elements of the array variables ANGLE_MIN and ANGLE_MAX; the currently selected dir_name matches the element of these array variables.

Step 7: End.
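The per-finger threshold computation of the training algorithm reduces to a running minimum/maximum over the measured angles in each finger's folder. A minimal sketch, using hypothetical angle lists in place of the per-folder image scans (the angle values here are invented for illustration):

```python
# Hypothetical measured angles (degrees) per finger category; in the paper
# these come from scanning each stored sign image in the T1/I1/M1/R1/L1
# folders with the feature extraction method.
angles_per_finger = {
    "T": [151.0, 148.5, 153.2],   # thumb
    "I": [118.0, 121.4, 116.9],   # index
    "M": [92.0, 88.7, 90.5],      # middle
    "R": [68.3, 71.0, 66.8],      # ring
    "L": [44.2, 47.5, 42.9],      # little
}

# Step 5e of the training algorithm: track Min/Max per finger.
ANGLE_MIN, ANGLE_MAX = [], []
for finger in ["T", "I", "M", "R", "L"]:
    # The paper initializes Min = 100, Max = 0; float("inf") is used here
    # so the sketch also handles obtuse angles above 100 degrees.
    mn, mx = float("inf"), 0.0
    for angle in angles_per_finger[finger]:
        mx = max(mx, angle)       # IF Angle > Max THEN Max = Angle
        mn = min(mn, angle)       # IF Angle < Min THEN Min = Angle
    ANGLE_MIN.append(mn)
    ANGLE_MAX.append(mx)

print(ANGLE_MIN)  # [148.5, 116.9, 88.7, 66.8, 42.9]
print(ANGLE_MAX)  # [153.2, 121.4, 92.0, 71.0, 47.5]
```

The resulting [min, max] interval per finger is then used in the testing phase to decide whether a measured angle corresponds to that straight finger.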

3.6 Conversion of Binary to Tamil Letters and Voice

The elements of the binary number in the F array are converted into a decimal number by the Conversion of Binary to Decimal algorithm shown below. The decimal number is assigned to the corresponding Tamil letter, as shown in Table 2.

Table 1: Calculation of minimum and maximum angle values

3.6.1 Algorithm Description

Step 1: Establish F, the binary number to be converted into a decimal number.

Step 2: // Initialize the variables. Initialize the decimal variable decimal_no to zero, set the power variable P to zero, and set the variable I to 5.

Step 3: While I is greater than or equal to 1 do: IF F(I) is equal to 1 THEN compute decimal_no by adding 2^P to the most recent decimal_no; END. Increment P by one. Decrement I by one. END.

Step 4: Write out decimal_no.

Step 5: // Conversion of the decimal number to Tamil letters.
5a. Open a database that consists of 31 records with four fields, namely binary number, decimal number, text of the Tamil letter, and media-player filename.
5b. Search for decimal_no in the open database. The matched record is displayed on the screen, and the media-player file is played using the wavread and sound functions; it contains the particular Tamil letter voice recorded through a microphone with the wavwrite function.

Step 6: END.

The result is obtained originally as a binary number in which the least significant bit represents the 'LITTLE' finger and the most significant bit the 'THUMB' finger. For example, the binary number [01111] is coded into the corresponding decimal number 15, which is displayed as the text 'FIFTEEN' by the conversion of binary to decimal algorithm. Each decimal number thus represents a sign image and is assigned to a corresponding Tamil letter using the right palm, as shown in Figure 2. The 31 sign images are mapped to the 12 Tamil vowels, distinguished as Kuril (short sound) and Nedil (long sound), by the following procedure. Each Kuril vowel is assigned to a sign using right-palm images; the corresponding Nedil is assigned to the same image with the thumb finger additionally in the straight position. For example, the Kuril letter 'அ' is assigned to the sign image [01000] (decimal number 8), with the index finger in the "UP" position, and the Nedil 'ஆ' to the sign image [11000] (decimal number 24), with both the index and thumb fingers in the "UP" position; thus every Nedil has the thumb finger in the "UP" position. For the Kuril 'இ' the middle finger is in the "UP" position, and for the Nedil 'ஈ' the middle and thumb fingers are in the "UP" position. All other letters are tabulated with their symbols, as shown in Table 2. The 18 consonants and the one Aayutha Ezhuthu are assigned to the remaining sign images.
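The conversion steps above can be sketched as follows. The letter table here contains only the handful of mappings quoted in the text (Table 2 holds the full 31-record set, and the vowel glyphs are assumed to be the standard first four Tamil vowels, since the original print is mis-encoded); the audio playback step is omitted.

```python
def f_to_decimal(f):
    """Steps 1-4: convert the 5-bit array F to a decimal number.
    f = [thumb, index, middle, ring, little]; LITTLE is the least
    significant bit and THUMB the most significant, as in the paper."""
    decimal_no, p = 0, 0
    for i in range(len(f) - 1, -1, -1):   # I = 5 down to 1
        if f[i] == 1:
            decimal_no += 2 ** p          # add 2^P when F(I) = 1
        p += 1
    return decimal_no

# Step 5 (sketch): a few decimal -> Tamil letter records from the text;
# the full database has 31 records plus the recorded voice filenames.
tamil_letters = {8: "அ", 24: "ஆ", 4: "இ", 20: "ஈ"}

print(f_to_decimal([0, 1, 1, 1, 1]))                 # 15 ('FIFTEEN')
print(tamil_letters[f_to_decimal([0, 1, 0, 0, 0])])  # Kuril, index UP
print(tamil_letters[f_to_decimal([1, 1, 0, 0, 0])])  # Nedil, index + thumb UP
```

Note how the Nedil pattern is the Kuril pattern with the thumb bit (the MSB) additionally set, matching the rule stated above.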

4. EXPERIMENTAL RESULTS AND DISCUSSIONS

Each image sign has been taken ten times on the stretched right palm at the same position and at different distances. Figure 5 shows the sample results of an image sign at three different distances with the same finger position for the Tamil letter 'ø'.

Table 2: Conversion of binary numbers into decimal numbers and 31 Tamil alphabets


Figure 5: Sample results of binary number sign recognition in three different distances and at the same finger positions for Tamil letters. (a) Resized image (b) Gray scale image (c) Preprocessed binary image (d) Edge image (e) Image with color fingertip positions. (f) Displayed Text and Tamil Letter. (g) Displayed each finger position value in the binary form and angle values.

In Figure 5, (a) shows the resized image, (b) the grayscale image, (c) the pre-processed binary (black-and-white) image, (d) the edge image, (e) the image with the fingertip positions identified by the proposed method, (f) the displayed decimal-number text and Tamil letter, and (g) the displayed value (0/1) of each finger position in binary form with the corresponding angle values.

In Figure 6, the fingertip positions are found by the feature extraction method, and the vector F is calculated by the sign detection algorithm. After the calculation of the vector F and the array Angle, the binary value of F is converted into a decimal number by the conversion of binary to decimal, and this decimal number is assigned to the corresponding Tamil letter. Experiments were conducted for several combinations of binary numbers, tested with the 310 stored sign images as static input and with 160 sign images loaded at run time using the right hand palm placed at different distances in the same position.

When tested through the static-image Tamil sign alphabet recognition system, 308 of 310 images were correctly recognized using the right hand palm, an accuracy of 99.35%. When tested with ten different right hand palms under different lighting through the dynamic-image Tamil sign alphabet recognition system, 305 of 310 images were correctly recognized, an accuracy of 98.38%.

5. CONCLUSION

A Tamil sign alphabet recognition system to aid deaf-dumb people has been developed, and the proposed method produces better results than other current contributions. The proposed algorithm performed very well on the 31 Tamil sign alphabets, with a recognition rate of 99.35% for static and 98.38% for dynamic images. The method relies on simplified conditions, such as a uniform black background colour and a limited alphabet set, to facilitate hand-palm extraction. The experiments were analysed with ten different signers' right hand palms against a black background, as shown in Figure 7. In future, experiments need to be performed on the remaining 216 Tamil letters with different users and different background environments. The proposed recognition method is very useful for deaf-dumb people to communicate with normal people in Tamil letters.


Figure 6: Sample results binary image to corresponding Tamil letter using right hand palm images.


Figure 7: Ten different signer right hand palms with different lightings.

Accuracy = ((Number of pattern images − Number of false-result pattern images) / Number of pattern images) × 100
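The reported accuracy figures follow directly from this formula; a quick check (the dynamic figure rounds to 98.39, which the paper reports truncated as 98.38):

```python
def accuracy(total, false_results):
    # (pattern images - false-result pattern images) / pattern images * 100
    return (total - false_results) / total * 100

print(round(accuracy(310, 2), 2))   # static: 308 of 310 correct -> 99.35
print(round(accuracy(310, 5), 2))   # dynamic: 305 of 310 correct -> 98.39
```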

REFERENCES

1. V I Pavlovic, R Sharma, and T S Huang, “Visual Interpretation of Hand Gestures for Human Computer Interaction: A Review,” In IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), Vol. 19, no. 7, pp. 677‑95, 1997.

IETE JOURNAL OF RESEARCH | VOL 59 | ISSUE 6 | NOV-DEC 2013

2. S M Nadgeri, S D Sawarkar, A D Gawande, R R Igorevich, P Park, and D Min, et al., "Hand gesture recognition using CAMSHIFT algorithm," IEEE 13th International Conference on Emerging Trends in Engineering and Technology (ICETET), pp. 37‑41, 2010.

3. E J Holden, R Owens, and G Roy, "Adaptive Fuzzy Expert System for Sign Recognition," In Proc. International Conference on Signal and Image Processing (SIP'2000), Las Vegas, USA, pp. 141‑6, 2000.

4. P Vamplew, and A Adams, "Recognition of Sign Language Gestures using Neural Networks," Australian Journal of Intelligent Information Processing Systems, Vol. 5, no. 2, pp. 94‑102, 1998.


5. C Vogler, and D Metaxas, "Adapting Hidden Markov Models for ASL Recognition by Using Three‑Dimensional Computer Vision Methods," In Proc. IEEE International Conference on Systems, Man and Cybernetics (SMC'97), IEEE Computer Society, Orlando, Florida, pp. 156‑61, 1997.

6. H Birk, T B Moeslund, and C B Madsen, "Real‑time Recognition of Hand Alphabet Gestures Using Principal Component Analysis," In Proc. Scandinavian Conference on Image Analysis, Finland, 1997.

7. S Begum, and M Hasanuzzaman, "Computer Vision‑based Bangladesh Sign Language Recognition System," In Proc. 12th International Conference on Computer and Information Technology, Dhaka, Bangladesh, December 2009.

8. L R Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proc. IEEE, Vol. 77, no. 2, pp. 257‑85, Feb. 1989.

9. J Yamato, J Ohya, and K Ishii, "Recognizing human action in time sequential images using hidden Markov model," In Proc. IEEE Int. Conf. Comput. Vis. Pattern Recogn., Champaign, pp. 379‑85, 1992.

10. F Samaria, and S Young, "HMM‑based architecture for face identification," Image Vis. Comput., Vol. 12, no. 8, pp. 537‑43, 1994.

11. R R Igorevich, P Park, D Min, Y Park, J Choi, and E Choi, et al., "Hand gesture recognition algorithm based on grayscale histogram of the image," IEEE 4th International Conference on Application of Information and Communication Technologies (AICT), Vol. 1, no. 4, pp. 12‑14, 2010.

12. R Yang, S Sarkar, and B Loeding, "Handling movement epenthesis and hand segmentation ambiguities in continuous sign language recognition using nested dynamic programming," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, no. 3, pp. 462‑77, Mar. 2010.

13. B Ionescu, D Coquin, P Lambert, and V Buzuloiu, "Dynamic hand gesture recognition using the skeleton of the hand," EURASIP Journal on Advances in Signal Processing, Hindawi Publishing Corporation, Vol. 2005, no. 13, pp. 2101‑9, 2005.

14. M K Bhuyan, D R Neog, and M K Kar, "Hand Pose Recognition Using Geometric Features," In Proc. IEEE National Conference on Communications (NCC), pp. 1‑5, 2011.

15. P Subha Rajam, and G Balakrishnan, "Recognition of the Tamil Sign Alphabet using Image Processing Technique with Angle to aid Deaf‑Dumb People," presented at International Conference on Information and Communication Technology, Interscience Research Network, Singapore, pp. 57‑62, 2011.

16. P Subha Rajam, and G Balakrishnan, "Indian Sign Language Recognition System to aid Deaf‑Dumb People," IEEE International Conference on Computing Communication and Networking Technologies (ICCCNT), Karur, pp. 1‑9, Jul. 2010.

17. P Subha Rajam, and G Balakrishnan, "Real Time Indian Sign Language Recognition System to aid Deaf‑Dumb People," IEEE ICCT: 13th International Conference on Communication Technology, China, pp. 737‑42, Sep. 2011.

18. P Subha Rajam, and G Balakrishnan, "Recognition of the Tamil Sign Alphabet using Image Processing Technique with Angular Based Analysis of Left Hand to aid Deaf‑Dumb People," Int. J. Information and Communication Technology, Vol. 4, no. 1, pp. 76‑88, 2012.

AUTHORS

P. Subha Rajam, M.E., is an Assistant Professor at J.J. College of Engineering and Technology. She received her B.E. (CSE) degree from Government College of Engineering, Tirunelveli, under Madurai Kamaraj University, in 1992, and her M.E. (CSE) degree from J.J. College of Engineering, Tiruchirappalli, affiliated to Anna University, Chennai, in 2006. At present she is pursuing her Ph.D. degree at Anna University of Technology, Tiruchirappalli, in the area of image processing, under the guidance of Dr. G. Balakrishnan, co‑author of this paper. She has published 4 research papers in international conferences and 3 research papers in international journals. She has 16 years of teaching experience at J.J. College of Engineering and Technology, Tiruchirappalli.


G. Balakrishnan, M.E., Ph.D., is the Director of Indra Ganesan College of Engineering. He completed his B.E. (CSE) at Bharathidasan University, Trichy, his M.E. (CSE) at PSG College of Technology, Coimbatore, and his Ph.D. (Image Processing) at University Malaysia Sabah, Malaysia. He has more than ten years of academic and industrial experience and more than 50 publications in international journals and conferences. His area of specialization is image processing. He is an advisory council member for several international and national conferences, and has won silver medals for his research contributions in various national and international research competitions.

E‑mail: balakrishnan.g@gmail.com

E‑mail: subha8892@yahoo.co.in

DOI: 10.4103/0377‑2063.126969; Paper No JR 802_12; Copyright © 2013 by the IETE
