

Recognizing Hand Gestures using Wrist Shapes


Jeong-Mook Lim, Dong-Woo Lee, Bae-Sun Kim, Il-Yeon Cho, and Jae-Cheol Ryou
Wearable Computing Research Team, Electronics and Telecommunications Research Institute, Daejeon, Korea
Dept. of Computer Engineering, Chungnam National University, Daejeon, Korea

Abstract-- We introduce a new hand gesture recognition interface called the virtual button. The virtual button utilizes patterns of the wrist shape and can recognize a pinch motion, which can serve as a generic button event. The pinch motion is well suited to multi-touch interactions and games because it expresses picking, moving, and releasing (or putting) activities in a natural way. We use a small IR-optic sensor to capture the patterns that the finger flexor tendons produce on the wrist as the fingers move, and we use these patterns to recognize finger and hand movements. Our method can also measure the force applied during a picking operation, where a vision-based approach may encounter difficulties. Our experimental results show that the IR-optic sensor can be an effective solution for implementing the virtual button, and that the system can also recognize various other hand motions, such as shaking the hand or bending the fingers.

This work was supported by the IT R&D program of MKE/KEIT [2008-F048, Wearable Personal Companion for u-computing collaboration].

I. INTRODUCTION

Hand gestures are used widely across application fields, from sign language to mobile/wearable device control and gesture- or motion-based GUIs. Recently, hand gesture technology has been advancing as it is applied to interactive TVs and game consoles such as the Nintendo Wii or Microsoft Xbox 360. A surface-less multi-touch interface, which requires no physical surface to touch, is one of the eye-catching applications that use motion data from users. Multi-touch interfaces have become popular through Jeff Han's demonstrations of his work, Apple's iPod Touch, and Microsoft's Surface. In the same vein, Johnny Lee introduced a concept design for 3-D surface-less multi-touch using Nintendo's Wiimote and reflective IR tracking.

Implementing surface-less multi-touch requires two primary technologies: multi-point tracking and hand gesture recognition. Vision-based recognition is one of the common techniques for both [1], and it is a promising method that provides fairly accurate recognition. However, it requires cameras, a clear line of sight, and extensive processing, which makes it inappropriate for mobile environments. Using markers, another vision-based approach, reduces the processing effort and guarantees a high recognition rate, but it requires the user to put on markers every time.

To overcome the inconvenience of vision-based recognition, there is much ongoing research on on-body sensing using inertial [2], EMG (electromyography), or piezoelectric sensors. For recognizing hand gestures, sEMG (surface EMG) is one of the popular on-body sensing techniques. sEMG can achieve relatively accurate recognition results; however, the recognition rate varies depending on where the sensor is attached [3]. Meanwhile, piezoelectric sensors, which monitor bone-conducted sounds from body movement, can be an alternative solution for recognizing hand gestures [4][5].

The primary function of the virtual button is to generate button events, a press and a release, by recognizing the hand gestures of holding and releasing, respectively. The virtual button can also recognize other kinds of hand gestures and generate the corresponding commands. The objectives of this research are 1) to provide a user-friendly hand gesture UI, the virtual button, and 2) to develop a hand gesture recognition system that is simple and robust to the sensor attachment location.

II. IMPLEMENTATION

The IR-optic sensor generates voltage values according to the amount of IR radiation it receives. Such sensors are used widely for measuring the blood pulse (from the wrist's arteries) or the oxygen saturation of blood (from the fingertips) [7]. In our system, the sensor monitors the changing patterns of the wrist shape that result from the movement of the finger flexor tendons in the wrist as the fingers move (Fig. 1).

Fig. 1. IR-emitter and IR-optic sensor.
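The paper does not describe its acquisition electronics. As a hedged illustration of how such a sensor might be read, the sketch below polls a hypothetical ADC at the 200 Hz rate reported later in the paper and converts raw counts to voltages; read_adc, the channel number, the reference voltage, and the resolution are placeholder assumptions, not details from the paper.

import time

SAMPLE_RATE_HZ = 200                 # sampling rate reported in Section III
PERIOD_S = 1.0 / SAMPLE_RATE_HZ

def read_adc(channel: int) -> int:
    """Placeholder ADC read; substitute the real driver call here."""
    return 2048                      # dummy mid-scale value for a 12-bit ADC

def counts_to_volts(counts: int, vref: float = 3.3, bits: int = 12) -> float:
    # Map a raw ADC count onto the 0..vref voltage range.
    return counts * vref / (2 ** bits - 1)

def sample_ir_sensor(channel: int = 0, seconds: float = 1.0) -> list:
    """Collect `seconds` worth of IR-optic sensor voltages at 200 Hz."""
    samples = []
    next_t = time.monotonic()
    for _ in range(int(seconds * SAMPLE_RATE_HZ)):
        samples.append(counts_to_volts(read_adc(channel)))
        next_t += PERIOD_S
        time.sleep(max(0.0, next_t - time.monotonic()))
    return samples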

We attached the IR emitter and IR-optic sensor to the underside of the wrist, because this area contains the finger flexor tendons, which react sensitively to finger movements. The sensor performed better on skin areas where the finger flexor tendons reside close to the surface. This characteristic is similar to the sEMG method; however, our method has the advantage that sensor placement on the skin does not require meticulous adjustment, because it works over a larger skin region than EMG does. System responses were greater when the user moved the hand or fingers than when the whole forearm moved.

Sample data generated for various hand gestures are shown in Fig. 2. Comparing these data with one of our previous works [5], which used a piezoelectric sensor, reveals an interesting difference. In the previous work, the piezoelectric sensor measured the minute sounds generated by pinching two fingers together, so the pinch gesture was recognized as a single command. In contrast, the IR-optic sensor can break the pinch gesture down into two commands, because it generates distinguishably different values for the holding phase and the releasing phase of the pinch. Considering that GUI users commonly select, move, and put down objects, the ability to distinguish holding from releasing during the pinching action can be greatly beneficial in most GUI environments.
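As an illustration of how this hold/release split maps onto generic button events, the following is a minimal sketch of a dispatcher that turns recognized gesture phases into press and release callbacks. The class, the state names, and the debounce-by-state idea are our own illustration, not part of the paper's system.

from enum import Enum, auto
from typing import Callable

class Phase(Enum):
    IDLE = auto()
    HELD = auto()

class VirtualButton:
    """Turns recognized 'hold'/'release' gesture phases into button events."""
    def __init__(self, on_press: Callable[[], None], on_release: Callable[[], None]):
        self.state = Phase.IDLE
        self.on_press = on_press
        self.on_release = on_release

    def feed(self, gesture: str) -> None:
        # Fire only on valid transitions, so repeated detections are ignored.
        if gesture == "hold" and self.state is Phase.IDLE:
            self.state = Phase.HELD
            self.on_press()
        elif gesture == "release" and self.state is Phase.HELD:
            self.state = Phase.IDLE
            self.on_release()

# Usage: wire the button to select/put an object in a surface-less UI.
btn = VirtualButton(lambda: print("press"), lambda: print("release"))
btn.feed("hold")     # -> press
btn.feed("hold")     # ignored (already held)
btn.feed("release")  # -> release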

III. HAND GESTURE RECOGNITION

A. Preprocessing

At this stage, we analyzed the time it took to make hand gestures and performed a feature analysis on the resulting raw data. Through a user test, the time taken to perform each hand gesture was collected. The initial objective was to evaluate two gestures (holding and releasing), but we later defined and included three additional gestures (waving the fingers, Hi, and Bye). A total of five hand commands were measured in terms of completion time. Most gestures produced their meaningful data within the first 400 ms of performing the gesture. Depending on the orientation of the hand, the IR-optic sensor tended to produce slightly different saturation values.

B. Data Acquisition and Feature Selection

Because the saturation voltage changed with hand orientation, we started collecting sensor data only after a meaningful data transition triggered the acquisition. The sensor data were sampled at 200 Hz. To make the system independent of hand orientation, we used the voltage differences with respect to the voltage observed at the triggering point as our first feature element. The gradient per unit time was the second feature element used for recognition.

As the graphs in Fig. 2 show, the amounts of IR radiation were distinct and showed different characteristics for each of the five hand gestures. Overall, gestures that use two fingers, such as Hold and Release, generated signals with a single bulging or pitting region, while gestures that move all five fingers (Bye, Hi, and Wave) produced frequent changes in the voltage level. Both the Wave and Bye gestures generated an oscillating signal; however, they could be distinguished by looking at the peak regions of the signal: one had flat peaks while the other did not. The Hi gesture generated a distributed signal lying in the low voltage range. We also found that the more force the user applied to perform a gesture, the higher the voltage observed through the sensor.
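A minimal sketch of this preprocessing follows, under two assumptions not stated in the paper: that a "meaningful transition" is a sample-to-sample voltage jump above a fixed threshold (the 0.05 V value here is illustrative), and that one gesture window is the first 400 ms after the trigger (80 samples at 200 Hz).

import numpy as np

SAMPLE_RATE_HZ = 200                         # sampling rate from the paper
WINDOW_SAMPLES = int(0.4 * SAMPLE_RATE_HZ)   # first 400 ms -> 80 samples
TRIGGER_DELTA_V = 0.05                       # assumed trigger threshold (volts)

def find_trigger(signal: np.ndarray):
    """Index of the first sample-to-sample jump above the threshold, or None."""
    if len(signal) < 2:
        return None
    jumps = np.abs(np.diff(signal))
    idx = int(np.argmax(jumps > TRIGGER_DELTA_V))
    return idx if jumps[idx] > TRIGGER_DELTA_V else None

def extract_features(signal: np.ndarray):
    """Per-sample features over the 400 ms window after the trigger:
    (1) voltage offset from the trigger-point voltage (orientation-independent),
    (2) gradient per unit time (volts per second)."""
    start = find_trigger(signal)
    if start is None or start + WINDOW_SAMPLES > len(signal):
        return None
    window = signal[start:start + WINDOW_SAMPLES]
    offsets = window - window[0]
    grads = np.gradient(window) * SAMPLE_RATE_HZ
    return np.column_stack([offsets, grads])   # shape (80, 2)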

C. Evaluation

A Hidden Markov Model [6] was used to recognize the five hand gestures Bye, Hi, Hold, Release, and Wave. Recognition of the Hold, Release, and Bye gestures was satisfying, while Hi and Wave showed lower recognition rates than the others (Table 1).

TABLE 1. Recognition Result of the 5 commands
(rows: performed gesture; columns: recognized gesture)

          Hold  Release  Bye  Wave  Hi   % Correct
Hold       18      0      0    1     0     94.7
Release     0     13      0    0     0    100
Bye         0      0     15    0     1     93
Wave        0      2      1   11     0     78.6
Hi          0      0      0    4    14     77.8

Overall correctness: 88.82%
Fig. 2. Sample waveforms for the 5 hand gestures HOLD, RELEASE, BYE, HI, and WAVE. The sensor output ranged from −1.4 V to −0.2 V.
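The paper does not specify its HMM topology or toolkit. As one plausible sketch, the snippet below trains one Gaussian-emission HMM per gesture on the two-column feature sequences from the previous sketch and labels a new sequence by maximum log-likelihood; the hmmlearn package and the choice of four hidden states are assumptions, not details from the paper.

import numpy as np
from hmmlearn import hmm   # third-party package; an assumption, not the authors' toolkit

GESTURES = ["hold", "release", "bye", "wave", "hi"]

def train_models(train_data):
    """Fit one Gaussian HMM per gesture.
    train_data maps gesture name -> list of (n_samples, 2) feature arrays."""
    models = {}
    for name in GESTURES:
        seqs = train_data[name]
        X = np.concatenate(seqs)              # stack all training sequences
        lengths = [len(s) for s in seqs]      # per-sequence boundaries
        model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[name] = model
    return models

def classify(models, seq):
    """Return the gesture whose HMM assigns the highest log-likelihood."""
    return max(GESTURES, key=lambda name: models[name].score(seq))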

IV. CONCLUSION

We proposed a virtual button that uses a small IR-optic sensor. Various hand and finger gestures were performed, and the different patterns produced by the wrist's flexor tendon movements were measured. We used these signals to recognize hand gestures. Our experiment showed that the virtual button can be a robust solution for recognizing various hand gestures. Users benefit most from the virtual button's ability to distinguish the motions of holding and releasing, which makes for a more natural and easier UI for manipulating an object in open space, e.g., selecting, dragging, moving, and putting in a surface-less multi-touch environment.

In future research, we will measure the amount of force applied during hand gestures that is not expressed through the shape of the hand. We expect this capability to offer important information to applications such as games. We will also apply the virtual button to a surface-less multi-touch system to conduct usability tests, and further usability tests will target the control of common mobile devices such as MP3 players, PDAs, and cell phones.

V. REFERENCES
[1] V. I. Pavlovic, R. Sharma, and T. S. Huang, "Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 677-695, July 1997.
[2] J. Lementec and P. Bajcsy, "Recognition of arm gestures using multiple orientation sensors: gesture classification," Proc. of the 7th International IEEE Conference on Intelligent Transportation Systems, pp. 965-970, 2004.
[3] S. Saponas, D. Tan, D. Morris, and R. Balakrishnan, "Demonstrating the feasibility of using forearm electromyography for muscle-computer interfaces," Proc. of the 26th Annual SIGCHI Conference on Human Factors in Computing Systems, 2008.
[4] B. Amento, W. Hill, and L. Terveen, "The Sound of One Hand: A Wrist-mounted Bio-acoustic Fingertip Gesture Interface," CHI 2002.
[5] I. Cho et al., "A Distributed Wearable System Based on Multimodal Fusion," Springer Berlin, vol. 4523, pp. 369-378.
[6] L. R. Rabiner, "A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition," Proc. IEEE, vol. 77, pp. 257-286, Feb. 1989.
[7] H. Asada, P. Shaltis, A. Rhee, and R. Hutchinson, "Mobile Monitoring with Wearable Photoplethysmographic Biosensors," IEEE Engineering in Medicine and Biology Magazine, vol. 22, no. 3, pp. 28-40, 2003.

