
Skinput: The Human Arm Touch Screen

Manisha Nair, S8 CS, Mohandas College of Engineering and Technology

Abstract
Devices with significant computational power and capabilities can now be easily carried on our bodies. However, their small size typically leads to limited interaction space (e.g., diminutive screens, buttons, and jog wheels) and consequently diminishes their usability and functionality. Since we cannot simply make buttons and screens larger without losing the primary benefit of small size, we consider alternative approaches that enhance interactions with small mobile systems. One option is to opportunistically appropriate surface area from the environment for interactive purposes. For example, Scratch Input (described later in this paper) allows a small mobile device to turn the table on which it rests into a gestural finger input canvas. However, tables are not always present, and in a mobile context, users are unlikely to want to carry appropriated surfaces with them (at this point, one might as well just have a larger device). There is, however, one surface that has been previously overlooked as an input canvas, and one that happens to always travel with us: our skin. Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception, our sense of how our body is configured in three-dimensional space, allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic and provide such a large interaction area. Skinput, a technology that appropriates the human body for acoustic transmission, allows the skin to be used as an input surface. In particular, we resolve the location of finger taps on the arm and hand by analyzing mechanical vibrations that propagate through the body. We collect these signals using a novel array of sensors worn as an armband. This approach provides an always-available, naturally portable, on-body finger input system. We assess the capabilities, accuracy, and limitations of our technique through a two-part, twenty-participant user study.

Introduction
Touch screens may be popular both in science fiction and real life as the symbol of next-gen technology, but an innovation called Skinput suggests the true interface of the future might be us. This technology was developed by Chris Harrison, a third-year Ph.D. student in Carnegie Mellon University's Human-Computer Interaction Institute (HCII), along with Desney Tan and Dan Morris of Microsoft Research. A combination of simple bio-acoustic sensors and some sophisticated machine learning makes it possible for people to use their fingers or forearms, and potentially any part of their bodies, as touch pads to control smart phones or other mobile devices. Skinput turns your own body into a touch screen interface. It uses a different and novel technique: it listens to the vibrations in your body. It could help people take better advantage of the tremendous computing power and various capabilities now available in compact devices that can be easily worn or carried. The diminutive size that makes smart phones, MP3 players, and other devices so portable also severely limits the size, utility, and functionality of the keypads, touch screens, and jog wheels typically used to control them. Thus, we can use our own skin, the body's largest organ, as an input canvas, because it always travels with us and makes the ultimate interactive touch surface. Skinput is a revolutionary input technology that uses the skin as the tracking surface and input device, and it has the potential to change the way humans interact with electronic gadgets. It can be used to control several mobile devices, including a mobile phone and a
portable music player. The Skinput system listens to the sounds made by tapping on parts of the body and pairs those sounds with actions that drive tasks on a computer or cell phone. When coupled with a small projector, it can simulate a menu interface like the ones used in other kinds of electronics. Tapping on different areas of the arm and hand allows users to scroll through menus and select options. Skinput can also be used without a visual interface. For instance, with an MP3 player one doesn't need a visual menu to stop, pause, play, advance to the next track, or change the volume. Different areas on the arm and fingers stand in for common commands for these tasks, and a user can tap them without even needing to look. Skinput uses an array of sensors to track where a user taps on his or her arm. The system is simple and remarkably accurate.

Primary Goals
Always-Available Input: The primary goal of Skinput is to provide an always-available mobile input system, that is, an input system that does not require a user to carry or pick up a device. A number of alternative approaches have been proposed that operate in this space. Techniques based on computer vision are popular. These, however, are computationally expensive and error prone in mobile scenarios (where, e.g., non-input optical flow is prevalent). Speech input is a logical choice for always-available input, but it is limited in precision in unpredictable acoustic environments and suffers from privacy and scalability issues in shared environments. Other approaches have taken the form of wearable computing. This typically involves a physical input device built in a form considered to be part of one's clothing. For example, glove-based input systems allow users to retain most of their natural hand movements, but are cumbersome, uncomfortable, and disruptive to tactile sensation. Post and Orth present a smart fabric system that embeds sensors and conductors into fabric, but taking this approach to always-available input necessitates embedding technology in all clothing, which would be prohibitively complex and expensive. The Sixth Sense project proposes a mobile, always-available input/output capability by combining projected information with a color-marker-based vision tracking system. This approach is feasible, but suffers from serious occlusion and accuracy limitations. For example, determining whether a finger has tapped a button or is merely hovering above it is extraordinarily difficult.

Bio-Sensing: Skinput leverages the natural acoustic conduction properties of the human body to provide an input system, and is thus related to previous work in the use of biological signals for computer input. Signals traditionally used for diagnostic medicine, such as heart rate and skin resistance, have been appropriated for assessing a user's emotional state. These features are generally subconsciously driven and cannot be controlled with sufficient precision for direct input. Similarly, brain sensing technologies such as electroencephalography (EEG) and functional near-infrared spectroscopy (fNIR) have been used by HCI researchers to assess cognitive and emotional state; this work also primarily looked at involuntary signals. In contrast, brain signals have been harnessed as a direct input for use by paralyzed patients, but direct brain-computer interfaces (BCIs) still lack the bandwidth required for everyday computing tasks, and require levels of focus, training, and concentration that are incompatible with typical computer interaction. Researchers have also harnessed the electrical signals generated by muscle activation during normal hand movement through electromyography (EMG). At present, however, this approach typically requires expensive amplification systems and the application of conductive gel for effective signal acquisition, which would limit its acceptability for most users. The input technology most related to our own is that of Amento et al., who placed contact microphones on users' wrists to assess finger movement. However, this work was never formally evaluated and is constrained to finger motions of one hand. The Hambone system employs a similar setup. Moreover, both techniques required the placement of sensors near the area of interaction (e.g., the wrist), increasing the degree of invasiveness and visibility. Finally, bone conduction microphones and headphones, now common consumer technologies, represent an additional bio-sensing technology that is relevant to the present work. These leverage the fact that sound frequencies relevant to human speech propagate well through bone. Bone conduction microphones are typically worn near the ear, where they can sense vibrations propagating from the mouth and larynx during speech. Bone conduction headphones send sound through the bones of the skull and jaw directly to the inner ear, bypassing transmission through the air and outer ear, leaving an unobstructed path for environmental sounds. Acoustic Input: Our approach is also inspired by systems that leverage acoustic transmission through (non-body) input surfaces. Paradiso et al. measured the arrival time of a sound at multiple sensors to locate hand
taps on a glass window. Ishii et al. use a similar approach to localize a ball hitting a table, for computer augmentation of a real-world game. Both of these systems use acoustic time-of-flight for localization, which we explored, but found to be insufficiently robust on the human body, leading to the fingerprinting approach described in this paper.
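
For context, the time-of-flight idea those systems rely on can be sketched in a few lines: estimate the lag between two sensor channels from the peak of their cross-correlation and convert it to a distance using an assumed propagation speed. This is an illustrative sketch, not part of Skinput; the channel data, sample rate, and wave speed are placeholders.

```python
import numpy as np

def tap_offset_metres(ch_a, ch_b, sample_rate_hz, wave_speed_m_s):
    """Estimate how much closer a tap was to sensor A than to sensor B by
    finding the lag that best aligns the two channels (cross-correlation peak).
    All parameters here are illustrative placeholders."""
    corr = np.correlate(ch_a, ch_b, mode="full")
    lag_samples = np.argmax(corr) - (len(ch_b) - 1)
    return wave_speed_m_s * lag_samples / sample_rate_hz

# On rigid surfaces such as glass the propagation speed is stable enough for
# this to localize taps; on the arm, dispersion through skin, soft tissue, and
# bone makes the lag unreliable, motivating the fingerprinting approach instead.
```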

How Skinput Achieves The Goals


Skin: To expand the range of sensing modalities for always-available input systems, we introduce Skinput, a novel input technique that allows the skin to be used as a finger input surface. In our prototype system, we chose to focus on the arm (although the technique could be applied elsewhere). This is an attractive area to appropriate, as it provides considerable surface area for interaction, including a contiguous and flat area for projection. Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception (our sense of how our body is configured in three-dimensional space) allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic and provide such a large interaction area. Also, the forearm and hand contain a complex assemblage of bones that increases the acoustic distinctiveness of different locations. To capture this acoustic information, we developed a wearable armband that is non-invasive and easily removable. In this section, we discuss the mechanical phenomenon that enables Skinput, with a specific focus on the mechanical properties of the arm. Bio-Acoustics: When a finger taps the skin, several distinct forms of acoustic energy are produced. Some energy is radiated into the air as sound waves; this energy is not captured by the Skinput system. Among the acoustic energy transmitted through the arm, the most readily visible are transverse waves, created by the displacement of the skin from a finger impact. When filmed with a high-speed camera, these appear as ripples, which propagate outward from the point of contact. The amplitude of these ripples is correlated to both the tapping force and to the volume and compliance of soft tissues under the impact area. In general, tapping on soft regions of the arm creates
higher-amplitude transverse waves than tapping on bony areas (e.g., wrist, palm, fingers), which have negligible compliance. In addition to the energy that propagates on the surface of the arm, some energy is transmitted inward, toward the skeleton. These longitudinal (compressive) waves travel through the soft tissues of the arm, exciting the bones, which are much less deformable than the soft tissue but can respond to mechanical excitation by rotating and translating as a rigid body. This excitation vibrates the soft tissues surrounding the entire length of the bone, resulting in new longitudinal waves that propagate outward to the skin. We highlight these two separate forms of conduction (transverse waves moving directly along the arm surface, and longitudinal waves moving into and out of the bone through soft tissues) because these mechanisms carry energy at different frequencies and over different distances. Roughly speaking, higher frequencies propagate more readily through bone than through soft tissue, and bone conduction carries energy over larger distances than soft tissue conduction. While we do not explicitly model the specific mechanisms of conduction, or depend on these mechanisms for our analysis, we do believe the success of our technique depends on the complex acoustic patterns that result from mixtures of these modalities. Similarly, we also believe that joints play an important role in making tapped locations acoustically distinct. Bones are held together by ligaments, and joints often include additional biological structures such as fluid cavities. This makes joints behave as acoustic filters. In some cases, they may simply dampen acoustics; in other cases, they will selectively attenuate specific frequencies, creating location-specific acoustic signatures. Sensing: To capture the rich variety of acoustic information, we evaluated many sensing technologies, including bone conduction microphones, conventional microphones coupled with stethoscopes, piezo contact microphones, and accelerometers. However, these transducers were engineered for very different applications than measuring acoustics transmitted through the human body. As such, we found them to be lacking in several significant ways. Foremost, most mechanical sensors are engineered to provide relatively flat response curves over the range of frequencies relevant to our signal. This is a desirable property for most applications, where a faithful representation of an input signal, uncolored by the properties of the transducer, is desired. However, because only a specific set of frequencies is conducted through the arm in response to tap input, a flat response curve leads to the capture of irrelevant
frequencies and thus to a low signal-to-noise ratio. While bone conduction microphones might seem a suitable choice for Skinput, these devices are typically engineered for capturing human voice, and filter out energy below the range of human speech (whose lowest frequency is around 85 Hz). Thus most sensors in this category were not especially sensitive to lower-frequency signals, which we found in our empirical pilot studies to be vital in characterizing finger taps. To overcome these challenges, we moved away from a single sensing element with a flat response curve to an array of highly tuned vibration sensors. Specifically, we employ small, cantilevered piezo films (MiniSense 100, Measurement Specialties, Inc.). By adding small weights to the end of the cantilever, we are able to alter the resonant frequency, allowing the sensing element to be responsive to a unique, narrow, low-frequency band of the acoustic spectrum. Adding more mass lowers the range of excitation to which a sensor responds; we weighted each element such that it aligned with particular frequencies that pilot studies showed to be useful in characterizing bio-acoustic input. Additionally, the cantilevered sensors were naturally insensitive to forces parallel to the skin (e.g., shearing motions caused by stretching). Thus, the skin stretch induced by many routine movements (e.g., reaching for a doorknob) tends to be attenuated. However, the sensors are highly responsive to motion perpendicular to the plane of the skin, which is ideal for capturing transverse surface waves and longitudinal waves emanating from interior structures. Finally, our sensor design is relatively inexpensive and can be manufactured in a very small form factor (e.g., MEMS), rendering it suitable for inclusion in future mobile devices (e.g., an arm-mounted audio player).
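
To make the effect of the tip weights concrete, a weighted cantilever can be idealized as a spring-mass system with resonant frequency f0 = (1/2π)√(k/m), so added mass lowers f0. The sketch below uses placeholder stiffness and masses, not the MiniSense 100's actual parameters.

```python
import math

def resonant_frequency_hz(stiffness_n_per_m: float, tip_mass_kg: float) -> float:
    """Natural frequency of an idealized mass-loaded cantilever,
    modeled as a simple spring-mass system: f0 = sqrt(k/m) / (2*pi)."""
    return math.sqrt(stiffness_n_per_m / tip_mass_kg) / (2.0 * math.pi)

# Placeholder stiffness; heavier tip weights push the resonance lower,
# toward the low-frequency band that characterizes finger taps.
k = 50.0  # N/m
for mass_grams in (0.5, 1.0, 2.0, 4.0):
    f0 = resonant_frequency_hz(k, mass_grams / 1000.0)
    print(f"{mass_grams:.1f} g tip mass -> ~{f0:.0f} Hz resonance")
```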

Tiny Pico Projectors: A handheld projector (also known as a pocket projector, mobile projector, or pico projector) is an emerging technology that applies the use of a projector in a handheld device. It is a response to the emergence of compact portable devices such as mobile phones, personal digital assistants, and digital cameras, which have sufficient storage capacity to handle presentation materials but little space to accommodate an attached display screen. Handheld projectors involve miniaturized hardware and software that can project digital images onto any nearby viewing surface, such as a wall. The system comprises four main parts: the electronics, the laser light sources, the combiner optic, and the scanning mirrors. First, the electronics system turns the image into an electronic signal. Next, the electronic signals drive laser light sources with different colors and intensities down different paths. In the combiner optic, the different light paths are combined into one path, producing a palette of colors. Finally, the scanning mirrors sweep the combined beam across the surface, drawing the image pixel by pixel. This entire system is compacted into one very tiny chip. An important design characteristic of a handheld projector is the ability to project a clear image regardless of the physical characteristics of the viewing surface. An Acoustic Detector: An acoustic detector can detect the acoustic signals generated by such actions as flicking and convert them to electronic signals, enabling users to perform simple tasks such as browsing through a mobile phone menu, making calls, controlling portable music players, and so on. It recognizes skin taps at corresponding locations on the body based on variations in bone and soft tissue. It detects the ultra-low-frequency sounds using ten sensors. The sensors are cantilevered piezo films, each responsive to a particular frequency range, arranged as two arrays of five sensing elements each.

Technologies Used
The Skinput system is a marriage of two technologies: the ability to detect the ultra-low-frequency sound produced by tapping the skin with a finger, and the microchip-sized pico projectors now found in some cell phones. The system beams a keyboard or menu onto the user's forearm from a projector housed in an armband. An acoustic detector, also in the armband, then calculates which part of the display the user wants to activate. It turns your largest organ, the skin, into a workable input device. Tiny pico projectors display choices onto your forearm, and an acoustic detector in the armband detects the ultra-low-frequency sounds produced by tapping the skin with your finger. These sensors capture sound generated by such actions as flicking or tapping fingers together, or tapping the forearm.

Armband Prototype
Our final prototype, shown in the figures, features two arrays of five sensing elements, incorporated into an armband form factor. The decision to have two sensor packages was motivated by our focus on the arm for input. In particular, when placed on the upper arm (above the elbow), we hoped to collect acoustic information from the fleshy bicep area in addition to the firmer area on the underside of the arm, with better acoustic coupling to the Humerus, the main bone that runs from shoulder to elbow. When the sensor was placed below the elbow, on the forearm,
one package was located near the Radius, the bone that runs from the lateral side of the elbow to the thumb side of the wrist, and the other near the Ulna, which runs parallel to this on the medial side of the arm, closest to the body. Each location thus provided slightly different acoustic coverage and information, helpful in disambiguating input location. Based on pilot data collection, we selected a different set of resonant frequencies for each sensor package. We tuned the upper sensor package to be more sensitive to lower-frequency signals, as these were more prevalent in fleshier areas. Conversely, we tuned the lower sensor array to be sensitive to higher frequencies, in order to better capture signals transmitted through (denser) bones.
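
A minimal sketch of how such a two-package layout might be represented in software is shown below; the band assignments are hypothetical placeholders, not the frequencies actually chosen from our pilot data.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SensorElement:
    package: str         # "upper" (above the elbow) or "lower" (forearm)
    resonance_hz: float  # band the weighted cantilever is tuned to

# Hypothetical tuning: fleshier upper arm -> lower bands, bonier forearm -> higher bands.
ARMBAND = (
    [SensorElement("upper", hz) for hz in (25, 30, 38, 55, 78)]
    + [SensorElement("lower", hz) for hz in (80, 100, 130, 165, 200)]
)

assert len(ARMBAND) == 10  # two arrays of five sensing elements each
```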


Analysis
To evaluate the performance of our system, a trial involving 20 subjects was conducted. We selected three input groupings to test from the multitude of possible location combinations. Subjects were of different ages and sexes. From these three groupings, five different experimental conditions were derived: fingers (five locations), whole arm (five locations), and forearm (ten locations), with some groupings tested with the armband both above and below the elbow. Fingers (five locations): The participants were asked to tap on the tips of each of their five fingers. The fingers provide clearly discrete interaction points and exceptional finger-to-finger dexterity, and they are linearly ordered, which is potentially useful for interfaces like number entry, magnitude control (e.g., volume), and menu selection. At the same time, the fingers are among the most uniform appendages on the body, with all but the thumb sharing a similar skeletal and muscular structure. This drastically reduces acoustic variation and makes differentiating among them difficult. Additionally, acoustic information must cross as many as five finger and wrist joints to reach the forearm, which further dampens the signal. Despite these difficulties, the finger taps could be identified with 97% accuracy. Whole arm (five locations): The participants were asked to tap on five input locations on the forearm and hand: arm, wrist, palm, thumb, and middle finger. These locations were selected because they are distinct, named parts of the body, so they could be accurately tapped without training, and because they are acoustically distinct. Forearm (ten locations): The participants were asked to tap on ten different locations on the forearm. This relied on an input surface with a high degree of physical uniformity, offering a large, flat surface area with immediate accessibility, which also makes it an ideal projection surface for dynamic interfaces. Accuracy depended in part on the proximity of the sensors to the input; forearm taps could be identified with 96% accuracy when the sensors were attached below the elbow and 88% accuracy when the sensors were above the elbow. The system was thus able to classify the inputs with 88% accuracy overall, producing a unique acoustic signature for each tapped location that machine learning programs could learn to identify.
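
These accuracies can be read as ordinary held-out classification accuracy, computed per participant and then averaged per condition. The following is a hedged sketch of that bookkeeping; the linear SVM stands in for whatever classifier is used, and the data splits are assumed to be prepared elsewhere.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def participant_accuracy(train_X, train_y, test_X, test_y):
    """Train on one round of labelled taps, test on a held-out round."""
    clf = SVC(kernel="linear")  # stand-in classifier, not necessarily the one used
    clf.fit(train_X, train_y)
    return accuracy_score(test_y, clf.predict(test_X))

def condition_accuracy(per_participant_splits):
    """Average held-out accuracy across all participants for one condition
    (e.g., 'fingers', 'whole arm', or 'forearm')."""
    return float(np.mean([participant_accuracy(*split) for split in per_participant_splits]))
```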

How Skinput Works


Skinput is a technology that transforms the human body into a display and input surface that can interact with electronic gadgets. To provide this functionality, an armband prototype is used. The user wears an armband containing a very small pico projector that projects a menu or keypad onto the person's hand or forearm. The armband also contains an acoustic sensor array; tapping on different parts of the body produces unique sounds owing to each area's bone density, soft tissue, joints, and other factors. The sounds are not transmitted through the air, but as transverse waves through the skin and longitudinal (compressive) waves through the bones. The armband is connected to a computer via a large receiver to process the sounds. When the different sounds are analyzed by the computer, different wave patterns emerge. By analyzing 186 different features of the acoustic signals, including frequencies and amplitudes, a unique acoustic signature is created for each tap location. Controls are then assigned to each location. The projector projects the menu or keypad of the gadget to be controlled onto the person's hand. The user then taps on different parts of the body, and various acoustic signals are generated from the different locations. A bio-acoustic sensing array picks up the different signals and delivers them to the computer, where they are analyzed. The custom-built software listens to the different acoustic variations and determines which button the user has just tapped. Wireless Bluetooth technology then transmits the information to the device and controls it. So if you have tapped out a phone number, the wireless link sends the data to your phone to make the call. The system has achieved accuracies ranging from 81.5% to 96.8% and supports enough buttons to control many devices. It takes a minute or two to calibrate the system for each new user. That is done by pairing each of the six tasks that the prototype can currently handle (up, down, left, right, enter, and cancel) with a tap on the arm or hand. This explains the basic working of the present prototype of the Skinput system.
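
A minimal sketch of this pipeline, under stated assumptions, is given below: reduce a segmented tap on the ten sensor channels to a feature vector, classify the tap location with a per-user model, and forward the mapped command. The real system computes 186 features; the amplitude and FFT band energies here are an illustrative stand-in, the linear SVM is an assumption, and send_command is a hypothetical hook to the paired device.

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 10) array holding one segmented tap,
    one column per piezo channel."""
    feats = []
    for ch in window.T:
        spectrum = np.abs(np.fft.rfft(ch))
        feats.append(ch.max() - ch.min())                       # per-channel amplitude
        feats.append(float(spectrum.argmax()))                  # dominant frequency bin
        feats.extend(band.mean() for band in np.array_split(spectrum, 8))  # band energies
    return np.asarray(feats)

def train_classifier(tap_windows, locations):
    """Fit a per-user model from labelled calibration taps."""
    X = np.vstack([extract_features(w) for w in tap_windows])
    return SVC(kernel="linear").fit(X, locations)

def handle_tap(clf, window, send_command):
    """Classify one tap and forward the mapped command (e.g., over Bluetooth)."""
    location = clf.predict([extract_features(window)])[0]
    send_command(location)
```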

Additional Analysis
Walking and Jogging: As discussed previously, acoustically driven input techniques are often sensitive to environmental noise. In regard to bio-acoustic sensing, with sensors coupled to the body, noise created during other motions is particularly troublesome, and walking and jogging represent perhaps the most common types of whole-body motion. This experiment explored the accuracy of our system in these scenarios. Each participant trained and tested the system while walking and jogging on a treadmill. Three input locations were used to evaluate accuracy: arm, wrist, and palm. Participants provided only ten examples for each of the three tested input locations. Furthermore, the training examples were collected while participants were jogging. Thus, the resulting training data was not only highly variable but also sparse, neither of which is conducive to accurate machine learning classification. Single-Handed Gestures: In the experiments discussed thus far, we considered only bimanual gestures, where the sensor-free arm, and in particular its fingers, are used to provide input. However, there is a range of gestures that can be performed with just the fingers of one hand. We conducted three independent tests to explore one-handed gestures. The first had participants tap their index, middle, ring, and pinky fingers against their thumb; the second used finger flicks instead of taps. This motivated us to run a third, independent experiment that combined taps and flicks into a single gesture set. Participants re-trained the system and completed an independent testing round. Even with eight input classes in very close spatial proximity, the system was able to achieve a remarkable 87.3% accuracy. This result is comparable to the aforementioned ten-location forearm experiment (which achieved 81.5% accuracy), lending credence to the possibility of having ten or more functions on the hand alone. Furthermore, proprioception of our fingers on a single hand is quite accurate, suggesting a mechanism for high-accuracy, eyes-free input. Surface and Object Recognition: During piloting, it became apparent that our system had some ability to identify the type of material on which the user was operating. Using a similar setup
to the main experiment, we asked participants to tap their index finger against 1) a finger on their other hand, 2) a paper pad approximately 80 pages thick, and 3) an LCD screen. Results show that we can identify the contacted object with about 87.1% accuracy (SD = 8.3%, chance = 33%). This capability was never considered when designing the system, so superior acoustic features may exist. Even as accuracy stands now, there are several interesting applications that could take advantage of this functionality, including workstations or devices composed of different interactive surfaces, or recognition of different objects grasped in the environment. Identification of Finger Tap Type: Users can tap surfaces with their fingers in several distinct ways. For example, one can use the tip of the finger (potentially even the fingernail) or the pad (flat bottom) of the finger. The former tends to be quite bony, while the latter is more fleshy. It is also possible to use the knuckles (both major and minor metacarpophalangeal joints). We evaluated our approach's ability to distinguish these input types. A classifier trained on this data yielded an average accuracy of 89.5% during the testing period. This ability has several potential uses. Perhaps the most notable is the ability for interactive touch surfaces to distinguish different types of finger contacts (which are indistinguishable in, e.g., capacitive and vision-based systems). One example interaction could be that double-knocking on an item opens it, while a pad-tap activates an options menu. Segmenting Finger Input: A pragmatic concern regarding the appropriation of fingertips for input was that other routine tasks would generate false positives. For example, typing on a keyboard strikes the fingertips in a very similar manner to the fingertip input we proposed previously. Thus, we set out to explore whether finger-to-finger input sounded sufficiently distinct that other actions could be disregarded. As an initial assessment, we asked participants to tap their index finger 20 times with a finger on their other hand, and 20 times on the surface of a table in front of them. This data was used to train our classifier. This training phase was followed by a testing phase, which yielded a participant-wide average accuracy of 94.3%.
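
Before any tap can be classified, or rejected as an unintended contact such as keyboard typing, it must first be segmented out of the continuous sensor stream. One common way to do this is to watch a smoothed, channel-summed energy envelope for threshold crossings, sketched below; the threshold, window length, and smoothing factor are placeholders rather than the values used in the actual system.

```python
import numpy as np

def segment_taps(signal, threshold=0.15, window=300, alpha=0.05):
    """signal: (n_samples, n_channels) array; yields (start, end) sample
    indices of candidate taps based on an exponentially averaged energy
    envelope crossing a placeholder threshold."""
    envelope = 0.0
    i = 0
    while i < len(signal):
        # smoothed absolute energy summed across all channels
        envelope = (1 - alpha) * envelope + alpha * np.abs(signal[i]).sum()
        if envelope > threshold:
            yield i, min(i + window, len(signal))
            i += window          # skip past this tap before re-arming
            envelope = 0.0
        else:
            i += 1
```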

Example Interfaces And Interactions


We conceived and built several prototype interfaces that demonstrate our ability to appropriate the human
body, in this case the arm, and use it as an interactive surface. While the bio-acoustic input modality is not strictly tethered to a particular output modality, we believe the sensor form factors we explored could be readily coupled with visual output provided by an integrated pico projector. There are two nice properties of wearing such a projection device on the arm that permit us to sidestep many calibration issues. First, the arm is a relatively rigid structure; the projector, when attached appropriately, will naturally track with the arm. Second, since we have fine-grained control of the arm, making minute adjustments to align the projected image with the arm is trivial (e.g., projected horizontal stripes for alignment with the wrist and elbow). To illustrate the utility of coupling projection and finger input on the body (as researchers have proposed to do with projection and computer vision-based techniques), we developed three proof-of-concept projected interfaces built on top of our system's live input classification. In the first interface, we project a series of buttons onto the forearm, on which a user can finger tap to navigate a hierarchical menu. In the second interface, we project a scrolling menu, which a user can navigate by tapping at the top or bottom to scroll up or down one item, respectively; tapping on the selected item activates it. In the third interface, we project a numeric keypad on a user's palm and allow them to tap on the palm to, e.g., dial a phone number. To emphasize the output flexibility of our approach, we also coupled our bio-acoustic input to audio output. In this case, the user taps on preset locations on the forearm and hand to navigate and interact with an audio interface.
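
As a sketch of how the live classification output might drive such interfaces, a small dispatch table can map each recognized tap location to an action in the active interface. The location labels and menu bindings below are illustrative assumptions, not the prototype's actual configuration.

```python
# Hypothetical bindings for the projected scrolling-menu interface: the
# classifier reports a location label, and the table maps it to a menu action.
class ScrollingMenu:
    def __init__(self, items):
        self.items = items
        self.index = 0

    def scroll_up(self):
        self.index = max(0, self.index - 1)

    def scroll_down(self):
        self.index = min(len(self.items) - 1, self.index + 1)

    def activate(self):
        print("activated:", self.items[self.index])

menu = ScrollingMenu(["Play", "Pause", "Next track", "Volume"])
ACTIONS = {
    "forearm_top": menu.scroll_up,       # tap near the top of the projection
    "forearm_bottom": menu.scroll_down,  # tap near the bottom
    "forearm_middle": menu.activate,     # tap the currently selected item
}

def on_classified_tap(location_label):
    handler = ACTIONS.get(location_label)
    if handler:
        handler()
```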


History
Scratch Input: Scratch Input allows mobile devices to appropriate horizontal surfaces for gestural finger input. It works by placing a specialized microphone on the backside of a device; gravity provides just enough force to acoustically couple the device to whatever hard surface it's resting on. Lots of things happen on tables that we want to ignore; the system filters these out by listening exclusively to the frequency range human fingernails produce when running over a textured surface (wood, paint, linoleum, and many other materials, though not glass or marble, which are too smooth). Taps and flicks are easily detected as well. The sensor is very small, just a single microphone, and can be easily integrated into even the smallest devices. This means Scratch Input capability goes wherever the device goes; no infrastructure is necessary. It also requires no special or permanent augmentation of surfaces: you can set your phone down on a table at your local coffee house, and you've instantly got an ad hoc gestural finger-input surface. When you're done, simply pick up your phone and off you go. Minput: Minput was born from a desire to experiment with high-precision spatial tracking. Specifically, it incorporates optical tracking sensors in the back of a device, the same cheap, small, high-precision sensors used in optical mice. Two sensors capture not only up, down, left, and right motions, but also twisting gestures. This configuration lets a device track its own relative movement on surfaces, especially large ad hoc ones like tables, walls, and furniture, but also your palm or clothes if nothing else is around. Minput can be used in several ways. One is gestural: a user can grasp the device like a tool and gesture with it. Like brushstrokes on a canvas, these gestures can be big and bold and in general aren't limited by the device's diminutive form. This also keeps the user's fingers off the tiny display, eliminating interface occlusion, a problem in touch-screen interaction. The Minput technique can also be used as a peephole display. This effect is somewhat like reading a newspaper in a dark room with only a small flashlight for illumination. Although only a fraction of the entire canvas is visible at any given moment, the whole document is immediately accessible, similar to scrolling through a webpage on a smart phone. We augment this interaction by using twisting gestures to zoom, an analog motion to which twisting is well suited. Finally, Minput can transform a device's sensor data into a cursor, which could allow small devices to run very complex widget-driven interfaces. Much like a mouse, the control-device gain can be manipulated, enabling extremely precise, pixel-level accuracy. Minput provides low-cost, high-precision pointing for gadgets.
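
A minimal sketch of the tracking arithmetic such a two-sensor arrangement implies, assuming the sensors sit a known distance apart along the device: the average of the two displacement readings gives the translation, and their difference across the baseline gives the twist. The baseline value and units are placeholders.

```python
SENSOR_BASELINE_M = 0.04  # placeholder distance between the two optical sensors

def integrate_motion(dx_a, dy_a, dx_b, dy_b):
    """Combine displacement deltas (in metres) from two optical-mouse sensors
    A and B, mounted SENSOR_BASELINE_M apart along the device's x-axis.
    Small-angle kinematics: shared motion is translation, differential motion
    across the baseline is rotation (twist)."""
    tx = (dx_a + dx_b) / 2.0
    ty = (dy_a + dy_b) / 2.0
    twist_rad = (dy_b - dy_a) / SENSOR_BASELINE_M
    return tx, ty, twist_rad

# Example: equal deltas -> pure translation; opposite vertical deltas -> pure twist.
print(integrate_motion(0.002, 0.000, 0.002, 0.000))   # slide right, no twist
print(integrate_motion(0.000, -0.001, 0.000, 0.001))  # twist in place, no translation
```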

How Skinput Is Better


Skinput is a revolutionary input technology that uses the skin as the tracking surface and input device, and it has the potential to change the way humans interact with electronic gadgets. It can be used to control several mobile devices, including a mobile phone and a portable music player. It could help people take better advantage of the tremendous computing power now available in compact devices that can be easily worn or carried. The diminutive size that makes smart phones, MP3 players, and other devices so portable also severely limits the size, utility, and functionality of the keypads, touch screens,
and jog wheels typically used to control them. Skinput uses the largest organ of the human body as an input canvas, one that always travels with us and makes the ultimate interactive touch surface. Appropriating the human body as an input device is appealing not only because we have roughly two square meters of external surface area, but also because much of it is easily accessible by our hands (e.g., arms, upper legs, torso). Furthermore, proprioception (our sense of how our body is configured in three-dimensional space) allows us to accurately interact with our bodies in an eyes-free manner. For example, we can readily flick each of our fingers, touch the tip of our nose, and clap our hands together without visual assistance. Few external input devices can claim this accurate, eyes-free input characteristic and provide such a large interaction area. Thus, Skinput can also be used without a visual interface. Skinput doesn't require any markers to be worn, and it is more suitable for persons with sight impairments, since it is easy to operate even with your eyes closed. The system can even be used to pick up very subtle movements such as a pinch or a muscle twitch. With more testing, accuracy would likely improve as the machine learning programs receive more training under different conditions. The system analyzes 186 different features of the acoustic signals and can thus produce a unique acoustic signature for each location on the body. It works with good accuracy even when the body is in motion, and it takes only a minute or two to calibrate the system for each new user. In the future, the bulky prototype could be scaled down to a watch-sized band worn on the wrist.


Conclusion
In this paper, I have presented an approach to appropriating the human body as an input surface. A novel wearable bio-acoustic sensing array, built into an armband, is described that detects and localizes finger taps on the forearm and hand. Results from the experiments show that the system performs well for a series of gestures, even when the body is in motion. Its accuracy, though affected by age and sex, can be improved as the machine learning programs receive more training under such conditions. The system can even be used to pick up very subtle movements such as a pinch or a muscle twitch. Additionally, initial results demonstrate other potential uses of the approach, including single-handed gestures, taps with different parts of the finger, and differentiating between materials and objects. Several other approaches have been made in this field of technology, such as Sixth Sense. While Sixth Sense performs better in loud environments and offers more features, Skinput doesn't require any markers to be worn and is more suitable for persons with sight impairments, since it relies on proprioception and is easy to operate even with eyes closed. To conclude, several prototype applications have been described that demonstrate the rich design space that Skinput enables. The system shows what can be achieved with a bit of thought.

Future Enhancements
This technology is unique and simple, but its current prototype is enclosed in a bulky cuff. The future prospect is to miniaturize the sensor array, scale it down, and put it in a gadget that could be worn much like a wrist watch. In the future, your hand could be your iPhone and your handset could be watch-sized on your wrist. The miniaturization of the projectors would make Skinput a complete and portable system that could be hooked up to any compatible electronics no matter where the user goes. Besides being bulky, the prototype has a few other glitches that need to be worked out. For instance, over time the accuracy of interpreting where the user taps can degrade, because the system occasionally requires retraining. As more data is collected and the machine learning classifiers become more robust, this problem should diminish. The system would also be made more usable, and its functionality would be extended to control many more electronic gadgets more effectively and efficiently. Mr. Harrison said he envisages the device being used in three distinct ways. Firstly, the sensors could be coupled with Bluetooth to control a gadget, such as a mobile phone in a pocket or a music player strapped to the upper arm. Secondly, the sensors could work with a pico projector that uses the forearm or hand as a display surface, showing buttons, a hierarchical menu, a number pad, or a small screen. Finally, Skinput could even be used to play games such as Tetris by tapping on fingers to rotate blocks. Thus it has all the capability to become a commercial product some day.

References
Chris Harrison, Carnegie Mellon University
http://computingnow.computer.org
www.chrisharrison.net/projects/skinput/
research.microsoft.com/enus/um/.../cue/.../HarrisonSkinputCHI2010.pdf
www.physorg.com/news186681149.html
www.msnbc.msn.com/id/35708587/
www.inhabitat.com/.../microsofts-skinput-system-turns-skin-into-a-touchscreen
www.cmu.edu/homepage/computing/2010/winter/skinput.shtml
