
DPM29 7TH SENSE: THE VIRTUAL SENSE

Thom van Boheemen    Pepijn Fens

Tommie Kerstens    Attalan Mailvaganam

Coach: Michael Cruz Restrepo

Department of Industrial Design, University of Technology Eindhoven
Published on 10 June 2010
Project DPM29: 7th Sense
Project Proposer: Michael Cruz Restrepo
Published by:
Thom van Boheemen, B2.1, s081414
Pepijn Fens, B2.1, s080237
Tommie Kerstens, B2.1, s071992
Attalan Mailvaganam, B2.1, s081101
Coach: Michael Cruz Restrepo
Copyright 2010

01 Introduction
There is a lot of information in nature that we humans cannot perceive; we can only sense part of it. Animals also perceive information in nature, but different information than humans do. This project is about letting humans sense more of this invisible information. The goal of the project is to design a device/object that helps us sense, through movement, information that was previously not sensible for us.

The project started off with explorations: the important terms from the project description were researched. This was done to map the directions and possibilities within the 7th Sense project. The interesting directions were researched further, and with more direction the brainstorming phase was entered. The brainstorm sessions gave more direction, but this direction was still too broad. A different strategy, prototyping a device and finding a possible use for it in the project, helped us set a better direction. Performing user sessions with two different user groups helped us complete the process of finding a possible direction. This concluded in three concepts based on ways to feel virtual information in real-life situations. For these three concepts, the choice was made to develop a platform for experiencing virtual objects. This platform is able to track the hand in 3D space and lets users feel virtual objects through haptic feedback. Finally, a few concepts were generated as possible uses of this 3D platform system. The project process can be seen in appendix Part A.

02 Table of Contents

01 Introduction
03 Setting Directions
04 Concepts
05 Designing a Platform
05.1 Technology
05.2 Aesthetics
05.3 Scenario
05.4 Value of Project
05.5 Application for Platform
06 Future Development
07 Appendix
Part A: Process
Part B: Research
Part C: Co-Reflection
Part D: Code Software

03 Setting Directions
RESEARCH
The project started off by researching its important terms (phenomenology, ethology, bionics, biomimicry, the human senses, tools currently used to measure invisible information, movement as a communication tool, dynamics, mechanisms). After the possibilities in the project were known, more directed research was performed on the subjects: animal senses, invisible human senses, movement as a communication tool and micro expressions. The research report can be found in appendix Part B. This research gave our project more direction: the final concept should be based on showing people a new sense, not on teaching them a new sense.

BRAINSTORM SESSIONS
The brainstorm phase was meant to find more direction in our project. By using different brainstorm strategies, a clearer goal was set. We used two brainstorm techniques. The first was based on rephrasing the project description in order to find a possible goal for our project: the project description had to be described in one sentence, and each word in that sentence had to be replaced by a synonym. The second strategy was about combining the different senses with each other and generating ideas about the combinations. Both strategies gave us more direction: is it possible to let people feel what they hear, and to create possibilities to combine senses (synesthesia)? The results can be seen in the Co-reflection report at the end of iterations 1 and 2.

PROTOTYPING
The current direction was still too broad, so the decision was made to build a device that could transfer what a user feels to what he/she hears, and to find a use for it within the project. This strategy was used for two reasons. The first was that the project had become stuck, and this strategy could bring it back on track. The second was that the device would be used to find uses within the project; basically a tool to generate ideas, which would help to set a clear direction. The device that was built was a glove that could convey the distance between a user and an object: when the user comes closer to an object, the glove emits a higher-pitched tone, a way to hear what you feel. A distance sensor was used to measure the distance between object and person. This sensor sent its readings via the Arduino to a small speaker that produced the sound. The Arduino, the distance sensor and the speaker were all implemented in the glove.

USER SESSIONS
User sessions were performed with two different user groups: a blind user and average users. This was done to find possible uses of this distance device for different user groups. The co-reflection method was used for the user sessions; the sessions can be seen in the second and third iterations of the Co-reflection report in appendix Part C. The sessions resulted in possible uses within the project for the different user groups, which can be read in the next chapter. The conclusions of each user session can also be found in the Co-reflection report at the end of iterations 2 and 3.

04 Concepts
After discussing the results of the co-reflection sessions, the best possible use for the device seemed to be for blind people. This created the first concept, called Vision for Blind People. By abstracting the possibilities of the device it was possible to create more concepts in different contexts. By abstracting the possibility of feeling/shaping objects that exist in a visual world but not in the real world, the next concepts were created: Virtual Shaping and Virtual Desktop.

EXPLANATION OF THE CONCEPTS

Virtual Shaping
This concept makes it possible to shape virtual objects using your hands. The virtual objects are only visible in the virtual world, and can be shaped, touched and felt using the gloves. The gloves will be equipped with specially designed pressure areas that give the hand the sensation/feedback of touching a real object, while this object is only virtual. In this way a new sensation is created: a virtual shaping sensation.

Vision for Blind People
This concept will provide the visually impaired with an extra sense, by helping them to perceive more of the environment. With a glove they will feel the path they are walking, because of virtually placed tunnels that they can feel through the gloves. These tunnels will help them to walk outside without colliding with obstacles. The gloves give information by vibrating on the fingertips when the user comes close to the sides of the tunnel.

Virtual Desktop
This concept is a symbiosis of the Virtual Shaping concept and a work desktop. The user will have a virtual desktop and be able to sense the actions that he or she performs on it. Objects on the virtual desktop give feedback like a real-life object would, using the same gloves as in the Virtual Shaping concept. In addition we make use of 3D visual technology to get depth in the working area; we will bring all applications on the PC to life.

05 Designing a Platform
The concepts received a lot of positive feedback during the midterm exhibition. About which concept was the best, the opinions differed widely; each concept was liked for a certain reason, often depending on the background of the visitor. For example, a coach of Wearable Senses liked the Vision for Blind People concept, because we used a glove (a wearable device). In this way the exhibition did not help us choose a final concept. However, almost every visitor was interested in the thought behind the concepts, which was the same for all three: feeling virtual information in real life. The visitors were especially interested in how to receive haptic feedback when touching a virtual object.

After discussing the results of the exhibition within the group, the decision was made to explore ways to touch virtual objects. The first reason for this decision was that we could not choose a final concept; the opinions of the team members differed too much, and the feedback during the midterm exhibition did not help either. The second reason was that we were really interested in ways to feel virtual objects; the vision of all three concepts was based on this issue, and that was the point every team member liked.

The process of designing a platform started with exploring. We thought about ways to track the 3D position of the hand that would make it possible to feel virtual objects. We even built a small installation that could track the hand. The installation had three planes: one for x, one for y and one for z. By placing aluminum foil on each plane, we could calculate the 3D hand position. Unfortunately this set-up had a lot of negative points: it was not precise, it was too small, and if a metal object came close to the installation the device would not work anymore. So the decision was made not to go on with this approach. After some research another way to track 3D positions was explored: by using two Wii remotes and an infrared LED it was possible to track the 3D space. This technique did not have a lot of negative points, so the choice was made to go on with this technology.

At this moment the group was split. One part of the group focused on the technology of the platform; this can be read in the paragraph about technology. The other part focused on the aesthetic side of the platform: how to show real 3D objects, how to give haptic feedback to users and how to implement the technology in a simple glove. This part is described in the paragraph about aesthetics.

05.1 Technology
THE SOFTWARE
We have programmed special software that serves as a platform for our concept. The software is able to calculate where a point is in 3D space, and to calculate the distances from this point to our virtual objects. These virtual objects are all programmed in the software as well. When the 3D coordinates match the coordinates of a virtual object, the software sends this to the Arduino, and the Arduino microcontroller turns on the vibration motors.

TECHNICAL SPECIFICATIONS
The software is programmed in C#, because this programming language offers a good balance between ease of programming, available function libraries and performance. The only negative point is that C# software can only run on Windows PCs. For communication with the Wiimotes, use is made of the free WiimoteLib library (http://wiimotelib.codeplex.com/). This made communicating with the two Wiimotes a lot easier, so the focus could be more on the concept instead of the programming. Communication with the Arduino goes through a standard COM port: the software sends out values that correspond to vibration levels, and the Arduino puts 5V on the corresponding pins.

GUI AND VISUAL FEEDBACK
In order to let the user really experience the virtual objects, a visual part was also needed. The software is able to provide visual feedback as well. The reasoning behind this part is described further in the aesthetics section.

[Figure: system overview. The user wears the glove with the vibration motors and an IR LED; the two Wiimotes track the LED and report to the laptop, which drives the vibration motors through the Arduino.]
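As an illustration of this chain, the sketch below shows the PC side in C#: connecting the two Wiimotes through WiimoteLib and talking to the Arduino over a COM port. It is a minimal sketch, not the platform's actual code (which is in Appendix Part D); the port name, baud rate and one-byte vibration protocol are our illustrative assumptions.

// Minimal sketch of the PC side: connect two Wiimotes through WiimoteLib and
// talk to the Arduino over a COM port. Port name, baud rate and the one-byte
// vibration protocol are illustrative assumptions.
using System;
using System.IO.Ports;
using WiimoteLib;

class PlatformSketch
{
    static SerialPort arduino;

    static void Main()
    {
        arduino = new SerialPort("COM3", 9600);  // assumed port and baud rate
        arduino.Open();

        // Find and connect both Wiimotes (paired over Bluetooth beforehand).
        var wiimotes = new WiimoteCollection();
        wiimotes.FindAllWiimotes();
        foreach (Wiimote wm in wiimotes)
        {
            wm.Connect();
            wm.SetReportType(InputReport.IRAccel, true);  // enable IR reports
            wm.WiimoteChanged += OnWiimoteChanged;
        }

        Console.ReadLine();  // keep the program alive while events arrive
    }

    static void OnWiimoteChanged(object sender, WiimoteChangedEventArgs e)
    {
        // First IR slot: raw position of the brightest IR point (0..1023).
        IRSensor ir = e.WiimoteState.IRState.IRSensors[0];
        if (!ir.Found) return;

        // In the real platform the two camera views are first combined into a
        // 3D position; here we just send a dummy full-vibration value when a
        // point is seen.
        arduino.Write(new byte[] { 255 }, 0, 1);
    }
}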

THE CALCULATIONS
In order to make a working prototype we had to track an infrared light in 3D space. A Wii-mote can sense infrared light with its camera: it measures the position of the light source in the camera image in the X and Y direction. The problem is that a single Wii-mote cannot sense depth. Because depth is necessary for our device to work correctly, a second Wii-mote was added to the set-up. With two Wii-motes pointing to the front at the same angle, with a distance AB between them, we could derive formulas to calculate the X, Y and Z position of the infrared light.

The quantities involved are:
- 33° is the horizontal viewing angle of the Wii-mote camera.
- 1024 is the width of the Wii-mote camera resolution in pixels.
- EH is the distance in pixels between the infrared light and the middle of the Wii-mote A camera image, and EI is the same distance for Wii-mote B. Both are measured by the Wii-motes.
- AB is the distance between the Wii-motes; this has to be measured beforehand and can be entered in the program.
- θ is the angle of the Wii-motes relative to the line AB; this also has to be measured beforehand and can be entered in the program.
- α is the angle between the line of the Wii-motes and the infrared light as seen by Wii-mote A, and β is the same angle as seen by Wii-mote B.
- DE is the distance between the line of the Wii-motes and the infrared light; DE is therefore the Z coordinate.

A pixel offset from the middle of the image corresponds to an angular offset of 33°/1024 per pixel, so the angles can be calculated as follows (the sign of the offset term depends on which side of the image centre the light appears):

    α = θ ± (33°/1024) · EH
    β = θ ∓ (33°/1024) · EI

DE then follows from triangulation on the triangle formed by the two Wii-motes and the light:

    DE = Z = AB / (1/tan α + 1/tan β)

With two Wii-motes it is therefore possible to calculate the depth, but each Wii-mote also gives its own value for the X coordinate: DE/tan α measured from Wii-mote A, and AB − DE/tan β measured from Wii-mote B. The right value for the X coordinate is taken as the average of the two:

    X = ( DE/tan α + AB − DE/tan β ) / 2

Both Wii-motes are installed at the same height in the set-up, so they both give the same value for the Y coordinate, which follows directly from the vertical position of the light in the camera image. With these formulas it is possible to calculate the X, Y and Z coordinates of the infrared light, and therefore to track the light in 3D space.
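These calculations translate directly into code. The following C# sketch implements the formulas above; the method and variable names, the sign conventions for the pixel offsets and the degree-based inputs are our own illustrative choices.

// Sketch of the triangulation described above. Variable names follow the text
// (EH, EI, AB, theta); sign conventions are illustrative assumptions.
using System;

static class Triangulation
{
    const double ViewAngleDeg = 33.0;   // horizontal viewing angle of the camera
    const double WidthPx = 1024.0;      // horizontal camera resolution in pixels

    // ehPixels/eiPixels: offset of the IR point from the image centre of
    // Wiimote A and B, taken positive towards the other Wiimote. thetaDeg:
    // mounting angle of the cameras relative to the baseline AB, in degrees.
    // abMeters: distance AB.
    public static void Locate(double ehPixels, double eiPixels,
                              double thetaDeg, double abMeters,
                              out double x, out double z)
    {
        double degPerPixel = ViewAngleDeg / WidthPx;
        double alpha = ToRad(thetaDeg + ehPixels * degPerPixel); // angle at A
        double beta  = ToRad(thetaDeg + eiPixels * degPerPixel); // angle at B

        // DE = AB / (1/tan(alpha) + 1/tan(beta))
        z = abMeters / (1.0 / Math.Tan(alpha) + 1.0 / Math.Tan(beta));

        // Each Wiimote yields its own X estimate; take the average of the two.
        double xFromA = z / Math.Tan(alpha);
        double xFromB = abMeters - z / Math.Tan(beta);
        x = (xFromA + xFromB) / 2.0;
    }

    static double ToRad(double deg) { return Math.PI * deg / 180.0; }
}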

05.2 Aesthetics
This paragraph is about the aesthetic development of the platform. It describes the process of how we managed to create real 3D objects for the users, how the haptic feedback is created and how it all is implemented in a simple glove.

CREATING 3D IMAGES
After exploring on the web how to show real 3D images, three interesting ways were discovered: holograms, stereoscopic images and polarized images. Holograms were a really interesting way to show virtual 3D objects, because the objects look like real objects, but there were two problems: the technique was too expensive, and the technology was not developed enough. Basically, hologram technology was not feasible for the platform. The second technique had to do with stereoscopic images: by putting special 3D glasses on, the images come out of the screen. The only downside of this technology is that the colors of the pictures become ugly. That is why polarized images were explored: this technology also uses 3D glasses, but based on polarization. We tried to create these images, but it was impossible, because we needed special beamers, which we did not have. So we chose stereoscopic images as our final technique to show virtual objects.

In real life, when we see an object, each eye sees the object from a slightly different position. Stereoscopic image technology is based on this fact: the images consist of two different views of a certain object, and the glasses ensure that each eye receives a different view. The glasses have a red and a cyan filter, so by making one view red and the other cyan, each eye receives its own view of the object, which creates the possibility of seeing real 3D.

To simulate this situation, Autodesk 3D Studio Max was used. In this program we could place two different cameras on one object, which created different renders of the object. With the software StereoPhoto Maker we could merge the two images into one single image. A lot of tests were performed to create the perfect 3D image. The experiments were about bringing the object really out of the screen; this was done by putting the focus of the cameras in the objects. The other tests were about the distance between the two cameras. After some tests we found the ideal measurements and settings for a good 3D image.
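The red/cyan merge that StereoPhoto Maker performs for us is simple to sketch in code. The following is a minimal illustration, assuming two equally sized renders and the System.Drawing API (Windows-only, like the rest of our software):

// Minimal sketch of a red-cyan anaglyph merge, like the one StereoPhoto Maker
// performs: the red channel comes from the left-eye render, green and blue
// from the right-eye render. Assumes equally sized images.
using System.Drawing;

static class Anaglyph
{
    public static Bitmap Merge(Bitmap left, Bitmap right)
    {
        var result = new Bitmap(left.Width, left.Height);
        for (int y = 0; y < left.Height; y++)
        {
            for (int x = 0; x < left.Width; x++)
            {
                Color l = left.GetPixel(x, y);   // seen through the red filter
                Color r = right.GetPixel(x, y);  // seen through the cyan filter
                result.SetPixel(x, y, Color.FromArgb(l.R, r.G, r.B));
            }
        }
        return result;
    }
}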


HAPTIC FEEDBACK
The most realistic way to give feedback to users at this moment is by using vibration motors. We tested two types: the first has a smooth, gentle vibration, while the second has a rougher, harder vibration. After experimenting with both motors we could say the following: the soft motor suggests a smooth surface and the rough one a hard surface. With these motors it is possible to feel the relief of a surface.

Some drawings were made to show possibilities for implementing the motors in the glove. The initial idea was that five vibration motors would be placed inside the glove, right against the fingertips. Each vibration motor would give different feedback, connected to five different positions. Infrared LEDs would be placed on top of the middle finger, because this is the longest finger, so the system would easily see the LED. After discussing the drawing with the technical part of the project group, we saw that it was not possible to build it like this. The reason was that the software was too limited to create five different feedback signals for the individual fingers: it could only track one point on the hand, not five different points, so only one type of vibration could be generated at a time. It was therefore no use to put five vibration motors on the fingers with the same feedback; one motor for all five fingers works just as well.
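In code, this single-point feedback comes down to computing the distance from the tracked hand position to a virtual object and mapping it to one vibration value for the Arduino. A minimal sketch follows; the spherical object shape, the falloff margin and the linear scaling are illustrative assumptions, not the platform's exact values.

// Sketch: map the tracked hand position to one vibration intensity (0..255)
// for the Arduino. Sphere shape, falloff and scaling are assumptions.
using System;

struct Vec3 { public double X, Y, Z; }

static class Feedback
{
    // Returns 255 when the hand touches the virtual sphere, 0 when it is
    // further than 'falloff' metres away, and a linear ramp in between.
    public static byte VibrationFor(Vec3 hand, Vec3 center, double radius,
                                    double falloff)
    {
        double dx = hand.X - center.X;
        double dy = hand.Y - center.Y;
        double dz = hand.Z - center.Z;
        double gap = Math.Sqrt(dx * dx + dy * dy + dz * dz) - radius;

        if (gap <= 0) return 255;                    // touching or inside
        if (gap >= falloff) return 0;                // out of range
        return (byte)(255 * (1.0 - gap / falloff));  // ramp up near the surface
    }
}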


IMPLEMENTING IN A GLOVE
Because five motors on separate fingers had the same effect as one motor for five fingers, we thought about ways to divide the vibration equally over all fingers. When the motors were tested on a plastic plate, the vibrations were divided quite well over the fingers. The decision was made to only use four fingers (excluding the thumb), because it was hard to divide the vibration over all five. On this plate two different motors are placed (soft and hard), which makes it possible to feel differences in the relief of a surface. Design one describes the arrangement of the technology that will be put in the glove. Design two is the final, much more detailed design of the glove: it shows the implementation of the motors, how the Velcro connection is made and how the wires are hidden.


05.3 Scenario


05.4 Value of Project
This project is based on designing a platform that tracks the 3D hand position and sends haptic feedback to the hand when touching a virtual object. What we basically do is extend the human senses into the virtual world. Because of current technical developments, the virtual world is an important factor in our society, so it has become part of our lives. The gap between shaping an object in real life (shaping a wax model) and shaping an object in the virtual world (a 3D modelling program) is very big. Our platform makes this gap smaller, because the actions the user performs with our device are far more natural, and the feedback is also more natural: the senses are extended by receiving feedback when shaping/feeling an object. This is not the only project based on feeling virtual objects, but compared with other projects, our platform is a lot simpler and less expensive, and it uses tracking that works in mid-air. The expensive alternatives use big machines that still do not give a perfect feeling of virtual objects. Our platform is just at the beginning of its design; by adding more technology it can grow into a well-working platform.


05.5 Application for Platform
UNIVIBE
This application could serve educational purposes: users could experience the universe like never before. The texture of a planet can be experienced through the different vibration motors. For example, very rough terrain could be represented by a very hard vibration, while a soft, gentler vibration would represent a smoother surface. The planet's surface could be described in greater detail by using combinations of the different vibration motors. Some planets do not have a rocky surface but consist of gas; when such a planet is touched, less friction and less solidity is represented by a smooth vibration. In the future this application could be upgraded by adding more feedback components; for example, heating and cooling elements could give an experience of different temperatures.
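As an illustration, the surface-to-motor mapping could look like the simple lookup below; the surface categories and intensity values are assumptions for the sake of the example, not measured settings.

// Illustrative lookup from a planet's surface type to the two motors: the
// "hard" motor suggests rough, solid terrain, the "soft" motor smooth or
// gaseous surfaces. Categories and values are assumptions for this example.
enum Surface { RoughRock, SmoothRock, Ice, Gas }

static class UniVibe
{
    // Returns (hard, soft) motor intensities in the 0..255 range.
    public static (byte Hard, byte Soft) MotorsFor(Surface surface)
    {
        switch (surface)
        {
            case Surface.RoughRock:  return (230, 40);  // hard, gritty feel
            case Surface.SmoothRock: return (80, 160);  // solid but even
            case Surface.Ice:        return (20, 200);  // very smooth
            case Surface.Gas:        return (0, 60);    // faint, little solidity
            default:                 return (0, 0);
        }
    }
}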


06 Future Development
Because we have designed a platform, there is no single fixed guideline for our system: applications that work on our platform now will continue to work as improvements to the platform are implemented. These improvements can be the following:

SOFTWARE/HARDWARE
At this point, a combination of third-party hardware and a basic software program is used. This combination is optimal because the third-party hardware (Wiimotes) is powerful and easy to program software for. The software at the moment is a demonstration of our platform in a relatively simple application.

Improvements on the hardware side would include dedicated hardware for blob-tracking, higher resolution cameras and an improved IR filter for better IR point tracking. We would also improve the visual representation: we currently use red-blue glasses for simulating 3D images on a flat screen, but creating a 3D simulation through the use of polarized glasses would improve the quality of the visuals a lot. On top of this, the hardware for providing haptic feedback can be improved. We currently use different types of vibration motors to provide different sorts of haptic feedback, but in the future the system would have an advanced mechanism for providing very precise haptic feedback at specific places on the hand. This mechanism would work with very small pins, which can put pressure on the hand through regulated air pressure. This would allow us to simulate even more complicated forms through the platform.

Software improvements can include programming a real framework for our platform, allowing other developers to write their own applications for our system. Possibilities for future applications that use the platform are discussed below.


FUTURE APPLICATIONS

Digital Wardrobe
Online shopping has grown explosively over the last years. One negative point is the lack of testing or experiencing products up front; this is especially true for clothing. An application for our platform could be experiencing the different textiles a person can buy online. This application would need improvements in the vibration motors, for simulating cloth through vibration.

Extra-Sensory Entertainment
There is already some 4D entertainment, including fog and water effects during a 3D movie, but these technologies are very expensive and only available to large groups of people. Our platform can provide the same enhanced experience of a movie, but for one person: the user would be able to virtually feel what is happening on the screen with their hands, in sync with the 3D imagery.

Education
Experiencing virtual objects can also serve educational purposes. Early in their development, children can learn how to interact with different forms in 3D space, without the dangers of particular actions. For example, building a large wooden tower can be a fun activity, but the risk of getting hurt by the wooden blocks is present. Other educational purposes might include training simulations for the wide range of professions that involve physical movement in three dimensions. Practicing these movements with our platform (which provides visual and haptic feedback) can increase precision when carrying out these tasks.


Process_

07 Appendix


A Process
PROJECT PROCESS

Exploring words
Phenomenology, ethology, bionics, biomimicry, the human senses, tools currently used to measure invisible information, movement as a communication tool, dynamics, mechanisms. See the research report (appendix).

Exploring user group
To get more involved in the subject of our project, a small practical exercise was done. By blocking two senses (vision and audio) of a group member, he was asked to perform navigational tasks in space. In this way other senses had to take over the job of the blocked senses and were given more bandwidth, so the world was experienced in a different manner. The results were very interesting: all the group members experienced becoming very in touch with their other senses while at the same time feeling very vulnerable (even in a controlled environment with three other group members guaranteeing their safety). Especially the touch sense became dominant in the tasks we performed. It might also be interesting to engage the senses of taste and smell while audio and vision are blocked, but no tests were performed on this.

Exploring documentaries
To become more aware of the already existing senses beyond those possessed by humans, documentaries about animals and their senses were studied. These documentaries gave some very interesting insights into the boundaries and possibilities of the project. All the senses addressed in the documentaries were reported, discussed and later presented. Findings of the documentaries can be found in the appendix.

Movement workshop
A movement workshop was followed by all group members. In this workshop we were taught about the way the nerve system develops over a lifetime and how it functions.

Brainstorming
In an attempt to give more direction to the project, brainstorms were performed.

All four group members each had to come up with a different brainstorming technique. The brainstorming gave interesting results but did not give enough direction, so we started a discussion with the group and the coach.

Discussion
In the discussion we came to the conclusion that we wanted to make something that could make a user perceive something directly, instead of a product that makes it possible to teach someone an extra sense in the long term. This decision was made with regard to our individual goals: we wanted to make and test instead of research and speculate. It was also decided that we should push the design process forward by building something and exploring its capabilities within the project.

Envisioning and building
Starting off very basically, we wanted to make something that would enable a user to feel visual information by using the sense of touch. The first prototype used a distance sensor in combination with a small speaker, both implemented in a glove. The nearer something gets to the glove (distance sensor), the higher the tone of the sound becomes. After this first prototype we could easily replace the speaker with a vibrating component.

User testing
The glove was tested to find out whether our process was going in an interesting direction. We tested the prototype on students and on a blind user; the results of these tests can be found in this appendix.

Transition from physical objects to virtual objects
After the tests we came up with the idea to make it possible to sense virtual objects with the glove, instead of physical objects.

Exhibition preparations and concepts
At the exhibitions we wanted to show at least three concepts related to our idea. The three concepts were:


1: A glove which enables people to sense virtual portals on the street; this concept could assist blind people in navigating the street.
2: The same glove could be used to transform and create virtual objects. This could be compared to working with clay: the hands can be used to make virtual objects. This concept could also become a new way for artists to express themselves.
3: The third concept would use the same technology to make our desktop PC come to life. Virtual objects that normally rest on your desktop, or tools you use in applications, would come to life and could be used much more intuitively.

Feedback processing
The feedback we received at the exhibitions was handled with great care to optimize the results; every comment received from experts and fellow students was discussed and thought over.

Refocus and ideating for new directions (meeting with Cruz, who gave us the assignment to build a device)
It was not completely clear to us which of the concepts would be most suitable to continue developing. After a meeting with our coach it was decided that we had to push the design process forward, so we started building a prototype that would make it possible to feel virtual objects, so we could experiment with this.

Exploring ways for the prototype
A way had to be found to track 3D motion. After trying a few different approaches and exploring which technologies already exist, it was decided to use the Wii-motes as movement sensors.

The group was split up into two sections: one part would develop the platform with the Wii-motes (technical team), the other would develop the visuals and a way to display the virtual objects (aesthetics team). In the end we would bring the results of the two sections together to form one single prototype.


Research Report_


B Research
PHENOMENOLOGY
Phenomenology is a philosophical movement with the purpose of providing a firm basis for all human knowledge. Phenomenology is concerned with the world as we see it: instead of asking what things really are, it focuses on the world as we perceive it with our senses. An example is described in the book Being and Nothingness by Jean-Paul Sartre. He writes about meeting a friend at a cafe. However, the friend, named Pierre, never shows up. Sartre writes that he can feel the absence of Pierre in the cafe. Because of this, he says that non-Pierre (or the lack of Pierre) is something that exists, at least for him. It exists because he can feel it.

ETHOLOGY
Ethology is the scientific study of the behavior of animals.

BIONICS
The study of mechanical systems that function like living organisms or parts of living organisms.

BIOMIMICRY
Biomimicry is an emerging discipline that studies nature's best ideas and then imitates these designs and processes to solve human problems. Non-toxic adhesives inspired by geckos, energy-efficient buildings inspired by termite mounds, and resistance-free antibiotics inspired by red seaweed are examples of biomimicry happening today.

THE HUMAN SENSES
Sight or vision is the ability of the eye and brain to detect electromagnetic waves within the visible range (light), interpreting the image as sight.

Hearing or audition is the sense of sound perception. Since sound consists of vibrations propagating through a medium such as air, the detection of these vibrations is a mechanical sense: the vibrations are mechanically conducted from the eardrum through a series of tiny bones to hair-like fibers in the inner ear, which detect mechanical motion within a range of about 20 to 20,000 hertz, with substantial variation between individuals.

Taste or gustation is one of the two main chemical senses. There are at least four types of taste that buds (receptors) on the tongue detect, and hence there are anatomists who argue that these constitute five or more different senses, given that each receptor conveys information to a slightly different region of the brain.

Smell or olfaction is the other chemical sense. Unlike taste, there are hundreds of olfactory receptors, each binding to a particular molecular feature. Odor molecules possess a variety of features and thus excite specific receptors more or less strongly. This combination of excitatory signals from different receptors makes up what we perceive as the molecule's smell.

Touch, also called tactition or mechanoreception, is a perception resulting from activation of neural receptors, generally in the skin including hair follicles, but also in the tongue, throat, and mucosa. A variety of pressure receptors respond to variations in pressure (firm, brushing, sustained, etc.).

TOOLS CURRENTLY USED TO MEASURE INVISIBLE INFORMATION
The most common examples are:
- Geiger counter: used to measure radioactivity.
- Infrared sensor: used to see infrared light. Infrared can be used to see in the dark, to forecast the weather and to perceive heat by vision.
There are also examples that measure invisible man-made information. The most common of those are:
- Wifi adapter: used in phones and computers to send and receive information.
- Bluetooth adapter: used in phones and computers to send and receive information.
- Telephone antenna: used to send and receive telephone signals.


MOVEMENT AS A COMMUNICATION TOOL
Non-verbal communication (NVC): Clothing, hairstyles, architecture, symbols and infographics can all be understood as forms of non-verbal communication. When people speak you can also observe non-verbal communication; listen for example to their voice quality, also called paralanguage. Handwritten text can also carry NVC (this is not really possible in text written on a computer). Non-verbal communication is mediated by space, touch, facial expressions and eye contact: proxemics is the space between humans, haptics is human touch, and kinesics covers facial movement and eye contact.

Body language: People communicate 93% via their body and only 7% with their words (these numbers differ between studies; it can also be 60% to 40%). Body language can show someone's emotion. Within body language we can distinguish Physical Expression (waving, pointing, touching and slouching) and Kinesics (body movements and expression).

Body language of animals: Animals use more body language when they communicate with each other, e.g. dogs show their teeth when they threaten someone.

Dancing: Dancing is a way to communicate through movement.

Movements that influence our behavior: Trees that move from left to right show us that there is a strong wind. If you see a group of birds fly away from a tree far away, you know someone or something is close to that tree. If a group of people is running away from a place, you know that there is danger.

How such a movement is perceived, taking the moving tree as an example: first, your eye sees the tree moving. Second, the eye sends this information to a so-called comparator. Third, the comparator perceives that this is a movement. Fourth, this information goes to the sensory part of the brain. Up to this step you have perceived the movement of a tree. The final step is recognizing why the tree is moving: because of past experience (your memory) you know that trees move like this because of a strong wind; you learned this in the past.

Learning a new sense: A research test was done by the Institute of Cognitive Science to find out whether it is possible to train a new modality (vision, audio, touch, etc.). They did not manage to create a fully new modality, but they succeeded in creating a meta-modality, by giving test subjects skill-based training. This is proof that humans can learn new perceptual experiences.

Simple explanation of the test: a belt that could detect magnetic north was put on the subject; the belt vibrated on the side where magnetic north was. After the skill-based training sessions (6 weeks), the subjects had become better at navigating without using the belt. But this did not mean that the subjects were able to sense a magnetic field.


DYNAMICS
The study of forces (why objects move). Forces have the ability to cause acceleration: if an object is accelerating, the net force on it is not zero. To find the net force, take the vector sum of all component forces. To find the acceleration, use the equation Fnet = m·a. If forces act at angles, trigonometry is needed to resolve them into components: Fx = F·cos(θ) and Fy = F·sin(θ). If forces act along the axes, no trigonometry is needed.
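A short worked example of these formulas (our own illustration): a single force of 10 N acting at an angle of 30° above the x-axis has components Fx = 10·cos(30°) ≈ 8.7 N and Fy = 10·sin(30°) = 5 N; acting on a 2 kg object, it produces an acceleration a = Fnet/m = 10/2 = 5 m/s² in the direction of the force.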

MECHANISMS
The branch of physics concerned with the behavior of physical bodies when subjected to forces or displacements, and the subsequent effect of the bodies on their environment.

SYNAESTHESIA
A condition in which one type of stimulation evokes the sensation of another, as when the hearing of a sound produces the visualization of a color.

PERCEIVING MICRO-EXPRESSIONS
Micro-expressions are too fast for the conscious mind (up to 1/30 s!). They reach our awareness via a gut feeling of how the other person feels, processed in the unconscious part of the brain.

Training micro-expression recognition: a series of images of faces with a short expression is shown, and the expressions shown by the faces are rated; after this the same series appears again in stop frame, so the brain can re-engineer itself. In the case of perceiving micro-expressions, recognition is highly trainable: one hour of training can raise accuracy from 40% to 90%. This points out that the neural circuits in the unconscious mind are eager to learn, provided they are trained in the right way.

PERCEIVING NEW SENSES OR EXTENDING EXISTING ONES
If we can find that in the human body more information reaches the unconscious mind than we consciously perceive, we can maybe find a way to train our brains to make us aware of this information, by transmitting it from the unconscious mind to the conscious mind.

[Figure: examples for Sense-X. New senses (e.g. magnetoreception) and broadened senses (e.g. seeing infrared), like micro-expressions, first reach the unconscious mind; the goal is to find a suitable way of training the transmission from the unconscious mind to the conscious mind, so that we become aware of them.]


SPECIAL ANIMAL SENSES
Magnetoreception: Birds and fish use magnetoreception to navigate. Birds are able to see north through magnetic cells in the retina; fish are able to sense magnetic landmarks underwater.

Electroreception: Sharks use electroreception to sense the electrical currents emitted by life forms. Ants also use this technique, which is also the reason why they are attracted to electrical devices.

UV perception: Insects see UV light that reflects off flowers. Urine reflects UV light as well; therefore territorial borders marked by urine are visible to animals that can perceive UV light.

REFERENCES
Sartre, Jean-Paul (1969). Being and Nothingness: An Essay on Phenomenological Ontology. Routledge.
Perez, M. (2006). Human development. Retrieved June 6, 2006, from http://www.health.org/
Lester, E. (1997). What is Phenomenology? Retrieved March 2010, from http://www.phenomenologycenter.org/phenom.htm
Biomimicry Institute. What is Biomimicry? Retrieved March 2010, from http://www.biomimicryinstitute.org/about-us/what-is-biomimicry.html
Bellarmine University Department of Biology. Introduction to Ethology (The Zoological Study of Animal Behavior). Retrieved March 2010, from http://cas.bellarmine.edu/tietjen/Ethology/introduction_to_ethology.htm
Nicole, D. (2010). The basics about Bionics. Retrieved March 2010, from http://www.brighthub.com/science/medical/articles/10893.aspx
Zamora, A. (2004). Anatomy and Structure of Human Sense Organs. Retrieved March 2010, from http://www.scientificpsychic.com/workbook/chapter2.htm
The National Oceanic and Atmospheric Administration. NOAA GOES Eastern US Sector Infrared Image. Retrieved March 2010, from http://www.goes.noaa.gov/ECIR4.html
American Technologies Network Corp. How Night Vision Works. Retrieved March 2010, from http://www.atncorp.com/HowNightVisionWorks
Merchant, J. Infrared Temperature Measurement Theory and Application. Retrieved March 2010, from http://www.omega.com/techref/iredtempmeasur.html
Nonverbal communication. (2010). In Wikipedia, The Free Encyclopedia. Retrieved March 2010, from http://en.wikipedia.org/w/index.php?title=Nonverbal_communication&direction=prev&oldid=350108164
Body language. (2010). In Wikipedia, The Free Encyclopedia. Retrieved March 2010, from http://en.wikipedia.org/w/index.php?title=Body_language&direction=prev&oldid=351198068
Kurtus, R. (2006). Communication Among Animals. Retrieved March 2010, from http://www.school-for-champions.com/communication/among_animals.htm
Peick, M. (2005). Dance as Communication: Messages Sent and Received Through Dance. UW-L Journal of Undergraduate Research VIII.
Bairstow, P.J.; Laszlo, J.I. (1978). Perception of movement patterns: Recognition from visual arrays of distorted patterns. The Quarterly Journal of Experimental Psychology.
Nagel, S.K. et al. (2005). Beyond sensory substitution - learning the sixth sense. Journal of Neural Engineering.
Dynamics (mechanics). (2010). In Wikipedia, The Free Encyclopedia. Retrieved March 2010, from http://en.wikipedia.org/w/index.php?title=Dynamics_(mechanics)&direction=prev&oldid=352054408
Sabbatini, R.M.E. (2003). What is synaesthesia? Retrieved March 2010, from http://www.cerebromente.org.br/n17/mente/synaesthesia.htm
Goleman, D.P. (2007). Human Intelligence. Cornerstone.
Supernatural, DVD, BBC1, UK, 1999.
Supersense, DVD, BBC1, UK, 1988.


Co-Reflection Report_


Table of Contents

Introduction
Iteration I
 Strategies
 Strategies combined
 Pilot tests
 Final Strategy for the User Involvement Session
 User Involvement Sessions
 Conclusions of the User Involvement Tests
Iteration II
 Final Strategy for the User Involvement Session
 User Involvement Sessions
 Conclusions & Implications
Iteration III
 Final Strategy for the User Involvement Session
 User Involvement Session
 Conclusions & Implications
Reflections
 Iteration 1
 Iteration 2
 Iteration 3

Introduction
Project Description
We as a group have to design a product that can sense invisible information: information that we cannot sense ourselves, such as UV light or radioactivity. The product will also show the level of this invisible information through a movement.

Vision
Iteration 1
In this iteration we had not really set up a vision for our project yet. This is why we came up with a vision just for this assignment: we have to make a product that senses UV light and makes a movement that shows the UV level. This will prevent people from getting burned by the sun and will also help prevent skin cancer.

Iteration 2

In this iteration we had chosen more direction in our project, and we formulated a vision for this direction:

Help blind people by giving them an extra sense. The users will be warned when they come close to an object, so that collisions become impossible. However, we had a lot of problems finding visually impaired people in time before the final presentation, so we changed the vision of our project:

We found technology that can be useful for improving the human senses. We still want to discover in what kind of situations users find it useful. We have also found a way to use it in our society; what do the users think about that?

Iteration 3

We could not show a user involvement session with a blind user in our final presentation, but we have included this session in this final report, under the vision we already showed you.

Goal for this assignment


This assignment is done by two persons, while others do this assignment on their own; we are doing this because we are in the same project group. In the first iteration we decided to each make a different strategy and present it. In the second week we would combine the different strategies, taking into account the feedback of our assignor. We would both perform a pilot test on a different person and reflect on it. After this we would do a session with 8 people (because we are with two people). In the second iteration we come up with the strategies for the session with the visually impaired user and for the sessions with the non-handicapped users, and we perform the sessions with the non-handicapped users. In the third iteration we perform the session with the visually impaired user.

Strategies
The aim of the strategies is to find out what kinds of solutions are preferred most by the users for this vision.


Strategy Attalan Mailvaganam:


Strategies combined
After our presentation we decided to combine both our strategies. The main feedback of our assignor was that we had to build the phases exploration, ideation and confrontation on each other a lot better.

Exploration:


In this phase we ask the users questions to understand their basic understanding of UV light. The questions are:
1) Do you sunbathe? Why?
2) Have you been burned during a sunbathing session?
3) Do you try to prevent getting sunburned, and how do you do that?
4) What do you do when you get sunburned?
5) Do you know the risks of too much UV light?
6) Why do you still sunbathe even though you are aware of the risks?


Ideation:
In this phase we ask the user to imagine that he/she is on the beach sunbathing. To find out how they can be persuaded to perform an action, we ask them to think about situations and how they would act on them.


Situations:
1) Place drinks nearby, just within reach, but movement is required to get them.
2) Place food nearby, just within reach, but movement is required to get it.
3) Place drinks nearby, but out of reach.
4) Place food nearby, but out of reach.
5) Perform a kind of touch interaction (pinch, caress, vibration etc.) that the user has to move to make stop (within and out of reach).
6) Play a loud noise, which has to be shut off.

Confrontation:


In this phase we present the users our solution: a belt that vibrates when the UV level is too high. The level of this vibration can be changed from a soft to a hard vibration. We ask the users for their insights, ideas and feedback about the concept.


Pilot tests
Thom van Boheemen's pilot test

Exploration: answers to the questions:
1. If it's good weather; to relax, read a book.
2. Yes.
3. If it's very hot: a bit of sunscreen, staying in the sun less long.
4. Just wait till it's over, stay out of the sun.
5. Yes: skin cancer, rash, allergies etc.
6. It's safe if you don't overdo it.


Ideation: answers to the situations:
1. He would get the drinks.
2. He would get the food; rather drink than food.
3. He would also get the drinks.
4. He would also get the food.
5. Caress: move if it keeps going on. Vibration: depends on position and place. Pinch: move to make it stop.
6. Try to ignore the sound.


Confrontation
If the device gives a soft vibration as a signal that there is a high level of UV, he would maybe ignore the signal and lie a bit longer. When the device gives a harder vibration (pinching), he thinks he would be forced to leave, because otherwise it won't stop vibrating.


Pilot tests
Attalan Mailvaganam's pilot test


Exploration: answers to the questions:
1. Certainly; sunbathing gives me a nice feeling.
2. I have burned a lot; it has to do with my skin. But I think that I can deal with the sun.
3. I protect myself by putting sun cream on, but also by wearing a t-shirt or putting a cap on.
4. I will put aftersun on, but I noticed that I always only realize afterwards that I have burned myself.
5. You can get skin cancer from it, and aging of your skin.
6. I think that sunbathing can also be something good, because it is healthy thanks to vitamin D. I sometimes think about skin cancer, but this will not keep me from sunbathing.


Ideation: answers to the situations:
1. When I'm thirsty I will stand up and get my drink.
2. The same goes for this one: when I'm hungry I will get food.
3. If it's within my reach I would find it nice, because I don't really have to stand up to get drinks.
4. Again the same goes for this one; it is easier when I don't have to stand up.
5. I don't really like the hard pinches, but I like the soft vibrations.
6. When it's irritating me, I will stand up and turn it off.

Confrontation
He likes our vision of trying to make a device that helps people become aware of the UV level. He will only use the belt if it gives a soft vibration; when it gives a hard vibration (pinch) he won't use it. He thinks that because he stands up easily, a soft vibration is enough. He doesn't like the idea that the device is a belt; he would prefer a bracelet, because it is a lot smaller. A belt is maybe too noticeable.


Pilot tests
Conclusion of both pilot tests


Attalan's test person was aware of the risks of UV light, but he still sunbathed. This person has no problem with standing up and moving away from the sunlight; you could see this in the confrontation part of the test, where he could easily be persuaded to move out of the sun. The other test person was also aware of the risks of UV light, but had problems with standing up. It was more difficult to persuade him; you have to force him more.

Reflection on the pilot tests
The tests went quite well; the results of the exploration and ideation phases were good. The only problems are in the ideation phase: it has to be more concrete and has to build more upon the exploration phase.


Final Strategy for the User Involvement Session

Exploration:


In this phase we ask the users questions to understand their basic understanding of UV light. The questions are:
1) Do you sunbathe? Why?
2) Have you been burned during a sunbathing session?
3) Do you try to prevent getting sunburned, and how do you do that?
4) What do you do when you are sunbathing?
5) Do you know the risks of too much UV light?
6) Why do you still sunbathe even though you are aware of the risks?

Ideation:


In this phase we ask the users to lie down and imagine that they are sunbathing. We ask them to think about situations that caused, or would cause, them to move in order to perform an action. We also ask what they would do when there is annoying music playing behind them.

Confrontation:


In this phase we present the users our solution: a belt that vibrates when the UV level is too high. The level of this vibration can be changed from a soft to a hard vibration. We ask the users for their insights, ideas and feedback about the concept.


User Involvement Sessions
User 1


Exploration: answers to the questions:
1. Yes, I like the sun very much.
2. I burned myself a few times, but not too badly.
3. When I think that the power of the sun is too high I put on a t-shirt or sun cream.
4. I don't do a lot during a sunbathing session; there is not a lot of activity, I mainly sleep. But sometimes I read a book.
5. Some risks are skin cancer or aging of the skin.
6. I do it because I just want to lie down and it gives me a nice color.

Ideation:
I would stand up or move my body when I see a nice chick, a brawl, when I am thirsty or when I want to go swimming. I am saying all these things because I really stood up for these things in the past. When somebody plays very annoying music behind me, I will turn it off.


Confrontation:
When the belt gives a soft vibration, I think that I would listen to it, but I would expect the belt to know that I have put on sun cream. I am not really in favor of vibration around the middle, because it is close to some private areas. I would throw the belt away if it gives a hard, painful signal.


User Involvement Sessions
User 2

Exploration: answers to the questions:
1. I don't really go to places like the beach on purpose to sunbathe, but I sometimes like to lie down when the sun is shining, so yes, I have sunbathed.
2. I burn easily, but I didn't really burn myself while lying down, more when I am outside playing.
3. I always use sun cream.
4. I like to listen to the sounds of nature when I am lying down. I don't really do a lot of things during a sunbathing session.
5. You will get wrinkles; it can even cause you to no longer get any vitamin D.
6. I know the risks and that is why I try to do it as little as possible.

Ideation:
When I get bored I will stand up; when it is too warm I will go into the sea; also when someone asks me to come play football. When there is annoying music behind me I will try not to listen to it; it would not bother me enough to stand up.


Confrontation:
When there is a soft vibration, I think that I would try to ignore it; I would keep lying down. When there is a harder, painful vibration, I would certainly react to it. My opinion is that you have to be forced to get away from the sun. This reminds me of a past experience with my mom: she always screamed at me when I had to put sun cream on, and I really went to her.


User Involvement Sessions
User 3


Exploration: answers to the questions:
1. Yes of course, I like sunbathing.
2. Yes, but mostly not; I mostly get burned when I am doing something else.
3. I try to keep myself safe by putting sun cream on.
4. I just like to lie down and maybe read something.
5. You can get skin cancer; I know that you can't really lie in the sun for more than a quarter of an hour. I know a lot about this issue, because my mom tells me a lot about it.
6. I really like to get a tan.

Ideation:
When somebody asks me to stand up I will stand up, for example to go swimming or to play football. When somebody is listening to annoying music I will ignore it, so I will not stand up for it.


Confrontation:
When I am lying in the sun and a soft vibration shows me that I have been lying in the sun for a long time, I will stand up. When a hard vibration tells me that I have to stand up, I will ignore the signal by throwing the belt away. I don't really like to be forced.


User Involvement Sessions
User 4

Exploration: answers to the questions:
1. I sometimes go sunbathing. I don't really plan a sunbathing session.
2. I think I burned myself once each summer.
3. I try to put sun cream on once or twice each sunbathing session.
4. I like to read something, I love to listen to music while sunbathing, and the main thing about sunbathing is that I just want to chill.
5. You can get skin cancer from it.
6. At the moment that I am sunbathing, I don't really think about risks. I just want to enjoy that moment and not let the feeling go away by thinking about the risks.


Ideation:
When I want to cool down, when I want to get an ice cream, when I want to do something active because I am bored, or to get something to snack or drink. I am someone who will turn off the music if I don't like it.


Confrontation:
When the belt gives me a soft vibration, I will try to ignore it. When the belt gives me a harder vibration I will throw it away; I don't really like to be forced. But this doesn't mean that I don't react to the belt: in both situations I would maybe put on sun cream.


User Involvement Sessions
User 5


Exploration: answers to the questions:
1. Yes, of course I like to go sunbathing.
2. No, because I have skin that doesn't really get burned.
3. I certainly use sun cream, but again my skin keeps me from getting burned by the sun.
4. This differs per situation, but when I am on the beach I just want to relax and enjoy the sun.
5. You can get skin cancer from it.
6. I do it because it is a form of relaxation; I don't really think about the risks when I am relaxing.

Ideation:
Again this depends on where I am, but I think that I will stand up when somebody calls me to go swimming, when I get bored, or when it is too hot. When there is music that annoys me, I will turn it off.


Confrontation:
I think that when I am sunbathing and a soft vibration tells me that I have to leave the sun, I will ignore the signal. When the belt forces me in some way, with a harder, painful vibration, I will certainly react to it by standing up.


User Involvement Sessions User 6

Exploration: answers to the questions:
1. Yes, when I go to the beach or chill outside in the sun.
2. I don't really get burned a lot by the sun.
3. I sometimes put sun cream on.
4. During a sunbathing session I sit behind a laptop, read a book, talk to people or listen to music; these things relax me.
5. You can of course get burned, or you can get skin cancer from it.
6. I do it because it is chill and I get a tan, and because I can enjoy the good weather outside while sunbathing.


Ideation: I am someone who stands up really easily; I get bored easily. If somebody asks me to go and do something (playing football, swimming) I will stand up. I don't really like lying in the same place all the time. I will turn off the music when it is annoying.


Confrontation: My reaction to a soft or a hard vibration will be the same, I think. I will certainly listen to the belt when it signals, while I am lying down, that the UV level is too high; it doesn't matter whether this is a soft or a hard vibration.


User Involvement Sessions User 7


Exploration: answers to the questions:
1. Yes; I don't really plan to go sunbathing, but still I sunbathe.
2. Yes, not when I am lying down, but more when I go hiking.
3. I try to protect myself by putting sun cream on.
4. When I am sunbathing I try to relax; mainly I just lie down and enjoy the sun.
5. You can get skin cancer from it.
6. I love going to the beach and enjoying the sun; I don't do it because it gives me a nice color.

Ideation: I am a guy who doesn't really like to lie down for a long time. I will easily stand up to go kiting, to play games like football or volleyball, or to get something to eat like an ice cream; I just want to do things instead of lying down. When somebody plays annoying music, I will certainly go over and stop it.


Confrontation: When there is a soft vibration I won't really react to it, because the signal is too weak. When there is a hard, painful vibration I will either throw the belt away immediately, or react to it and throw it away afterwards; it depends on the mood I am in at that moment.


User Involvement Sessions User 8

Exploration: answers to the questions:
1. Yes, I certainly like the sun.
2. I have burned myself quite a lot.
3. I put on sun cream or I use a parasol.
4. I just relax; I mainly lie down and talk to other people.
5. You can get skin cancer from it.
6. The chance of getting skin cancer is very low; such a low risk will not keep me from sunbathing.


Ideation: When it is too warm I will stand up to drink something cool, to go into the sea, or to play football with friends. I think I am somebody who stands up very easily. When I hear annoying music I will react to it by not listening to it.


Confrontation: When the belt gives me a soft vibration, I will certainly react to it by getting out of the sun. When the belt gives me a hard vibration I will throw it away, because I don't like to be forced; you don't have to force me, you can just ask me.


User Involvement Sessions

Conclusions of the User Involvement Tests


You can see a pattern in the discussions we had with the users: when a user has no problem with standing up, he does not have to be forced by our device to leave the sun; a soft vibration is enough. When a person does not stand up easily, he or she has to be forced by the device to leave the sun, with a harder vibration. But there are also some special users: one person says he will react the same to the soft and the hard vibration; another would not leave the sun when the device gives a signal, but would put on sun cream instead. Looking at the pattern we found, this device would certainly help users not to get sunburned, but there are exceptions; there always are. Maybe we have to look at the group of exceptions and at how to prevent them from getting sunburned: if we can find a solution for these users, we may also solve it for the average user.


ITERATION II

Final Strategy for the User Involvement Session

The vision for this strategy is: we have found technology that can be useful for extending the human senses; what we still want to discover is in what kind of situations users find it useful. We have also found a way to use it in our society, and we want to know what users think about that. The technology is built into a working prototype: a tool that can calculate the distance between the user and an object. When the user gets closer to the object, the tool makes a faster sound. Together we set up a strategy for the user session in the second iteration; the aim of the session is to find out whether our solution to this problem is useful in our society.
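To make the prototype's behavior concrete, below is a minimal sketch of such a distance-to-sound mapping. This is an illustration only: the function name and range values (TickIntervalMs, 100-2000 mm, 50-1000 ms) are our assumptions, and ReadDistanceMm() stands in for whatever sensor the prototype actually uses.

// Sketch: the closer the measured distance, the shorter the pause between ticks.
// TickIntervalMs and its ranges are illustrative assumptions, not the prototype's code.
static int TickIntervalMs(int distanceMm)
{
    int d = Math.Min(Math.Max(distanceMm, 100), 2000);  // clamp to 100-2000 mm
    return 50 + (d - 100) * (1000 - 50) / (2000 - 100); // 100 mm -> 50 ms, 2000 mm -> 1000 ms
}

// Possible use, with ReadDistanceMm() as a hypothetical sensor read:
// while (true) { Console.Beep(880, 20); Thread.Sleep(TickIntervalMs(ReadDistanceMm())); }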



Exploration


In this part we will ask the user to describe yesterday. In this way we will hear about situations during a day in which our technology could be useful.

Ideation


In this part we will go deeper into the situations that came up in the exploration phase. We will ask the users how they deal with these situations and whether they used a tool for them.


Confrontation


In this part we will show them a tool that we designed, using technology that we thought could be useful for society. In the first part of the confrontation we will ask them to experience the tool, think back to their day, and give a use for it. In the second part we will give them a use: the tool makes it possible to feel objects from a distance. Imagine that you can feel the grass in a park from a distance. It also makes it possible to feel a 3D model in a virtual space: when you design a 3D model on the computer, you can feel this model in the real world with the tool. After explaining our use, we will ask about the users' insights, ideas and feedback.


User Involvement Sessions


User 1

Exploration: He got up in the morning, showered, and ate breakfast. Then he had a coach meeting, and later he started making a prototype for his project. After being busy the whole day, he finished with a game of Call of Duty. He went to the city, walked to the station and took the train home. At home he ate cold nasi, watched a movie, went into town with some friends, played pool and won, and scored a high score in the game 'find the difference'. Finally he went home and went to sleep.

Ideation:
Alarm clock (to wake up)
Microwave (to warm up his food, the cold nasi)
Bike (to go to school)
Laptop (to work for school)
OV chip card (to travel)
DVD player, laptop, TV (for entertainment)
Cue (to play pool)
Beer glass (to get drunk and of course to enjoy)
iPod (to listen to before sleeping)

Confrontation:
Part 1: He would find it useful in the dark when he is in bed. When he gets thirsty and has to search for his glass of water, the glass sometimes falls over; our tool could make it possible to find the glass without knocking it over.
Part 2: He is the type of guy who likes to touch things. He would like to be able to touch the grass from a distance to feel whether it is wet; if you could do this with more things, you would be able to feel the weather from a distance.



User 2

Exploration: He woke up and got out of bed, took a shower, ate some breakfast, watched the news, went to a shop to get some food, and then went to the university. At the university he did some brainstorming and chose a direction for his project. When the day was over he went to the Zwarte Doos for some drinks, then to the Albert Heijn to get groceries, cooked dinner, went to a café to relax, and finally went home and fell asleep.

Ideation:
Towel (to shower)
Soap (to shower)
Telephone alarm clock (to wake up)
TV (to watch the news)
Remote control (to zap)
Bike (to go to school)
Shopping basket (to shop)
Wallet (to pay)
Whiteboard (to brainstorm)
Post-its (to brainstorm)
Laptop (to work for school)
Beer glass (to relax)
Coins (to pay for beer)
Kitchen tools (to cook)

Confrontation:
Part 1: He would use our tool in a café: when you are carrying beer, other people can hear you coming closer, so the beer doesn't get knocked over.
Part 2: He thinks it would become very personal: only you as a user can feel the signal, and this can be vicious. He advises us to look at whether this use can be made aesthetically pleasing. He would use the device in a museum to touch objects that are not allowed to be touched.



User 3

Exploration: He woke up, ate some breakfast and went to the university, where he decided what to do on the project (they didn't do a lot). He had lunch, then went to the mathematics department and played games there, went to Maastricht to visit a friend, ate dinner there, and went to a football match. After the match he went out for some drinks, and finally he went home and fell asleep.

Ideation:
Mother (the wake-up tool)
Knife (to eat)
Fork (to eat)
Plate (to eat)
Bike (to go to school)
Pen (to write)
Car (to go to Maastricht)
Radio (to listen to while driving)
Cell phone GPS (to navigate to Maastricht)


Confrontation:
Part 1: This user would also use it in the dark. When he has to search for the light switch, he would use it so that he would not bump into a wall.


Part 2: He couldn't find a use for our device; the reason is that he isn't a guy who touches a lot of things.



User 4

Exploration: He got up in the morning at 8 o'clock, showered and ate some breakfast. After this he went to school and ate lunch outside; at 5 o'clock he went home and had dinner. After this a girlfriend came by until half past eleven, and he went to sleep after that.

Ideation:
Plate (to eat)
Cutlery (to eat)
Telephone alarm clock (to wake up)
Toothbrush (to brush his teeth)
Waffle iron (to eat)
Bike (to go to school)

Confrontation:
Part 1: He also sees a use for our tool in the dark. He would prefer the tool to be placed in his shoe, so he could feel a hole in the floor when walking somewhere dark.
Part 2: At first he couldn't find a use for our device, but after thinking about it he said he might use it with a gas cooker, to feel whether the pan is hot. He also thinks the tool is unnecessary, because we as people can already feel a lot of things: why would you feel something artificially when you can feel it for real? He certainly sees a use on the virtual side of the tool: he would find it useful to feel the material of a 3D render on the computer.


User Involvement Sessions

Conclusions of the User Involvement Tests


In the first part you can clearly see that the users would use the tool in their regular lives when it is dark; only one user would use it in a café, but that too has to do with the darkness there. When we discussed the device within the project we could find only one real use for it, and that is in the dark. So this was confirmed, but this user session was meant to discover more uses for the tool, and unfortunately we did not find any other real use. In the second part we gave the users a use and asked them how they would use it. Several quite interesting things came out of this: one user said that he wanted to touch the grass and find out what the weather was like outside, another wanted to feel an art object that you are not allowed to touch, and another wanted to feel a 3D render, to feel the material of that render. These are three different kinds of uses that we had not really thought about for this device.

Implications
If we look at the aim of this test and at the result of the first part of the confrontation, it is clear that this tool does not really help non-handicapped users. If we look at the second part of the confrontation, we can clearly find some uses for the device. This means that the technology of the device has to be changed so that you can feel things from a distance, like grass and/or art objects, or changed so that you can feel 3D models in a computer program. Looking back at our vision, we can say that we found some uses for the tool. The next step is to research whether the uses we found can be worked out, by researching the technology and finding arguments for the value of the discovered uses.


ITERATION III

Final Strategy for the User Involvement Session

The vision for this iteration is that our tool can warn a visually handicapped user of obstacles, so that collisions become impossible. We will use the same prototype that was used in iteration 2.

Strategy

Exploration:


In this part we will ask our user to tell us what he did yesterday. In this way we can have him describe situations he had to deal with that are interesting for us.


Ideation:

In this part we will go further into the situations that we found and ask about the tools that he used to assist him in these situations.


Confrontation:


In this part we will show him our own tool and ask him to experience it. The main thing is to find out whether our tool adds value to the life of a visually handicapped person. Finally, we will ask him about his ideas, insights and feedback on the tool.


User Involvement Session


The user in this session was a visually handicapped man who lost his vision 25 years ago because of Usher syndrome.

Exploration: He went to an empty garage in Waalre to carve some sculptures. After sculpting he went home in the afternoon and cleaned his garden with his partner; his neighbor also helped him drill holes in some tiles. After this he read something and made some music.


Ideation: To make sculptures he uses a hammer and chisel; he said that he uses these tools intuitively, and that his main tool is his hand. He also uses a keyboard to type e-mail: he can touch-type on a regular keyboard. To read his mail he has a speech machine that reads it out to him. To read books he listens to audio books on CD; he sometimes reads Braille, but he finds this difficult, so he only uses it for short texts. He uses a cane for the blind when he is outside. He has also used GPS for blind people, but this did not really work well, and he experimented with a guide dog. That did not really work for him either, because a dog cannot really estimate the danger of traffic; he says that people use a guide dog because they want a buddy.



Confrontation: After trying our device he told us what he thought of the tool. He finds it difficult to use: you first have to practice with it. He says that the sound has to come a lot earlier; maybe we could use ticks instead of tones. He also says that you cannot anticipate with the tool, and he found it unnatural to walk with it. He would like it if we put more sensors in it to get a more sensitive tool. He would also like to be able to feel a big famous building (the Brandenburger Tor): he is not able to see it, but if he could feel the building on a smaller scale, he could imagine what it looks like.



Conclusions
The main question of this test was whether our tool would provide value in the life of a visually handicapped person. After the session we had to conclude that the tool did not directly provide a lot of value to the life of a visually handicapped user. He did, however, clearly show us what he would see as a handy tool in his life: we had to look at whether we could help him feel big famous buildings. This looks like a whole new concept, but it relates to the vision of the second iteration. So by combining the two visions, we found a tool that could possibly provide value to a visually handicapped person.

Implication
We now know what we have to do to make a tool that provides value in the life of a visually handicapped user. As a result, the visions of the last two iterations were merged into one vision, namely:


How can we let users feel an object from a distance at any size, or feel an object that is placed in a virtual world, by means of a new device/tool? The next step in the project is to look for ways to answer this question by researching technology, and by brainstorming to get some ideas and working one out.
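The full implementation of this idea appears in Part D; as a preview, the core test the platform performs is simply whether the tracked hand position lies inside a virtual sphere. A minimal sketch, our own simplification of the Part D code with made-up parameter names:

// Sketch: is the tracked point (px, py, pz) inside a virtual sphere
// centered at (cx, cy, cz)? The vibration strength is then derived
// from how close the point is to the sphere's center.
static bool InsideSphere(double px, double py, double pz,
                         double cx, double cy, double cz, double radius)
{
    double dx = px - cx, dy = py - cy, dz = pz - cz;
    return Math.Sqrt(dx * dx + dy * dy + dz * dz) <= radius;
}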


REFLECTIONS

Iteration 1 - First User Involvement Session: reflection with the questions

Situated Creativity of Action

We don't really make room for the users to create their own goals. We don't really tell them what kind of project we have; we just ask them about their experiences (especially in the exploration and ideation phases). But in the confrontation part we make it possible for the users to create and show their goals to us, by letting them experiment with our prototype and explaining our project.

Openness of experience
In the exploration phase we ask the users some specific questions. The questions were not difficult, in the sense that the users did not have to think long before answering. In this phase we did not really make room for the users to reflect on their answers; the questions were quite direct. In the ideation phase we asked them a very open question, and the users certainly had room to reflect on their actions and answers. In the confrontation part there was also room for the users to reflect on their actions.


A weight of answerability
We did not really give the users a lot of information about our project, but we think that the questions in the exploration phase gave them a good understanding of our topic. In all three parts the questions were clear to the users; they understood them quite well.

Holism and unity
In all three parts the questions we asked were personal. We really asked the users to imagine past situations and think about the experience of them, especially in the ideation part, where we asked them to think about a sunbathing session; all of the users really thought about past experiences. In the confrontation part we asked them to think about future experiences, which they based on their past experiences.

Sensory engagement
The first two parts of the session were really realistic, because we asked the users questions about their own lives. Of course there is always a possibility that they lie, but this is hard to control. In the confrontation part we gave them a low-fi prototype, which was less realistic, but we think the users imagined it in their own lives, which made it more realistic.

Emotional-volitional character
We placed the three steps after each other quite quickly. We think we built up the session quite well, but did this help the users to see the bigger picture? We think the confrontation part shows the bigger picture; with only the exploration and ideation parts the users would not have seen it. We did not make room for the users to summarize what happened to them; this could have given us more valuable information.


Iteration 2 - Second User Involvement Session: reflection with the questions

Situated Creativity of Action
In this iteration we really told the users what our project is about. We think, however, that we do not really let them create and show their goals in the first two parts; these parts are really about their experiences. But in the final part, the confrontation, we certainly let them show their goals to us, by letting them experience our prototype.

Openness of experience
In the exploration part we asked a very open question. The users had time to think about their answers, which gave them the possibility to reflect on them. In the ideation phase the questions were more direct; this gave them less time to reflect on their answers. In the confrontation part there was certainly room for the users to reflect on their actions.

A weight of answerability
This time we first explained what our project was about. We did not really explain why we asked certain questions in the first two phases, but in the end the users understood why we asked them.

Holism and unity
The first two parts of the user involvement were very personal; the users really had to think about their past experiences. Especially in the exploration part they had to think back to yesterday. In the confrontation part we asked them to think about the future, basing it on experiences from their past.


Sensory engagement
The first two parts of the session were realistic, because we asked the users questions about their own lives. Maybe they did not tell us everything, because some of it could be private, and there is always a possibility of lying. In the confrontation part we gave them a quite well-working prototype, which was realistic; they could really experience it.

Emotional-volitional character
The three phases of the session followed each other quite quickly. The way we built up the phases really showed the users the bigger picture; if a phase were missing, the users would have more trouble seeing it. We did not really make room for the users to summarize what happened to them, but we could see that in the confrontation part they thought back to the exploration and ideation phases; in a way they summarized there.


Iteration 3 - Third User Involvement Session: reflection with the questions

Situated Creativity of Action


We had already been in contact with our user and had told him about our project, so he knew what it was about. Again, we did not really let the user create or set his own goals in the exploration and ideation phases. But in the confrontation phase we made it possible for the user to experience the tool and set and create his own goals.

Openness of experience
Our user really reflected in each phase; we gave him ample time to think about his answers. In the exploration and confrontation phases we asked open questions, and you could see that our user really thought back over his answers. Even in the ideation phase, with more direct questions, you could see that he reflected on his answers.


A weight of answerability
We gave the user a lot of information about our project. Maybe we did not really tell him why we asked the questions in the exploration and ideation phases, but in the end it all came together and he understood what the session was about.

Holism and unity
This was a really personal session. The exploration, ideation and confrontation parts were really based on past experiences; in these phases he had to think about situations that had happened to him yesterday. In the confrontation phase a future experience was based on a past experience.

Sensory engagement
This test was very realistic. The questions had to do with the experiences of his life, and it is hard to lie about these things; there is still a possibility of withholding certain information, but you always have that risk. The confrontation part was also very realistic, because it used a really working prototype.

Emotional-volitional character
We placed the three phases after each other at a slow tempo (60 minutes in total). If a phase had been missing, the user would not really have seen the big picture. Because of the large amount of time we spent talking with the user, he had a lot of time to summarize his actions.


Part D: Code Software

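The listings below are the C# source of the tracking platform: MultipleWiiMoteForm.cs connects the two Wiimotes and forwards their infrared data, ArduinoCom.CS sends vibration values to the Arduino over a serial port, TotalCanvas.CS triangulates the tracked point and tests it against the virtual spheres, virtualCube.CS holds the Point3D and Sphere data types, and Extensions.cs contains the geometry helpers. As far as we can tell from the code, the depth calculation in calculateDepth is standard two-camera triangulation: with the cameras a distance d apart and viewing angles alpha and beta towards the infrared point, depth = d * sin(alpha) * sin(beta) / sin(alpha + beta).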

MultipleWiiMoteForm.cs

using System;
using System.Collections.Generic;
using System.Windows.Forms;
using WiimoteLib;

namespace WiimoteTest
{
    public partial class MultipleWiimoteForm : Form
    {
        // map a wiimote to a specific state user control
        Dictionary<Guid, WiimoteInfo> mWiimoteMap = new Dictionary<Guid, WiimoteInfo>();
        WiimoteCollection mWC;
        totalCanvas drawingCanvas;

        public MultipleWiimoteForm()
        {
            InitializeComponent();
        }

        private void MultipleWiimoteForm_Load(object sender, EventArgs e)
        {
            // find all wiimotes connected to the system
            mWC = new WiimoteCollection();
            int index = 1;
            try
            {
                mWC.FindAllWiimotes();
            }
            catch (WiimoteNotFoundException ex)
            {
                MessageBox.Show(ex.Message, "Wiimote not found error", MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
            catch (WiimoteException ex)
            {
                MessageBox.Show(ex.Message, "Wiimote error", MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
            catch (Exception ex)
            {
                MessageBox.Show(ex.Message, "Unknown error", MessageBoxButtons.OK, MessageBoxIcon.Error);
            }

            drawingCanvas = new totalCanvas();
            Controls.Add(drawingCanvas);

            foreach (Wiimote wm in mWC) // leave all the technical stuff as it is
            {
                // create a new user control
                WiimoteInfo wi = new WiimoteInfo(wm);
                //tp.Controls.Add(wi);

                // set up the map from this wiimote's ID to that control
                mWiimoteMap[wm.ID] = wi;

                // connect it and set it up as always
                wm.WiimoteChanged += wm_WiimoteChanged;
                wm.Connect();
                wm.SetReportType(InputReport.IRAccel, IRSensitivity.Maximum, true);
                wm.SetLEDs(index++);
            }
            System.Console.Out.Write(index - 1 + " Wiimotes Connected \n");
        }

        void wm_WiimoteChanged(object sender, WiimoteChangedEventArgs e)
        {
            drawingCanvas.updateCanvas(e);
        }

        private void MultipleWiimoteForm_FormClosing(object sender, FormClosingEventArgs e)
        {
            foreach (Wiimote wm in mWC)
                wm.Disconnect();
        }

        private void fullScreen_Key(object sender, KeyEventArgs e)
        {
            if (e.KeyCode == Keys.F9)
            {
                drawingCanvas.goFullScreen();
            }
        }
    }
}

ArduinoCom.CS

using System;
using System.Collections.Generic;
using System.Windows.Forms;
using System.Linq;
using System.Text;
using System.IO.Ports;

namespace WiimoteTest
{
    class ArduinoCom
    {
        private char[] buff = new char[1];
        private static System.IO.Ports.SerialPort arduinoPort;

        public ArduinoCom()
        {
            System.ComponentModel.IContainer components = new System.ComponentModel.Container();
            arduinoPort = new System.IO.Ports.SerialPort(components);
            arduinoPort.BaudRate = 9600;
        }

        public void setPort(string port)
        {
            try
            {
                arduinoPort.PortName = port;
                Console.Write("port set to " + port);
                Connect();
            }
            catch (NullReferenceException) { }
            catch (InvalidOperationException) { }
        }

        private static void Connect()
        {
            try // nice error handling :)
            {
                arduinoPort.Open();
            }
            catch (System.IO.IOException ex)
            {
                MessageBox.Show(ex.Message, "Arduino not found on " + arduinoPort.PortName, MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
            if (!arduinoPort.IsOpen)
            {
                Console.WriteLine("Oops");
                return;
            }
            // this turns the connection on!
            arduinoPort.DtrEnable = true;
            Console.WriteLine("Communication with Arduino on " + arduinoPort.PortName + " with a baudrate of " + arduinoPort.BaudRate);
            // callback for text coming back from the arduino
            //arduinoPort.DataReceived += OnReceived;
        }

        // For writing integers to the Arduino through the COM port
        public void writeInt(int value)
        {
            if (arduinoPort.IsOpen == true)
            {
                try
                {
                    buff[0] = (char)value;
                    arduinoPort.Write(buff, 0, 1);
                }
                catch (System.InvalidOperationException ex)
                {
                    Console.WriteLine("Error: " + ex.Message);
                }
            }
        }

        private static void OnReceived(object sender, SerialDataReceivedEventArgs c)
        {
            try
            {
                // write out text coming back from the arduino
                Console.Write(arduinoPort.ReadExisting());
            }
            catch (Exception exc)
            {
                Console.WriteLine(exc.Message);
            }
        }
    }
}

TotalCanvas.CS

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO.Ports;
using System.Linq;
using System.Text;
using System.Timers;
using System.Windows.Forms;
using WiimoteLib;

namespace WiimoteTest
{
    public partial class totalCanvas : UserControl
    {
        private delegate void UpdateWiimoteStateDelegate(WiimoteChangedEventArgs args);

        private Bitmap frontB;
        //private Bitmap topB = new Bitmap(1024, 768, PixelFormat.Format24bppRgb);
        private Bitmap earth = new Bitmap("earth1080.jpg");
        //private Bitmap mars = new Bitmap("mars.jpg");
        private Bitmap saturn = new Bitmap("saturn1080.jpg");
        public Graphics front;
        public Graphics top;

        // Constants that define the properties of the wiiMote

        public const float viewAngle = 33;     // degrees
        public const float viewAngleVert = 23; // degrees
        public const float camWidth = 1024;    // px
        public const float camHeight = 768;    // px

        // This is the only data we need for the calculations!!
        private int wiiDistance; // in millimeters
        private int wiiAngle;    // in degrees, from an imaginary horizontal line
        private int fvWidth;
        private int fvHeight;
        private System.Drawing.Point leftWiimote;
        private System.Drawing.Point rightWiimote;
        private Math3D.Point3D calcPoint;
        private Sphere earthSphere;
        private Sphere marsSphere;
        private Sphere saturnSphere;
        private Sphere[] planetArray;
        private Sphere closestPlanet;
        private Boolean hardVibration = false;
        private Boolean fullScreen = false;

        // Arduino stuff
        private ArduinoCom arduino;
        System.Timers.Timer drawTimer;

        public totalCanvas()
        {
            InitializeComponent();
            calcPoint = new Math3D.Point3D(0, 0, 0);
            fvWidth = frontView.Width;
            fvHeight = frontView.Height;
            frontB = new Bitmap(fvWidth, fvHeight, PixelFormat.Format24bppRgb);
            front = Graphics.FromImage(frontB);
            //topWidth = topView.Width;
            //topHeight = topView.Height;
            wiiDistance = Int32.Parse(WiiMoteDistance.Text);
            wiiAngle = Int32.Parse(WiiMoteAngle.Text);

            earthSphere = new Sphere(100, new Math3D.Point3D(0, 0, 0));
            earthSphere.name = "earth";
            earthSphere.vibration = 200;
            earthSphere.hardVibration = true;

            saturnSphere = new Sphere(100, new Math3D.Point3D(0, 0, 0));
            saturnSphere.name = "saturn";
            saturnSphere.vibration = 200;
            saturnSphere.hardVibration = false;

            planetArray = new Sphere[2] { earthSphere, saturnSphere };

            // Get a list of serial port names and display each port name in the combo box.
            string[] ports = SerialPort.GetPortNames();
            foreach (string port in ports)
            {
                portBox.Items.Add(port);
            }

            System.Console.Out.Write("canvas Created \n Distance between Wiimotes: " + wiiDistance.ToString() + "\n Angle of Wiimotes towards each other: " + wiiAngle.ToString() + "\n");

            arduino = new ArduinoCom();

            // Set timer for drawing updates to the canvas
            drawTimer = new System.Timers.Timer();
            drawTimer.Elapsed += new ElapsedEventHandler(updateCanvasEvent);
            drawTimer.Interval = 1000 / 40; // FPS
            drawTimer.Start();
        }

        public void updateCanvas(WiimoteChangedEventArgs args)
        {
            BeginInvoke(new UpdateWiimoteStateDelegate(wiiMoteUpdate), args);
        }

        public void wiiMoteUpdate(WiimoteChangedEventArgs args)
        {
            // Check which wiiMote is calling now
            WiimoteState ws = args.WiimoteState;
            IRSensor sensor = ws.IRState.IRSensors[0];
            if (sensor.Found)
            {
                if (ws.LEDState.LED1 == true) // code for the LEFT wiiMote
                    leftWiimote = new System.Drawing.Point(sensor.RawPosition.X, sensor.RawPosition.Y);
                if (ws.LEDState.LED2 == true) // code for the RIGHT wiiMote
                    rightWiimote = new System.Drawing.Point(sensor.RawPosition.X, sensor.RawPosition.Y);
                // Calculate depth using data from both Wiimotes
                calculateDepth();
            }
        }

        private void drawCanvas()
        {
            try
            {
                int brushSize = 30;
                // Draw the front view (2 wiiMotes in one plane)
                front.Clear(Color.Black);
                int x1 = System.Convert.ToInt16(Extensions.map(leftWiimote.X, 0, 1024, fvWidth, 0));
                int y1 = System.Convert.ToInt16(Extensions.map(leftWiimote.Y, 0, 768, fvHeight, 0));
                int x2 = System.Convert.ToInt16(Extensions.map(rightWiimote.X, 0, 1024, fvWidth, 0));
                int y2 = System.Convert.ToInt16(Extensions.map(rightWiimote.Y, 0, 768, fvHeight, 0));
                front.FillEllipse(Brushes.Blue, x1 - (brushSize / 2), y1 - (brushSize / 2), 10, 10);
                front.FillEllipse(Brushes.Yellow, x2 - (brushSize / 2), y2 - (brushSize / 2), 10, 10);

                // Position the spheres; 'overstaand' (Dutch for the opposite side of the triangle)
                // is a member set elsewhere in the original source, kept here as in the listing.
                earthSphere.origin = new Math3D.Point3D((wiiDistance / 2) - (wiiDistance / 6), 0, overstaand);
                saturnSphere.origin = new Math3D.Point3D((wiiDistance / 2) + (wiiDistance / 6), 0, overstaand);
                //marsSphere.origin = new Math3D.Point3D((wiiDistance / 2) - (wiiDistance / 4), 0, overstaand);

                // Draw the image to the canvas
                frontView.Image = frontB;

                // Draw the planets in the boxes
                if (closestPlanet != null)
                {
                    switch (closestPlanet.name)
                    {
                        case "earth": planetBox.Image = earth; break;
                        //case "mars": planetBox.Image = mars; Console.WriteLine("mars"); break;
                        case "saturn": planetBox.Image = saturn; break;
                        default: planetBox.Image = null; break;
                    }
                }
            }
            catch (System.InvalidOperationException)
            {
                // Console.WriteLine("Draw fail; don't mind, don't care... We'll get it next time");
            }
        }

        public void calculateDepth()
        {
            // Calculate depth from two (x) coordinates from different wiiMotes.
            // Two-camera triangulation: with the cameras wiiDistance apart and viewing
            // angles alpha and beta towards the IR point,
            // depth = wiiDistance * sin(alpha) * sin(beta) / sin(alpha + beta).
            float gamma = wiiAngle - (viewAngle / 2);
            float alpha = 33 - ((viewAngle / camWidth) * leftWiimote.X) + gamma;
            float beta = (viewAngle / camWidth) * rightWiimote.X + gamma;
            double aanliggend = wiiDistance / 2; // in mm ("aanliggend" = adjacent side)
            double depth = System.Convert.ToDouble((wiiDistance * Math.Sin(Extensions.toRads(alpha)) * Math.Sin(Extensions.toRads(beta))) / (Math.Sin(Extensions.toRads(alpha + beta)))); // in mm
            double x = depth / (Math.Tan(Extensions.toRads(beta))); // in mm
            double la = Extensions.Distance2D(0, 0, System.Convert.ToInt16(calcPoint.X), System.Convert.ToInt16(calcPoint.Z)); // distance from the point to the camera
            float epsilon = ((float)(leftWiimote.Y / (float)768) * viewAngleVert) - (viewAngleVert / 2);
            double y = Math.Tan(Extensions.toRads(epsilon)) * la;

            calcPoint.X = x;
            realXLabel.Text = "X: " + Math.Floor(calcPoint.X);
            calcPoint.Y = y;
            realYLabel.Text = "Y: " + Math.Floor(calcPoint.Y);
            calcPoint.Z = depth;
            realZLabel.Text = "Z: " + Math.Floor(calcPoint.Z);

            int arduinoValue = 0;
            closestPlanet = new Sphere();
            closestPlanet.name = "none";
            foreach (Sphere planet in planetArray)
            {
                Boolean value = calculateDistancetoVirtualObject(planet);
                if (value == true)
                {
                    // calcPoint is in or near a planet
                    closestPlanet = planet; // define closestPlanet
                    mappedDistance.Text = "Arduino: " + arduinoValue.ToString() + " (Planet " + closestPlanet.name + ")";
                }
            }

            // Code for more accurately calculating the vibration value of the nearest planet
            if (closestPlanet.name != "none")
            {
                int distance = calcDistance(closestPlanet);
                arduinoValue = Extensions.map(distance, 0, closestPlanet.radius, (closestPlanet.vibration / 6), closestPlanet.vibration);
                if (arduinoValue > closestPlanet.vibration || arduinoValue < (closestPlanet.vibration / 5))
                {
                    arduinoValue = 0;
                }
                switch (closestPlanet.hardVibration)
                {
                    case true: arduino.writeInt('B'); break;  // 'B' announces a hard vibration value
                    case false: arduino.writeInt('S'); break; // 'S' announces a soft vibration value
                }
                arduino.writeInt(arduinoValue);
            }
            else
            {
                arduino.writeInt(0);
            }
        }

        private Boolean calculateDistancetoVirtualObject(Sphere sphere)
        {
            // Calculate the distance from the tracked point to a virtual sphere
            int distance = 0;
            try
            {
                distance = calcDistance(sphere);
            }
            catch (OverflowException)
            {
                Console.WriteLine("Overflow at distance");
            }

            Boolean value = false;
            if (distance <= sphere.radius)
            {
                value = true;
                rawDistance.Text = sphere.name;    // write the sphere's name to the label
                distanceProgress.Value = distance; // write the distance to the progress bar
            }
            else
            {
                value = false;
            }
            return value;
        }

        private int calcDistance(Sphere sphere)
        {
            int distance = Extensions.Distance3D(
                System.Convert.ToInt16(calcPoint.X), System.Convert.ToInt16(calcPoint.Y), System.Convert.ToInt16(calcPoint.Z),
                System.Convert.ToInt16(sphere.origin.X), System.Convert.ToInt16(sphere.origin.Y), System.Convert.ToInt16(sphere.origin.Z));
            return distance;
        }

        private void updateCanvasEvent(object src, ElapsedEventArgs a)
        {
            drawCanvas();
        }

        public void goFullScreen()
        {
            Rectangle screen = Screen.PrimaryScreen.Bounds;
            if (fullScreen != true)
            {
                Console.WriteLine("Fullscreen");
                planetBox.Location = new System.Drawing.Point(0, 0);
                planetBox.Size = new System.Drawing.Size(screen.Width, screen.Height);
                this.ParentForm.DesktopLocation = new System.Drawing.Point(0, 0);
                fullScreen = true;
            }
            else
            {
                planetBox.Size = new System.Drawing.Size(343, 301);
                planetBox.Location = new System.Drawing.Point(22, 370);
                this.ParentForm.Size = new System.Drawing.Size(613, 739);
                this.ParentForm.DesktopLocation = new System.Drawing.Point((screen.Width / 2) - (this.Size.Width / 2), (screen.Height / 2) - (this.Size.Height / 2));
                fullScreen = false;
            }
        }

        private Sphere getPlanetByName(string name)
        {
            Sphere finalPlanet = null;
            foreach (Sphere planet in planetArray)
            {
                if (planet.name == name)
                {
                    finalPlanet = planet;
                }
            }
            return finalPlanet;
        }

        private void changeDistance(object sender, EventArgs e)
        {
            try
            {
                wiiDistance = Int32.Parse(WiiMoteDistance.Text);
            }
            catch (OverflowException) { }
            catch (FormatException) { }
        }

        private void changeAngle(object sender, EventArgs e)
        {
            try
            {
                wiiAngle = System.Convert.ToInt32(WiiMoteAngle.Text);
            }
            catch (OverflowException) { }
            catch (FormatException) { }
        }

        private void chosePort(object sender, KeyEventArgs e)
        {
            if (e.KeyCode == Keys.Enter)
            {
                string port = portBox.Text;
                arduino.setPort(port);
            }
        }
    }
}

virtualCube.CS

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing;
using System.Windows.Forms;

namespace WiimoteTest
{
    public class Math3D
    {
        public class Point3D
        {
            // The Point3D class is rather simple: it just keeps track of X, Y and Z values,
            // and being a class it can be adjusted to be comparable
            public double X;
            public double Y;
            public double Z;

            public Point3D(int x, int y, int z) { X = x; Y = y; Z = z; }
            public Point3D(float x, float y, float z) { X = (double)x; Y = (double)y; Z = (double)z; }
            public Point3D(double x, double y, double z) { X = x; Y = y; Z = z; }
            public Point3D() { }

            public override string ToString()
            {
                return "(" + X.ToString() + ", " + Y.ToString() + ", " + Z.ToString() + ")";
            }
        }
    }

    public class Sphere
    {
        public int radius;             // in mm
        public Math3D.Point3D origin;  // 3D coordinate in XYZ space
        public string name = "";
        public int vibration = 255;    // vibration value when hovering over the planet
        public int minVibration = 100; // rumble when inside the planet
        public Boolean hardVibration = false;

        public Sphere(int defradius, Math3D.Point3D deforigin)
        {
            radius = defradius;
            origin = deforigin;
        }

        public Sphere() { }
    }
}

Extensions.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace WiimoteTest
{
    public static class Extensions
    {
        public static double toRads(double degrees)
        {
            double rads = degrees * (Math.PI / 180);
            return rads;
        }

        /// <summary>
        /// Finds the distance between two points on a 2D surface.
        /// </summary>
        /// <param name="x1">The x-coordinate of the first point</param>
        /// <param name="y1">The y-coordinate of the first point</param>
        /// <param name="x2">The x-coordinate of the second point</param>
        /// <param name="y2">The y-coordinate of the second point</param>
        public static int Distance2D(int x1, int y1, int x2, int y2)
        {
            // d = sqrt((x2 - x1)^2 + (y2 - y1)^2)
            double part1 = Math.Pow((x2 - x1), 2); // take x2 - x1, then square it
            double part2 = Math.Pow((y2 - y1), 2); // take y2 - y1, then square it
            double underRadical = part1 + part2;   // add both parts together
            int result = (int)Math.Sqrt(underRadical); // take the square root
            return result;
        }

        /// <summary>
        /// Finds the distance between two points in 3D space.
        /// </summary>
        /// <param name="x1">The x-coordinate of the first point</param>
        /// <param name="y1">The y-coordinate of the first point</param>
        /// <param name="z1">The z-coordinate of the first point</param>
        /// <param name="x2">The x-coordinate of the second point</param>
        /// <param name="y2">The y-coordinate of the second point</param>
        /// <param name="z2">The z-coordinate of the second point</param>
        public static int Distance3D(int x1, int y1, int z1, int x2, int y2, int z2)
        {
            // d = sqrt((x2 - x1)^2 + (y2 - y1)^2 + (z2 - z1)^2)
            double part1 = Math.Pow((x2 - x1), 2);
            double part2 = Math.Pow((y2 - y1), 2);
            double part3 = Math.Pow((z2 - z1), 2);
            double underRadical = part1 + part2 + part3;
            int result = (int)Math.Sqrt(underRadical);
            return result;
        }

        /// <summary>
        /// Maps a variable from one range onto another range (like Arduino's map()).
        /// </summary>
        /// <param name="x">The variable to be mapped</param>
        /// <param name="in_min">The minimum of the input range</param>
        /// <param name="in_max">The maximum of the input range</param>
        /// <param name="out_min">The minimum of the output range</param>
        /// <param name="out_max">The maximum of the output range</param>
        public static int map(double x, double in_min, double in_max, double out_min, double out_max)
        {
            return System.Convert.ToInt16((x - in_min) * (out_max - out_min) / (in_max - in_min) + out_min);
        }
    }
}
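For illustration, a hedged example of how these helpers combine; the values are made up and not taken from the project code:

// Hypothetical values, for illustration only:
int d = Extensions.Distance3D(0, 0, 0, 200, 100, 200); // = 300 (mm)
int strength = Extensions.map(d, 0, 300, 50, 255);     // maps 0-300 mm onto 50-255
// TotalCanvas.CS uses this same pattern to turn the distance to a sphere
// into a vibration value for the Arduino.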
