Location: HCIL, University of Maryland (USA) Dates: September 4th to December 22nd, 2006
UMIACS Director
Professor V.S. Subrahmanian Tel: 301.405.2711 Fax: 301.314.9658 E-mail: vs@cs.umd.edu Website: http://www.cs.umd.edu/~vs/
Internship supervisor
Dr. Allison Druin Director, Human-Computer Interaction Lab www.cs.umd.edu/hcil Associate Professor University of Maryland College of Information Studies and Institute for Advanced Computer Studies Tel: 301.405.7406 E-mail: allisond@umiacs.umd.edu Website: www.umiacs.umd.edu/~allisond
Special thanks
To the University of Maryland. To Dr. Allison Druin, who allowed me to do my internship in the Human-Computer Interaction Laboratory (HCIL). To Jerry Fails, for giving so much of his time to help me. To the Human-Computer Interaction Laboratory employees and students, for being so friendly and patient with me.
Internship subject
The laboratory has been working with children for almost ten years now. As a result, more and more applications are designed for them, with their help. The objective of the project I worked on was to create mobile, interactive narratives on PDAs. In other words, the future application would allow a child to read a story, modify the text, image and sound, and communicate with other children over a wireless connection. A prototype version had been developed a month before I arrived but had some bugs in the sound code. My role was to fix the program and make it stable. Later, I worked on the infrared authentication program, and at the end of the internship I did some research on possible extensions of the project.
Sujet du stage
Cela va faire dix ans que le laboratoire travaille avec des enfants. Par conséquent, de plus en plus d'applications sont conçues spécialement pour eux et avec leur aide. Le but du projet sur lequel je travaillais est de créer des livres d'histoires interactifs et mobiles (sur des PDA). En d'autres termes, le programme permettra à un enfant de lire une histoire, de modifier le texte, l'image et le son, et de communiquer (sans fil) avec d'autres enfants. Un prototype avait déjà été développé un mois avant que j'arrive mais la partie son comportait des erreurs. Mon rôle fut donc de réparer ce programme et de le rendre stable. J'ai ensuite travaillé sur le programme d'authentification par infrarouges et vers la fin du stage j'ai effectué des recherches pour savoir s'il était possible d'ajouter des fonctionnalités au projet.
Table of contents

Introduction
University description
    University of Maryland
    College of Computer Science
    UMIACS
    HCIL
What is Kidsteam?
    History
    Description
    Group formation
    Sponsors
    Children as design partners
    My role during Kidsteam sessions
Conclusion
Internship report
Introduction
According to the NCES (National Center for Education Statistics), nearly 90% of American children enrolled in primary school use computers, and half of them also surf the Internet. This shows that today technology represents a big part of a child's life. Children are used to having computers around them (whether at school or at home) and increasingly understand how they work. The project I worked on during those four months focused on how we could take advantage of those figures by creating interactive, digital narratives for children. The idea was to be able to read, modify and create stories on a mobile device and exchange them with others. My role was to concentrate on the sound and infrared authentication parts of the program. I was also involved in another project called Kidsteam. Finally, I did some research to see whether the Mobile Storyteller project could go any further. In the first part I will present the University of Maryland and the lab I worked in. Next I will talk about a previous project and explain in more detail what the Mobile Storyteller program will do. Then I will present the modifications I made to the sound code of the program. In the third part we will see what Kidsteam is and how its feedback helped the project. After that, I will describe my role in the IR authentication part of the project. Finally, we will look at possible extensions to Mobile Storyteller.
University description
University of Maryland
Located in College Park (Maryland), the University of Maryland was created in 1856, more than 150 years ago. In 1916 it admitted its first women students. It now enrolls more than 35,000 students each year, and about 12,000 employees work on campus. Using technology, the University provides selected quality programs to audiences worldwide to share its knowledge and to extend and enhance educational opportunities. The University of Maryland shares its research, educational and technological strengths with businesses, government and other educational institutions. Here is a short summary of its financial resources for 2006:
Contracts and Grants: $247,700,000
Tuition and Fees: $332,700,000
State Appropriation: $327,500,000
Educational Activities: $26,600,000

We can see that grants and donations make up a large part of the University's yearly income. In 2006, the university ranked 18th among national public universities and in the top 20 among public research universities.
College of Computer Science
The first degree program in computer science at Maryland was the Master of Science degree, which started in September 1967. The Computer Science Center developed a proposal to initiate a Bachelor of Science degree in computer science in the spring of 1973. Whereas most departments of computer science in the United States arose out of Engineering, Mathematics or Physics departments, the Department of Computer Science at the University of Maryland, College Park, arose out of the Academic Computer Center. The Computer Science Center and the Department of Computer Science were formally split on July 1, 1973. Today Maryland's Computer Science Department is ranked 13th in the US. Despite this relatively late start, the Computer Science Department's research is very developed. It covers various subjects, such as:
Algorithms and Theory of Computation
Artificial Intelligence
Bioinformatics and Computational Biology
Computer Vision
Databases
Graphics
High Performance Computing
Human Computer Interaction
Numerical Analysis
Programming Languages
Software Engineering
Security
Systems
UMIACS
The mission of UMIACS (University of Maryland Institute for Advanced Computer Studies) is to foster and enhance interdisciplinary research and education in computing across the College Park campus. Since its inception, UMIACS has played a major role at the University of Maryland in building strong interdisciplinary research programs, cutting-edge computing infrastructure, and long-term partnerships with national and international research centers. The Institute's programs are led by distinguished researchers, many of whom hold joint appointments in strong academic units such as Computer Science, Electrical and Computer Engineering, Linguistics, Geography, Philosophy, Business, Education, and College of Information Studies. Since computing is at the core of all the Institute's activities, UMIACS has a uniquely close relationship with the Department of Computer Science whose graduate program has been consistently ranked very high by all national rankings. The
synergistic environment provided by UMIACS enables innovative collaborations between the Computer Science faculty and other faculty on campus. The infrastructure provided by UMIACS is primarily geared toward supporting interdisciplinary research while the core computer science projects are primarily conducted through the Department of Computer Science. UMIACS contains several research labs, which are:
CfAR: Center for Automation Research
CBCB: Center for Bioinformatics and Computational Biology
CDIG: Center for Digital International Government
CHESS: Center for Human Enhanced Secure Systems
CLIP: Computational Linguistics and Information Processing
CVL: Computer Vision Laboratory
DSSL: Distributed Systems Software Laboratory
FCMD: Fraunhofer Center at Maryland
GLCF: Global Land Cover Facility
GVIL: Graphics and Visual Informatics Laboratory
HCIL: Human Computer Interaction Laboratory
KECK: Keck Laboratory for the Analysis of Visual Motion
LAMP: Language and Media Processing Laboratory
LCCD: Laboratory for Computational Cultural Dynamics
LPCD: Laboratory for Parallel and Distributed Computing
MIND: Maryland Information and Network Dynamics Laboratory
PIRL: Perceptual Interfaces and Reality Laboratory
HCIL
Founded in 1983, the Human-Computer Interaction Lab (HCIL) at the University of Maryland conducts research on advanced user interfaces and their development processes. It designs, implements, and evaluates new interface technologies that are universally usable, useful, efficient and appealing to a broad cross-section of people. Interdisciplinary research teams study the entire technology development life-cycle, which includes initial technology design, implementation issues, and evaluation of user performance. This work has developed new theories, methodologies, and technologies. Current work includes new approaches to information visualization, interfaces for digital libraries, multimedia resources for learning communities, zooming interfaces (ZUIs), technology design methods with and for children, and instruments for evaluating user interface technologies.
The HCIL is an interdisciplinary lab composed of faculty and students from Computer Science, Education, Psychology and Information Studies. The HCIL has conducted a broad range of research over the years, and continues to pursue several areas in depth. All projects are listed below, grouped by area:
Communities
Design Process
Digital Libraries
Education
Physical Devices
Public Access
Visualization
Project descriptions
ICDL
The project I worked on built on another project called the ICDL (International Children's Digital Library), which started in November 2002. It is a virtual library of international children's books available around the world. It was developed with two kinds of users in mind: children aged 7-11 who want to read books, and parents who want to read books to their children. Only physical books are collected; to make them available online, they are all scanned. The website is designed in collaboration with children to facilitate navigation and searching. One of the primary goals of the ICDL is to "create a collection of more than 10,000 books in at least 100 languages that is freely available to children, teachers, librarians, parents and scholars throughout the world via the Internet". In addition, the people in charge want to give every child the possibility to read a book in his or her mother tongue. The National Science Foundation (NSF), the Institute for Museum and Library Services (IMLS) and Microsoft are the main sponsors of this project. Volunteers from around the world also donate their time to help test software, conduct user studies, ensure that language usage makes sense to native speakers, etc. Country or culture experts select books for the library, seeking out works with historical value as well as contemporary literature. Books are presented whole, in their original languages, and are totally free: every book on the website can be read in its entirety. In other words, everyone can now connect to the Internet and read a selection of children's books from around the world in different languages. Before the beginning of this project, the people in charge contacted the authors and publishers of the books they wanted to include so that website users could read all the books legally.
There were two types of books: those already in the public domain and those protected by copyright. For the latter, the directors had to sign contracts to obtain the right to publish the books on the Internet despite the copyright. A third option was for an author, a publisher or a national library that held the copyright on a book to donate it to the project.
Mobile Storytellers
Description
The project I worked on consisted of developing a new application. With it, children could load a book from the ICDL onto a PDA and change the text, the image and the sound. They could also reorder the pages and create new ones if they wanted to. One of the singularities of this project is that the PDAs were able to communicate with each other. Thus, kids could read the same story together but on two separate devices (the display of the story being shared between the PDAs). The role of each PDA (or client) would be assigned by a desktop (or server). The program could also be used to exchange stories: for instance, if one of the kids modified a story, he or she could send it to a friend. The communication would use infrared for identification and wireless for data transfer.
To display the pages of the loaded books, the program used a toolkit called Piccolo. It was developed by the lab in 2004 and is open source and totally free. It is a toolkit for building graphical applications, providing visual effects such as animation (through event handling) and a ZUI (Zoomable User Interface). A ZUI shows a small amount of information in the overview; when the user zooms in, it gives more details (the data is either bigger or more specific). For example, here are some screenshots from the Mobile Storyteller program.
overview of a story
There are three versions of Piccolo called Piccolo.Net (developed in C#), Piccolo.Java and PocketPiccolo.Net (for the Compact Framework). The program we made used both Piccolo.Net and PocketPiccolo.Net.
When I arrived to start this internship, a prototype of the project had already been completed, written in C#. We could ask ourselves: why choose this language? First, to be able to use Piccolo the program had to be written in either Java or C#. Of course, the project could have been done without any ZUI-friendly toolkit, but it would have been a shame not to take advantage of it. Also, the project targets PDAs as well as desktops, so Piccolo was a perfect match since it has a specific version for each. But the main reason C# was chosen is that it suited this project better than Java: it offered more stability, and we did not need the portability of Java since the desktops and PDAs all run Windows, with the .NET Framework and .NET Compact Framework respectively.
used by a Windows application. It allowed me to call some of Windows' predefined C++ methods. Searching a little more on each function made me realize that PlaySound can play and stop a file (but not pause it) and that mciSendString can do all three. As one of my objectives was to be able to pause the file, I had to find a new solution: I decided to switch from PlaySound to mciSendString. Windows' Media Control Interface (MCI) provides a high-level, device-independent interface for controlling multimedia devices. With it, your code can support pretty much any multimedia device, including operating on a file through a waveform audio device. MCI provides two ways of accessing the low-level multimedia devices: through a structure of messages, or by passing strings into the interface (using mciSendString). This method takes a string as an argument and accomplishes different things depending on its contents. For instance, to play a file the argument would be "play mysound", mysound being the alias of the device to play the sound on. The good thing about this method is that every action is very easy: you just call mciSendString and change the command string (the argument of the method). Therefore, I was able to play, pause and stop an audio file without any problem.
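As a rough illustration of this command-string approach, here is a minimal C# sketch of driving MCI through mciSendString. The alias name "story", the helper names and the error handling are illustrative, not taken from the project's actual code.

```csharp
using System;
using System.Runtime.InteropServices;
using System.Text;

class MciAudio
{
    [DllImport("winmm.dll")]
    private static extern int mciSendString(string command,
        StringBuilder returnValue, int returnLength, IntPtr callback);

    // Pure helper: the command string is the only part that changes
    // between actions, so it is factored out (and easy to test alone).
    public static string BuildCommand(string action, string alias)
    {
        return action + " " + alias;
    }

    public static void Open(string path, string alias)
    {
        mciSendString("open \"" + path + "\" type waveaudio alias " + alias,
                      null, 0, IntPtr.Zero);
    }

    public static void Play(string alias)  { mciSendString(BuildCommand("play", alias),  null, 0, IntPtr.Zero); }
    public static void Pause(string alias) { mciSendString(BuildCommand("pause", alias), null, 0, IntPtr.Zero); }
    public static void Stop(string alias)  { mciSendString(BuildCommand("stop", alias),  null, 0, IntPtr.Zero); }
}
```

Each action really is just a different command string against the same entry point, which is what made this approach so convenient.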
This program can:

open an audio file (using the open-file dialog box)
The user can browse the folders on his or her computer and choose a WAV file to open. Once the OK button is clicked, the path of the file appears in the label next to the browse button.

play and pause an audio file
To do so, the user clicks the play or pause button. Each time an action begins, it is written in the label on the right (in the device screen). That way, the user knows what is going on and does not click twice on the same button because it does not seem to work.

record a WAV file
When the user pushes the record button, the program starts recording data right away. To end this, you have to click the stop button. To transfer the buffered data into a file, one should hit the save button: it opens a save dialog box and creates a new WAV file in the chosen folder.
modify the volume
This was not mandatory, but I thought it would be handy. The user can increase or decrease the volume using a trackbar. If the sound is stereo, the volume is changed on both channels.
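The "both channels" part can be sketched in C#: waveOutSetVolume takes a single DWORD whose low-order word is the left-channel volume and high-order word is the right-channel volume (each 0x0000 to 0xFFFF), so setting both channels means packing the same level twice. The class and method names below are illustrative, not the project's actual code.

```csharp
using System;
using System.Runtime.InteropServices;

class VolumeControl
{
    [DllImport("winmm.dll")]
    private static extern int waveOutSetVolume(IntPtr hWaveOut, uint dwVolume);

    // Low word = left channel, high word = right channel.
    public static uint PackStereoVolume(ushort left, ushort right)
    {
        return (uint)left | ((uint)right << 16);
    }

    public static void SetVolume(ushort level)
    {
        // IntPtr.Zero addresses the default waveform-audio output device.
        waveOutSetVolume(IntPtr.Zero, PackStereoVolume(level, level));
    }
}
```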
It took us a bit of time to figure out how the program worked. After that we were able to locate the line in the code that made the playing loop: that line reset the position of the cursor to 0 when the file ended. We commented out that line, which made the playing stop once the end of the file was reached.
We then had to introduce a pause function. This was not very complicated, as the WaveOut class has a pause method. The tricky part was to add some Booleans to make sure the file was paused at the right time. Also, if the file was paused we had to call the resume function to play it again; otherwise we just called the play method.
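The Boolean bookkeeping just described can be sketched as a tiny state machine: the play button must trigger resume when the file is paused and play otherwise. The state and method names are illustrative; the real code worked against WaveOut directly.

```csharp
enum PlayerState { Stopped, Playing, Paused }

class PlaybackLogic
{
    public PlayerState State = PlayerState.Stopped;

    // Returns the action the play button should trigger, and updates state.
    public string OnPlayPressed()
    {
        if (State == PlayerState.Paused)
        {
            State = PlayerState.Playing;
            return "resume";   // would call WaveOut's resume method
        }
        State = PlayerState.Playing;
        return "play";         // would call WaveOut's play method
    }

    public string OnPausePressed()
    {
        // Pausing only makes sense while playing; ignore it otherwise.
        if (State != PlayerState.Playing) return "ignored";
        State = PlayerState.Paused;
        return "pause";        // would call WaveOut's pause method
    }
}
```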
The last problem we had to deal with was finding out why the stop button froze the application. We did not find where the error was. On the other hand, we did not spend much time on it, as we did not really need the stop button (the file stopped by itself once it finished playing).
Creating an audio player for PDAs (using WaveOut)
We created a new project specially made to be deployed on PDAs and copied the desktop project into it. At first it didn't build, because of two errors. The first was that the GetString function was called with only one argument but needed three to run on Pocket PCs. We switched back to three arguments, which corrected the mistake. The second error was located in the DLL import. In the desktop version we imported a library called "winmm.dll", a module of the Windows Multimedia API that consists of low-level audio and joystick functions; it contains all the WaveOut functions we called for playback. The problem is that PDAs run another operating system, Windows CE, which does not provide the winmm DLL. Instead, it uses one called coredll. So we modified the imported DLL, and the code then compiled and executed correctly. One of the advantages of employing the WaveOut methods was having the exact same code for both versions (desktop and PDA); this required less maintenance and saved some time. At this point, the two programs still had two differences. First, the GetString method did not need the same number of arguments in both cases; but as the .NET Framework could also accept three arguments instead of one, we did not need to change anything. Second, we could not use the coredll library on desktops because they were running Windows XP, so we inserted a test that set the library to import to winmm for desktops and coredll for PDAs. We tested both versions of the program and were able to open, play and pause different WAV files. Thus, we finished the program for both desktops and PDAs.
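The DLL-selection test can be sketched in C#. Since a DllImport path is a compile-time constant, one common pattern is to declare the same entry point once per DLL and pick the right wrapper at run time; the helper and entry-point choice below are illustrative, not the project's actual code.

```csharp
using System;
using System.Runtime.InteropServices;

class WaveNative
{
    [DllImport("winmm.dll", EntryPoint = "waveOutGetNumDevs")]
    private static extern int waveOutGetNumDevsDesktop();

    [DllImport("coredll.dll", EntryPoint = "waveOutGetNumDevs")]
    private static extern int waveOutGetNumDevsCE();

    // Pure helper: which multimedia DLL a given platform needs.
    public static string AudioDll(bool isWindowsCE)
    {
        return isWindowsCE ? "coredll.dll" : "winmm.dll";
    }

    public static int GetDeviceCount()
    {
        // Windows CE reports itself through Environment.OSVersion.
        bool isCE = Environment.OSVersion.Platform == PlatformID.WinCE;
        return isCE ? waveOutGetNumDevsCE() : waveOutGetNumDevsDesktop();
    }
}
```

Because P/Invoke resolves a DLL lazily on first call, the unused declaration never causes a load failure on the other platform.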
After changing the quality, we made sure the WAV file was still audible. It made almost no difference on desktops, but there was a noticeable difference on PDAs: the quality was much lower, but we could still hear the music and the lyrics quite easily. We figured this was due not only to the quality of the recording but also to the poor quality of the microphones on PDAs. Here is what the final program looks like on PDAs (the flowchart and code can be found in annexes 3, 4 and 5). We had to simplify the interface because Pocket PCs have a smaller screen than desktops.
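A back-of-the-envelope calculation shows why lowering the recording quality matters on a memory-limited PDA: uncompressed WAV data grows linearly with sample rate, channel count and sample size. The two presets below are common examples, not necessarily the exact settings we used.

```csharp
class WavMath
{
    // Uncompressed PCM data rate, as stored in a WAV header's
    // average-bytes-per-second field.
    public static int BytesPerSecond(int sampleRate, int channels, int bitsPerSample)
    {
        return sampleRate * channels * (bitsPerSample / 8);
    }
}

// CD quality   (44100 Hz, stereo, 16-bit): 176,400 bytes/s, roughly 10 MB per minute.
// Voice quality (11025 Hz, mono,   8-bit):  11,025 bytes/s, under 1 MB per minute.
```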
What is Kidsteam?
History
Kidsteam is an intergenerational and interdisciplinary design team. It was created in 1998 at the University of Maryland by the director of the HCIL and has become increasingly popular over the years.
Description
The Kidsteam project consists of gathering a group of six to eight children who come to the lab twice a week and spend two hours working on different projects. The children (aged six to eleven) are supervised by at least six adults. The objective is to keep the adult-to-child ratio very high in order to work faster and more efficiently. The kids' main role is to test and design new technologies made for children.
Group formation
The children are chosen in spring and spend two weeks in the summer learning how to be designers. During this time they are introduced to the different techniques researchers use to test and create prototypes. They might even start working on some of the projects they will have during the school year. Participation in Kidsteam is voluntary, so anyone can drop out whenever they want. After two semesters, if the participation was sufficient and if the child wants to, he or she can come back the following year. This process can go on until he or she turns eleven, which is the limit. So how are the children chosen? Some of the children in this year's Kidsteam were there last year or are related to a Kidsteam alumnus. Word of mouth is also very effective. As for the very first group of kids, researchers assembled it by going to local schools (near College Park and the D.C. area) and interviewing children, looking for young people interested in technology and able to articulate what they were thinking. Children are not paid but receive a gift at the end of the year in exchange for their hard work.
Sponsors
Sponsors as well as PhD students suggest Kidsteam projects. These companies (which are not necessarily specialized in computing) contact the HCIL to work with the children. This semester's sponsors included Microsoft, Discovery and the National Park Service.
Each company has a particular problem and thus asks Kidsteam to design or test different things. For example, it can be testing a website (to see if it is easy enough to navigate) or a program (to see if it has all the wanted functionality), or giving ideas for new software. Some computing companies donate hardware to the lab instead of the normal payment; usually this hardware is tested or used to develop a program. Each project has a particular age range. Sometimes not all of the kids fit in this range, so we divided them into two groups: one matched the exact age range and the other was a bit older or younger. We asked the second group to work out how the project could appeal to their generation. From time to time, we even called in some Kidsteam alumni to form a third group working on the same project.
idea in a single drawing). This method is regularly implemented at the beginning of the design process to identify the project's big ideas. It is also helpful when researchers want kids to change only one feature of the project and redesign it.

bags of stuff
This process was usually used at the beginning of a project, when no sample of the program or website was available yet. Adults often asked children to design a low-tech prototype of some broad ideas. To do so, kids were divided into groups of one to three and given a bag containing things such as cardboard, tape, paper, foam balls, balloons, glue and scissors. From all that, they had half an hour to build their prototype. Once they were done, they presented it to the whole group. When we were working with sponsors, this allowed the people in charge to bring the whole idea back to their company: not only did they have the idea, but also how a child would design the concept.

sticky notes
This was the most popular method, used mainly to test a prototype of a product and give feedback. Each kid sat in front of a computer (or the device being tested) and was paired with an adult (when there were enough). The child's job was to play with the product and find the major bugs. In addition, he or she had to give feedback on what was being tested. To keep a record of what was said during the session, adults wrote the comments on sticky notes, keeping one idea per note. When a child did not like a feature of the product, we often asked him or her for a design idea answering the question "how would I make it better?". That way, the criticism was productive and helpful to whoever was developing the project. At the end of the session, all the sticky notes were collected and grouped on a whiteboard. Adults then sorted them into three categories: likes, dislikes and design ideas. Most of the time several kids gave the same idea or commented on the same element, so we circled those notes together and gave each cluster a generic name.
After each session, adults and children gathered for a debriefing, which consisted of going over the big ideas that most of the kids had thought of. These debriefings were always very interesting: as each adult was working with only a fraction of Kidsteam, they
only heard part of the ideas. So the last meeting was a chance for everyone to catch up and see what the other kids (or groups) had come up with. It was also fascinating to see the general ideas that emerged from a session. Even though the adults planned the session and set some guidelines beforehand, children are very unpredictable; as a consequence, no one could foresee what the big ideas would be.
all text
all images
There are two possible ways to play a story.

the "automatic mode"
After clicking play, the pages are displayed one by one in full screen. If a sound file is attached to a page, it is played.

the "manual mode"
Instead of having the pages scroll by themselves, you push a button on the device to get to the next page.

Since we only had four PDAs, we grouped the children in pairs, giving one device per group, and asked them to read a story using the program. To be able to play a story, we first had to load it onto the Pocket PCs. Each story takes about 3 to 15 MB (depending on the number of pages and whether audio files are included) and the devices have on average 100 MB of memory. To prevent them from crashing, we only loaded three stories per device; the kids could choose any of those. They all picked a different story, the younger kids preferring the automatic mode and the older ones feeling
confident with the manual playing. By the end of the session, every group had had at least one crash; no one managed to get to the end of a story. At that time I had a program that played sound on desktops but nothing yet on PDAs, so we were still running the sound code written during the summer. The problem was that it contained an error that made the whole program crash.
Second session
By this session I had had time to finish the sound code for PDAs, which was integrated into the main program. As we had not added any other new feature, we decided not to make the kids test the code again. Instead, we asked them to come up with new features the program could have. We wanted to focus on the communication part, meaning how PDAs could interact with each other. To help the children, we built some low-tech PDAs out of cardboard and gave them sticky notes (the exact size of the screen) and cutouts from the stories we used last time. Each kid had a full-sized cardboard PDA, and each group had a story cutout that came in three versions: text and image, images only, and text only. That way, they had a concrete prop to hold on to.
a cardboard PDA
We then paired the children into groups different from the last session. The question they had to answer was: "what would happen if two children had two PDAs with the MobileStoryteller program?" In other words, how can having multiple devices help in reading stories? Each group came up with really different ideas. The first one thought about using both PDAs to have more space to display the story: the first device would show only the images and the second only the text. That way, both pages were zoomed in, which helped the user see better. The idea was to show a page and a half on each device, so the reader had a small preview of what was coming next. The scrolling was vertical, to take advantage of the shape of a PDA's screen. Both kids would be reading the same story at the same time, which implies that the devices need a way to synchronize.
The only idea that came out of more than one group was that the kids wanted to be able to send messages to each other, meaning they wanted to communicate while reading. These messages could tell a friend what you think about a book, or carry an extract of it (for example a page, some text or an image); they could also be voice messages. The most original idea, given by one of the groups, was to be able to mix stories: you could imagine having the text of one story and the images of another, or even a swap between characters and plots. In addition, there could be a surprise button that would also do a swap, but without the child knowing where in the book it would surface. That would be a new way to create stories.
Bluetooth is a short-range wireless technology that allows connection between two devices (such as mobile phones, computers and cameras) via a secure radio frequency. It replaces cables and does not need a host PC. Power consumption is very low, but in return the signal has a very limited range (1 to 100 meters). One device (called the master) can communicate with up to seven devices (its slaves) and can change its role at any time, creating small networks of at most eight appliances. Data transfer reaches 723.1 kbit/s with Bluetooth 1 and 2.1 Mbit/s with Bluetooth 2. Unfortunately, only two of our PDAs were equipped with Bluetooth, and on top of that it was only the first version.

Infrared is well known for its use in remote controls, but these light waves (not to be confused with radio waves) also play an important role in communication and networking. They can be used to connect two devices that are very close to each other. The speed at which data is transferred differs from one device to another, but it is around 115 kbit/s, which is rather slow. Also, the infrared ports are very small, so the two devices have to be held very still for the connection to work.

After looking at all our options, we chose to use infrared for authentication anyway. It offers more security than the other two, mostly because it does not use radio waves and so is stopped by walls; that way, you do not get connected to someone you do not know. But the main reason we chose IR over Bluetooth and WiFi was that we needed the kids to know when, and to whom, they were connected. By using infrared they had to physically approach the person they wanted to exchange information with, which made them realize they were connecting to that person. Also, aligning two PDAs can be seen as a fun and exciting little game for a child.
Program developed
The first program, developed before I joined the project, could transfer a text file between two PDAs. Clicking the receive button on the client device and then the send button on the other device copied the file from one device to the other, the communication taking place over infrared. This program was synchronous, meaning that once the file transfer ended the application closed and no other transfer could be made. The program we wanted to develop was used only for authentication. At first it had to display a message saying that no device was connected; then, when a device was detected through the infrared port, it had to display the device's name and status (the general flowchart of the program can be found at annex 6). A check had to occur every second to see whether the status had changed. We chose a listbox to display the names of the devices and their statuses; it was the only interface we had. The first version of the code used only one thread (the one that displays the listbox). The program successfully managed to recognize a device over IR. What no one knew is that using only one thread makes the program synchronous, so as soon as it detected the device, it closed. To make the program asynchronous, another thread was created, whose role was to detect new devices via IR. That way, the listbox was declared in one thread and modified in
another, which kept raising an error. The Invoke method had to be called every time the listbox was used from the IR recognition thread; it creates a delegate on the interface thread, which makes the modification of the listbox possible. Another problem was that a single Hashtable was used to store the device names, and the same table could not be both read from and written to. A second one was therefore created, each Hashtable serving a different purpose. When I joined the project, the program could already detect a new device and display its name and status. The remaining problem was to first show a sentence saying "no remote devices". This phrase had to be displayed only once and remain until a device was recognized, at which point it was erased and replaced with the device's name and status. The program uses a thread that checks every second whether there is a new device. We managed to display the sentence for one second, but then it disappeared. We finally found a solution by creating a flag that indicates whether this is the first time we enter the main method: if so, the message is displayed, and if not, all the devices' statuses are set to disconnected.
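The cross-thread pattern described above is not specific to Windows Forms. In this Java sketch (names such as DeviceList and postUpdate are illustrative, not from the actual program), one thread owns the list that stands in for the listbox, and the recognition thread posts its updates to that thread instead of touching the list directly, just as Invoke posts a delegate to the interface thread; the first-time flag reproduces the "no remote devices" behaviour:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class DeviceList {
    // Single thread that "owns" the list, like the Windows Forms UI thread
    private final ExecutorService uiThread = Executors.newSingleThreadExecutor();
    private final List<String> listbox = new ArrayList<>();
    private boolean firstTime = true; // show "no remote devices" only once

    public DeviceList() {
        listbox.add("no remote devices");
    }

    // Equivalent of calling Invoke: hand the mutation to the owning thread
    public void postUpdate(String device, String status) {
        uiThread.execute(() -> {
            if (firstTime) {
                listbox.clear();      // remove the "no remote devices" line
                firstTime = false;
            }
            listbox.add(device + ": " + status);
        });
    }

    // Drain the owning thread and read the final state of the "listbox"
    public List<String> snapshot() {
        uiThread.shutdown();
        try {
            uiThread.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return listbox;
    }

    public static void main(String[] args) throws InterruptedException {
        DeviceList ui = new DeviceList();
        // a worker thread plays the role of the IR recognition thread
        Thread scanner = new Thread(() -> ui.postUpdate("Device A", "connected"));
        scanner.start();
        scanner.join();
        System.out.println(ui.snapshot()); // [Device A: connected]
    }
}
```

The point is the same as with Invoke: only one thread ever mutates the interface state, so no cross-thread error can occur.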
This program supports multiple connections. For example, if person A wants to connect to person B, each of their devices will show the name of the other's device with the status "connected". Once they disconnect, both statuses change to "disconnected", but the device names do not disappear. Then, if person B wants to communicate with person C, his or her device will display: Device A: disconnected, Device C: connected. Each new device the program recognizes is thus added to the list of devices that were connected at one point or another.
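The bookkeeping this describes amounts to an insertion-ordered map that never forgets a device; a minimal sketch (illustrative names, not the actual program):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class StatusList {
    // Keeps devices in the order they were first seen; entries never removed
    private final Map<String, String> devices = new LinkedHashMap<>();

    public void connect(String name)    { devices.put(name, "connected"); }
    public void disconnect(String name) { devices.put(name, "disconnected"); }

    @Override
    public String toString() { return devices.toString(); }

    public static void main(String[] args) {
        StatusList b = new StatusList();  // person B's device
        b.connect("Device A");
        b.disconnect("Device A");         // Device A stays in the list
        b.connect("Device C");
        System.out.println(b); // {Device A=disconnected, Device C=connected}
    }
}
```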
The file is actually stored on a server; thanks to the pen ID, the receiving device can recognize which file is being transferred. As for stitching, the device identification is very simple: the first tap designates the sending device and the second tap the receiving device. This technique is very easy to use and could be implemented in Mobile Storytellers. It is simple enough to be used by children.
We can see that the quality of the pictures and movies is about average for mobile devices, but we could not really make out what was on the pictures. As for the movies, they were not very fluid and froze every couple of seconds. This could be because a short movie takes a lot of disk space (5.25 MB for 20 s), which represents one tenth of the total memory of the PDA we tested the program on. After a couple of tests I realized that recording video also needed a lot of RAM, making the software crash very often. This, combined with the fact that the images froze during video capture, made us stop this line of research. In any case, Mobile Storytellers does not need video playback or recording, so leaving it out would not be a handicap.
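The storage figures above can be checked with a little arithmetic; in this sketch the total-memory value is inferred from the "one tenth" remark and is therefore an assumption:

```java
public class VideoBudget {
    public static void main(String[] args) {
        double clipMb = 5.25;        // measured: 5.25 MB for a 20 s clip
        double clipSeconds = 20.0;
        // assumption: total memory inferred from "one tenth" above
        double deviceMb = 10 * clipMb;

        double mbPerSecond = clipMb / clipSeconds;   // 0.2625 MB/s
        double maxSeconds = deviceMb / mbPerSecond;  // 200 s in total
        System.out.printf("%.4f MB/s, %.0f s of video at most%n",
                mbPerSecond, maxSeconds);
    }
}
```

In other words, even with the whole memory devoted to video, the device could hold barely three minutes of footage, which supports dropping the feature.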
on the Internet. One had to type the address of a WAV file on the Internet to hear it. I did not know how to play a file stored in the phone's memory. Below are screen captures of the program.
When a file is being played, hitting the menu key makes this box appear, with a list of different actions to perform.
Conclusion
During this internship, I managed to fix the sound player/recorder by using another algorithm; it is now working and stable. In addition, I assisted my colleagues with the Kidsteam project by attending every session and preparing what we would need. I also helped finish the program that performs the infrared authentication, which will be used in the future. Finally, I did some research to see whether the program could be ported to cell phones. This internship was a good opportunity for me to discover programming for mobile technologies: I learnt a new language (C#) and managed to program with the Compact Framework.
Annex 1: PDA characteristics
Annex 2: The WAV file format
Annex 3: Flowchart of the playing part of the program
Annex 4: Flowchart of the recording part of the program
Annex 5: Program that plays and records a WAV file
Annex 6: Flowchart of the IR authentication program
Annex 7: Bibliography
Annexes
DELL 64 MB 256 MB
HP 64 MB 32 MB 57 MB
Toshiba 62 MB 32 MB 62 MB
624 MHz 400 MHz 400 MHz 400 MHz Yes (1.4) Yes (1.4) No No
We can see that each chunk has its own header (its ID and size). The far left column gives the byte order. If a field is "big endian", the bytes are read in the order they are written; if it is "little endian", the least significant byte is stored first, so the bytes must be read in reverse order. For instance, "46 46 49 52" (which would spell "FFIR") needs to be read backwards as "52 49 46 46" (or "RIFF"). Let's explain each field, starting with the RIFF chunk: ChunkSize equals the size of the entire file minus 8 bytes (the size of ChunkID plus the size of ChunkSize)
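To make the byte-order rule concrete, here is a small sketch (in Java for illustration; the report's own code is C#) decoding the first eight bytes of a RIFF header, where the chunk ID is read as plain ASCII but ChunkSize is little endian:

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;

public class RiffHeader {
    public static void main(String[] args) {
        // "RIFF", then ChunkSize = 2084 stored little endian (24 08 00 00)
        byte[] header = { 'R', 'I', 'F', 'F', 0x24, 0x08, 0x00, 0x00 };
        ByteBuffer buf = ByteBuffer.wrap(header);

        byte[] id = new byte[4];
        buf.get(id);                         // chunk IDs are read as-is
        String chunkId = new String(id, StandardCharsets.US_ASCII);

        buf.order(ByteOrder.LITTLE_ENDIAN);  // size fields are little endian
        int chunkSize = buf.getInt();        // 0x00000824 = 2084

        System.out.println(chunkId + " " + chunkSize); // RIFF 2084
    }
}
```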
If the format is set to "WAVE", two sub-chunks will follow: the format (or "fmt ") sub-chunk and the "data" sub-chunk.
AudioFormat indicates whether the file is compressed. We did not use any audio compression, so we set this field to 1 (PCM). NumChannels is the number of channels (1 for mono, 2 for stereo, etc.); we chose one channel. ByteRate is the average number of bytes streamed per second, calculated through the formula: ByteRate = SampleRate * NumChannels * BitsPerSample / 8
Subchunk1ID is then "fmt ". Subchunk1Size equals the size of the rest of Subchunk1.
BlockAlign is the number of bytes for one sample, all channels included: BlockAlign = NumChannels * BitsPerSample / 8
BitsPerSample is a multiple of eight; we chose to work with 8 bits. Subchunk2ID equals "data". Subchunk2Size is the size of the audio data. data is the audio data that the file contains.
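With the values chosen in the report (one channel, 8 bits per sample) and the 44 100 Hz rate used in the player code, the derived fields can be checked quickly (Java used for illustration):

```java
public class WavFields {
    public static void main(String[] args) {
        int sampleRate = 44100;  // samples per second, as in the player code
        int numChannels = 1;     // mono, as chosen in the report
        int bitsPerSample = 8;   // 8-bit samples, as chosen in the report

        // BlockAlign: bytes for one sample across all channels
        int blockAlign = numChannels * bitsPerSample / 8;
        // ByteRate: average bytes streamed per second
        int byteRate = sampleRate * numChannels * bitsPerSample / 8;

        System.out.println("BlockAlign = " + blockAlign); // 1
        System.out.println("ByteRate   = " + byteRate);   // 44100
    }
}
```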
[Flowchart, Annex 3: the playing part of the program. The "Play", "Pause" and "Stop" click handlers branch on the Recording?, Paused?, Playing? and Stopped? flags before opening the audio file, playing it, resuming, pausing or stopping.]
"Record" Click
N Recording?
N Playing?
Start recording
Stop recording
Create file
Write header
Write data
Close file
namespace Sound_Player_Recorder { public class MainForm : System.Windows.Forms.Form { private System.ComponentModel.Container components = null; private System.Windows.Forms.Button OpenButton; private System.Windows.Forms.Button PlayButton; private System.Windows.Forms.Button PauseButton; private System.Windows.Forms.Button StopButton; private System.Windows.Forms.Button RecordButton; private System.Windows.Forms.OpenFileDialog OpenDlg; private System.Windows.Forms.SaveFileDialog SaveDlg;
public MainForm() { InitializeComponent(); } // Clean up any resources being used. protected override void Dispose(bool disposing) { if (disposing) { if (components != null) { components.Dispose(); } } base.Dispose(disposing); }
Annexes #region Windows Form Designer generated code private void InitializeComponent() { this.PlayButton = new System.Windows.Forms.Button(); this.StopButton = new System.Windows.Forms.Button(); this.OpenButton = new System.Windows.Forms.Button(); this.OpenDlg =new System.Windows.Forms.OpenFileDialog(); this.RecordButton = new System.Windows.Forms.Button(); this.PauseButton = new System.Windows.Forms.Button(); this.SaveDlg =new System.Windows.Forms.SaveFileDialog(); this.SuspendLayout(); // // PlayButton // this.PlayButton.Location = new System.Drawing.Point(130, 99); this.PlayButton.Name = "PlayButton"; this.PlayButton.Size = new System.Drawing.Size(125, 54); this.PlayButton.TabIndex = 0; this.PlayButton.Text = "Play"; this.PlayButton.Click += new System.EventHandler(this.PlayButton_Click); // // StopButton // this.StopButton.Location = new System.Drawing.Point(130, 244); this.StopButton.Name = "StopButton"; this.StopButton.Size = new System.Drawing.Size(125, 54); this.StopButton.TabIndex = 1; this.StopButton.Text = "Stop"; this.StopButton.Click += new System.EventHandler(this.StopButton_Click); // // OpenButton // this.OpenButton.Location = new System.Drawing.Point(130, 24); this.OpenButton.Name = "OpenButton"; this.OpenButton.Size = new System.Drawing.Size(125, 54); this.OpenButton.TabIndex = 2; this.OpenButton.Text = "Open"; this.OpenButton.Click += new System.EventHandler(this.OpenButton_Click); // // OpenDlg // this.OpenDlg.DefaultExt = "wav"; this.OpenDlg.Filter = "WAV files|*.wav"; // // RecordButton // this.RecordButton.Location = new System.Drawing.Point(130, 316); this.RecordButton.Name = "RecordButton";
Annexes this.RecordButton.Size = new System.Drawing.Size(125, 54); this.RecordButton.TabIndex = 3; this.RecordButton.Text = "Record"; this.RecordButton.UseVisualStyleBackColor = true; this.RecordButton.Click += new System.EventHandler(this.RecordButton_Click); // // PauseButton // this.PauseButton.Location = new System.Drawing.Point(130, 170); this.PauseButton.Name = "PauseButton"; this.PauseButton.Size = new System.Drawing.Size(125, 54); this.PauseButton.TabIndex = 4; this.PauseButton.Text = "Pause"; this.PauseButton.UseVisualStyleBackColor = true; this.PauseButton.Click += new System.EventHandler(this.PauseButton_Click); // // SaveDlg // this.SaveDlg.DefaultExt = "wav"; this.SaveDlg.Filter = "WAV files|*.wav"; // // MainForm // this.AutoScaleBaseSize = new System.Drawing.Size(5, 13); this.ClientSize = new System.Drawing.Size(365, 416); this.Controls.Add(this.PauseButton); this.Controls.Add(this.RecordButton); this.Controls.Add(this.OpenButton); this.Controls.Add(this.StopButton); this.Controls.Add(this.PlayButton); this.FormBorderStyle = System.Windows.Forms.FormBorderStyle.FixedDialog; this.MaximizeBox = false; this.MinimizeBox = false; this.Name = "MainForm"; this.Text = "Low-level audio player"; this.ResumeLayout(false); } #endregion /// <summary> /// The main entry point for the application. /// </summary> [STAThread] static void Main() { Application.Run(new MainForm()); }
Annexes private WaveLib.WaveOutPlayer m_Player; private WaveLib.WaveInRecorder m_Recorder; private WaveLib.WaveFormat m_Format = new WaveLib.WaveFormat(44100, 8, 1); private MemoryStream RecorderOutputStream = new MemoryStream(); private Stream m_AudioStream; private byte[] m_RecBuffer;//used to store the data recorded private string filename = ""; //used to store the name of //the file to record //booleans private bool is_playing = false; private bool is_paused = false; private bool is_recording = false;
/*----------------------------------------------------------------*/ /*----------------------Opening file methods----------------------*/ /*----------------------------------------------------------------*/ private void OpenFile() { if (OpenDlg.ShowDialog() == DialogResult.OK) { CloseFile(); try { WaveLib.WaveStream S = new WaveLib.WaveStream(OpenDlg.FileName); if (S.Length <= 0) throw new Exception("Invalid WAV file"); m_Format = S.Format; if (m_Format.wFormatTag != (short)WaveLib.WaveFormats.Pcm && m_Format.wFormatTag != (short)WaveLib.WaveFormats.Float) throw new Exception("Only PCM files are supported"); m_AudioStream = S; } catch (Exception e) { CloseFile(); MessageBox.Show(e.Message); } } }
Annexes private void CloseFile() { Stop(); if (m_AudioStream != null) try { m_AudioStream.Close(); } finally { m_AudioStream = null; } } /*----------------------------------------------------------------*/ /*------------------------Playing methods-------------------------*/ /*----------------------------------------------------------------*/ private void Play() { Stop(); if (m_AudioStream != null) { m_AudioStream.Position = 0; m_Player = new WaveLib.WaveOutPlayer(-1, m_Format, 16384, 3,new WaveLib.BufferFillEventHandler(Filler)); } } private void Filler(IntPtr data, int size) { byte[] b = new byte[size]; if (m_AudioStream != null) { int pos = 0; while (pos < size) { int toget = size - pos; int got = m_AudioStream.Read(b, pos, toget); if (got < toget) if (m_Player != null) { m_Player.file_done(); is_playing = false; break; } pos += got; } } else { for (int i = 0; i < b.Length; i++) b[i] = 0; }
System.Runtime.InteropServices.Marshal.Copy(b, 0, data, size);
}
Annexes /*----------------------------------------------------------------*/ /*------------------------Stopping method-------------------------*/ /*----------------------------------------------------------------*/ private void Stop() { is_paused = false; if (m_Player != null) try { m_Player.Reset(); m_Player.Dispose(); } finally { m_Player = null; } } /*----------------------------------------------------------------*/ /*------------------------Recording methods-----------------------*/ /*----------------------------------------------------------------*/ private void RecordStart() { RecordStop(); try { m_Recorder = new WaveLib.WaveInRecorder(-1, m_Format, 16384, 3, new WaveLib.BufferDoneEventHandler(DataArrived)); is_recording = true; } catch { RecordStop(); throw; } } private void DataArrived(IntPtr data, int size) { if (m_RecBuffer == null || m_RecBuffer.Length < size) m_RecBuffer = new byte[size]; //copies all the data into the m_RecBuffer System.Runtime.InteropServices.Marshal.Copy( data, m_RecBuffer, 0, size); int count = 0; while (count < m_RecBuffer.Length) { //writes m_RecBuffer into recorderOutputStream byte //per byte RecorderOutputStream.WriteByte(m_RecBuffer[count++]); } }
Annexes private void RecordStop() { if (m_Recorder != null) try { //where the file will be stored FileStream fs = new FileStream(filename, System.IO.FileMode.Create); //chunksize is length of wave data and the //header. long chunksize = RecorderOutputStream.Length+36; BinaryWriter bw = new BinaryWriter(fs); // Write out the header information WriteChars(bw, "RIFF"); bw.Write((int)chunksize); WriteChars(bw, "WAVEfmt "); bw.Write((int)16); bw.Write(m_Format.wFormatTag); bw.Write(m_Format.nChannels); bw.Write(m_Format.nSamplesPerSec); bw.Write(m_Format.nAvgBytesPerSec); bw.Write(m_Format.nBlockAlign); bw.Write(m_Format.wBitsPerSample); WriteChars(bw, "data"); bw.Write(RecorderOutputStream.Length); bw.Flush(); //writes the recorded data into the binary file bw.Write(RecorderOutputStream.ToArray()); fs.Close(); m_Recorder.ResetRecord(); m_Recorder.Dispose(); is_recording = false; } finally { m_Recorder = null; } } private void WriteChars(BinaryWriter wrtr, string text) { for (int i = 0; i < text.Length; i++) { char c = (char)text[i]; wrtr.Write(c); } }
/*----------------------------------------------------------------*/
/*-----------------------Button Click Events----------------------*/
/*----------------------------------------------------------------*/
private void OpenButton_Click(object sender, EventArgs e)
{
    if (!is_recording)
        OpenFile();
}
private void PlayButton_Click(object sender, EventArgs e)
{
    if (!is_recording)
    {
        if (is_paused == true)
        {
            m_Player.Resume();
            is_paused = false;
        }
        else
        {
            is_playing = true;
            Play();
        }
    }
}
private void PauseButton_Click(object sender, EventArgs e) { if (!is_recording && is_playing) { if (m_Player != null) { m_Player.Pause(); is_paused = true; } } }
private void RecordButton_Click(object sender, EventArgs e)
{
    if (!is_recording && !is_playing)
    {
        if (SaveDlg.ShowDialog() == DialogResult.OK)
        {
            //Saves the name of the file to save
            filename = SaveDlg.FileName;
            RecordStart();
        }
    }
}
}
}
WaveOut class
using System;
using System.Threading;
using System.Runtime.InteropServices;
namespace WaveLib
{
    internal class WaveOutHelper
    {
        public static void Try(int err)
        {
            if (err != WaveNative.MMSYSERR_NOERROR)
                throw new Exception(err.ToString());
        }
    }
    public delegate void BufferFillEventHandler(IntPtr data, int size);
    internal class WaveOutBuffer : IDisposable
    {
        public WaveOutBuffer NextBuffer;
        private AutoResetEvent m_PlayEvent = new AutoResetEvent(false);
        private IntPtr m_WaveOut;
        private WaveNative.WaveHdr m_Header;
        private byte[] m_HeaderData;
        private GCHandle m_HeaderHandle;
        private GCHandle m_HeaderDataHandle;
        private bool m_Playing; // whether this buffer is currently playing
Annexes internal static void WaveOutProc(IntPtr hdrvr, int uMsg, int dwUser, ref WaveNative.WaveHdr wavhdr, int dwParam2) { if (uMsg == WaveNative.MM_WOM_DONE) { try { GCHandle h = (GCHandle)wavhdr.dwUser; WaveOutBuffer buf = (WaveOutBuffer)h.Target; buf.OnCompleted(); } catch { } } } public WaveOutBuffer(IntPtr waveOutHandle, int size) { m_WaveOut = waveOutHandle; m_HeaderHandle = GCHandle.Alloc(m_Header, GCHandleType.Pinned); m_Header.dwUser = (IntPtr)GCHandle.Alloc(this); m_HeaderData = new byte[size]; m_HeaderDataHandle = GCHandle.Alloc(m_HeaderData, GCHandleType.Pinned); m_Header.lpData = m_HeaderDataHandle.AddrOfPinnedObject(); m_Header.dwBufferLength = size; WaveOutHelper.Try(WaveNative.waveOutPrepareHeader (m_WaveOut, ref m_Header, Marshal.SizeOf(m_Header))); } ~WaveOutBuffer() { Dispose(); } public void Dispose() { if (m_Header.lpData != IntPtr.Zero) { WaveNative.waveOutUnprepareHeader(m_WaveOut, ref m_Header, Marshal.SizeOf(m_Header)); m_HeaderHandle.Free(); m_Header.lpData = IntPtr.Zero; } m_PlayEvent.Close(); if (m_HeaderDataHandle.IsAllocated) m_HeaderDataHandle.Free(); GC.SuppressFinalize(this); }
Annexes public int Size { get { return m_Header.dwBufferLength; } } public IntPtr Data { get { return m_Header.lpData; } } public bool Play() { lock (this) { m_PlayEvent.Reset(); m_Playing = WaveNative.waveOutWrite(m_WaveOut, ref m_Header, Marshal.SizeOf(m_Header)) == WaveNative.MMSYSERR_NOERROR; return m_Playing; } } public void WaitFor() { if (m_Playing) { m_Playing = m_PlayEvent.WaitOne(); } else { Thread.Sleep(0); } } public void OnCompleted() { m_PlayEvent.Set(); m_Playing = false; } } public class WaveOutPlayer : IDisposable { private IntPtr m_WaveOut; private WaveOutBuffer m_Buffers; // linked list private WaveOutBuffer m_CurrentBuffer; private Thread m_Thread; private BufferFillEventHandler m_FillProc; private bool m_Finished; private byte m_zero; private WaveNative.WaveDelegate m_BufferProc = new WaveNative.WaveDelegate(WaveOutBuffer.WaveOutProc); public static int DeviceCount { get { return WaveNative.waveOutGetNumDevs(); } }
public WaveOutPlayer( int device, WaveFormat format, int bufferSize, int bufferCount, BufferFillEventHandler fillProc) { m_zero = format.wBitsPerSample == 8 ? (byte)128 : (byte)0; m_FillProc = fillProc; WaveOutHelper.Try(WaveNative.waveOutOpen(out m_WaveOut, device, format, m_BufferProc, 0, WaveNative.CALLBACK_FUNCTION)); AllocateBuffers(bufferSize, bufferCount); m_Thread = new Thread(new ThreadStart(ThreadProc)); m_Thread.Start(); } ~WaveOutPlayer() { Dispose(); } public void Pause() { WaveNative.waveOutPause(m_WaveOut); } public void Resume() { WaveNative.waveOutRestart(m_WaveOut); } public void Reset() { WaveNative.waveOutReset(m_WaveOut); } public void file_done() { m_Finished = true; }
Annexes public void Dispose() { if (m_Thread != null) try { m_Finished = true; if (m_WaveOut != IntPtr.Zero) WaveNative.waveOutReset(m_WaveOut); m_Thread.Join(); m_FillProc = null; FreeBuffers(); if (m_WaveOut != IntPtr.Zero) WaveNative.waveOutClose(m_WaveOut); } finally { m_Thread = null; m_WaveOut = IntPtr.Zero; } GC.SuppressFinalize(this); } private void ThreadProc() { while (!m_Finished) { Advance(); if (m_FillProc != null && !m_Finished) m_FillProc(m_CurrentBuffer.Data, m_CurrentBuffer.Size); else { // zero out buffer byte v = m_zero; byte[] b = new byte[m_CurrentBuffer.Size]; for (int i = 0; i < b.Length; i++) b[i] = v; Marshal.Copy(b, 0, m_CurrentBuffer.Data, b.Length); } m_CurrentBuffer.Play(); } WaitForAllBuffers(); }
Annexes private void AllocateBuffers(int bufferSize, int bufferCount) { FreeBuffers(); if (bufferCount > 0) { m_Buffers = new WaveOutBuffer(m_WaveOut,bufferSize); WaveOutBuffer Prev = m_Buffers; try { for (int i = 1; i < bufferCount; i++) { WaveOutBuffer Buf = new WaveOutBuffer(m_WaveOut, bufferSize); Prev.NextBuffer = Buf; Prev = Buf; } } finally { Prev.NextBuffer = m_Buffers; } } } private void FreeBuffers() { m_CurrentBuffer = null; if (m_Buffers != null) { WaveOutBuffer First = m_Buffers; m_Buffers = null; WaveOutBuffer Current = First; do { WaveOutBuffer Next = Current.NextBuffer; Current.Dispose(); Current = Next; } while (Current != First); } }
private void WaitForAllBuffers()
{
    WaveOutBuffer Buf = m_Buffers;
    while (Buf.NextBuffer != m_Buffers)
    {
        Buf.WaitFor();
        Buf = Buf.NextBuffer;
    }
}
}
}
WaveIn class
using System;
using System.Threading;
using System.Runtime.InteropServices;
namespace WaveLib
{
    internal class WaveInHelper
    {
        public static void Try(int err)
        {
            if (err != WaveNative.MMSYSERR_NOERROR)
                throw new Exception(err.ToString());
        }
    }
    public delegate void BufferDoneEventHandler(IntPtr data, int size);
    internal class WaveInBuffer : IDisposable
    {
        public WaveInBuffer NextBuffer;
        private AutoResetEvent m_RecordEvent = new AutoResetEvent(false);
        private IntPtr m_WaveIn;
        private WaveNative.WaveHdr m_Header;
        private byte[] m_HeaderData;
        private GCHandle m_HeaderHandle;
        private GCHandle m_HeaderDataHandle;
        private bool m_Recording; // whether this buffer is currently recording
Annexes internal static void WaveInProc( IntPtr hdrvr, int uMsg, int dwUser, ref WaveNative.WaveHdr wavhdr, int dwParam2) { if (uMsg == WaveNative.MM_WIM_DATA) { try { GCHandle h = (GCHandle)wavhdr.dwUser; WaveInBuffer buf = (WaveInBuffer)h.Target; buf.OnCompleted(); } catch { } } } public WaveInBuffer(IntPtr waveInHandle, int size) { m_WaveIn = waveInHandle; m_HeaderHandle = GCHandle.Alloc(m_Header, GCHandleType.Pinned); m_Header.dwUser = (IntPtr)GCHandle.Alloc(this); m_HeaderData = new byte[size]; m_HeaderDataHandle = GCHandle.Alloc(m_HeaderData, GCHandleType.Pinned); m_Header.lpData = m_HeaderDataHandle.AddrOfPinnedObject(); m_Header.dwBufferLength = size; WaveInHelper.Try(WaveNative.waveInPrepareHeader( m_WaveIn, ref m_Header, Marshal.SizeOf(m_Header))); } ~WaveInBuffer() { Dispose(); } public void Dispose() { if (m_Header.lpData != IntPtr.Zero) { WaveNative.waveInUnprepareHeader(m_WaveIn, ref m_Header, Marshal.SizeOf(m_Header)); m_HeaderHandle.Free(); m_Header.lpData = IntPtr.Zero; } m_RecordEvent.Close(); if (m_HeaderDataHandle.IsAllocated) m_HeaderDataHandle.Free(); GC.SuppressFinalize(this); }
Annexes public int Size { get { return m_Header.dwBufferLength; } } public IntPtr Data { get { return m_Header.lpData; } } public bool Record() { lock (this) { m_RecordEvent.Reset(); m_Recording = WaveNative.waveInAddBuffer(m_WaveIn, ref m_Header, Marshal.SizeOf(m_Header)) == WaveNative.MMSYSERR_NOERROR; return m_Recording; } } public void WaitFor() { if (m_Recording) m_Recording = m_RecordEvent.WaitOne(); else Thread.Sleep(0); } private void OnCompleted() { m_RecordEvent.Set(); m_Recording = false; } } public class WaveInRecorder : IDisposable { private IntPtr m_WaveIn; private WaveInBuffer m_Buffers; // linked list private WaveInBuffer m_CurrentBuffer; private Thread m_Thread; private BufferDoneEventHandler m_DoneProc; private bool m_Finished; private WaveNative.WaveDelegate m_BufferProc = new WaveNative.WaveDelegate(WaveInBuffer.WaveInProc); public static int DeviceCount { get { return WaveNative.waveInGetNumDevs(); } }
Annexes public void ResetRecord() { WaveNative.waveInReset(m_WaveIn); } public WaveInRecorder( int device, WaveFormat format, int bufferSize, int bufferCount, BufferDoneEventHandler doneProc) { m_DoneProc = doneProc; WaveInHelper.Try(WaveNative.waveInOpen(out m_WaveIn, device, format, m_BufferProc, 0, WaveNative.CALLBACK_FUNCTION)); AllocateBuffers(bufferSize, bufferCount); for (int i = 0; i < bufferCount; i++) { SelectNextBuffer(); m_CurrentBuffer.Record(); } WaveInHelper.Try(WaveNative.waveInStart(m_WaveIn)); m_Thread = new Thread(new ThreadStart(ThreadProc)); m_Thread.Start(); } ~WaveInRecorder() { Dispose(); } public void Dispose() { if (m_Thread != null) try { m_Finished = true; if (m_WaveIn != IntPtr.Zero) WaveNative.waveInReset(m_WaveIn); WaitForAllBuffers(); m_Thread.Join(); m_DoneProc = null; FreeBuffers(); if (m_WaveIn != IntPtr.Zero) WaveNative.waveInClose(m_WaveIn); } finally { m_Thread = null; m_WaveIn = IntPtr.Zero; } GC.SuppressFinalize(this); }
private void ThreadProc()
{
    while (!m_Finished)
    {
        Advance();
        if (m_DoneProc != null && !m_Finished)
            m_DoneProc(m_CurrentBuffer.Data, m_CurrentBuffer.Size);
        m_CurrentBuffer.Record();
    }
}
private void AllocateBuffers(int bufferSize, int bufferCount)
{
    FreeBuffers();
    if (bufferCount > 0)
    {
        m_Buffers = new WaveInBuffer(m_WaveIn, bufferSize);
        WaveInBuffer Prev = m_Buffers;
        try
        {
            for (int i = 1; i < bufferCount; i++)
            {
                WaveInBuffer Buf = new WaveInBuffer(m_WaveIn, bufferSize);
                Prev.NextBuffer = Buf;
                Prev = Buf;
            }
        }
        finally
        {
            Prev.NextBuffer = m_Buffers;
        }
    }
}
private void FreeBuffers()
{
    m_CurrentBuffer = null;
    if (m_Buffers != null)
    {
        WaveInBuffer First = m_Buffers;
        m_Buffers = null;
        WaveInBuffer Current = First;
        do
        {
            WaveInBuffer Next = Current.NextBuffer;
            Current.Dispose();
            Current = Next;
        } while (Current != First);
    }
}
private void Advance()
{
    SelectNextBuffer();
    m_CurrentBuffer.WaitFor();
}
private void SelectNextBuffer()
{
    m_CurrentBuffer = m_CurrentBuffer == null ?
        m_Buffers : m_CurrentBuffer.NextBuffer;
}
private void WaitForAllBuffers()
{
    WaveInBuffer Buf = m_Buffers;
    while (Buf.NextBuffer != m_Buffers)
    {
        Buf.WaitFor();
        Buf = Buf.NextBuffer;
    }
}
}
}
WaveStream class
using System; using System.IO; namespace WaveLib { public class WaveStream : Stream, IDisposable { private Stream m_Stream; private long m_DataPos; private long m_Length; private WaveFormat m_Format; public WaveFormat Format { get { return m_Format; } } private string ReadChunk(BinaryReader reader) { byte[] ch = new byte[4]; reader.Read(ch, 0, ch.Length); return System.Text.Encoding.ASCII.GetString( ch, 0, ch.Length); }
private void ReadHeader()
{
    BinaryReader Reader = new BinaryReader(m_Stream);
    if (ReadChunk(Reader) != "RIFF")
        throw new Exception("Invalid file format");
    Reader.ReadInt32(); // File length minus first 8 bytes
                        // of RIFF description
    if (ReadChunk(Reader) != "WAVE")
        throw new Exception("Invalid file format");
    if (ReadChunk(Reader) != "fmt ")
        throw new Exception("Invalid file format");
    int len = Reader.ReadInt32();
    if (len < 16) // bad format chunk length
        throw new Exception("Invalid file format");
    m_Format = new WaveFormat(22050, 16, 2); // initialize to any format
m_Format.wFormatTag = Reader.ReadInt16(); m_Format.nChannels = Reader.ReadInt16(); m_Format.nSamplesPerSec = Reader.ReadInt32(); m_Format.nAvgBytesPerSec = Reader.ReadInt32(); m_Format.nBlockAlign = Reader.ReadInt16(); m_Format.wBitsPerSample = Reader.ReadInt16(); // advance in the stream to skip the wave format block len -= 16; // minimum format size while (len > 0) { Reader.ReadByte(); len--; } // assume the data chunk is aligned while (m_Stream.Position < m_Stream.Length && ReadChunk(Reader) != "data"); if (m_Stream.Position >= m_Stream.Length) throw new Exception("Invalid file format"); m_Length = Reader.ReadInt32(); m_DataPos = m_Stream.Position; Position = 0; } public WaveStream(string fileName) : this(new FileStream(fileName, FileMode.Open)) { }
public WaveStream(Stream S)
Annexes { m_Stream = S; ReadHeader(); } ~WaveStream() { Dispose(); } public void Dispose() { if (m_Stream != null) m_Stream.Close(); GC.SuppressFinalize(this); } public override bool CanRead { get { return true; } } public override bool CanSeek { get { return true; } } public override bool CanWrite { get { return false; } } public override long Length { get { return m_Length; } } public override long Position { get { return m_Stream.Position - m_DataPos; } set { Seek(value, SeekOrigin.Begin); } } public override void Close() { Dispose(); } public override void Flush() { } public override void SetLength(long len) { throw new InvalidOperationException(); }
public override long Seek(long pos, SeekOrigin o)
{
    switch (o)
    {
        case SeekOrigin.Begin:
            m_Stream.Position = pos + m_DataPos;
            break;
        case SeekOrigin.Current:
            m_Stream.Seek(pos, SeekOrigin.Current);
            break;
        case SeekOrigin.End:
            m_Stream.Position = m_DataPos + m_Length - pos;
            break;
    }
    return this.Position;
}
public override int Read(byte[] buf, int ofs, int count)
{
    int toread = (int)Math.Min(count, m_Length - Position);
    return m_Stream.Read(buf, ofs, toread);
}
public override void Write(byte[] buf, int ofs, int count)
{
    throw new InvalidOperationException();
}
}
}
WaveNative class
using System; using System.Runtime.InteropServices; namespace WaveLib { public enum WaveFormats { Pcm = 1, Float = 3 } [StructLayout(LayoutKind.Sequential)] public class WaveFormat { public short wFormatTag; public short nChannels; public int nSamplesPerSec; public int nAvgBytesPerSec; public short nBlockAlign; public short wBitsPerSample; public short cbSize;
public WaveFormat(int rate, int bits, int channels)
{
    wFormatTag = (short)WaveFormats.Pcm;
    nChannels = (short)channels;
    nSamplesPerSec = rate;
    wBitsPerSample = (short)bits;
    cbSize = 0;
    nBlockAlign = (short)(channels * (bits / 8));
    nAvgBytesPerSec = nSamplesPerSec * nBlockAlign;
}
}
internal class WaveNative
{
    // consts
    public const int MMSYSERR_NOERROR = 0; // no error
    public const int MM_WOM_OPEN = 0x3BB;
    public const int MM_WOM_CLOSE = 0x3BC;
    public const int MM_WOM_DONE = 0x3BD;
    public const int MM_WIM_OPEN = 0x3BE;
    public const int MM_WIM_CLOSE = 0x3BF;
    public const int MM_WIM_DATA = 0x3C0;
    public const int CALLBACK_FUNCTION = 0x00030000;
    public const int TIME_MS = 0x0001;      // time in milliseconds
    public const int TIME_SAMPLES = 0x0002; // number of wave samples
    public const int TIME_BYTES = 0x0004;   // current byte offset
    // callbacks
    public delegate void WaveDelegate(
        IntPtr hdrvr, int uMsg, int dwUser,
        ref WaveHdr wavhdr, int dwParam2);
    // structs
    [StructLayout(LayoutKind.Sequential)]
    public struct WaveHdr
    {
        public IntPtr lpData;       // pointer to locked data buffer
        public int dwBufferLength;  // length of data buffer
        public int dwBytesRecorded; // used for input only
        public IntPtr dwUser;       // for client's use
        public int dwFlags;         // assorted flags (see defines)
        public int dwLoops;         // loop control counter
        public IntPtr lpNext;       // PWaveHdr, reserved for driver
        public int reserved;        // reserved for driver
    }
    private const string mmdll = "winmm.dll";
// waveOut calls
[DllImport(mmdll)]
public static extern int waveOutGetNumDevs();
[DllImport(mmdll)]
public static extern int waveOutPrepareHeader(
    IntPtr hWaveOut, ref WaveHdr lpWaveOutHdr, int uSize);
[DllImport(mmdll)]
public static extern int waveOutUnprepareHeader(
    IntPtr hWaveOut, ref WaveHdr lpWaveOutHdr, int uSize);
[DllImport(mmdll)]
public static extern int waveOutWrite(
    IntPtr hWaveOut, ref WaveHdr lpWaveOutHdr, int uSize);
[DllImport(mmdll)]
public static extern int waveOutOpen(out IntPtr hWaveOut,
    int uDeviceID, WaveFormat lpFormat, WaveDelegate dwCallback,
    int dwInstance, int dwFlags);
[DllImport(mmdll)]
public static extern int waveOutReset(IntPtr hWaveOut);
[DllImport(mmdll)]
public static extern int waveOutClose(IntPtr hWaveOut);
[DllImport(mmdll)]
public static extern int waveOutPause(IntPtr hWaveOut);
[DllImport(mmdll)]
public static extern int waveOutRestart(IntPtr hWaveOut);
// WaveIn calls [DllImport(mmdll)] public static extern int waveInGetNumDevs(); [DllImport(mmdll)] public static extern int waveInAddBuffer( IntPtr hwi, ref WaveHdr pwh, int cbwh); [DllImport(mmdll)] public static extern int waveInClose(IntPtr hwi); [DllImport(mmdll)] public static extern int waveInOpen(out IntPtr phwi, int uDeviceID, WaveFormat lpFormat, WaveDelegate dwCallback, int dwInstance, int dwFlags); [DllImport(mmdll)] public static extern int waveInPrepareHeader(IntPtr hWaveIn, ref WaveHdr lpWaveInHdr, int uSize);
[DllImport(mmdll)]
30
Annexes public static extern int waveInUnprepareHeader( IntPtr hWaveIn, ref WaveHdr lpWaveInHdr, int uSize); [DllImport(mmdll)] public static extern int waveInReset(IntPtr hwi); [DllImport(mmdll)] public static extern int waveInStart(IntPtr hwi); } }
31
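The WaveFormat constructor at the top of this annex derives the two remaining PCM header fields from the channel count, sample rate, and bit depth: the block alignment is the size in bytes of one sample frame, and the average byte rate is the sample rate times that frame size. A minimal sketch of the same arithmetic in C (the names pcm_format and pcm_format_make are mine, not part of the original code):

```c
#include <assert.h>

/* Derived PCM header fields, mirroring the WaveFormat constructor
 * in the annex: nBlockAlign = channels * bytes-per-sample,
 * nAvgBytesPerSec = sample rate * nBlockAlign. */
struct pcm_format {
    short nChannels;
    int   nSamplesPerSec;
    short wBitsPerSample;
    short nBlockAlign;
    int   nAvgBytesPerSec;
};

static struct pcm_format pcm_format_make(short channels, int rate, short bits)
{
    struct pcm_format f;
    f.nChannels       = channels;
    f.nSamplesPerSec  = rate;
    f.wBitsPerSample  = bits;
    f.nBlockAlign     = (short)(channels * (bits / 8));
    f.nAvgBytesPerSec = rate * f.nBlockAlign;
    return f;
}
```

For CD-quality audio (44,100 Hz, stereo, 16-bit), this gives a block alignment of 4 bytes and a data rate of 176,400 bytes per second, which is why the playback buffers in the prototype had to be multiples of 4 bytes.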
[Flowchart: device discovery logic — "Discover devices" → "Device discovered?" → "First time?" → "Clear listbox"]
Annex 7: Bibliography
Introduction
http://nces.ed.gov/pubs2005/2005111rev.pdf
University Description
http://www.newsdesk.umd.edu/facts/2006rank.cfm
http://prism.cs.umd.edu/papers/M02:facultytimeline/history_new.pdf
http://www.cs.umd.edu/groups/areas.shtml
http://www.cs.umd.edu/newsletters/InsideCS_2006s.pdf
http://www.umd.edu/university/factcard2004.pdf
http://www.newsdesk.umd.edu/facts/quickfacts.cfm
http://www.umiacs.umd.edu/aboutus.htm
http://www.umiacs.umd.edu/docs/umiacsbro.pdf
http://www.umiacs.umd.edu/research.htm
http://www.cs.umd.edu/hcil/about/
http://www.cs.umd.edu/hcil/research/
http://www.provost.umd.edu/Strategic_Planning/Mission2000.html
Project descriptions
What is Kidsteam?
http://www.cs.umd.edu/hcil/kiddesign/
Mona Leigh Guha, Allison Druin, Gene Chipman, Jerry Alan Fails, Sante Simms, Allison Farber. Mixing Ideas: A New Technique for Working with Young Children as Design Partners. http://hcil.cs.umd.edu/trs/2004-01/2004-01.pdf, 2004
http://en.wikipedia.org/wiki/Bluetooth
http://www.bluetomorrow.com/content/section/10/37/
http://en.wikipedia.org/wiki/Wi-Fi
http://en.wikipedia.org/wiki/Infrared#Communications
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/cpref/html/frlrfsystemwindowsformscontrolclassinvoketopic.asp
Ken Hinckley, Gonzalo Ramos, Francois Guimbretiere, Patrick Baudisch, Marc Smith. Stitching: Pen Gestures that Span Multiple Displays. http://www.cs.umd.edu/~francois/Papers/2004-Hinckley-AVI04-Stitching.pdf, 2004
Jun Rekimoto. Pick-and-Drop: A Direct Manipulation Technique for Multiple Computer Environments. http://www.csl.sony.co.jp/person/rekimoto/papers/uist97.pdf, 1997
http://en.wikipedia.org/wiki/WAV
http://ccrma.stanford.edu/CCRMA/Courses/422/projects/WaveFormat/