
CHI 2009 ~ Programming Tools and Architectures

April 7th, 2009 ~ Boston, MA, USA

Support for Context-Aware Intelligibility and Control


Anind K. Dey Human-Computer Interaction Institute Carnegie Mellon University, Pittsburgh, PA 15213 anind@cs.cmu.edu
Alan Newberger Google, Inc. New York, NY 10011 alann@google.com

ABSTRACT

Intelligibility and control are important user concerns in context-aware applications. They allow a user to understand why an application is behaving a certain way and to change its behavior. Because of their importance to end users, they must be addressed at the interface level. However, the sensors or machine learning systems that users need to understand and control are often created long before a specific application is built, or created separately from the application interface. Supporting interface designers in building intelligibility and control into interfaces therefore requires application logic and underlying infrastructure to be exposed in some structured fashion. Because context-aware infrastructures do not provide generalized support for this, we extended one such infrastructure with Situations, components that appropriately expose application logic and support debugging and simple intelligibility and control interfaces, while making it easier for application developers to build context-aware applications and giving designers access to application state and behavior. We developed support for interface designers in Visual Basic and Flash. We demonstrate the usefulness of this support through an evaluation with programmers, an evaluation of the usability of the new infrastructure with interface designers, and the augmentation of three common context-aware applications.
Author Keywords

Context-aware computing, intelligibility, control, toolkits, design support


ACM Classification Keywords

H5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous.


INTRODUCTION

Context-aware applications use context: information regarding the state of people, places and objects that is relevant to interaction with users [12]. Context is typically gathered in an automated fashion that uses a combination of sensing and complex rules to allow applications to react to relevant changes in the state of the world.

In order for end users to interact effectively with context-aware applications, it is important that they (1) understand how a context-aware application is working or behaving (e.g., monitoring a friend's location or retrieving sensor readings to understand why a friend is being reported as being at a particular location); and (2) can intentionally change or control how context is processed (e.g., changing from sending alerts when a friend is within 50 meters of the user's location to within 10 meters). These represent two key classes of end-user interaction with context-aware applications: intelligibility and control. Supporting them will significantly impact adoption of context-aware applications [3,4,8].

Because intelligibility and control involve application state, and context-aware applications often possess distributed state, these interactions can be challenging to implement and support. Intelligibility and control may also involve reusable infrastructure produced separately from the application, but access to sensors or the internals of reasoning systems may be difficult. Intelligibility and control necessarily happen in the user interface, so a successful solution must extend to interface designers, who are more aware of an application's users and their tasks, in addition to the developers who produce reusable context components.

In this paper we present a solution that exposes the internals of context-aware applications and facilitates the design of intelligibility and control interfaces from multiple interface development platforms commonly used by designers. By enabling designers to customize previously inaccessible applications, we broaden the range of people able to develop context-aware applications beyond the systems programmer and enable more usable applications.

Our contributions in this work are four-fold: 1) We augment an existing context-aware infrastructure, the Context Toolkit, with a valuable programming abstraction, Situations, that makes it possible to support intelligibility and control in end-user applications. By using Situations, developers get built-in support for debugging applications and for providing simple intelligibility and control in all of their applications, with no additional effort required. Situations also expose an API to the internal logic of a context-aware application, allowing both developers and, more importantly, designers to provide custom intelligibility and control interfaces without needing access to an application's original
codebase or needing to use the original programming language; 2) We show that Situations make it easier for developers to build applications that support intelligibility and control, validating this in a study with 18 developers that showed reductions in both development time and lines of code; 3) We show that Situations are usable by designers. In particular, we implemented Flash and Visual Basic clients for our underlying architecture and validated their usability in a study with 10 designers, showing that designers could build interfaces using our abstractions and implementation, and that they liked doing so; and 4) We show that Situations can be used to support a range of interesting and canonical context-aware applications. We present three applications built using Situations: a unified room control application, a museum tour guide application, and an office awareness application.

In the remainder of this paper we show that structuring context-aware application logic through Situations makes it easier for application developers to build and debug applications, and for designers to help users understand and control context-aware applications. We first describe additional motivation for intelligibility and control support. We then discuss the Situation component and platform extensions we have implemented, and the built-in support Situations have for traceability, which enables debugging and simple intelligibility interfaces. We then demonstrate through a user study that this component both speeds and simplifies the development of applications. We also describe an evaluation with designers that illustrates how they can effectively access the internal logic of context-aware applications to support end users. We then demonstrate the use of Situations through the implementation of three common applications and interesting extensions. We end with a discussion of related work, our conclusions and plans for future work.
BACKGROUND

Context-aware applications use context in their environments, and often take action on that context without explicit user input. Schmidt refers to this phenomenon as implicit human-computer interaction [27]. Bellotti and Edwards state that two key design principles for context-aware systems are informing the user of the system's understanding of the world (intelligibility, or what others have called scrutability [1]) and providing control to the user [4]. A study of context-aware systems showed that users are very frustrated when they do not understand why a system has performed an action, or do not have the ability to fix it [3]. Intelligibility and control interactions are a significant part of context-aware applications and will be necessary for adoption.

There will always be situations where users want to understand or modify application state [1]. One such situation is remote intelligibility and control under unexpected conditions. Consider a home with a context-aware security system, where a delivery person has arrived with a package. The homeowner is away at the office and is notified of the delivery by the home system. Before remotely allowing the deliverer into the home foyer, she wants to see the impact of her action on the rest of the house. She confirms that the system will correctly shut off access to other rooms, and executes the action. She then confirms through system video feeds that the delivery person exits the home. The homeowner might desire such explicit intelligibility and control even if the system could handle such a protocol on its own.

Another example is in supporting user response to perceived application error. Consider a home lighting application that turns on lights in a home for occupants while trying to save energy costs (e.g., [23]). During normal operation, this action is completely implicit, turning lights on and off based largely on user movements and object locations but not according to user commands (Fig. 1 shows an intelligibility and control interface for such an application). However, if the system performs unexpectedly or erroneously, e.g., turning off a light in a room where a user is reading, that user will likely shift into a set of explicit interactions with the application, perhaps trying to figure out why the system turned the lights off and almost certainly trying to turn the lights back on.

Fig. 1. Unified Room Control interface.

While an extreme case, evidence from the MavHome shows that the lack of an intelligibility and control interface can result in a very frustrating user experience [29]. The MavHome learned lighting behaviors over time with occupants who did not have visitors late at night. When an occupant moved in who had guests over late at night, the lights remained off, the system not having had time to learn the new occupant's patterns. Apparently the occupant chose to literally remain in the dark because there were no mechanisms for him to either understand why the home was behaving this way, or to control the home directly.

These examples show that although context-aware applications may operate implicitly, they will inevitably involve explicit intelligibility and control interactions from users. Explicit interaction demands that applications have interfaces with which
users can readily interact, to avoid user annoyance and rejection of the applications.

Several infrastructures exist that support context-aware computing applications. In general they address the challenges involved in using and reusing a distributed set of computing resources in a variety of applications, providing services such as asynchronous message communication, resource discovery, event subscription, and platform-independent identification and communication protocols. Examples include JCAF [2], Cooltown [5], Solar [7], iQL [10], and the Context Toolkit [12]. All can access context, and some provide mechanisms to invoke services. None, however, offer higher-level abstractions that describe the actions an application takes and the context involved in those actions as an accessible unit. Situations complement these systems by providing an accounting of input, output and the mapping between them.

The exception among the infrastructures above is PersonisAD [1]. While not the main focus of the infrastructure, it does have some support for intelligibility. In response to a context-aware query or subscription, PersonisAD aggregates evidence that could be used to support intelligibility, although this support is not extended out to users. We build on this work by providing a validation of our framework with developers and designers, extending intelligibility and control to developers (through debugging support), designers (by exposing application logic) and end-users (through standard and appropriate interfaces), and providing this support in a widely used context infrastructure, the Context Toolkit.

Situations are a componentization of application logic to support inspection and manipulation via an established API, concretizing the concept of accounts introduced by Dourish [15]. Rather than hide how a system works and the errors a system makes, they expose this information in order to improve the user experience [6]. Situations organize applications in a similar fashion to component architectures such as JavaBeans, the Open Agent Architecture and XWeb [22,25]. These architectures, however, do not provide explicit support for developing intelligibility and control interfaces.

Some research has addressed aspects of intelligibility and control. The Jigsaw Editor and iCAP both facilitate end-user control by allowing users to build their own context-aware applications [17,13], but do not support the intelligibility and control interfaces that users need for understanding and using context-aware applications. Cooltown, Speakeasy, and iCrafter address the need for ubicomp user interfaces (UIs) to be highly dynamic and adaptable by delivering interfaces to users through template-based or automatic UI generation [5,24,26]. However, as Dourish maintains, enabling design customization in interfaces is crucial [14]. These two approaches toward effective interaction with context-aware applications are complementary to ours.



Our work attempts to support interface designers in developing interfaces for applications that were built at an earlier point using context-aware infrastructures. Context-aware systems are likely to be long-lasting and evolving. Dourish argues that such systems should support reflection and offer customizability during a continual design cycle [14]. He focuses on end-user customization, but acknowledges the importance of systems that allow customizability by individuals with varying degrees of expertise, as in the Buttons system [21]. Similarly, Mobile Bristol and Topiary demonstrate the value of supporting designers in building interfaces for location-aware systems [16,19]; however, none of these systems focuses on intelligibility and control.

In summary, current context-aware infrastructures greatly assist the creation of context-aware applications but do not provide the necessary support for intelligibility and control. Supporting designers in building intelligibility and control interfaces has great value, but little direct support exists for this. We address both issues in our work.
ARCHITECTURE

The general structure of a typical context-aware application is as follows: context input from either sensors or users is made available to applications. Application logic, which is often ad hoc, is programmed to acquire and analyze input, and to issue or execute context output when appropriate; this includes controlling actuators, modifying data or notifying user displays. The acquisition of context input and the execution of context output are often implicit, performed without direct user interaction [27].

Context intelligibility and control are, by definition, explicit user interactions. They rely on application logic being exposed in some way, in order to access not only the state of context input and output in an environment, but also any analysis or state resident in the application that is processing inputs and outputs.

In this section, we describe how Situations support intelligibility and control of context-aware applications. We then describe how we support designers in creating intelligibility and control interfaces for end-users through Visual Basic and Flash libraries.
Intelligibility and Control

Although the application logic (or context-aware rules) that integrates data from context components benefits from the regularity and reusability of those components, it is itself completely ad hoc and does not expose operational details. There is little way to support intelligibility and control interactions for users except to either implement them when the entire application is built, or to access internals in some custom way at a later time. Both strategies are unrealistic. We address this by componentizing application logic already present in applications into Situations (see Fig. 2).

Fig. 2. Block diagram of the Situation receiving context data from inputs (references), monitoring changes in data and actions (listeners), exposing parameters for control, and using services and displays for output.

In our previous work [11,12], we introduced the concept of Situations. Here we discuss a much more sophisticated version, provide a detailed description of the abstraction and its implementation, and focus on its value in supporting intelligibility and control.

With existing context infrastructures, the home lighting application described earlier (Fig. 1) would be implemented as follows: the application logic consists of finding a discoverer, querying it for relevant people inputs, and subscribing and maintaining connections to each of those inputs. When the application receives data from each input, it must maintain internal state information keeping track of each person's location. When a user's location matches a location of interest, an output or service is called to change that location's lighting appropriately.

With Situations, this development is much simplified. Now the application logic consists of creating a Situation with a description of the information it is interested in (the locations of specific people) and what the Situation should do in response (set the lighting level based on occupancy). The application logic consists of a number of context rules of the following form: when any of users A, B, C or D are in the kitchen and it is nighttime, turn on the overhead lights to maximum. The Situation handles the remaining logic: discovery, managing individual sources of context and their data, determining when input is relevant, and executing the appropriate services.

The Situation API is designed to allow developers to easily encapsulate this logic in a component. Context rules they specify using the API are automatically decomposed into a collection of three subcomponents: references, parameters, and listeners. A Situation acquires distributed context input (e.g., location, light level) through sets of references. It processes information internally, exposing any relevant properties as parameters (e.g., the bedroom light is on and Joe is in the room). Listeners are notified of occurrences within the Situation, such as any actions that are invoked (e.g., turn the lights down) and context data received.

References: Situations may require data from a variety of context components. Reference objects take a context rule and convert the specification into a series of context data subscriptions, using a discoverer to determine what components can satisfy these subscriptions. References notify Situations whenever components newly satisfy or fail to satisfy a match (e.g., when a new component is discovered that can produce information about the location of people, or an existing one fails), and whenever a matched component provides new context data; this information is passed on to listeners. These events signify that a context input change of interest has occurred. For instance, a Situation that manipulated lighting for a home in response to the presence of people would have a reference that matched person inputs, receive matches whenever new people entered the home, and receive evaluations whenever a person changed rooms.
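To make the decomposition concrete, the following self-contained Java sketch implements a drastically simplified stand-in for a Situation encoding the kitchen lighting rule above. The class and method names are invented for this illustration (they are not the Context Toolkit's actual Situation API), and reference subscriptions are simulated as direct method calls rather than discovered widget subscriptions.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// A simplified, hypothetical stand-in for a Situation: not the real
// Context Toolkit API, only an illustration of the rule/parameter structure.
public class MiniSituation {

    // Parameters: the adjustable parts of the rule, exposed for
    // intelligibility ("what does this rule depend on?") and for control.
    private final Map<String, Object> parameters = new HashMap<>();

    // Context the Situation has received; in the real infrastructure this
    // would arrive through reference subscriptions to location widgets.
    private final Set<String> peopleInKitchen = new HashSet<>();
    private boolean nighttime = false;

    public MiniSituation() {
        parameters.put("lightLevel", 100); // percent; a read/write parameter
    }

    public Object getParameter(String name) {
        return parameters.get(name);
    }

    public void setParameter(String name, Object value) {
        parameters.put(name, value);
        evaluateRule(); // a control action re-evaluates the rule
    }

    // Simulated reference callbacks.
    public void personEntered(String person) { peopleInKitchen.add(person); evaluateRule(); }
    public void personLeft(String person)    { peopleInKitchen.remove(person); evaluateRule(); }
    public void setNighttime(boolean night)  { nighttime = night; evaluateRule(); }

    // The context rule: "when anyone of interest is in the kitchen at night,
    // turn the overhead lights on to the configured level".
    private void evaluateRule() {
        if (!peopleInKitchen.isEmpty() && nighttime) {
            executeService("overheadLights.setLevel", getParameter("lightLevel"));
        } else {
            executeService("overheadLights.setLevel", 0);
        }
    }

    // Stand-in for invoking a discovered output service.
    private void executeService(String service, Object argument) {
        System.out.println("service call: " + service + "(" + argument + ")");
    }

    public static void main(String[] args) {
        MiniSituation kitchen = new MiniSituation();
        kitchen.setNighttime(true);
        kitchen.personEntered("A");             // rule fires: lights to 100
        kitchen.setParameter("lightLevel", 60); // user control: dim the lights
        kitchen.personLeft("A");                // rule fires: lights off
    }
}
```

The point of the real abstraction is that the subscription, discovery and parameter bookkeeping sketched here by hand are handled by the infrastructure.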

Parameters: When a context rule is specified, parameters are automatically extracted from the rule to support intelligibility and control. Parameters are analogous to JavaBean properties and to parameters in other component frameworks. They can be read-only or read/write, and have a description advertising their type and what they do. Situations provide mechanisms for others to inspect and manipulate these parameters in a standard way with no additional developer management. They also inform listeners when a user changes a parameter value to assert control over an application, signifying that the behavior of the Situation itself has changed. In our lighting application, a Situation offers parameters detailing the intensity to which a light should be set, the names of individuals that the application should respond to, and the locations of interest. Rules, expressed as collections of parameters, are also used to explain the behavior of the application to support intelligibility.

Listeners: Situations inform listeners whenever context input is received, an action occurs in response to that context input, or parameters are modified. One Situation listener of interest that we implemented is a Situation server, which particularly supports designers in producing intelligibility and control interfaces. This server translates listener method calls into XML and sends that XML to any connected clients (including Flash and Visual Basic clients, to provide cross-language support). It also listens for XML sent to it, and modifies Situation parameters in response.

Situation application design: In our work, we implemented Situations on top of the Context Toolkit [12], as it is open-source and is the most commonly used context-aware framework; however, we could have used any of the many context frameworks that support discovery, context inputs and context outputs or services [2,5,7,10]. In the Context Toolkit, discovery is a core feature, context input is handled by components called widgets, and context output by services.

A context-aware application will typically contain one or more Situations, each encapsulating some unit of processing on context input. Conceivably, every application could contain just one Situation that retrieved all the context input needed and performed all processing by itself. However, grouping an application into sets of Situations maintains modularity and encourages reuse. Moreover, the implementation of Situation references efficiently multiplexes context input subscriptions between components, so applications with many Situations impose no additional operational cost on the context-aware system at large.

Once a developer has decided upon the number and function of Situations in the system, application development is similar to that of traditional context-aware applications. Acquisition of context input is actually made much easier because of the fully declarative mechanism provided by Situation references: describe at a high level what context your application is interested in, and let the infrastructure provide it while, if desired, abstracting away the underlying details of discovery, subscription, the dynamics of components appearing and disappearing at runtime, and so on. Service execution occurs much as before. Situations provide notification of events like context acquisition and execution through listeners with no additional effort for the developer. The use of Situations does not drastically impact context-aware application development processes. It provides tremendous benefits to designers and end-users, and actually decreases the burden on the developer, as we will now describe.
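Because the listener callbacks are the hook on which Tracer, Introl and the Situation server (described next) are all built, a rough sketch of what such an interface might look like follows, together with a logging listener in the spirit of Tracer. The method names and signatures are assumptions made for illustration, not the toolkit's actual interface.

```java
// A hypothetical listener interface and a Tracer-like logging implementation.
// The callback names are invented for illustration; the paper does not give
// the actual Situation listener signatures.
public class ListenerSketch {

    // Callbacks a Situation could fire as it runs.
    interface SituationListener {
        void contextReceived(String source, String value);
        void serviceExecuted(String service, Object argument);
        void parameterChanged(String name, Object oldValue, Object newValue);
    }

    // A minimal trace listener: one centralized log of what a rule is doing.
    static class LoggingListener implements SituationListener {
        @Override public void contextReceived(String source, String value) {
            System.out.println("[trace] context from " + source + ": " + value);
        }
        @Override public void serviceExecuted(String service, Object argument) {
            System.out.println("[trace] executed " + service + " with " + argument);
        }
        @Override public void parameterChanged(String name, Object oldValue, Object newValue) {
            System.out.println("[trace] parameter " + name + ": " + oldValue + " -> " + newValue);
        }
    }

    public static void main(String[] args) {
        SituationListener tracer = new LoggingListener();
        // A Situation would invoke these callbacks itself; here we simulate
        // the three event kinds described above.
        tracer.contextReceived("locationWidget", "A entered the kitchen");
        tracer.serviceExecuted("overheadLights.setLevel", 100);
        tracer.parameterChanged("lightLevel", 100, 60);
    }
}
```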
Traceability: Interfaces Supporting Debugging and Simple Intelligibility and Control

The infrastructure extension made to the Context Toolkit (CTK), Situations, can aid developers beyond making application development simpler and faster, and can aid end-users with basic intelligibility and control support. Here we describe how Situations support traceability of application logic and execution, and how we have built default support for debugging and for intelligibility and control interfaces.

Situation listeners provide all the necessary functionality to obtain a real-time execution trace for an application. They provide notifications of context input, changes in component (input or output) availability, service execution, and changes in parameter values. Situations themselves have methods for exposing the context rules they encode and for changing the values of parameters.

We have created a default Situation listener called Tracer that provides a simple visualization of the state of a given Situation component. It indicates the current status of any context variable used in the Situation and the origins of that context (the widget component and sensor used) via references, the current state of any context rule specified in the Situation, and changes to context rules as a result of parameter modifications. Rather than resorting to instrumenting a large amount of distributed code, this trace provides a single, centralized repository for all issues related to a developer's Situation component. Tracer can be invaluable for helping developers understand why their context rules are not working as expected. While not as feature-rich as modern graphical debuggers, it does provide the ability to watch a particular context value or parameter, a particular rule, or a particular CTK component, and provides a visual notification that the object of interest has changed. It also supports the ability to change context values and parameters at runtime to see the impact of the change on context rules and their execution. Tracer is primarily aimed at supporting a developer in debugging her context logic. However, by making the output of Tracer more English-like, it can be used to provide limited intelligibility and control to end-users.

Just as with Tracer, by default, an intelligibility and control interface called Introl is instantiated for each Situation component. It is usually invisible, but can be made visible either through the application user interface or via a keyboard hotkey. Introl provides a subset of Tracer functionality by inspecting parameters and references: some information on context rules, their execution and current context values (as well as limited information on how they were derived), and the ability to modify parameters (an important aspect of application logic). While we imagine that it is too clunky for most end-users, it does support basic intelligibility and control across all Situation-based applications with no extra effort required by the application developer. Building usable intelligibility and control interfaces is an open challenge [1] that we attempt to address with our support for interface designers.

Client Extensions

Effective design of intelligibility and control interfaces is, at heart, an interaction design problem: it requires specific knowledge of the application's users and their tasks or activities. Interface designers are skilled at gathering this knowledge and at performing interface design. It is often difficult to know what intelligibility and control support is needed when the application is being built. But even when this is known, as the use and/or the users of the application change over time, the type of support needed will invariably change, leveraging designers' skills even more.

There are three main challenges in performing this work. First, interface designers have limited general programming ability. Rather than simply augmenting a context-aware infrastructure to support intelligibility and control, any solution must extend that support to designers in programming environments that they are familiar with. Second, there must be support for building these interfaces after the application has already been developed and deployed. This is needed not only because it is valuable to separate the building of the application from the design of the intelligibility and control interfaces, but also because user needs are often not completely understood until after users have lived with an application for some time. Designers may wish to customize the presentation of information to meet particular user needs, or compose multiple context-aware applications into a coherent interface to improve usability. Our third challenge is that the ability to perform this sort of interaction design can actually be compromised by context-aware toolkits and infrastructures that promote component reuse [4]. Reuse
implies that the design of components such as sensor abstractions occurs early in a design process, before any users or tasks are definitively known. Effort must be employed to ensure that an infrastructure supports the construction of reusable components but also supports access to those components in a way that encourages flexible interaction design at the application level. This point has been argued for collaborative systems in general [14], and for feedback and control of context-aware systems in particular [4]. At the same time, such support should enable interface design while placing minimal additional burden on application developers.

While intelligibility and control interfaces can be built in Java using the CTK augmented with Situations and listeners, we added Visual Basic.NET and Flash support because they are the most commonly used design environments for graphical user interfaces. This opens up the possibility for a large development community to build intelligibility and control interfaces. We created extension libraries in VB and Flash that utilize the XML-based Situation server mentioned above. Situation servers publish their locations on a public web page. With this information, a designer can build an interface. At design time, to discover the structure of a particular Situation, a designer can visit that server in a web browser. She will see a description of the Situation, the names and types of its exposed parameters, its reference queries, and descriptions of any currently matched components. Alternatively, the Situation server and this information can be accessed programmatically through our client extensions. The designer decides what information to extract and use. Whereas access to the application logic of a traditional context-aware application would need to be implemented in a custom manner, Situations provide standard access. Our extensions allow arbitrary applications to provide intelligibility and control interactions for context-aware applications through Situations. Interfaces can be built in Flash or VB, independent of the main context-aware application.

Flash: Flash is an interface development tool that deploys programs that execute within the Adobe Flash Player. The Flash interface is largely visual, using a timeline metaphor. It has XML support and allows scripting through ActionScript, which supports the use of objects and provides object representations of most visual Flash elements. Intelligibility and control interfaces written in Flash can execute on any computer or environment with network access to the extended CTK infrastructure. We implemented a Situation connection object in Flash that provides a set of custom high-level events to Flash designers. It essentially extends the Situation listener interface into Flash, and the high-level events it provides map directly onto listener methods. Designers can attach custom handlers to these events using precisely the same semantics as event handling for all other Flash objects. For instance, our library contains onComponentAdded event handlers (for new reference matches) that are used in Flash in exactly the same way as the common onPress event used by buttons. Component description data is converted into a native ActionScript data type (associative arrays). Flash interfaces can set Situation parameters via the connection object; parameter arguments are sent to the Situation via the Situation server.

Visual Basic: Visual Basic.NET is an extremely popular and more full-fledged object-oriented development environment. Similarly to our Flash work, we implemented a Situation connection object in VB that parses XML and creates an object representation of Situation notifications. This representation is closely modeled after our Java object implementation. Custom application code can be attached to Situation events using .NET delegates.
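The paper does not specify the Situation server's wire format, so the snippet below only illustrates the kind of exchange the client libraries perform: an XML event describing a parameter change is pushed over the connection and turned into a native event for interface code, much as the Flash connection object does with onComponentAdded and similar handlers. The element and attribute names are invented for this example; only the standard Java XML parsing calls are real.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Parses a hypothetical parameter-change event of the sort a Situation server
// might push to a Flash or Visual Basic client. The XML schema is invented.
public class SituationMessageSketch {

    public static void main(String[] args) throws Exception {
        String xml =
            "<situationEvent type=\"parameterChanged\">"
            + "<situation name=\"KitchenLighting\"/>"
            + "<parameter name=\"lightLevel\" value=\"60\"/>"
            + "</situationEvent>";

        DocumentBuilder builder =
            DocumentBuilderFactory.newInstance().newDocumentBuilder();
        Document doc = builder.parse(
            new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));

        Element root = doc.getDocumentElement();
        Element parameter = (Element) root.getElementsByTagName("parameter").item(0);

        // A client library would surface this as a native event/handler call,
        // analogous to attaching an ActionScript handler or a .NET delegate.
        System.out.println("event=" + root.getAttribute("type")
            + " parameter=" + parameter.getAttribute("name")
            + " value=" + parameter.getAttribute("value"));
    }
}
```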
VALIDATION OF ARCHITECTURE

Evaluation of Developer Support

To validate that our extension to the Context Toolkit (CTK), the Situation component, impacts the development of context-aware applications in a positive manner, we conducted a study of its use. We used a context-aware development mailing list to recruit 18 remote developers who had development experience (at least 3 applications built) with the CTK; none were affiliated with our research group or institution. We asked them to build an application that would control the lights and the music playing for the occupants of a home, at room-level granularity, based on the occupants' preferences and a set of provided rules that override preferences (e.g., when Bob and Janet are in the dining room, play instrumental jazz with low lighting). We provided a tutorial for using the Situation component that took about 20 minutes to go through, and we provided the set of widgets and services they needed to build and test the application.

We conducted a within-subjects study, with half the subjects first using the original CTK to build the application and then using the Situation-based version, and the other half building the application in the reversed order. Subjects were told to build the application in one sitting, to test the application on a set of test cases we provided, to tell us how long the application took to complete, and to send us the source code. We compiled the source code and verified that it worked on a second set of test cases, and conducted short interviews with the subjects.

Our results verified our hypothesis that the Situation component greatly aids the development of context-aware applications over the existing Context Toolkit. The average amount of time to build the application dropped from 122.7 minutes to 54.2 minutes. The average number of lines of code dropped from 119.3 to 37.8. Both improvements were statistically significant (p<0.01). Qualitative feedback obtained from the interviews supported the quantitative results. Seventeen of the 18 developers preferred using the Situation component. They told us that "[the Situation component] took care of details that I normally have to," that "Situations are so easy to use," and that they wanted to continue using it for their own development ("Can I keep the new toolkit?"). The one holdout also improved in both time and lines of code, but felt that he was less in control of application development when using Situations.

Evaluation of Designer Support

To establish that designers can effectively use Situations to develop intelligibility and control interfaces for context-aware applications, we conducted an evaluation of the usability of the extended CTK with designers unfamiliar with the CTK and our research. Evaluating API usability is becoming a popular technique for assessing programming frameworks [20]. Ten experienced Flash interface designers participated in a 2-hour long study. They averaged 3.4 years of experience in interface design. They were given documentation, a tutorial about our system, and a scenario that described the intended users of a temperature intelligibility and control application: a 3-person family with particular needs who moved into a new home. The home was outfitted with a CTK-enabled temperature control system in 3 rooms, and the participants were to design and implement an interface for controlling and explaining the settings in each room. Temperature control is a common operation and is a canonical, if basic, context-aware application. Even so, the task required designers to perform all the necessary steps for building more complex interfaces: using Situations, acquiring and displaying information about context changes and component availability changes, allowing users to change parameters and updating those in the Situation, and providing explanations of the system's behavior.

Every participant succeeded in designing an interface in the allotted time, using an average of 51 lines of ActionScript code (Fig. 3 shows examples). In an exit survey (1=strongly disagree to 5=strongly agree), all felt that the extensions were useful (M=4.4, SD=0.36), all felt they could accomplish similar interface development tasks in the future using the connection object (M=4.7, SD=0.48), and all wanted to use our tool in the future (M=4.8, SD=0.41). The participants' positive performance and impressions suggest that our tool will succeed in allowing Flash designers with similar expertise to customize interaction and presentation of context-aware applications to task-specific user requirements, and we expect our results will hold for our VB support as well.

Fig. 3. Three temperature intelligibility and control interfaces implemented by Flash designers.

DEMONSTRATION APPLICATIONS

For users to interact effectively with context-aware applications, they should be able to understand and control them. Our implementation of Situations makes it easier for application developers to build their applications while, at the same time, exposing application logic. The client extensions provide designers with access to this logic and enable them to produce intelligibility and control interfaces without having to implement entire context-aware applications. Designers can produce interfaces targeted to the needs of specific users, independent of the original development process.

In this section we describe three applications, implemented using Situations and the Flash communication library, that demonstrate how users may benefit from increased designer support for the construction of context intelligibility and control interfaces. Each application explores particular aspects of the relationship between Situations and user interfaces for intelligibility and control. Our first is a unified home controller interface that controls temperature and lighting; it shows how designers can take existing applications written using Situations and integrate them into novel intelligibility and control interfaces. Second, we present a museum exhibit interface for museum administrators that explains the museum conditions and controls visitors' context-aware guides. Here, we show how a designer can design a useful intelligibility and control interface for a different set of users than is targeted by the original application. Our final application is an activity monitoring system with privacy control added, demonstrating how Situations enable the enhancement of existing context-aware applications.
Unified Room Control

A designer might want to customize an interface to present an efficient means of monitoring and controlling a set of context-aware applications in an environment. We developed a wall-mounted interface (Fig. 1) that composes two applications, temperature and lighting, into one interface for a living room. The interface was designed to display only the information that an occupant of the space requires to understand the applications' behaviors. On the right, the interface monitors the temperature and allows users to control the target temperature. The lighting application turns on local light sources when certain items enter the proximity of those sources, similar to other research projects [9]. The interface indicates which lights are on, and by clicking on a light the user can see what item is responsible for the application behavior. For instance, in Fig. 1 there is a book on the sofa; the application provides increased illumination to aid in reading. The application is simplified for purposes of demonstration, and does not use time of day or per-user preferences.

Implementation

The context-aware application is comprised of two Situations, one for each application. The temperature Situation has 1 reference for obtaining the current temperature, 1 parameter that allows adjusting the temperature, and calls the appropriate service to heat or cool. The lighting Situation has 3 references that retrieve widgets representing light source intensity and the locations of people and items. The Situation tracks regions with local light sources for the presence of people and items. If an item has an eligible type (e.g., book), the Situation activates the light source. If the person leaves the region but the item remains, the Situation leaves the light on in case the person returns. If both entities leave, the Situation shuts off the light source. The Flash interface connects to the 2 Situations and monitors context input as it changes over time. It caches data for display when a user clicks on a light. Whenever it receives input that a light is activated, it displays that light source on the floor plan. The Situations contain about 30 lines of code, while the Flash interface uses 220 lines of ActionScript, with 35 lines managing Situation communication.

Discussion

The room control display unifies two previously and independently developed context-aware applications into one intelligibility and control interface. It demonstrates how a designer might choose relevant facets of an application exposed by multiple Situations and expose those facets to the user, without ever seeing the original source code. Moreover, it shows how designers may use knowledge of how and where users will interact with an interface to selectively present useful information.
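The region-tracking rule in the lighting Situation is simple enough to write out directly. The sketch below encodes it as a pure decision function; cases the text does not spell out (for example, an eligible item present with no person and the light already off) are resolved conservatively, and the function and parameter names are ours, not the application's.

```java
// The lighting Situation's region rule, as described above, written out as a
// small self-contained function. How a real Situation would receive these
// presence updates (via references to location widgets) is omitted.
public class RegionLightingSketch {

    // Decide whether a region's local light source should be on.
    //   eligibleItemPresent: an item of an eligible type (e.g., a book) is in the region
    //   personPresent:       a person is in the region
    //   lightCurrentlyOn:    the light source is currently on
    static boolean lightShouldBeOn(boolean eligibleItemPresent,
                                   boolean personPresent,
                                   boolean lightCurrentlyOn) {
        if (eligibleItemPresent && personPresent) {
            return true;   // activate the light for the reader
        }
        if (eligibleItemPresent && lightCurrentlyOn) {
            return true;   // person stepped out, item remains: leave it on
        }
        return false;      // both left (or no eligible item): shut it off
    }

    public static void main(String[] args) {
        boolean on = false;
        on = lightShouldBeOn(true, true, on);   // book + person present
        System.out.println("book and person present: " + on);
        on = lightShouldBeOn(true, false, on);  // person leaves, book stays
        System.out.println("person leaves, book remains: " + on);
        on = lightShouldBeOn(false, false, on); // both gone
        System.out.println("both leave: " + on);
    }
}
```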
Museum Exhibit Control

Interactive tour guides are the canonical context-aware application. They are often considered in a museum setting, where visitors can retrieve extra information about exhibits as they roam the museum. Simple audio tour guides are commonly used today: plaques describe information about a particular installation and have a numerical code that visitors can enter into a portable audio device to receive more information than is available on the plaque itself. We considered a possible extension of these tour guides and built a prototype context-aware application. In our museum guide, users carry location-sensitive PDAs that can provide audiovisual commentary about exhibits. The installation plaques are dynamic displays, enabling the presentation of more content than can fit on one static plaque or a small PDA. The context-aware application uses knowledge of users' proximity to installations to initiate short presentations on plaques that entice users to explore topics in greater depth on their PDAs.

This application could provide great value to the experience of museum visitors. However, the reality of any particular museum setting may be impossible for developers to anticipate. The solution is to expose relevant controls and support museum administrators in fine-tuning. To this end, we have implemented a control interface (Fig. 4) for an exhibit that utilizes the application described above. The interface displays a floor plan noting all installation locations. Visitor movements are tracked and displayed on the floor plan. Installation and visitor icons can be highlighted to provide detailed information in areas to the left of and below the floor plan, respectively. Administrators can view the status of visitor and installation plaque displays as either inactive or presenting, and receive explanations for this status. Moreover, they can set the visitor proximity threshold at which any plaque display begins presentation playback.

Fig. 4. Museum Exhibit Control interface.

Implementation

This application uses two Situations. One contains the application logic that monitors the location of PDAs and delivers appropriate content to dynamic plaques when instructed by visitors; it has 1 reference to widgets representing the PDAs. The other Situation contains the logic that invokes installation plaque displays based on visitor proximity. This Situation has 2 references, one to installation plaque display widgets and one to PDA widgets. Its relevant parameter is the number of visitors that must be near an inactive display before a presentation begins. It monitors visitor locations and initiates a presentation when the appropriate number of visitors is near a display. The Flash interface allows the administrator to adjust the threshold parameter for any installation, sending a parameter change request and waiting for a parameter change event to arrive from the Situation server before changing its display value. The interface caches the latest
details about visitors and installations, so when a user highlights an icon, the display can immediately show the requested information for intelligibility. The CTK application contains about 60 lines of code for the two Situations, and the Flash portion contains 170 lines of code (~30 lines for communication).

Discussion

The museum control interface is the kind of custom interface, implemented after a typical tour guide system is developed, that can improve the experience of end-users by allowing administrators to understand and control that experience. Even if the general concept of administrator control is accounted for in the original application, the ability of designers to customize this display for a particular installation is valuable. By fitting all relevant information on a single, dynamic display, this application is suitable for the particular needs of museum employees who may be stationed right outside the exhibit. Designers can customize without needing to get their hands dirty in the internal application logic of the original application.
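The interface's wait-for-confirmation behaviour is a pattern worth noting on its own: the display value changes only after the Situation acknowledges the new threshold, so the administrator never sees a value the Situation did not accept. A minimal sketch follows; the connection object is a local stub standing in for the client library and Situation server, and all names are invented for the example.

```java
import java.util.function.IntConsumer;

// Sketch of the confirmed-update pattern used by the museum interface: send a
// parameter change request, update the display only when the change event
// comes back. The "connection" is a local stub that synchronously echoes the
// change; a real client would receive the event asynchronously over XML.
public class ConfirmedUpdateSketch {

    // Stand-in for the Situation server connection.
    static class StubConnection {
        private IntConsumer thresholdChangedHandler = value -> {};

        void setThresholdChangedHandler(IntConsumer handler) {
            this.thresholdChangedHandler = handler;
        }

        void requestThresholdChange(int newValue) {
            System.out.println("request sent: proximity threshold = " + newValue);
            thresholdChangedHandler.accept(newValue); // echoed confirmation
        }
    }

    public static void main(String[] args) {
        StubConnection connection = new StubConnection();

        // The displayed value is only touched when the confirmation arrives.
        final int[] displayedThreshold = {3};
        connection.setThresholdChangedHandler(confirmed -> {
            displayedThreshold[0] = confirmed;
            System.out.println("display updated: threshold = " + displayedThreshold[0]);
        });

        connection.requestThresholdChange(5); // administrator adjusts a slider
    }
}
```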
OfficeView Activity Monitoring

Our Java-based activity monitoring application follows in a long line of awareness systems intended to increase work productivity and efficiency [18]. It provides no privacy controls, delivering exactly the information that is sensed to interested parties via a textual interface. In contrast, the OfficeView intelligibility and control interface (Fig. 5), built on top of the Java application, provides a floorplan view of a workplace on which a user can view information about activities in other offices. In addition, it augments the existing application with privacy measures, allowing users to specify whether or not their activity information can be revealed to others. Users can also request information about what sensed information led to another user's setting by hovering over that person's icon. If a person is not in his office, his location is provided, assuming the user allowed this.

Fig. 5. Activity Monitoring interface.

Implementation

This application uses a Situation with 1 reference that monitors the activity of each person in the interface, and it exposes this activity as a parameter as well. The Flash interface provides a visualization of the activity information acquired from the Situation and allows users to change their activity setting to anything, including Unknown (e.g., private). Multiple users all connect to the same Situation, allowing value changes made by one person to be propagated to the other application instances. The CTK application contains about 75 lines of code for the Situation, and the Flash interface contains 130 lines of ActionScript (about 20 lines for Situation communication).

Discussion

This application and interface demonstrate the implementation of a useful context-aware application, allowing users to view each other's activities as well as specify what information is released to others. In addition, the interface shows how an intelligibility and control application can be built on top of an existing application, by adding privacy control to the activity monitoring system, without ever needing to see or recompile the original application's code.

CONCLUSIONS AND FUTURE WORK

We have shown that intelligibility and control are essential interactions in context-aware applications. To support these explicit interactions, designers must have the ability to design interfaces that account for user needs, with the freedom to customize and compose the presentation of information. To this end we have implemented enhancements to an existing context-aware infrastructure, the Context Toolkit (CTK) [12], that provide rich access to context-aware components and application logic. The enhanced API, an abstraction component called a Situation, was architected with the needs of intelligibility and control interaction design in mind. It exposes context-aware state and supports access from interface design environments. Situation clients are available for designer use on a variety of platforms, including Java, Visual Basic, and Flash. They enable designers to build intelligibility and control interfaces both during and after application deployment, without requiring them to have implementation knowledge of application state and behavior. Designers can build new, more usable and appropriate interfaces for existing context-aware applications, and can combine multiple applications together into a single, more usable interface. By improving usability in these ways, users of these applications will be less likely to reject them out of frustration (e.g., Microsoft Agent's Clippy was not intelligible and not very controllable, and users quickly abandoned it).

Situations allow application developers to encapsulate application logic and reduce the burden of developing applications. By default, developers get built-in support for debugging and end-users get simple interfaces for intelligibility and control. We demonstrated the benefits of
the Situation abstraction in supporting context intelligibility and control through the augmentation of three applications, an evaluation of the infrastructure support with developers, and a usability study of our API with Flash designers.

Our primary goal in this research is to improve the end-user experience in context-aware applications by supporting intelligibility and control. Toolkit support is a necessary first step. However, in this work, we have only begun our investigation of intelligibility. We expose and allow modification of application logic, but that logic may not be inherently intelligible. We would like to move beyond this initial support, to provide additional information that would help users understand how an application is working [28]. This includes in-depth exploration and evaluation of particular intelligibility and control interaction techniques that are shown to benefit users. The work described here should assist this research by enabling rapid prototyping of intelligibility and control displays.

We are also interested in extending Situations. Although exposing application logic directly to designers, who can then expose it in turn to end-users, is extremely useful and enables usable context-aware applications, designers cannot change or enhance the application logic implemented by developers. We are interested in implementing a subclass of Situations that supports declarative, rule-based definitions of application logic. Rather than just supporting the exposure of a finite number of parameters, the entire application logic could itself be represented as a modifiable construct. Finally, we intend to evaluate the use of the applications and interfaces produced using our toolkit support, to verify that they do support intelligibility and control.

Acknowledgements

The authors would like to thank Edwin Chau for his work in developing some of the applications shown in this paper. This work was funded by the National Science Foundation under CAREER award No. IIS-0746428.

References
1. Assad, M., Carmichael, D.J., Kay, J. and Kummerfield, B. PersonisAD: Distributed, active, scrutable model framework for context-aware services. Pervasive 2007, 55–72.
2. Bardram, J. The Java Context-Awareness Framework (JCAF): A service infrastructure and programming framework for context-aware applications. Pervasive 2004, 98–115.
3. Barkhuus, L. and Dey, A.K. Is context-aware computing taking control away from the user? Three levels of interactivity examined. Ubicomp 2003, 149–156.
4. Bellotti, V. and Edwards, K. Intelligibility and accountability: Human considerations in context-aware systems. Human-Computer Interaction, 16(2–4): 193–212, 2001.
5. Caswell, D. and Debaty, P. Creating Web representations for places. HUC '00, 114–126.
6. Chalmers, M., MacColl, I. and Bell, M. Seamful design: Showing the seams in wearable computing. IEE Eurowearable 2003, 11–17.
7. Chen, G. and Kotz, D. Context aggregation and dissemination in ubicomp systems. WMCSA '02.
8. Cheverst, K., Davies, N., Mitchell, K. and Efstratiou, C. Using context as a crystal ball: Rewards and pitfalls. Personal and Ubiquitous Computing Journal, 5(1): 8–11, 2001.
9. Coen, M. Design principles for intelligent environments. AAAI Spring Symposium '98, 547–554.
10. Cohen, N., Lei, H., Castro, P., Davis, J. and Purakayastha, A. Composing pervasive data using iQL. WMCSA '02, 94–104.
11. Dey, A.K. and Abowd, G.D. CybreMinder: A context-aware system for supporting reminders. HUC 2000, 172–186.
12. Dey, A.K., Abowd, G.D. and Salber, D. A conceptual framework and a toolkit for supporting the rapid prototyping of context-aware applications. Human-Computer Interaction, 16(2–4): 97–166, 2001.
13. Dey, A.K., Sohn, T., Streng, S. and Kodama, J. iCAP: Interactive prototyping of context-aware applications. Pervasive 2006, 254–271.
14. Dourish, P. Developing a reflective model of collaborative systems. ACM Trans. CHI, 2(1): 40–63, March 1995.
15. Dourish, P. Accounting for system behavior: Representation, reflection and resourceful action. Computers in Context '95, 147–156.
16. Hull, R., Clayton, B. and Melamed, T. Rapid authoring of mediascapes. Ubicomp 2004, 125–142.
17. Humble, J., Crabtree, A., Hemmings, T., Akesson, K., Koleva, B., Rodden, T. and Hannson, P. Playing with the bits: User-configuration of ubiquitous domestic environments. Ubicomp 2003, 256–263.
18. Isaacs, E., Whittaker, S. and Frohlich, D. Information communication reexamined: New functions for video in supporting opportunistic encounters. In Video-Mediated Communication, 459–485. Lawrence Erlbaum, 1994.
19. Li, Y., Hong, J. and Landay, J. Topiary: A tool for prototyping location-enhanced applications. UIST 2004, 217–226.
20. Klemmer, S.R., Li, J., Lin, J. and Landay, J.A. Papier-Mache: Toolkit support for tangible input. CHI 2004, 399–406.
21. MacLean, A., Carter, K., Lovstrand, L. and Moran, T. User-tailorable systems: Pressing the issues with buttons. CHI '90, 175–182.
22. Martin, D.L., Cheyer, A.J. and Moran, D.B. The Open Agent Architecture: A framework for building distributed software systems. Applied Artificial Intelligence, 13(1–2): 91–128, 1999.
23. Mozer, M.C. The Neural Network House: An environment that adapts to its inhabitants. Intelligent Environments '98, 110–114.
24. Newman, M., Izadi, S., Edwards, K.W., Sedivy, J. and Smith, T.F. User interfaces when and where they are needed: An infrastructure for recombinant computing. UIST '02, 171–180.
25. Olsen, D.R., Jefferies, S., Nielsen, T., Moyes, W. and Fredrickson, P. Cross-modal interaction using XWeb. UIST '00, 191–200.
26. Ponnenkanti, S.R., Robles, L.A. and Fox, A. User interfaces for network services: What, from where, and how. WMCSA '02, 138–147.
27. Schmidt, A. Implicit human computer interaction through context. Personal Technologies, 4(2&3): 191–199, 2000.
28. Tullio, J., Dey, A.K., Chalecki, J. and Fogarty, J. How it works: A field study of non-technical users interacting with an intelligent system. CHI 2007, 31–40.
29. Youngblood, G.M., Cook, D.J. and Holder, L.B. A learning architecture for automating the intelligent environment. Innovative Applications of Artificial Intelligence 2005, 1576–1583.

