Editorial Board
David Hutchison
Lancaster University, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Alfred Kobsa
University of California, Irvine, CA, USA
Friedemann Mattern
ETH Zurich, Switzerland
John C. Mitchell
Stanford University, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz
University of Bern, Switzerland
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Germany
Madhu Sudan
Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbruecken, Germany
José Bravo, Diego López-de-Ipiña, and Francisco Moya (Eds.)
Ubiquitous Computing
and Ambient Intelligence
6th International Conference, UCAmI 2012
Vitoria-Gasteiz, Spain, December 3-5, 2012
Proceedings
Volume Editors
José Bravo
Castilla-La Mancha University
Ciudad Real, Spain
E-mail: jose.bravo@uclm.es
Diego López-de-Ipiña
Deusto University
Bilbao, Spain
E-mail: dipina@deusto.es
Francisco Moya
Castilla-La Mancha University
Ciudad Real, Spain
E-mail: francisco.moya@uclm.es
CR Subject Classification (1998): H.4, C.2.4, H.3, I.2.11, H.5, D.2, K.4
The ubiquitous computing (UC) idea envisioned by Weiser in 1991 has since
evolved into a more general paradigm known as ambient intelligence (AmI). AmI
represents a new generation of user-centric computing environments that aims to
find new ways to integrate information technology more seamlessly into
everyday devices and activities.
AmI environments comprise several autonomous computational devices
of modern life, ranging from consumer electronics to mobile phones. Ideally,
people in AmI environments do not notice these devices, but they benefit from
the services they provide. Such devices are aware of the people present in those
environments, reacting to their gestures, actions, and context. Recently, the
interest in AmI environments has grown considerably owing to new challenges
posed by a society demanding highly innovative services, such as vehicular ad hoc
networks (VANETs), ambient assisted living (AAL), e-health, the Internet of Things,
and home automation, among others.
The International Conference on Ubiquitous Computing and Ambient Intel-
ligence celebrated its sixth edition this year. Since its first edition back in 2005, the event
has grown significantly, as shown by its increasing number of participants and,
more importantly, its growing impact in the UbiComp community. This interna-
tional conference has brought together the work of researchers from around the
globe in these disciplines, with half of the attendees coming from Latin
American countries. The event has yielded several special issues in JCR publi-
cations and five special issues in this series, reflecting the increasing influence of the
works presented at it on the UbiComp literature.
The main focus of this edition of the conference was to explore
how AmI can contribute toward smarter yet more sustainable environments
(e.g., smart cities, smart cars, eco-aware device orchestration, and so on). This
also explains why it was held in the 2012 European Green Capital,
namely, Vitoria-Gasteiz in Spain.
Beyond sustainable computing, the UCAmI 2012 proceedings also include
research works describing progress on other key research topics for AmI, such as:
human-environment mobile-mediated interaction (through NFC or AR); artificial
intelligence techniques to foster user- and context-aware environment adaptation;
future Internet trends such as social network analysis, linked data, or
crowd-sourcing applied to AmI; and ecosystems of Internet-connected objects
collaborating to give rise to smarter environments (in cities, cars, or tourist sites, or
for more intelligent transport), among others. Altogether, this sixth edition includes
70 research articles.
Finally, we would like to thank all organizers (i.e., MAmI Research group,
University of Deusto, CIC Tourgune and Technological University of Panama),
and collaborators (i.e., Vitoria-Gasteiz Council), together with the reviewers
(members of the Program Committee) for helping us by contributing to a high-
quality event and proceedings book on the topics of ubiquitous computing and
ambient intelligence.
General Co-chairs
José Bravo University of Castilla-La Mancha, Spain
Diego López-de-Ipiña University of Deusto, Spain
Program Committee
Julio Abascal University of the Basque Country, Spain
Hamid Aghajan Stanford University, USA
Xavier Alaman UAM, Spain
Rosa Alarcon Pontificia Universidad Católica de Chile, Chile
Mariano Alcañiz UPV - i3bh/LabHuman, Spain
Roberto Aldunate Applied Research Associates, USA
Jan Alexandersson DFKI GmbH, Germany
Cecilio Angulo Universitat Politècnica de Catalunya, Spain
Rosa Arriaga Georgia Institute of Technology, USA
Mohamed Bakhouya University of Technology of
Belfort-Montbéliard, France
Mert Bal Miami University, USA
Javier Baliosian University of the Republic, Uruguay
Francisco Ballesteros Rey Juan Carlos University, Spain
Nelson Baloian University of Chile, Chile
Francisco Bellido University of Cordoba, Spain
Juan Botia Universidad de Murcia, Spain
Robin Braun University of Technology Sydney, Australia
Jose Bravo Universidad de Castilla La Mancha, Spain
Davide Brunelli University of Trento, Italy
Yang Cai Carnegie Mellon University, USA
Luis Carriço University of Lisbon, Portugal
Sophie Chabridon Telecom SudParis, France
Ignacio Chang Technological University of Panama, Panama
Liming Luke Chen University of Ulster, UK
Wei Chen Eindhoven University of Technology,
The Netherlands
Vaclav Chudacek Czech Technical University of Prague,
Czech Republic
The Voice User Help, a Smart Vehicle Assistant for the Elderly . . . . . . . . 314
Ignacio Alvarez, Miren Karmele López-de-Ipiña, and Juan E. Gilbert
1 Introduction
Wearable personal health systems for continuous monitoring are widely rec-
ognized to be a key enabling ICT for next-generation advanced
citizen-centric eHealth delivery solutions. By enabling continuous biomedical
monitoring and care, they hold the promise of improved personalization
and quality of care, increased ability of prevention and early diagnosis, and enhanced
patient autonomy, mobility, and safety. Furthermore, wearable personal
health systems can help the eHealth sector realize its potential in terms of
rapid, sustained market growth, reduction of healthcare costs, and avoidance of
unnecessary cost to the public purse.

Research by Marina Zapater has been partly supported by a PICATA predoctoral
fellowship of the Moncloa Campus of International Excellence (UCM-UPM).

J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 1–8, 2012.
© Springer-Verlag Berlin Heidelberg 2012
M. Zapater, J.L. Ayala, and J.M. Moya
To provide the necessary accurate, integrated and long-term assessment and
feedback, these wearable personal health systems must acquire, monitor and
analyze a large number of physiological and metabolic parameters both during
physical activity and rest. The number as well as the nature of the parame-
ters of interest depends on the actual biomedical application/scenario and tar-
get population, e.g., fitness and illness prevention (healthy people or people at
risk), rehabilitation (patients after an event) or disease management (acute- or
chronically-ill patients).
It is largely accepted that Wireless Body Sensor Networks (WBSN) will be
the underlying common architecture and technology of these personal health
systems. More specifically, the WBSN will consist of a number of sensor nodes
attached to the subject/patient body. Each WBSN node ensures the accurate
sensing and capture of its target physiological data, its (pre-) processing and
wireless communication to the wearable Personal Digital Assistant (PDA), which
acts as the network coordinator and central data collector. This PDA will be re-
sponsible for the storage, organization, complementary analysis and fusion of
the collected physiological and metabolic information, its user-friendly represen-
tation, and its dissemination to the relevant medical staff or central monitoring
service through private and/or public wireless access networks.
The signal processing applications executed in the nodes of the wireless body
sensor network require complex operations that should be adapted to the nature
of the sensed signal. Therefore, an efficient application-specific architecture
is needed that is, at the same time, flexible enough to provide the required
performance to every process executing simultaneously (multiple sensors
capturing multiple sources of information per node).
The envisioned setting proposes a worldwide deployment of nodes for constant
monitoring of biophysical and environmental variables, as already proposed in
some ambitious European initiatives (FET Flagships 2013: Guardian Angels).
The large amount of acquired data to be processed and stored, as well as the
computing-demanding algorithms developed for the signal classification and the
knowledge acquisition, requires the provision of powerful data centers distributed
across the territory and communicating with the WBSNs. The considerable cost of
operating and cooling state-of-the-art data centers could be an obstacle for
undertakings such as the one proposed here or by the European Union, which motivates
a research effort on cooling techniques and energy efficiency.
These facts explain the energy limitations found in these systems. A typical ar-
chitecture of a WBSN node is composed of: a microprocessor, a data memory
hierarchy, an instruction memory subsystem, sensors, actuators, RF transducer
and an energy source [1]. In the case of WBSNs for biomedical applications, there
is an additional constraint on energy dissipation. The heat produced by
biomedical systems has to be carefully controlled to avoid any damage to the
skin and tissues of the area of placement. A recent work [2] that studied
the energy distribution in an ASIP running a heart beat detection algorithm
has shown that the two main sources of power consumption are the memories
(especially, the instruction memory) and the functional units (FU). Therefore,
the optimization of these two elements will benefit the efficiency of the proposed
system.
Design and control of a banked instruction memory is an efficient mechanism
to reduce the leakage power [3]. Most of the traditional approaches assume that
the configuration of the memory in banks is performed statically. However, a few
approaches also allow a dynamic selection [4], increasing the complexity of the
control logic.
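As a rough illustration of why banking reduces leakage (the numbers and the power-gating model are ours, not taken from the cited works), one can assume that only the active bank leaks at its full rate while idle banks are gated down to a small retention fraction:

```python
# Hypothetical first-order model of leakage power in a banked
# instruction memory: the active bank leaks at full rate, while
# idle banks are power-gated down to a small retention fraction.
# Values are illustrative, not measurements from the cited papers.

def banked_leakage(total_leakage_mw, n_banks, retention_fraction=0.1):
    """Leakage power (mW) when exactly one of n_banks is active."""
    per_bank = total_leakage_mw / n_banks
    active = per_bank                                    # one bank fully on
    idle = (n_banks - 1) * per_bank * retention_fraction # gated banks
    return active + idle

monolithic = banked_leakage(10.0, 1)   # no banking: 10 mW
four_banks = banked_leakage(10.0, 4)   # 2.5 + 3 * 2.5 * 0.1 = 3.25 mW
print(monolithic, four_banks)
```

A dynamic bank-selection scheme, as in [4], would change which bank is active over time, at the cost of the extra control logic mentioned above.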
Historically, the Functional Units have been monolithic elements with a static
behavior. Recently, several units with a dynamic changing function have ap-
peared in the literature. For example, reconfigurable functional units [5] are
functional units based on look-up tables capable of modifying the operation
upon selection of the control signals. The morphable functional units, already
existing in the industry [6], and the mutable functional units [7] are elements
that implement several functionalities with a slight increment in the logic area.
Similarly, the variable-latency functional units present an improved performance
with a limited overhead [8]. However, most of these works only evaluate the
area/performance trade-off, but do not analyze the impact on power consump-
tion that the multiple alternatives can have.
approaches adapt the duty cycle to the network needs [10], or reduce the idle-
listening cycle required by the MAC layer [11].
easier to cool down, the task scheduling will be more efficient in terms of energy
for cooling [21]. A similar approach can consider load balancing from hot
to cold areas in the room [22].
The proposed approach considers all the agents involved in the computing para-
digm mentioned in Section 1 to build a strategy for energy/cost minimization.
This phase also develops a framework for the automatic design and optimiza-
tion of applications. This framework is composed of a set of tools that start from
a high-level description and help in executing tasks such as mapping the application
to the network nodes, or optimizing for different target architectures.
improvement. For example, even though the Intel Xeon server should be better
than the others, there are some tasks in which the Sparc server outperforms the
Intel. On the other hand, the Sparc server behaves very badly with some specific
tasks. This experiment demonstrates that proper use of heterogeneity and
an efficient optimization algorithm could lead to significant energy savings in
current data centers.
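The idea can be sketched with a toy assignment policy. The per-task energy figures below are invented for illustration (they are not the measured SPEC results): each task is run on whichever server consumes the least energy for it, instead of routing everything to a single nominally best machine:

```python
# Illustrative energy table (Joules per task); the values are invented,
# not the measured SPEC CPU2006 numbers from the experiment.
energy = {
    "bzip2":      {"xeon": 50.0, "sparc": 70.0},
    "mcf":        {"xeon": 90.0, "sparc": 60.0},
    "libquantum": {"xeon": 40.0, "sparc": 95.0},
}

def total_energy(assignment):
    """Sum the energy of running each task on its assigned server."""
    return sum(energy[task][srv] for task, srv in assignment.items())

# Homogeneous policy: run everything on the nominally faster Xeon.
all_xeon = {t: "xeon" for t in energy}

# Heterogeneity-aware policy: pick the cheapest server per task.
best = {t: min(energy[t], key=energy[t].get) for t in energy}

print(total_energy(all_xeon))  # 180.0
print(total_energy(best))      # 150.0
```

Even in this three-task toy, exploiting heterogeneity saves energy; a real optimizer would also account for deadlines, contention, and cooling.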
Fig. 2. Variation in energy consumption of the SPEC CPU2006 integer benchmarks
(perlbench, bzip2, gcc, mcf, gobmk, hmmer, sjeng, libquantum, h264ref, omnetpp,
astar, xalancbmk) depending on the target processor
At this stage, the analysis of the cooling mechanism is performed, together with
the development of control techniques for dynamically tuning the room tempera-
ture. This task makes it possible to anticipate the effect of the tasks in execution.
A set-up of distributed sensors is deployed to capture the relevant metrics:
temperature, humidity, and air flow and direction.
References
1. Feng, J., Koushanfar, F., Potkonjak, M.: System-architectures for sensor networks:
issues, alternatives, and directions. In: Proceedings of the 20th International Con-
ference on Computer Design (2002)
2. Yassin, Y., et al.: Ultra low power application specific instruction-set processor
design for a cardiac beat detector algorithm. In: NORCHIP 2009, pp. 1–4 (2009)
3. Kandemir, M.T., Kolcu, I., Kadayif, I.: Influence of Loop Optimizations on Energy
Consumption of Multi-bank Memory Systems. In: Horspool, R.N. (ed.) CC 2002.
LNCS, vol. 2304, pp. 276–292. Springer, Heidelberg (2002)
4. Gordon-Ross, A., Vahid, F., Dutt, N.D.: Fast configurable-cache tuning with a
unified second-level cache. IEEE Trans. VLSI Syst. 17(1), 80–91 (2009)
5. Hauck, S., Fry, T.W., Hosler, M.M., Kao, J.P.: The chimaera reconfigurable func-
tional unit. In: FCCM, pp. 87–93 (1997)
1 Introduction
Smart Cities aim to improve citizens' lives by integrating new architectural
elements, technological innovations, and Information and Communication
Technology (ICT) infrastructures.
Energy consumption in buildings accounts for between 20% and 40% of the over-
all amount of energy used in developed countries [1]. The integration through a com-
mon infrastructure of different building systems such as lighting, HVAC (heating,
ventilation, and air conditioning), security or life safety is important to provide
intelligent management services with the aim of improving energy efficiency.
Building Energy Management Systems (BEMS) aim to reduce energy consumption
by monitoring data collected from sensors and controlling electrical devices and sys-
tems such as lighting or HVAC. Nevertheless, available commercial systems present
several issues. On the one hand, interconnection among devices using different com-
munication technologies is a complex and expensive task and it usually entails a
reduction of functionalities compared to those available systems that support just a
single technology [2]. On the other hand, no additional external services can be added
to these systems, limiting the development of future functionalities.
In order to solve these problems and encourage smart city development, we pro-
pose an energy management system for buildings, designed to ease the implementa-
tion of new services and the integration of existing control systems within the
building. The system consists of a management platform (Bat-MP) that enables the
integration of different building control technologies, independently of the automation
protocol used.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 9–16, 2012.
© Springer-Verlag Berlin Heidelberg 2012
10 J. Caffarel et al.
2 System Description
The amount of information that an intelligent building should manage includes the
following:
• Structural characteristics such as the number of floors, number and distribution of
rooms or geographical orientation (in the northern hemisphere, rooms facing
south are generally warmer than those facing north).
• Information about the electric systems of the building, such as sensor networks,
security devices and lighting or HVAC systems.
• Physical quantities which can be either measured or modified (or both), such
as temperature, light level, electrical consumption or gas levels, like carbon
monoxide.
• Information related to the use people make of the building, such as user
timetables, specific usage of every area, or user profiles.

Bat-MP: An Ontology-Based Energy Management Platform
The use of ontologies to represent this knowledge is becoming increasingly
widespread [7]. By using an ontology-based system, all the building data described
above are stored in a structured way and the relationships between all of them are
explicitly described. Furthermore, the use of ontologies allows the further publication,
sharing and extension of the data model. The Model Manager uses a Resource
Description Framework (RDF) data model to describe all the entities, characteristics
and properties of a building. RDF is a standard model for data interchange on the
Internet which uses statements in the form of subject-predicate-object expressions.
An ontology-based model of the building has been developed in order to store all
the information described above. The main concepts of the model are:
• Room. Every building contains at least one room, with its own characteristics,
such as size or geographical orientation (or, if the room has windows, the
direction it is facing). In order to simplify the modeling of the building, the
location of every room within it is only defined by the adjacent rooms and their
orientation.
• Device. Every room can contain zero or more devices that can be either a sensor
(which gives a measurement of a physical quantity) or an actuator (which can
perform actions in order to change certain magnitudes inside a room, such as a
light dimmer).
• Parameter. A physical magnitude is represented as an object that contains infor-
mation about its current value, the available actions over it and the units in which
it is measured. Each parameter has, among others, a "Measurement" attribute
which stores the timestamp and the value of each measured physical quantity.
The set of actions that can be performed over a parameter is determined by the
devices present in the room. For instance, if a room contains a light installation
that, due to its control technology, can just be switched on or off but not dimmed,
only those two actions will be available in the parameter.
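The way available actions are derived from the devices present in a room can be sketched as follows. The class and action names are ours, chosen for illustration; they are not the actual Bat-MP ontology vocabulary:

```python
# Hypothetical sketch of the Room/Device/Parameter concepts; names
# are illustrative, not the real Bat-MP model.

class Device:
    def __init__(self, name, parameter, actions):
        self.name = name
        self.parameter = parameter  # physical quantity it senses/affects
        self.actions = actions      # actions it makes available

class Room:
    def __init__(self, name, devices=()):
        self.name = name
        self.devices = list(devices)

    def available_actions(self, parameter):
        """Union of the actions offered by this room's devices."""
        acts = set()
        for d in self.devices:
            if d.parameter == parameter:
                acts |= set(d.actions)
        return acts

# A light installation that can only be switched, not dimmed:
lab = Room("show_room_1", [
    Device("ceiling_light", "light", {"switch_on", "switch_off"}),
    Device("temp_sensor", "temperature", {"get"}),
])
print(sorted(lab.available_actions("light")))        # ['switch_off', 'switch_on']
print(sorted(lab.available_actions("temperature")))  # ['get']
```

Adding a dimmer device to the room would automatically extend the action set of the "light" parameter, mirroring the behavior described above.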
This ontology describes sensors, observations, and all their associated concepts, since
the main goal of the Model Manager is to represent and work with parameters in a
simple way. The Model Manager handles queries written in an RDF query language,
validates them, and retrieves the information from the database, thereby allowing the
sharing of building data among different systems. The Model Manager uses the RDF
query language SPARQL to manipulate the data stored in the database,
given that it is a W3C Recommendation [8] for implementing ontology
databases.
the service should only take into account physical quantities which are related to
climate, independently of the control technologies of the devices.
With the aim of allowing the development of remote services, the Service Manager
also provides a common interface to access the Management Platform through
the Internet. Thus, a web service has been implemented using a service-oriented ar-
chitecture (SOA). This interaction model is based on a message exchange process
between the entities involved, which allows a loosely coupled interaction between the
services and the platform. The main goal of this design model is to achieve an easier
integration of different services, using defined protocols to communicate with them.
This approach gives the developer the liberty to design services using different technolo-
gies or programming languages, since the only thing that needs to be agreed on is the
format of the messages exchanged. Thus, the messages sent and received by the platform
consist of text-based documents written in XML containing information about the
building, its parameters, and the actions to be performed.
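A minimal sketch of such a text-based message, built and parsed with a standard XML library, is shown below. The element and attribute names are hypothetical; the paper does not publish the platform's actual message schema:

```python
import xml.etree.ElementTree as ET

# Build a hypothetical action request; tag and attribute names are
# illustrative, not the real Bat-MP message schema.
msg = ET.Element("message", {"type": "action"})
room = ET.SubElement(msg, "room")
room.text = "/CeDInt-UPM/rooms/show_room_1"
param = ET.SubElement(msg, "parameter", {"id": "light"})
action = ET.SubElement(param, "action")
action.text = "switch_off"

wire = ET.tostring(msg, encoding="unicode")  # text exchanged over the SOA interface

# The receiving side needs only the agreed format, not the sender's
# implementation technology:
parsed = ET.fromstring(wire)
print(parsed.find("room").text)              # /CeDInt-UPM/rooms/show_room_1
print(parsed.find("parameter/action").text)  # switch_off
```

Because both sides only depend on the document format, either end could be rewritten in another language without changing the other.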
The main disadvantage of this solution is the lower efficiency compared to other
highly coupled implementations. Nevertheless, we consider that the possibility of an
easy integration of different services based on different technologies compensates for
this drawback.
A Building Monitoring Service has been implemented. It is described below
to illustrate the Service Manager functionality. All the operations will be executed
using SPARQL queries wrapped in XML messages. The main purpose of this
Service is to request data from the different sensors present in the Energy Efficiency
Research Facility (EERF) at the CeDInt-UPM building and show them to the user
graphically. These data can represent either real-time information, such as instant energy
consumption or historical information, such as the annual average temperature in a
given room.
Provided that the Service has no information about the building in the first place, a
SPARQL query will be sent to the platform asking about the rooms within it:
The Service will then receive a message containing the list of the rooms and their
associated Uniform Resource Identifiers (URIs). If the service wants to know the
available parameters inside a specific room with name “/CeDInt-UPM/rooms/show_
room_1”, it will send the following query to the Bat-MP:
Once the Service receives the list of Parameters with their corresponding URIs, it can
ask for the particular recorded values within a given period of time, or just for a real
time value every time it changes. These values will be returned in the form of pairs
“Date:Value” and codified using JSON, a text-based standard format for data ex-
change through the Internet. In this case, since a 6LoWPAN power meter is installed
in the EERF, a message will be returned containing the Parameters representing the
consumption of each electrical line, as well as other Parameters available in the room.
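A consumer of such a reply could decode the "Date:Value" pairs as sketched below. The payload shown is invented for illustration; only the pair convention comes from the text above:

```python
import json

# Hypothetical JSON reply carrying "Date:Value" pairs; the actual
# wire format of Bat-MP is not specified beyond this convention.
payload = '{"measurements": ["1338544800:230.5", "1338548400:231.1"]}'

reply = json.loads(payload)
pairs = []
for item in reply["measurements"]:
    date, value = item.split(":")
    pairs.append((int(date), float(value)))

print(pairs)  # [(1338544800, 230.5), (1338548400, 231.1)]
```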
Finally, in order to get the consumption value of the HVAC system (with ID
“HVAC”) in a certain time (specified in milliseconds since January 1st 1970), the
following message will be sent:
SELECT ?v
WHERE { ?p rdf:type ro:Parameter .
        ?p ro:id "HVAC" .
        ?m rdf:type ro:Measure .
        ?p ro:hasMeasure ?m .
        ?m ro:time "1338544800"^^xsd:dateTime .
        ?m ro:value ?v
}
This example shows how it is possible to interact with different Physical Devices
within a building considering only the physical quantities they are related to, and
independently of the communication protocol they use.
Among the information which can be obtained by a Service, the set of available
actions can also be found. These actions correspond to the ones that can be performed
considering the systems and devices present in a certain room. For instance, if a room
has just a temperature sensor, the only available action related to the Parameter
"temperature" will be a method to get the current temperature. If there were also an
HVAC system, the available actions would be a set of methods to get, increase, or
decrease the temperature. In this context, "available action" stands for an action that
can be performed considering the access permissions of the user (not every user is
allowed to perform certain kinds of actions).
Thus, using a common description language like RDF combined with the
implemented platform, a whole set of actions can be executed inside a building with-
out needing to know highly specific details about the control systems present in the
building.
To sum up, the Service Manager mainly provides an application programming in-
terface allowing a Service to connect to the platform, letting the former register in the
system and interact with the parameters in the building model. Besides, this layer
supports access control security policies and resolves conflicts in concurrent accesses.
In order to provide scalability and flexibility, the system runs on the OSGi framework,
which allows the inclusion of new Services in real time, as well as new technology
drivers and devices.
4 Conclusions
References
1. Pérez-Lombard, L., Ortiz, J., Pout, C.: A review on buildings energy consumption
information. Energy and Buildings. Elsevier (March 2007)
2. Winston, J.: The Problem with Home Automation. Electronic House (January 15, 2008)
3. Wong, J.K.W., Lia, H., Wang, S.W.: Intelligent building research: a review. Automation in
Construction 14(1), 143–159 (2005)
4. Gómez-Pérez, A., Fernandez-López, M., Corcho, O.: Ontological Engineering with
examples from the areas of Knowledge Management. In: e-Commerce and the Semantic
Web, 1st edn. Springer (2004)
5. Malinowsky, B., Neugschwandtner, G., Kastner, W.: Calimero: Next Generation.
Automation Systems Group, Institute of Automation, Vienna University of Technology
6. Staab, S., Studer, R.: Handbook on Ontologies. Springer (2009)
7. Prud'hommeaux, E., Seaborne, A.: SPARQL Query Language for RDF. W3C
Recommendation, World Wide Web Consortium (2008)
Will Eco-aware Objects Help
to Save the World?
Abstract. Our society wastes more energy than it should. This is
mostly due to the inadequate way in which human beings use electrical
devices. This paper aims to show that embedding intelligence
within everyday objects is valuable for reducing the portion of energy
consumed unnecessarily owing to human misuse. To that end, we
have augmented a capsule-based coffee machine, placed in a
work office, to back our assumptions. Using this device we have devised
an energy-saving model that takes into consideration features such as how
and when workers use the appliance throughout the day.
Additionally, we have simulated the model to demonstrate, through
error-metric comparison (measured in kWh), that a large amount of energy
would be saved if such intelligent systems were applied, compared
with a baseline approach. This paper therefore contributes a set
of early but promising findings regarding how smart eco-aware objects
can help to save energy in areas where people live (cities, buildings,
or homes).
1 Introduction
Nowadays there are more and more devices, appliances, and electronics in our
common settings, which operate differently from one another. Some have to be
continuously connected to the mains, e.g., the telephone. Others may be switched
off while they are not in use (television, Hi-Fi appliances, or an electric coffee
machine), but, remaining connected to the electric grid, they will likely consume
some watts in the so-called Stand-By mode. Finally, there are devices that do not
require energy at all while idle, even if they are connected, e.g., a simple radio,
a hairdryer, or an iron.
The consumption associated with these operational modes is hard
for people to understand [3][4][7]. This lack of understanding hinders
awareness and, therefore, the task of reducing energy consumption through human
actions (e.g., completely switching off a radio when not in use). One hypothesis for
this difficulty relates to the intangible nature of electricity. Indeed,
today there are only a few ways to get in touch with such a ghostly quantity.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 17–24, 2012.
© Springer-Verlag Berlin Heidelberg 2012
18 D. Casado-Mansilla, J. López-de-Armentia, and D. López-de-Ipiña
This paper discusses the drawbacks that the invisible nature of energy entails
and its correlation with the waste of energy resources that we unconsciously cause
in our everyday lives. Our research assumes that people make an inappropriate and
inefficient use of power-consuming devices during their usage1.
With the goal of drawing conclusions regarding these early assumptions,
the authors have conducted a three-month energy-data collection campaign on a
capsule-based coffee machine placed in their laboratory. The idea behind this
monitoring is to analyze how researchers use the device under study, since no
rationale was found in a first data analysis. We observed: i) a different number
of coffees per day; ii) these coffees were prepared at different hours during the
working day; and iii) people applied different operation modes to prepare them,
i.e., some left the coffee maker on Stand-By, and others switched it off after
use.
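The kind of aggregation behind these observations can be sketched as follows. The log entries are invented for illustration; they are not the collected laboratory data:

```python
from collections import Counter

# Invented event log: (day, hour, mode the appliance was left in after use)
events = [
    ("mon", 9, "standby"), ("mon", 11, "off"), ("mon", 16, "standby"),
    ("tue", 9, "off"),     ("tue", 14, "off"),
    ("wed", 10, "standby"),
]

# i) coffees per day and iii) operation mode applied after each coffee
coffees_per_day = Counter(day for day, _, _ in events)
modes = Counter(mode for _, _, mode in events)

print(coffees_per_day)  # Counter({'mon': 3, 'tue': 2, 'wed': 1})
print(modes)            # Counter({'standby': 3, 'off': 3})
```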
This paper contributes the empirical demonstration that people
misusing Stand-By appliances gives rise to a high waste of power resources.
Secondly, it is demonstrated through simulation that such waste could be
diminished by embedding eco-intelligence within everyday objects, that is,
giving them the autonomy to commute between their operational modes of
use (On-Off or Stand-By mode). Finally, we end by opening the discussion about
which strategy is better for reducing energy consumption: human behavior change
[4][6][8] vs. keeping people out of the device's operation to prevent misuse.
The paper is organized as follows. In the next section, the different strategies
for reducing energy consumption are discussed. Section 3 describes the operational
modes of Stand-By devices and formalizes the problem statement. In Section 4
the results of the data-collection phase are presented, while Section 5 is devoted
to their analysis and to reviewing this paper's contributions.
As the number of devices in homes, buildings, and cities is increasing, the need
for more sustainable and efficient appliance design has become paramount to
conserve energy resources. Until today, the so-called smart devices, domotics, and
green appliances are only efficient in their own energy consumption. That means
that designers and manufacturers assume an appropriate and also efficient us-
age of them. However, works by several researchers [3][4], our own experience,
and the empirical monitoring that we have conducted for this ongoing research
demonstrate that a non-negligible amount of energy is wasted through people's
misuse of appliances. We deem that such misuse is due to the difficulty of getting
in touch with the energy that we consume every day, even though approaches
exist that have been devised to lower this barrier:
1 For this paper, 'device usage' refers to the fact of leaving an appliance
indistinctly in the Stand-By mode or switching it off after its use, whereas the
term 'misuse' is applied when it is more efficient to perform one operational
mode than the other.
With these two premises we conclude that it is not easy to become familiar with
energy. A strategy to better understand appliances' operational modes and
energy leakages from the consumer side is to install electricity feedback systems
in households. Froehlich et al. described eco-feedback systems in detail and
discussed their effectiveness [4].
On the opposite side, electricity suppliers and researchers in the field, in an
attempt to increase the percentage of eco-aware people, are pursuing different
strategies that could be applied when the long-awaited Smart Meter reaches our
homes [7]. For instance, Mankoff et al. [6] have bet on energy-specific social
networks. In this kind of network, people in similar settings (e.g., flats with
a similar number of inhabitants) would exchange their consumption data in order
to participate in fair competitions that promote behavior change to save energy.
Similarly, in [8] the authors conducted an empirical experiment in a neighborhood
to demonstrate that greater reductions are possible if social norms and nudges,
telling people what others do, are applied.
This paper is aligned with these previous approaches; however, it presents a different strategy for reducing energy consumption. The strategy focuses not on human behaviour change, but on automating devices' operation modes. The presented approach is motivated by the energy we unconsciously waste in everyday life, and by the assumption that people make an inappropriate and inefficient use of power-consuming devices.
Accordingly, this article addresses two research questions: (i) does the way people use their appliances improve or worsen the appliances' designed efficiency? and (ii) is it worthwhile to embed intelligence within these appliances so that they manage their operational modes in an eco-smart way?
² http://bit.ly/KmW81y
20 D. Casado-Mansilla, J. López-de-Armentia, and D. López-de-Ipiña
To back these assumptions and answer the above questions, we converted a capsule-based coffee machine into an eco-aware object³. The coffee-maker belongs to the category of stand-by appliances and is designed to operate in two different modes: On-Off and Stand-By. In the former, On-Off, the coffee consumer switches the device off once the hot drink is ready. This mode consumes no energy during idle periods, but the coffee-maker's water pressure system consumes a significant amount of energy when it is switched on again for the next drink. In the Stand-By mode, by contrast, the device is never switched off, at the expense of periodically reheating the pressure engine, i.e., consuming energy steadily. In a previous publication [5], we observed that during a workday in our laboratory these modes were chosen at random, with no rationale. In this paper we therefore define a new mode, the Mixed mode, as an intelligent combination of the previous two that aims to save energy and prevent misuse.
Fig. 1. Distribution of coffees prepared during 41 workdays, with the associated histogram of coffees per time slot
Figure 1 shows two views of the coffee consumption over the 41 days during which the coffee-maker was monitored. In the left graph, each dot denotes the exact time a coffee was prepared, while the right graph presents the corresponding histogram. At a glance, we can see a high concentration of usage in two separate periods. This finding led us to think that one of the previously presented modes might be more appropriate at certain times than the other, and thus that an intelligent combination should save energy.
At this point, the reader may wonder which of the presented operational modes is the most efficient: a) On-Off mode; b) Stand-By mode; c) Random mode (monitored real usage); or d) Mixed mode (intelligent). The intangible nature of energy plays a key role in people's decision making, so it is difficult to state that one mode is better than another with only the information provided so far. Therefore
³ http://socialcoffee.morelab.deusto.es
the next section is devoted to formalizing the rationale behind the choice of an appropriate mode, and the parameters that influence that decision.
For ease of understanding, we do not explain in depth the meaning of all temporal parameters. Consider them scalar times, measured in seconds, averaged over the data-collection phase (see next section). These parameters are the times the coffee-maker takes to prepare a coffee (Tcf), to boot up the engine (Tst), to maintain its temperature (Tpeak), and to recover after making a coffee (Tacf).
The energy consumed in the On-Off mode does not depend on Tsl; it depends only on n:

With these formulas it is easy to work out n, the minimum number of coffees that should be prepared in a slot to justify shifting from the On-Off to the Stand-By mode. As an example, if we divide the working day into slots of one hour (Tsl = 3600 seconds), then n = 2.3 coffees. With this information, the objective is to automatically derive a predictive model that infers, for each hour of the working day D, when n ≥ 2.3, in order to minimize the energy consumed.
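The break-even reasoning above can be paraphrased in a few lines of code. This is an illustrative sketch only: the power and timing values are our assumptions, not the paper's measured Tst/Tpeak figures, and the energy model is deliberately simplified (each On-Off coffee pays a full engine boot; Stand-By pays periodic warming peaks regardless of n).

```python
# Illustrative break-even between On-Off and Stand-By for one slot.
# All power/time values are assumptions, not the paper's measurements.

P_ENGINE = 1400.0     # pressure-engine power draw (W), assumed
T_ST = 30.0           # boot-up time Tst per switch-on (s), assumed
T_PEAK = 6.0          # duration of one Stand-By warming peak (s), assumed
PEAKS_PER_SLOT = 12   # warming peaks per one-hour slot, assumed

def energy_on_off(n):
    """On-Off mode: each of the n coffees pays a full engine boot."""
    return n * P_ENGINE * T_ST

def energy_stand_by(n):
    """Stand-By mode: periodic warming peaks, independent of n."""
    return PEAKS_PER_SLOT * P_ENGINE * T_PEAK

def break_even():
    """Smallest n for which Stand-By is no worse than On-Off."""
    n = 0
    while energy_stand_by(n) > energy_on_off(n):
        n += 1
    return n
```

With these invented parameters the break-even is 3 coffees per slot, the same order of magnitude as the paper's n = 2.3.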
1. The everyday object converted into a smart device is an off-the-shelf capsule-based coffee machine, namely a Dolce Gusto.
2. To measure the coffee-maker's consumption, a LEM AT 10 B5 current sensor has been used. This sensor is self-powered by induction, its maximum amperage is 10 A, and its output, which varies from 0 to 5 volts depending on the current flowing through it, is processed by an Arduino microcontroller. Assuming a mains voltage of 220 V, a power factor (PF) of one, and a consumption time t, the consumed energy can be worked out:
P = V · I · PF;  E = P · t
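A minimal sketch of this measurement chain, assuming a 10-bit Arduino ADC and the sensor's 0-5 V output mapping linearly onto its 0-10 A range (i.e., 2 A per volt); the scaling constant is our assumption, not a value given in the paper.

```python
def current_from_adc(adc_value, vref=5.0, adc_max=1023, amps_per_volt=2.0):
    """Map a 10-bit Arduino reading of the LEM sensor's 0-5 V output
    to amperes. The 2 A/V scale assumes the 0-5 V span covers the
    sensor's 0-10 A range (our reading of the text)."""
    volts = adc_value * vref / adc_max
    return volts * amps_per_volt

def energy_joules(current_a, seconds, mains_v=220.0, power_factor=1.0):
    """E = P * t with P = V * I * PF, as in the paper."""
    return mains_v * current_a * power_factor * seconds
```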
4.2 Results
To extract visual results, we culled the data collected during 3 months, selecting only the working days, i.e., 41 days, in order to perform an appropriate analysis. We believe 41 days are enough to get an accurate idea of the coffee-maker's usage pattern. However, from real usage data alone it is not possible to infer whether the device's utilization has been appropriate or not. Therefore we simulated three energy-optimization approaches to compare against the real usage, which serves as the baseline.
The first optimization strategy was obvious: the coffee-maker would automatically switch off between 7 p.m. and 8 a.m. to prevent absent-minded waste. The consumption in that interval was therefore removed from the dataset, considering only the period when the laboratory is occupied. The consumption is considerably reduced, from 11.895 kWh to 10.117 kWh (≈15% saving).
Inspired by the new Dolce Gusto models that switch themselves off a few minutes after preparing a coffee, we performed a simulation applying only the On-Off mode. In the real case, the coffee machine would immediately disconnect from the mains after preparing a coffee (removing the possibility of "forgetting" the device on). To simulate this, the energy consumed in the Stand-By mode was removed from the original collected data. This simulation yields a reduction of energy, obviously greater than the previous
Fig. 2. Energy consumed by the coffee-maker under the Real, On-Off and Optimal modes
strategy. The energy consumed in this case is 7.831 kWh (≈35% saving). The On-Off mode curve of Figure 2 clearly shows this reduction compared to the baseline. As demonstrated in Section 3, when n ≥ 2.3 per hourly slot, it is more efficient to keep the appliance in Stand-By than to switch it off again. Building on this, we simulated what we call the Optimal Mode (Mixed approach): the machine operates in On-Off mode, except in time slots where n exceeds 2.3, when it shifts to Stand-By. As expected, the energy consumption is considerably lower than under the other optimizations: 5.561 kWh (≈53% saving).
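The Mixed-mode decision rule amounts to a per-slot threshold test. The following sketch makes that explicit; the hourly coffee counts in the example profile are invented for illustration.

```python
# Sketch of the Mixed (Optimal) mode decision: keep Stand-By only in
# hourly slots whose expected coffee count reaches the n >= 2.3
# threshold derived earlier; otherwise use On-Off. The example
# profile is invented for illustration.

THRESHOLD = 2.3  # coffees per one-hour slot (from the paper)

def choose_modes(coffees_per_hour):
    """Return the chosen operational mode for each hourly slot."""
    return ["Stand-By" if n >= THRESHOLD else "On-Off"
            for n in coffees_per_hour]

# A workday profile with a morning and an after-lunch usage peak.
profile = [0, 1, 4, 3, 1, 0, 5, 2, 1, 0]
modes = choose_modes(profile)
```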
misuse. To this aim, the devices should automatically learn their usage pattern during a training period, and then apply appropriate predictive models to infer when they are most likely to be operated during the day. With this information, eco-devices will be ready to reduce energy by bridging the gap between an efficient design and intelligent operation-mode shifting.
To summarize: we collected energy data over three months from a capsule-based coffee machine. It has been empirically confirmed that humans use more energy than they should. To reduce the energy wasted through device misuse, we advocate the design of intelligent objects able to automatically manage their operational modes in an eco-smart way, thereby removing the need for human intervention. Our approach, focused on the usage of a coffee-maker, demonstrates that the energy reduction is very significant compared with the real-life case, and not negligible compared with the naive strategy of switching off the coffee-maker after preparing a drink.
This ongoing research looks promising. Next steps will focus on the statistical analysis of the dataset, the selection of an appropriate predictive model, and its implementation on a device with constrained capabilities.
References
1. Energy appliance visualization,
http://visualization.geblogs.com/visualization/appliances/
2. Towards a smarter future: Government response to the consultation on electricity
and gas smart metering. Tech. rep., Dept. of Energy and Climate Change (2009)
3. Fitzpatrick, G., Smith, G.: Technology-enabled feedback on domestic energy con-
sumption: Articulating a set of design concerns. IEEE Pervasive Computing 8(1),
37–44 (2009)
4. Froehlich, J., Findlater, L., Landay, J.: The design of eco-feedback technology. In:
Proceedings of the 28th International Conference on Human Factors in Computing
Systems, pp. 1999–2008 (2010)
5. López-De-Armentia, J., Casado-Mansilla, D., López-De-Ipiña, D.: Fighting against
vampire appliances through eco-aware things. In: Proc. of the 1st Workshop on
Extending Seamlessly the Internet of Things. IEEE, Palermo (2012)
6. Mankoff, J., Fussell, S.R., et al.: StepGreen.org: Increasing energy saving behaviors
via social networks. In: Proceedings of the 4th International AAAI Conference on
Weblogs and Social Media (May 2010)
7. Qiu, Z., Deconinck, G.: Smart meter’s feedback and the potential for energy savings
in household sector: A survey. In: ICNSC, pp. 281–286. IEEE (2011)
8. Studley, M., Chambers, S., Rettie, R., Burchell, K.: Gathering and presenting social
feedback to change domestic electricity consumption. In: Interface, pp. 1–4 (2011)
A New Approach to Clustering with Respect
to the Balance of Energy in Wireless Sensor Networks
1 Introduction
A Wireless Sensor Network (WSN) is composed of a number of wireless sensor nodes forming a sensor field, plus a base station. Sensor nodes are usually powered by batteries, which are very difficult or often impossible to recharge or replace. Therefore, improving energy efficiency and maximizing network lifetime are major challenges in sensor networks. Clustering is a key technique used both to extend the lifetime of sensor networks and to make them scalable [1].
The LEACH algorithm is one of the fundamental clustering protocols. Its clustering technique and frequent rotation of the cluster heads distribute the energy load uniformly among all nodes in the network [3]. However, LEACH does not consider the current energy of the nodes at the time of cluster-head selection, which may cause the early death of some nodes. It also does not take the location of the nodes into account, which can lead to a sparse distribution of clusters and hurt the overall performance of the algorithm [2, 9].
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 25–32, 2012.
© Springer-Verlag Berlin Heidelberg 2012
26 A. Vejdanparast and E. Zeinali Kh.
2 Related Works
The LEACH algorithm [7] selects cluster heads by generating a random number between 0 and 1 for each node. If the random number is less than the following threshold, the node becomes a cluster head for the current round:

    T(n) = p / (1 − p · (r mod (1/p)))   if n ∈ G
    T(n) = 0                             otherwise        (1)

Here p is the cluster-head probability, r is the current round, and G is the set of nodes that have not been cluster heads in the last 1/p rounds. This algorithm ensures that every node becomes a cluster head once within 1/p rounds. Looking at the cluster-head selection process, it is possible for a node with low residual energy, far from the base station, to be selected as cluster head for the current round; therefore there is no guarantee about the location or the number of cluster heads [9].
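Equation (1) can be implemented directly. A sketch, with p as the desired cluster-head fraction:

```python
import random

def leach_threshold(p, r):
    """T(n) from Eq. (1) for a node n in G; nodes outside G get 0."""
    return p / (1 - p * (r % int(round(1 / p))))

def is_cluster_head(p, r, in_G):
    """Draw a random number in [0, 1); the node becomes cluster head
    for round r when the draw falls below the threshold."""
    if not in_G:
        return False
    return random.random() < leach_threshold(p, r)
```

With p = 0.05 the threshold grows from 0.05 at r = 0 to 1 at r = 19, which is why every node in G is elected exactly once per 1/p = 20 rounds.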
An enhancement over LEACH, LEACH-C [3] uses a centralized clustering algorithm that requires location information for all nodes of the network. In mobile wireless networks, however, this location information is only available through GPS, which requires additional communication among the nodes.
V-LEACH [4] introduces a vice-cluster head in each cluster: if the cluster head dies, the vice-cluster head takes over, and the collected data still reach the base station. There is no need to select a new cluster head, but extra energy is spent on vice-cluster-head selection.
NEW-LEACH [6] improves the energy efficiency of the original LEACH by introducing a combinational factor. It considers three important factors: the energy of each node, the number of times a node has been chosen as a cluster head, and the distance between nodes and the base station. The role of cluster head is also rotated among all nodes over the rounds. However, this protocol imposes extra computational overhead on the setup phase of the LEACH algorithm.
The FZ-LEACH protocol [8] addresses the problem of overly large clusters in WSNs. It introduces the Far-Zone (FZ), comprising the nodes with low residual energy that are far from the base station. Once the FZ is formed, a Zone Head (ZH) is selected, and all nodes in the FZ communicate directly with the ZH rather than with the cluster head. The cluster head then collects sensed data from the remaining nodes and from the ZH, aggregates it, and transmits it to the base station, thereby producing clusters of equal size. However, it is only suitable for networks deployed over a wide area.
    E_Tx(k, d) = E_elec · k + ε_fs · k · d²,   if d < d₀
    E_Tx(k, d) = E_elec · k + ε_mp · k · d⁴,   if d ≥ d₀        (3)

E_elec is the electronics energy of the radio component, which depends on factors such as digital coding, modulation, filtering and spreading of the signal. The energy consumption of the amplifier depends on both the bit error rate and the distance to the receiver: ε_fs for the free-space model and ε_mp for the multipath attenuation model, where d₀ is a constant threshold that depends on the application environment.
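Together with the constants of Table 1, this is the usual first-order radio model. A sketch follows; the crossover distance d₀ = sqrt(ε_fs/ε_mp) is the conventional choice, an assumption on our part since the paper only says d₀ depends on the application environment.

```python
import math

# Constants from Table 1 of the evaluation section.
E_ELEC = 50e-9        # electronics energy, J/bit
EPS_FS = 10e-12       # free-space amplifier, J/bit/m^2
EPS_MP = 0.0013e-12   # multipath amplifier, J/bit/m^4

# Conventional crossover distance (assumption, see lead-in).
D0 = math.sqrt(EPS_FS / EPS_MP)

def tx_energy(k_bits, d):
    """Eq. (3): energy to transmit k bits over distance d metres."""
    if d < D0:
        return E_ELEC * k_bits + EPS_FS * k_bits * d ** 2
    return E_ELEC * k_bits + EPS_MP * k_bits * d ** 4

def rx_energy(k_bits):
    """Receiving pays only the electronics cost."""
    return E_ELEC * k_bits
```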
head selection procedure of LEACH by applying the distance and the current residual
energy parameters of all member nodes of the same cluster. Actual cluster heads are
selected in order to save more energy inside each cluster of the network.
(4)
4 Performance Evaluation
In this section we simulate four algorithms: the LEACH algorithm; the EN-LEACH algorithm, which considers only the residual-energy metric at the time of actual-cluster-head (ACH) selection; the Distance-LEACH algorithm, which considers only the average distance to the other nodes of the same cluster at ACH selection time; and our proposed algorithm, which uses both metrics together. We present the results of our experiments and evaluate our proposed algorithm by comparing it with LEACH, EN-LEACH and Distance-LEACH using MATLAB. To compare these algorithms fairly we need to run them on the same networks; this is achieved by using the same random seeds for each run in all algorithms. The simulation parameters are presented in Table 1.
Table 1. Simulation parameters

Parameter                      Value
Number of nodes                100
Monitoring area                (100,100) m
Position of the base station   (50,150) m
Initial energy                 0.3 J
Data packet size               4000 bits
Simulation end condition       Number of nodes < 3
E_elec                         50 nJ/bit
ε_fs                           10 pJ/bit/m²
ε_mp                           0.0013 pJ/bit/m⁴
Figure 1 shows the number of live nodes at each round for each algorithm. It can be observed that our proposed algorithm increases the useful lifetime of the network in terms of the FND (First Node Dies) and HNA (Half of Nodes Alive) metrics. Our proposed algorithm outperforms LEACH because it takes the current residual energy of nodes into consideration during cluster-head selection. In each round, nodes with higher residual energy have a better chance of becoming cluster heads and hence spend more energy than non-cluster-head nodes in that round. So, as the system ages (round-wise), the energy distribution over all nodes converges to uniform: no node dies (discharges completely) while other nodes still hold much higher energy. This shortens the interval between node discharges and keeps nodes alive as long as possible. In our algorithm, almost 95% of nodes exhaust their remaining energy at the very end of the network's useful lifetime (the last 50 rounds before complete discharge), which causes the steep slope observable at the end of our proposed model's curve in Figure 1. Figure 2 shows that our algorithm improves the First Node Dies metric by 85% and the Half of Nodes Alive metric by 35% compared to LEACH.
Fig. 1. Number of live nodes per round

Fig. 2. Comparison diagram of FND and HNA for LEACH, Distance-LEACH, EN-LEACH and the proposed algorithm
Fig. 4. Total amount of packets received at base station per energy unit (packets received at base station, ×10000, vs. average residual energy of live nodes; LEACH vs. proposed algorithm)
the distance metric at the time of cluster-head selection. This consideration saves transmission energy, which is why the energy-consumption load is uniformly distributed over all clusters in the network, so that message delivery to the base station remains effective even at the end of the network's useful lifetime.
5 Conclusion
References
1. Bouabdallah, F., Bouabdallah, N.: On Balancing Energy Consumption in Wireless Sensor
Networks. IEEE Trans. on Vehicular Technology 58, 2909–2924 (2009)
2. Dai, S.J., Li, L.M.: High Energy-efficient Cluster-based Routing Protocol for WSN.
Application Research of Computers, 2201–2203 (2010)
3. Heinzelman, W., Chandrakasan, A., Balakrishnan, H.: An Application-Specific Protocol Architecture for Wireless Microsensor Networks. IEEE Trans. on Wireless Communications, 660–670 (2002)
4. Bani Yassein, M., Alzoubi, A., Khamayseh, Y.: V-LEACH, Improvement on LEACH
Protocol of Wireless Sensor Network. International Journal of Digital Content Technology
and its Application 3, 132–136 (2009)
5. Zhang, Q., Xiao, L.U., Chen, X.: Improvement of Low Energy Adaptive Clustering
Hierarchy Routing Protocol Based on Energy-efficient for WSN. Computer Engineering
and Design, 427–429 (2011)
6. Yingqi, L.U., Zhang, D., Chen, Y., Liu, X., Zong, P.: Improvement of LEACH in Wireless
Sensor Network Based on Balanced Energy Strategy. In: Proceeding of IEEE International
Conference on Information and Automation, pp. 111–115 (2012)
7. Heinzelman, W., Chandrakasan, A., Balakrishnan, H.: Energy-Efficient Communication Protocol for Wireless Sensor Networks. In: Hawaii International Conference on System Sciences, vol. 1, pp. 3005–3014 (2000)
8. Katiyar, V., Chand, N., Gautam, G.C., Kumar, A.: Improvement in LEACH Protocol for
Large-Scale Wireless Sensor Networks. In: Proceeding of ICETECT, pp. 1070–1075
(2011)
9. Hou, R., Ren, W., Zhang, Y.: A wireless Sensor Network Clustering Algorithm based on
Energy and Distance. In: Second International Workshop on Computer Science and
Engineering, vol. 1, pp. 439–442 (2009)
10. Lin, N., Shi, W.H.: Simulation Research of Wireless Sensor Networks Based on LEACH Protocol. Computer Simulation, pp. 178–181 (2011)
Lightweight User Access Control in
Energy-Constrained Wireless Network Services
Abstract. This work introduces a novel access control solution for infrastructures composed of highly constrained devices that provide users with services. Low energy consumption is a key point in this kind of scenario, given that devices usually run on batteries and are unattended for long periods of time. Our proposal achieves privacy, authentication, semantic security, low energy and computational demands, and limited impact of device compromise, in a simple manner. The access control provided is based on user identity and time intervals. We discuss these properties and compare our proposal to previous related work.
1 Introduction
The paradigm known as the Internet of Things (IoT) champions the benefits of everyday objects becoming first-class citizens of the Internet. To do so, these objects must be provided with connectivity to expose data to, and consume data from, other applications and services. Nevertheless, the embedded devices used to connect the objects face different problems and challenges than normal computers. Due to the sheer number of objects that will populate the IoT, these devices are usually designed to be small and inexpensive, resulting in limited processing capability. Additionally, these devices often run 24 hours a day, so low power consumption is required to enable sustainable computing.
Security-related routines sometimes increase energy consumption due to expensive calculations. Indeed, little attention has been paid to security aspects in the IoT, and security is commonly left aside as a dispensable, energy-draining process. The focus of this contribution is to define a novel, low-consumption solution for restricting access to legitimate users and securing communications between them and constrained devices. The solution enables trustworthy communication over an insecure network at a low energy cost. The proposed model is compared with other solutions from the literature,
34 J.A.M. Naranjo et al.
showing how it advances the state of the art in efficient security under the restrictions commonly present in the IoT.
The contributions of this paper are twofold. First, a model is proposed to cover a typical scenario, adding the required access-control layer; second, a review of the state of the art and a comparison of this solution with existing solutions to date is provided. Section 2 provides some cryptographic background, Section 3 details the scenario we focus on, Sections 4 and 5 introduce our proposal and compare it with previous works, and Section 6 concludes the paper.
2 Background
The infrastructure considered in our solution involves three kinds of players: sensors, Base Stations, and user devices (e.g., smartphones). Sensors are extremely constrained devices, frequently battery-powered and with reduced computational capabilities. Equipment and power shortages prevent sensors from frequently performing the complex arithmetic operations involved in public-key cryptography for encryption and authentication. However, symmetric cryptography is an option, especially given that many 802.15.4/ZigBee-compliant sensors have an Advanced Encryption Standard (AES) coprocessor installed.
Base Stations are better-equipped devices that handle groups of sensors for message routing purposes and, in our case, also for key management. They are assumed to have more powerful hardware, a permanent (or at least much longer-lasting) power supply, and large storage space. They are also assumed to handle public-key cryptography routines and certificates.
Finally, users communicate with Base Stations and sensors through their
smart devices, such as mobile phones or tablets.
3 Scenario

To illustrate the scenario we focus on, let us assume a home automation infrastructure. A set of sensors and actuators is deployed within a house, e.g., light controls, an alarm, or a TV. Different users of the house may enjoy different access privileges and can therefore be separated into different groups (e.g., adult owners, children, friends or relatives). Each group has a different set of permissions. For example, adult owners should have the highest privilege, accessing and controlling every single sensor/actuator; children may have access to the TV actuators, but cannot purchase pay-per-view programs; and friends may access the WiFi and turn some lights of the house on and off. These permissions are issued by members of the adult-owners group.
4 Our Proposal
Our main goal is to allow only sensors and legitimate user devices to establish encrypted and authenticated one-to-one channels, while minimizing the intervention of the Base Station. The process should require little energy and storage, especially on the sensor side. Minimizing storage requirements also implies that communications should be as stateless as possible, i.e., no inter-session information should be stored for long periods of time. Besides, it should be easy for the sensor to perform access-control operations on user devices.
Our solution covers four phases: sensor bootstrapping, user join, regular communication, and user eviction, all of which are described next. For the sake of simplicity, and without loss of generality, we focus on a simple scenario: one Base Station (BS), one sensor (S), and one user device (A). The extension of the proposed protocol to several users, sensors and Base Stations is straightforward from the protocol description below. The messages involved in the protocol are depicted in Figure 1, while Table 1 shows the notation used from now on.
Sensor Bootstrap. When the new sensor S is added, the BS generates two master secrets, one for encryption and one for authentication: MSencS and MSauthS, respectively. These secrets are sent to S over a secure channel. See [6] for a good survey on establishing such channels and [5] for a particularly smart solution.
Table 1. Notation
MSencS , MSauthS Encryption and authentication master secrets for sensor S
KencS,A , KauthS,A Encryption and authentication keys for communication between
sensor S and user A
KencS,A {x} x is encrypted with KencS,A
MACKauthS,A (x) A MAC is done on x with KauthS,A
KDF (x, {a, b}) A Key Derivation Function is applied to master secret x using a as
public salt and b as user-related information
H(x) A hash function is applied to x
x||y Concatenation of x and y
IDA Identifier of user A
a Random integer salt
init time, exp time Absolute initial and expiration time of a given key
Fig. 1. Messages involved in the protocol
User Join. Let us assume that S is already operating under normal conditions. User A arrives at the scenario carrying her mobile device and wishes to request some information from S. First, A sends a request to the BS asking for keying material to communicate with S (step 2). The message should include authentication and authorization information so the BS can perform high-level access control on user A. For this we suggest the use of public-key cryptography [7], given that (i) both the Base Station and the user device are assumed to handle it easily and (ii) it also allows creating a secure channel between them. In any case, let us remark that this step is performed only at user arrival, and that many target sensors can be requested in the same message at step 2.
2. A → BS : [IDA , S, credentials]
If A's request is accepted, the Base Station generates the appropriate keying material (step 3 can be repeated as many times as target sensors were requested) and sends it to A through a secure channel (step 4). The expiration time of this material is decided by the Base Station and cannot be changed by A.
3. BS computes:
(a) a, a random integer salt¹
(b) (init time, exp time), the keying material's validity interval
(c) KencS,A = KDF(MSencS, {a, IDA||init time||exp time})
(d) KauthS,A = KDF(MSauthS, {a, IDA||init time||exp time})
4. BS → A (secure channel): [KencS,A, KauthS,A, a, init time, exp time]
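Step 3 and the sensor-side recomputation can be sketched as follows. Instantiating KDF as HMAC-SHA256, as well as the identifier and the one-hour validity window, are our assumptions: the paper does not fix a concrete KDF.

```python
import hashlib
import hmac
import os
import time

def kdf(master_secret, salt, info):
    """KDF(x, {a, b}) of Table 1, instantiated here as HMAC-SHA256
    over salt||info (an assumption; any sound KDF would do)."""
    return hmac.new(master_secret, salt + info, hashlib.sha256).digest()

# Base Station side (step 3): per-user keys for sensor S and user A.
ms_enc, ms_auth = os.urandom(32), os.urandom(32)  # MSencS, MSauthS
a = os.urandom(8)                                 # random public salt
id_a = b"alice"                                   # IDA (illustrative)
init_time = int(time.time())
exp_time = init_time + 3600                       # one-hour validity
info = id_a + str(init_time).encode() + str(exp_time).encode()

k_enc = kdf(ms_enc, a, info)    # KencS,A
k_auth = kdf(ms_auth, a, info)  # KauthS,A

# Sensor side: on receiving [IDA, a, init_time, exp_time] in step 5,
# S recomputes the very same keys on the fly -- no per-user storage.
assert k_enc == kdf(ms_enc, a, info)
```

Because the keys are a deterministic function of the master secret and public message fields, the sensor never needs to store them between sessions.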
Regular Communication. A can now use the received keying material to encrypt and authenticate her first message M addressed to S (step 5).
5. A → S : [KencS,A{M}, IDA, a, init time, exp time, MACKauthS,A(M, IDA, a, exp time)]
Upon reception of the message, sensor S computes the corresponding keying material as in steps 3c and 3d. S can now decrypt and authenticate the whole message.
¹ We assume that MSencS, MSauthS and a are obtained from a secure pseudorandom number generator.
Subsequent messages within the same transaction are respectively encrypted and authenticated with (H(KencS,A), H(KauthS,A)), (H²(KencS,A), H²(KauthS,A)), and so on. If, for any reason, one of the players loses synchronization regarding which key to use for a given message, it can always recover it by trying consecutive hashes until the proper key is found. This resynchronization should not take long for short message exchanges. A very similar technique is used by the well-known SNEP protocol [4].
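The hash-chain stepping and the resynchronization-by-trial described above can be sketched as follows; SHA-256 as H and the step bound are our choices, not mandated by the paper.

```python
import hashlib

def ratchet(key):
    """Advance the per-message key chain by one hash step (SHA-256
    here is our choice; the text leaves H unspecified)."""
    return hashlib.sha256(key).digest()

def resync(local_key, peer_key, max_steps=64):
    """Recover a lost position in the chain by trying consecutive
    hashes of the local key until the peer's key is matched."""
    k = local_key
    for _ in range(max_steps):
        if k == peer_key:
            return k
        k = ratchet(k)
    raise ValueError("keys diverged beyond max_steps")
```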
So far we have assumed the need for privacy between A and S. If the service provided by S does not require privacy, then messages need not be encrypted, just authenticated with KauthS,A (this applies to the rest of the paper). In any case, when the message exchange finishes the sensor can delete the keying material related to A, since it can easily be recomputed in the next exchange. The protocol therefore does not require S to store any inter-session information.
User Eviction. The inclusion of a validity time interval in the key-derivation-function input provides easy time-based access control. Before computing the keying material after step 5, the sensor S checks that the expiration time has not yet been reached. This requires only a very relaxed time synchronization with the BS, on the order of seconds, whereas other well-known protocols impose much stronger requirements (centiseconds or milliseconds) [4,8]. Also note that A cannot fake her (init time, exp time) pair, because the keys derived by the sensor would then differ and communication would be impossible. Consequently, the user is forced to be honest.
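The sensor-side validity check is a plain interval test. A sketch, where the few-seconds skew allowance reflects the relaxed synchronization just mentioned:

```python
import time

def accept_interval(init_time, exp_time, now=None, skew_s=5):
    """Sensor-side check before key derivation: the claimed validity
    interval must cover the current time, within a few seconds of
    clock skew. A user faking (init_time, exp_time) gains nothing:
    the sensor would derive different keys anyway."""
    now = time.time() if now is None else now
    return init_time - skew_s <= now <= exp_time + skew_s
```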
Finally, the Base Station may decide to evict A before her expiration time in certain situations, e.g., due to misbehaviour or key exposure. This is more problematic: the only way to make S reject messages from A before exp time is to maintain a blacklist of (IDA, exp time) items in every sensor, which requires an additional communication per item between the BS and the sensor. The scarce storage space at the sensor, however, will not allow for long blacklists. In any case, items can be removed as soon as their corresponding expiration times have passed: from that moment on, sensor S will reject step 5 messages based on the now-obsolete exp time value sent in step 5 rather than on IDA.
No keys are publicly disclosed; indeed, no keys travel at all, even encrypted, in the user-sensor message exchange. Following good cryptographic practice, different keys are used for encryption and authentication, so no single key serves more than one task. The impact of a sensor compromise on the rest of the network is limited, since no keys are shared among sensors or among users: every sensor owns a different master-secret pair, so an attack on one node provides no knowledge about other sensors in the network. Similarly, each user knows only the keys shared with a given set of sensors and, what is more, those keys are exclusive to her: compromising them would only allow impersonating that user. This is not the case in [9] (see Section 5).
As described so far, the protocol suffers from a weakness that can easily be solved. We refer to the fact that, within the same key validity interval, different message exchanges at different transactions between A and S will use the same key chains: the first message A → S (step 5) will always be encrypted/signed with (KencS,A, KauthS,A), the first message S → A will use (K encS,A, K authS,A), and so on. Reusing keys like this is obviously undesirable, so we propose a simple, painless solution. Before introducing it, let us recall that we are searching for a stateless proposal which does not require the storage of session data on a per-user basis within the sensor. The solution relies on performing a random number of hash operations, say h, on (KencS,A, KauthS,A) at the beginning of each message exchange. The key chain used in the given exchange will then start at (K^h encS,A, K^h authS,A). The downsides are that h must be communicated in step 5 and that different key chains may overlap (e.g., [h, h+15] and [h-10, h+5]). To avoid the latter, the user can choose ever-increasing, sufficiently scattered values for h.
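Both parties can advance the chains with a simple iterated hash. A minimal sketch, assuming SHA-256 (the text does not fix a particular hash function):

```python
import hashlib

def advance_keys(k_enc: bytes, k_auth: bytes, h: int):
    """Apply h hash iterations so that the exchange starts at
    (K^h_enc, K^h_auth) instead of the base pair. The user sends h
    in step 5; the sensor repeats the same computation, so no
    per-user session state needs to be stored."""
    for _ in range(h):
        k_enc = hashlib.sha256(k_enc).digest()
        k_auth = hashlib.sha256(k_auth).digest()
    return k_enc, k_auth
```

Since the chains only move forward, choosing ever-increasing values of h also keeps successive exchanges from reusing key material.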
Note that the user decides on the value of h to use. If the sensor does not trust user devices by default, then it can choose a different value in step 6, say h′, and include it in the response message. The device should then use h′ + 1 in the next message, and so on. In any case, the sensor can delete this information along with the user-specific keying material when the transaction ends. The main benefit we obtain from this solution is achieving semantic security without needing to negotiate initial parameters as in [4] (see Section 5).
5 Related Work
feature (she might use a deliberately weak key). In our scheme, the BS takes care
of this task. Second, the sensor must store all keys shared with online users at a
given time (which requires a large storage space), or involve the trusted device
in frequent communication establishments (which would make the process less
efficient). In our scheme, the sensor can generate any valid key on the fly.
Table 2 compares our proposal to the reviewed related work according to some
relevant features. All protocols shown provide encryption and authentication.
6 Conclusions
This work introduces a simple user access control solution for wireless network
services in a typical infrastructure composed of Base Stations and sensors, with
users interacting through their smart devices (e.g. mobile phones) in an IoT sce-
nario. We focus on a minimal use of computation, energy and storage resources
at the sensor so as to address constrained devices: key distribution and access
control rely on extremely fast key derivation functions and, for the same rea-
son, memory usage is reduced since keys are computed on the fly when needed.
This way, adding security to an IoT scenario does not imply a high energy consumption that would compromise sustainability. Our solution provides encryption,
authentication, semantic security and access control based on user identities and
time intervals, without requiring tight clock synchronization among devices. Finally, the intervention of the Base Station in user-sensor communications is minimal, which is also a desirable feature. Regarding future work, a sample scenario will be deployed so as to provide energy, CPU and memory consumption measurements compared with a straightforward insecure solution. This will be particularly useful for measuring the extra energy consumption required by the secure solution and for assessing whether it is affordable in most IoT scenarios.
References
1. Bellare, M., Canetti, R., Krawczyk, H.: Keying Hash Functions for Message
Authentication. In: Koblitz, N. (ed.) CRYPTO 1996. LNCS, vol. 1109, pp. 1–15.
Springer, Heidelberg (1996)
2. Chen, L.: Recommendation for key derivation using pseudorandom functions. NIST
Special Publication 800-108 (2008)
3. Krawczyk, H.: Cryptographic Extraction and Key Derivation: The HKDF Scheme.
In: Rabin, T. (ed.) CRYPTO 2010. LNCS, vol. 6223, pp. 631–648. Springer,
Heidelberg (2010)
4. Perrig, A., Szewczyk, R., Tygar, J.D., Wen, V., Culler, D.E.: SPINS: security pro-
tocols for sensor networks. Wirel. Netw. 8, 521–534 (2002)
Lightweight User Access Control in Wireless Network Services 41
5. Zhu, S., Setia, S., Jajodia, S.: LEAP+: Efficient security mechanisms for large-scale
distributed sensor networks. ACM Trans. Sen. Netw. 2(4), 500–528 (2006)
6. Zhang, J., Varadharajan, V.: Wireless sensor network key management survey and
taxonomy. Journal of Network and Computer Applications 33(2), 63–75 (2010)
7. Information technology - Open Systems Interconnection - The Directory: Public-
key and attribute certificate frameworks. ITU-T recommendation X.509 (2005)
8. Chowdhury, A.R., Baras, J.S.: Energy-efficient source authentication for secure
group communication with low-powered smart devices in hybrid wireless/satellite
networks. EURASIP J. Wireless Comm. and Networking (2011)
9. Ngo, H.H., Wu, X., Le, P.D., Srinivasan, B.: An individual and group authentica-
tion model for wireless network services. JCIT 5(1), 82–94 (2010)
10. Le, X.H., Khalid, M., Sankar, R., Lee, S.: An efficient mutual authentication
and access control scheme for wireless sensor networks in healthcare. Journal of
Networks 6(3) (2011)
Channel Analysis and Dynamic Adaptation
for Energy-Efficient WBSNs
1 Introduction
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 42–49, 2012.
© Springer-Verlag Berlin Heidelberg 2012
– (Wireless) sensor node: A device that responds to and gathers data on phys-
ical stimuli, processes the data if necessary and reports this information
wirelessly.
– (Wireless) actuator node: A device that acts according to data received from
the sensors or through interaction with the user.
– Personal Device: A device that gathers all the information acquired by
the sensors and actuators and informs the user about the signal analysis
performed.
2 Related Work
In a common WBSN scenario, the user is fully functional and performs physical activity with regular exercise and movement. The sensor nodes are strategically placed on the body according to the bio-signal to be captured.
protocol used for the wireless communication is IEEE 802.15.4 [10], conceived
for low-power, low-cost and low-speed communications.
The operation of the WBSN is determined, among other factors, by proper communication over the wireless links. Wireless communication is very sensitive to several factors such as the model and orientation of the antenna, the distance, external disturbances (EM radiation in nearby spectral frequencies) and the presence of obstacles in the signal path [11,12]. According to this, the human body
44 M. Vallejo, J. Recas, and J.L. Ayala
affects the quality of the communication due to the attenuation it imposes and the signal paths it blocks. Some authors have started to analyze this phenomenon with Intel Mote 2 devices placed at the chest and ankle of the user, showing packet misses of around 24-28% [13]. Something similar has been done for sleeping users [14], resulting in losses greater than 10%. However, these studies did not evaluate other variables related to the quality of the link, did not perform a methodical study and did not propose a contention mechanism.
The effect of users in movement has also been studied by several authors in the literature. The positions adopted by the body, the blocking caused by the body to the line of sight between the nodes, and the relative movement of arms and legs have been shown to be responsible for up to 20 dB of attenuation [15]. Movement of the antenna position also impacts this value negatively [16,17,5,18]. An approach similar to ours can be found in [19] and [20], where the authors evaluate a walking patient and the effect on the Received Signal Strength Indication (RSSI), which is a very good predictor of whether a short-term link is of high quality or not. However, as opposed to our proposal, none of these works evaluates complex but common natural movements, estimates the effect of body type, or proposes a mechanism to alleviate the negative impact of these factors.
3 Experimental Setup
The sensor node used in our experimental setup is the Shimmer [21], a small, low-power commercial wireless sensor platform for noninvasive biomedical research. The Shimmer node is equipped with an ultra-low-power 16-bit microcontroller (TI MSP430) that runs at a maximum clock frequency of 8 MHz and includes 10 KB of RAM and 48 KB of Flash. The platform also has two radios, an IEEE 802.15.4-compliant CC2420 transceiver [22] and a Bluetooth radio (the latter has not been used in our experiments because of its power consumption). From the software viewpoint, we have ported FreeRTOS [23] to this platform; it is a portable, open-source, hard real-time mini kernel that includes support for the microcontroller and the IEEE 802.15.4-compliant radio chip used by Shimmer.
The nodes were placed on the subject's body in a star topology, with the coordinator placed on the waist (just over the navel) and the sensor nodes on the left arm (link L1) and the left knee (link L2), as shown in Fig. 1a.
The radios were tuned to 802.15.4 channel number 24 and each link was tested independently, with no inter-link interference. The coordinator sent beacon packets with 40 bytes of payload at a fixed rate of 1 packet per second at the maximum power level, 0 dBm. The other nodes answered each beacon with data packets of 30 bytes of payload, at a fixed rate of 20 packets per second. The coordinator of the system is programmed to record the RSSI reading, the CRC bit and the sequence number for every data packet received from the sensor nodes. The coordinator then integrates and sends this information in the beacon packet payload. An external node passively listens to the beacon packets and
– Scenario 1: the subject, sitting on a chair, performed five movements of the arms (for Link 1): 1) hands on thighs (this link and position will be denoted L1/P1); 2) arms crossed, L1/P2; 3) arms extended forward, L1/P3; 4) arms extended up, L1/P4; and 5) arms extended to both sides, L1/P5.
– Scenario 2: the subject, sitting on a chair, performed four movements of the legs (Link 2): 1) leg at a 90° angle with the body, L2/P1; 2) left leg crossed over the right knee, L2/P2; 3) right leg crossed over the left knee, L2/P3; and 4) leg extended forward, L2/P4.
– Scenario 3: the subject walks (for L1 and L2).
– Scenario 4: the subject performs a complete sequence (for L1 and L2): 1) sits with hands on thighs, 2) stands with arms parallel to the body, 3) walks 10 steps, 4) stops and extends the arms up. Each step is 5 seconds long.
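The sequence numbers reported by the coordinator for each received packet are enough to compute packet loss offline. A minimal sketch (the input format is our assumption):

```python
def packet_error_rate(received_seq):
    """Packet Error Rate over a steady measurement window, estimated
    from the sequence numbers of the packets that did arrive.
    Assumes the sender increments the sequence number by one per
    packet and that it does not wrap within the window."""
    if len(received_seq) < 2:
        return 0.0
    sent = received_seq[-1] - received_seq[0] + 1
    return (sent - len(received_seq)) / sent
```

For example, receiving sequence numbers 1, 2, 4, 5 implies 5 packets sent, 4 received, and a PER of 20%.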
Four real human subjects, covering a variety of body types, took part in the experimental work. Table 1 shows the dimensions of the four subjects used in the experiments (4 males). Subject 1 and Subject 2 are taller than average, close to 190 cm, but have different body masses: subject 1 has a Body Mass Index (BMI) of BMI1 = 28.16, which corresponds to overweight, while subject 2 has BMI2 = 23.70, which is in the normal range. Subjects 3 and 4 have average height, with BMI3 = 24.54 and BMI4 = 27.68, corresponding to normal weight and overweight respectively. For each subject and scenario, the measurements were repeated for 4 power transmission levels of the radio sensor nodes: -25 dBm, -15 dBm, -7 dBm and 0 dBm. All the results were taken in controlled conditions to minimize the effect of interfering EM waves (WiFi, 3G, solar radiation, etc.).
Table 1. Characteristics
4 Measurement Results
Fig. 1b shows the Packet Error Rate (PER) for each subject in the different scenarios and links described in Section 3. The radio was configured to transmit at -7 dBm. The measurement of packet loss was made over steady periods of 10 seconds. According to the experimental results, we can highlight the following facts:
Threshold: For RSSI values below -83 dBm, the packet loss increases dramatically because, near the sensitivity threshold, the link quality varies radically [20]. The background noise presents a constant value of -98 dBm and does not affect the radio link. The threshold effect can be observed in Figure 2, which shows the RSSI values for Subject S1, Link L2 and position P2 for 4 different power transmission levels: -25, -15, -7 and 0 dBm. As can be observed, the RSSI values are far from the threshold in a) and b); therefore, there is no loss of packets. In c), the reception energy is very close to -83 dBm, while in d) it is below the threshold and almost all the packets fail to reach the destination.
Lossless Positions: In L1, positions P1-P3 show no packet losses for any of the subjects; similarly, there is no packet loss for L2 in P1, P2 and P4. This can be explained because the sensor nodes have a direct line of sight in these configurations and are located at a short distance. At -7 dBm, these positions have an RSSI over the threshold and hence the wireless link can be considered lossless.
Body Types: For L1 in positions P4 and P5 (arms extended up and arms extended to both sides) we can observe a large percentage of packet losses for Subjects 1 and 2 (both taller than average), while none for Subjects 3 and 4. From these data we can conclude that longer arms affect the communications: if we move the sensor away from the waist, we can reach the point at which the RSSI falls below the threshold. It is also noticeable that, while the PER for L1/P5 is similar for Subjects 1 and 2 (about 25%), the L1/P4 PER is very different: 51.9% for S1 and 6.2% for S2. This is closely related to the body type: in P5 the block generated by the body is very similar (just the arm, whose circumference is very similar for both). However, in P4 there is a noticeable body block; the signal has to go through a large body section and suffers more attenuation for S1 (overweight) than for S2 (average). The same effect can be found in L2/P3 for Subjects 2 and 4 compared with Subjects 1 and 3: the higher the body mass, the stronger the body block.

Fig. 1. (a) Node position. (b) Packet Error Rate and RSSI (dBm) using -7 dBm for the waist-arm (L1) and waist-leg (L2) links.
Fig. 2. RSSI over 10-second traces for Subject S1, Link L2, position P2 at four power transmission levels: a) 0 dBm, b) -7 dBm, c) -15 dBm, d) -25 dBm.
5 Optimization
Figure 3 represents the change of RSSI for Scenario 4, introduced in Section 3, in three different configurations: transceiver configured to transmit at 0 dBm (subfigure a), at -25 dBm (b), and with the proposed dynamic adjustment for energy savings (c). The devised policy for lost packets works as follows: if a packet that does not reach the destination is detected, this packet is re-sent at maximum power.
Fig. 3. RSSI during the Scenario 4 sequence (Sit, Stand, Walk, Arms up) for Link 1 (top row) and Link 2 (bottom row): a) at 0 dBm, b) at -25 dBm, c) with dynamic adjustment.
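A reactive policy of this kind can be sketched as follows, using the -83 dBm threshold from Sect. 4. The step-down rule and the safety margin are our own assumptions, added only to illustrate the dynamic adjustment:

```python
POWER_LEVELS = [-25, -15, -7, 0]   # dBm, CC2420 levels used in Sect. 3
RSSI_THRESHOLD = -83               # dBm, sensitivity threshold (Sect. 4)
MARGIN = 5                         # dB, assumed safety margin

def next_power_index(idx, last_rssi, packet_lost):
    """Reactive transmission-power tuning: jump to maximum power to
    re-send a lost packet, step down one level while the link keeps
    a comfortable margin over the threshold, otherwise hold."""
    if packet_lost:
        return len(POWER_LEVELS) - 1     # re-send at maximum power
    if last_rssi > RSSI_THRESHOLD + MARGIN and idx > 0:
        return idx - 1                   # save energy
    return idx
```

Stepping down only while the RSSI stays well above -83 dBm keeps the link out of the region where quality varies radically.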
6 Conclusions
The application domain of Wireless Body Sensor Networks (WBSNs) is a growing
segment of the market. Currently, the leading applications in WBSNs consider mo-
bile subjects and battery-operated devices. In this environment, the quality of the
transmitted data and the energy performance of the system must be optimized.
This work has shown the effect of body positions and human movement on the quality of the wireless communication and hence on the power consumption of the radio link. The analysis of the experimental data shows how shadowing and blocking of the line of sight between two nodes negatively impact the transmission.
The experimental work described in this paper has also analyzed the effect of several body types on the quality metrics. The mass and volume of the subject have a direct effect on the ratio of missed packets, as the attenuation due to the body's water percentage and the blocking shape suggest. Finally, we have proposed a reactive mechanism that tunes the transmission power in order to alleviate the effect of the mentioned factors and extend the battery lifetime. The obtained results for a complex real-life movement sequence confirm the benefits of the approach.
References
1. Garcı́a-Hernández, C.F., Ibargüengoytia-González, P.H., Garcı́a-Hernández, J.,
Pérez-Dı́az, J.A.: Wireless sensor networks and applications: a survey. Interna-
tional Journal of Computer Science and Network Security 7(3) (2007)
2. Latré, B., Braem, B., Moerman, I., Blondia, C., Demeester, P.: A survey on wireless
body area networks. Wirel. Netw. 17(1), 1–18 (2011)
3. Timmons, N., Scanlon, W.: Analysis of the performance of IEEE 802.15.4 for medical sensor body area networking. In: First Annual IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks, IEEE SECON 2004, pp. 16–24 (October 2004)
4. Akyildiz, I.F., Su, W., Sankarasubramaniam, Y., Cayirci, E.: A survey on sensor
networks. IEEE Communications Magazine (2002)
5. Fort, A., Desset, C., De Doncker, P., Wambacq, P., Van Biesen, L.: An ultra-
wideband body area propagation channel model-from statistics to implementation.
IEEE Transactions on Microwave Theory and Techniques 54(4), 1820–1826 (2006)
6. Tang, Q., Tummala, N., Gupta, S., Schwiebert, L.: Communication scheduling to
minimize thermal effects of implanted biosensor networks in homogeneous tissue.
IEEE Transactions on Biomedical Engineering 52(7), 1285–1294 (2005)
7. Washington, A.N., Iziduh, R.: Modeling of military networks using group mobility
models. In: ITNG, pp. 1670–1671 (2009)
8. Marin-Perianu, R., Marin-Perianu, M., Rouffet, D., Taylor, S., Havinga, P., Begg,
R., Palaniswami, M.: Body area wireless sensor networks for the analysis of cycling
performance. In: Proceedings of the Fifth International Conference on Body Area
Networks, BodyNets 2010, pp. 1–7. ACM, New York (2010)
9. Sivaraman, V., Grover, S., Kurusingal, A., Dhamdhere, A., Burdett, A.: Experi-
mental study of mobility in the soccer field with application to real-time athlete
monitoring. In: WiMob, pp. 337–345 (2010)
10. LAN/MAN Standards Committee: IEEE Standard for Information Technology - Telecommunications and information exchange between systems - Local and metropolitan area networks - Specific requirements - Part 15.4: Wireless MAC and PHY Specifications for Low-Rate WPANs, pp. 1–305 (September 2006)
11. Ren, H., Meng, M., Cheung, C.: Experimental evaluation of on-body transmis-
sion characteristics for wireless biosensors. In: IEEE International Conference on
Integration Technology, ICIT 2007, pp. 745–750 (March 2007)
12. D’Errico, R., Rosini, R., Maman, M.: A performance evaluation of cooperative
schemes for on-body area networks based on measured time-variant channels. In:
2011 IEEE International Conference on Communications (ICC), pp. 1–5 (June 2011)
13. Shah, R., Yarvis, M.: Characteristics of on-body 802.15.4 networks. In: 2nd IEEE
Workshop on Wireless Mesh Networks, WiMesh 2006, pp. 138–139 (September 2006)
14. Smith, D., Miniutti, D., Hanlen, L.: Characterization of the body-area propagation
channel for monitoring a subject sleeping. IEEE Transactions on Antennas and
Propagation 59(11), 4388–4392 (2011)
15. Di Renzo, M., Buehrer, R., Torres, J.: Pulse shape distortion and ranging accuracy in UWB-based body area networks for full-body motion capture and gait analysis. In: Global Telecommunications Conference, GLOBECOM 2007, pp. 3775–3780. IEEE (November 2007)
16. Neirynck, D.: Channel characterisation and physical layer analysis for body and
personal area network development. PhD thesis, University of Bristol (2006)
17. Maman, M., Dehmas, F., D'Errico, R., Ouvry, L.: Evaluating a TDMA MAC for body area networks using a space-time dependent channel model. In: 2009 IEEE 20th International Symposium on Personal, Indoor and Mobile Radio Communications, pp. 2101–2105 (September 2009)
18. Hall, P., Hao, Y., Nechayev, Y., Alomalny, A., Constantinou, C., Parini, C., Ka-
marudin, M., Salim, T., Hee, D., Dubrovka, R., Owadally, A., Song, W., Serra, A.,
Nepa, P., Gallo, M., Bozzetti, M.: Antennas and propagation for on-body commu-
nication systems. IEEE Antennas and Propagation Magazine 49(3), 41–58 (2007)
19. Prabh, K.S., Hauer, J.H.: Opportunistic Packet Scheduling in Body Area Networks.
In: Marrón, P.J., Whitehouse, K. (eds.) EWSN 2011. LNCS, vol. 6567, pp. 114–129.
Springer, Heidelberg (2011)
20. Srinivasan, K., Levis, P.: RSSI is under appreciated. In: Proceedings of the Third Workshop on Embedded Networked Sensors, EmNets (2006)
21. Burns, A., Greene, B., McGrath, M., O'Shea, T., Kuris, B., Ayer, S., Stroiescu, F., Cionca, V.: Shimmer™ – a wireless sensor platform for noninvasive biomedical research. IEEE Sensors Journal 10(9), 1527–1534 (2010)
22. Chipcon Corporation: CC2420 2.4 GHz IEEE 802.15.4 / ZigBee-ready RF transceiver, http://www.ti.com/lit/gpn/cc2420
23. FreeRTOS real-time operating system, http://www.freertos.org/
An Efficient, Eco-Friendly Approach
for Push-Advertising of Services in VANETs
1 Introduction
Thanks to the current interest in Intelligent Transportation Systems (ITS), and the efforts of several governments and the research community, highways are set to become connected networks. Cars will be able to communicate with each other and with roadside units. This capability will allow the deployment of applications that contribute to energy efficiency and the reduction of CO2 emissions. These goals can be achieved through better planning of stops on a long trip.
We have designed a “push” service discovery system that lets gas stations advertise their location within a wide enough radius. We take advantage of the wireless, broadcast nature of IEEE 802.11p communications. A few selected cars forward the message, so that all the vehicles in the target area receive the information. Using an efficient flooding scheme for this task is a key feature; by efficiency, we refer to minimal use of the shared bandwidth, which is a limited resource in the VANET. We have optimized the formulation of a forwarding scheme we had already proposed in [4] in order to reach this goal.
The result is a gas/charging station advertising solution for roadway environments. It would run as a UDP application in the vehicle's on-board computer and in roadside units owned by gas stations. We assume that both are equipped with IEEE 802.11p and GPS capabilities. The on-board computer will select from the incoming advertisements only the ones that best fit the planned route, and present a list to the driver, sorted according to her preferences (price, affiliation, etc.).
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 50–57, 2012.
© Springer-Verlag Berlin Heidelberg 2012
2 Related Work
3.1 Description
In distance-based flooding, the most distant node from the last relay gets the highest priority. We give this priority by means of an ordering of the forwarding delays. When a node receives a new packet, it launches a short wait of W = 2 ms in which it expects to receive a potential duplicate. If it hears any, it stores in dmin the distance to the nearest node from which it received the same packet. Next, it calculates a forwarding delay tw as:

tw = Tmax (1 − dmin/R) .   (1)
If there is no collision, only one node from every covered area forwards the message. Typically, this node is not at the edge of the radius, but at a distance E[dmax]: the average distance from the last relay to the most distant node within its coverage area.
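The per-node delay computation can be sketched as follows; the linear form follows the derivation below (t = Tmax(1 − d/R)), and the radio range R is an assumed value, not fixed by the text:

```python
W = 0.002       # s, fixed duplicate-listening wait
T_MAX = 0.018   # s, the default suggested later in this section
R = 250.0       # m, assumed radio range for illustration

def forwarding_delay(d_min):
    """Distance-based forwarding delay: the farther a node is from
    the nearest node it heard the packet from (d_min), the shorter
    its delay, so the most distant node in the covered area
    forwards first."""
    return T_MAX * (1.0 - d_min / R)
```

A node that overhears the same packet again before its timer expires would simply cancel its own retransmission.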
For simplicity, if we model the highway as a one-dimensional, straight scenario, we can say that the position of each vehicle on it is an event from a Poisson process. Hence, we can model the distance between nodes as following an exponential distribution with λ = ρ. Given that this distribution is memoryless, E[dmax] will be the maximum possible distance between sender and receiver, R, minus the average distance between cars, 1/ρ: E[dmax] = R − 1/ρ .
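A quick Monte Carlo check of this approximation (the density and range values are ours, chosen for illustration):

```python
import random

def estimate_e_dmax(rho, R, trials=20000, seed=1):
    """Place cars after the sender at exponential gaps (a Poisson
    process of density rho per metre) and average the position of
    the farthest car still within range R, to compare against the
    memorylessness approximation R - 1/rho."""
    rng = random.Random(seed)
    total, hits = 0.0, 0
    for _ in range(trials):
        pos, farthest, reached = 0.0, 0.0, False
        while True:
            pos += rng.expovariate(rho)
            if pos > R:
                break
            farthest, reached = pos, True
        if reached:
            total += farthest
            hits += 1
    return total / hits
```

With rho = 0.04 cars/m (40 cars/km) and R = 250 m, the estimate lands close to R − 1/ρ = 225 m.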
The total number of nodes that forward the message over the given distance D will equal the number of covered zones, D/E[dmax]. Since there are ρ × D receivers, the ideal ratio of forwarders to receivers would be:

(D/E[dmax]) / (ρ × D) = (D/(R − 1/ρ)) / (ρ × D) = 1/(ρR − 1) .   (2)
But, as we have already said, there can be more than one sender per covered area. This is the case when two consecutive nodes have forwarding delays (t1 and t2) that differ by less than τ:

t2 − t1 < τ → Tmax (1 − d2/R) − Tmax (1 − d1/R) < τ → d1 − d2 < Rτ/Tmax .
So now we know the average number of nodes that actually forward the message per covered zone, 1 + ρRτ/Tmax, and hence the true ratio of forwarders per receiver:

(1 + ρRτ/Tmax) / (ρR − 1) .   (3)
From (1) and (3), the reader can notice that Tmax plays an important role in the performance of the scheme. The lower this value, the shorter the per-hop delay in (1) will be. However, we want to keep it high enough that the ratio in (3) is as low as possible.
We consider that the application should be able to configure Tmax . However,
we can give some hints for choosing a good value.
First, according to the reasoning in the previous section, the minimum value
should be, for the highest ρ expected in the road, Tmax > ρRτ .
Now, we want to recommend a value for this parameter. As the forwarding
ratio also depends on the car density, we can only express it as a function of
54 E. Garcia-Lozano et al.
ρ. If the current density is available through roadside units, each node can rely
on this information with the confidence that all of them will use the same Tmax
value. In (3), Tmax is multiplied by ρR and divided by ρR − 1. If we take this
parameter as the only variable of the equation, different values of ρ will provide
very similar curves at different heights. In case the nodes cannot learn the actual road density, they can use a default value and still expect good results.
For any given ρ, we select the point from which we no longer experience a significant improvement in the forwarding ratio. Subjective as this criterion is, the application could also choose where to put the limit. We will use a specific one for the reasoning that follows.
If we fix ρ in (3), the expression is a function of Tmax of the form y = 1/x. In this basic case, the symmetry axis goes through the point (1, 1). Before this point the function decreases very fast, and from then on very slowly, with lim_{x→∞} 1/x = 0.
The zone of interest in our problem is to the right of the symmetry axis. What is more, we can take this intersection as a reference. From x = 1 to x → ∞, the curve goes down from y = 1 to y = 0. We search, within this zone, for the x at which the curve has dropped by 95%, that is, 1/x95% = 0.05 → x95% = 1/0.05 = 20.
Now, to translate this to our equation, we only take into account the transformations that imply a change in the proportion of the curve:

Tmax = (ρRτ / (ρR − 1)) × x95% .   (4)
As we want a good value for any given ρ, we can simply take the limit: lim_{ρ→∞} Tmax = τ × x95% = 0.018 s.
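Numerically, with τ implied by τ × x95% = 0.018 s (i.e. τ = 0.9 ms), Eq. (4) quickly approaches that default as the density grows. A small check (the ρ and R values are illustrative):

```python
TAU = 0.0009      # s, implied by tau * x95 = 0.018 s
X95 = 1 / 0.05    # = 20, the 95%-drop point of y = 1/x

def recommended_tmax(rho, R):
    """Eq. (4): T_max = rho*R*tau / (rho*R - 1) * x95. For large
    rho*R this tends to tau * x95 = 0.018 s, the suggested
    default."""
    return rho * R * TAU / (rho * R - 1) * X95
```

For ρR = 10 (e.g. 40 cars/km over a 250 m range) the recommendation is 20 ms; for very dense roads it converges to the 18 ms default.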
First, it is important to highlight the difference between city and roadway scenarios when choosing a strategy for a given task. What is best for one may perform poorly in the other. On a roadway, the path is almost linear and there are few intersections. For now, we have focused on this type of scenario.
Next, there are two types of service discovery. In “push” mode, the service providers send their information to everyone. In “pull” mode, it is the node interested in a service that looks for a provider. In a broadcast environment such as a VANET, it makes sense to use the “push” mode to advertise gas stations (or any other generally interesting roadside service).
We assume that vehicles have GPS and 802.11p communication capabilities. They also have an on-board computer that can display a list of the gas stations that best fit the planned route, sorted according to the driver's preferences (such as price or affiliation). Gas stations can send messages by means of their own roadside units. Both are able to run our solution, designed as an application over UDP.
Along with the specific information (brand name, prices, etc.) that the service provider wants to spread, the station location and the target zone must be specified in the message. The provider location is unique and fixed, so it is used
Message format: Provider location (16 B) | Radius (4 B) | Last relay location (16 B) | Data (?)
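A possible encoding of this message in code. The field sizes come from the layout above; interpreting each 16 B location as a latitude/longitude pair of doubles and the radius as an unsigned integer in metres is our own assumption:

```python
import struct

# Network byte order: provider lat/lon (2 x float64 = 16 B),
# radius in metres (uint32 = 4 B), last relay lat/lon (16 B),
# followed by the variable-length advertisement data.
HEADER = struct.Struct("!ddIdd")

def pack_advert(provider, radius_m, last_relay, data: bytes) -> bytes:
    return HEADER.pack(*provider, radius_m, *last_relay) + data

def unpack_advert(msg: bytes):
    plat, plon, radius_m, rlat, rlon = HEADER.unpack_from(msg)
    return (plat, plon), radius_m, (rlat, rlon), msg[HEADER.size:]
```

A forwarding node would overwrite only the last-relay field before re-broadcasting, leaving the provider location and radius untouched.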
4.1 Simulations
We have chosen the same type of scenario as the authors in [9], for its simplicity. We set up a straight 6000 m road on which cars move in both traffic directions. Cars communicate over 802.11p technology thanks to the extensions to ns-2 presented in [2]. We also make use of the Nakagami radio propagation model, suggested in [5] as the best for VANETs. The configuration values for radio propagation on a highway (Highway 101 in the Bay Area of the USA), as well as for 802.11p, have been taken from the documentation released by the same authors.
The values of car density are based on information provided by the Government of the region of Madrid [7]. For non-sparse traffic, these values go from 30 to 40 cars per kilometer. Though the Bay Area and the region of Madrid are different places, we are confident that the resulting combination is representative enough of the general conditions on an average highway anywhere in the world.
Cars are located on the roadway following an exponential distribution, as in [6], according to the mentioned densities. They are also given a random speed in either direction. We have selected the speed limits based on the traffic in the region of Madrid (16.7 – 33.3 m/s).
A fixed node at the left end of the scenario sends a message, which is forwarded according to our flooding scheme and the radius specified in the message. Every simulation batch uses 1000 random scenarios that match the given car density.
4.2 Results
After running the simulations, the first step is to check whether they have reached 100% coverage. A look at Fig. 2(a) confirms that most of the runs meet this requirement. Recall that the equations in Sect. 3 are only valid in connected networks. A line for future work is making the forwarding scheme resilient to disconnections.
In figures 2(b) – 2(d), we have drawn dotted lines to represent the theoretical forwarding ratio and average per-hop delay. The forwarding ratio is calculated with Eq. 3. The average per-hop delay is easily obtained from Eq. 1 by replacing dmin with E[dmax] and adding the fixed previous wait, W.

Fig. 2. Simulation results (error bars) vs. the analytical model (dotted lines): (a) coverage (Tmax = 18 ms); (b) forwarders per receiver (ρ = 40 cars/km); (c) per-hop delay (Tmax = 18 ms); (d) per-hop delay (ρ = 40 cars/km).

We
confirm that the values resulting from the simulations, drawn with error bars, tally with the analytical model. This proves that the suggested default value of Tmax = 18 ms, deduced from the mentioned equations, is appropriate for a moving scenario too. The reason for this is that messages travel much faster than vehicles. Even if the relay had had to wait the maximum per-hop time (i.e., 18 + 2 = 20 ms), the car would have moved less than a meter from the reception of the packet to the time of forwarding it. Though in movement, the scenario is almost static from the point of view of communication speeds.
Finally, it is interesting to look up actual performance values in the graphs.
For example, we see that the average per-hop delay when using the default
Tmax is about 4 ms. So, the message would take 4 × 4000/E[dmax] = 76 ms on
average to reach the edge of the radius in the absence of any other traffic. Also,
when ρ = 40 cars/km and Tmax = 18 ms, the ratio of forwarders per receiver is
less than 18%. That is, in such a scenario with around 160 cars running in the
target zone, only 29 of them had to act as relays.
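The back-of-the-envelope estimate above can be reproduced with a short script. The value of E[dmax] (the expected farthest-relay distance, roughly 210 m here) is an assumption chosen to match the numbers in the text, not a value given in this excerpt.

```python
def end_to_end_delay_ms(per_hop_delay_ms, radius_m, e_dmax_m):
    """Expected time for a message to reach the edge of the target radius:
    per-hop delay multiplied by the expected hop count (radius / E[dmax])."""
    expected_hops = radius_m / e_dmax_m
    return per_hop_delay_ms * expected_hops

# With a 4 ms average per-hop delay, a 4 km radius and an assumed
# E[dmax] of about 210 m, the estimate is close to the 76 ms in the text.
print(round(end_to_end_delay_ms(4, 4000, 210.5), 1))  # prints 76.0
```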
Efficient, Eco-Friendly Push-Advertising in VANETs 57
BatNet: A 6LoWPAN-Based Sensors
and Actuators Network
1 Introduction
Smart Cities aim to improve citizens' lives by integrating new architectural
elements, technological innovations and Information and Communication
Technology (ICT) infrastructures.
Energy consumption in buildings, one of the cornerstones of Smart Cities,
accounts for between 20% and 40% of the total energy used in developed
countries [1]. Gathering information on the energy used by devices, users
and systems within a building helps improve the efficiency of electricity
distribution systems, since it provides accurate information to avoid peaks and
dips in energy demand.
Real-time metering of electric line parameters (such as current, voltage or
real power) provides accurate information on a building's electrical consump-
tion. This information is essential to improve electricity generation and distribution in
a smart grid. Most of the distribution board metering devices proposed so far only
measure electric current, so power consumption is calculated assuming a
constant voltage on the electric line [2]. Other devices also measure the line
voltage, but only on one electric line.
In order to solve these problems and serve as a basis for smart cities, we propose an
energy management system for buildings. The system comprises a management
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 58–65, 2012.
© Springer-Verlag Berlin Heidelberg 2012
platform and a wireless transducer network (WTN). The system description is presented
in Section 2, while the WTN architecture, elements and implementation are described
in Section 3.
2 System Description
In order to provide the building management platform with relevant data (such as
power measurements) and to enable the control of different devices, a wireless
transducer network (BatNet) has been designed and implemented. The main
characteristics of BatNet, designed to avoid the limitations of traditional systems in
terms of cost, interoperability, power consumption and collaboration, are:
There are several processing and communication tasks common to every BatNet
device, regardless of the node type (transducer, coordinator or repeater).
Gathering these functions in a single device simplifies module programming and
makes their implementation easier and more efficient, thanks to the modular
architecture of the devices. The use of available commercial boards such as
Zigduino [6] or Zolertia Z1 [7] was considered first, but BatNet devices have two
hard requirements: reduced size and the possibility of integrating a wide
variety of sensors. To fulfill these requirements, a low-cost and low-power processing
and communication module has been designed and implemented: the BatMote
(see Fig. 2).
The core of the BatMote is the ATmega 128RFA1 from Atmel Corporation. The
ATmega 128RFA1 is a single-chip microcontroller with an integrated IEEE 802.15.4
transceiver, which allows a noticeable reduction in hardware size. In addition, the
ATmega 128RFA1 provides 8 ADC channels (10-bit resolution), allowing data
collection from different sources simultaneously.
Fig. 2. BatMote
BatSense
BatSense is an ambient multi-sensor module that includes temperature, humidity,
illumination and presence sensors in a single device. Sensors for other ambient
parameters (such as CO2 or noise) can be added, since extra pads and pins are still
available.
62 G. del Campo et al.
1 Open Energy Monitor project: www.openenergymonitor.org
2. Every 20 periods of the electric signal (400 ms), the average voltage, current,
active power, apparent power and power factor are calculated.
3. The average values are sent every 15 seconds to a BatMote connected to a PC.
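The averaging step can be illustrated with a minimal sketch (not the BatMeter firmware): given synchronized voltage and current samples spanning a whole number of signal periods, it computes the RMS values, active and apparent power, and the power factor.

```python
import math

def electric_params(v_samples, i_samples):
    """Average electric parameters over a window that spans an integer
    number of signal periods (e.g. 20 periods = 400 ms at 50 Hz)."""
    n = len(v_samples)
    v_rms = math.sqrt(sum(v * v for v in v_samples) / n)
    i_rms = math.sqrt(sum(i * i for i in i_samples) / n)
    p_active = sum(v * i for v, i in zip(v_samples, i_samples)) / n
    s_apparent = v_rms * i_rms
    power_factor = p_active / s_apparent if s_apparent else 0.0
    return v_rms, i_rms, p_active, s_apparent, power_factor

# Example: 20 periods of a 50 Hz line sampled at 2 kHz (800 samples),
# with the current lagging the voltage by 0.2 rad.
ts = [k / 2000.0 for k in range(800)]
v = [325 * math.sin(2 * math.pi * 50 * t) for t in ts]
i = [1.4 * math.sin(2 * math.pi * 50 * t - 0.2) for t in ts]
v_rms, i_rms, p, s, pf = electric_params(v, i)
```

Because the window covers whole periods, the resulting power factor approaches cos(0.2) ≈ 0.98, in line with the 90–99% range reported later for the PC line.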
The whole smart meter consists of a BatMote and a BatMeter and has been installed
in the distribution board at the CeDInt-UPM Energy Efficiency Research Facility
(EERF) (see Fig. 4). This distribution board has separate lines to independently
measure the electrical consumption of three illumination groups, two HVAC systems
and six appliance lines.
Different lines (Illumination, HVAC and Electric Water Heater) and appliances
(Electric Heater and Computer) have been monitored.
To validate the system, energy consumption measurements have been compared to
those obtained from a commercial static meter installed at the distribution board
(model: Orbis Contax 2511SO). Table 1 shows the energy consumption measure-
ments from different appliances during one hour of operation.
As can be seen in Table 1, the developed smart meter provides accurate measure-
ments for all the appliances, with an error below 5%.
Figure 5 shows the measured electric parameters (average voltage and average
current) for a line to which a PC is connected. During the time period represented
(1 hour), the voltage varies between 233 V and 237 V. For long-term monitoring (a
96-hour period), a 228 V-243 V range has been recorded. The measured current
remains almost constant below 300 mA, as the PC handles low-load processes during
the time period.
Consequently, the active and apparent power do not suffer major oscillations in the
current measurements obtained. The power factor oscillates between 90% and 99%,
which matches the assumed power factor of a PC with an active power factor
corrector power supply (95%). The measured energy consumption of 60 Wh
corresponds to the expected value for a 1.7 GHz Pentium IV computer.
Fig. 5. Voltage and Current measurements and Power (active and apparent), Power Factor and
energy calculations for a PC
4 Conclusions
processing module (BatMote) allow an easy integration of new sensors and actuators
and provide several advantages in terms of design, programming and installation.
Finally, results of the real-time power consumption measurements made by the
smart meter (composed of the power meter module, BatMeter, and the BatMote) are
presented. The smart meter calculates power consumption along with other electric
parameters (power factor, active power and apparent power) by measuring both
current and voltage in real time.
References
1. Pérez-Lombard, L., Ortiz, J., Pout, C.: A Review on buildings energy consumption
information. Energy and Buildings. Elsevier (March 2007)
2. Ploennigs, J., Ryssel, U., Kabitzsch, K.: Performance Analysis of the EnOcean Wireless
Sensor Network Protocol. In: Proc. 2010 IEEE Conference on Emerging Technologies and
Factory Automation (ETFA), pp. 1–9 (2010), doi:10.1109/ETFA.2010.5641313
3. Lu, C.-W., Li, S.-C., Wu, Q.: Interconnecting ZigBee and 6LoWPAN wireless sensor net-
works for smart grid applications. In: Proc. 2011 Fifth International Conference on Sensing
Technology (ICST), pp. 267–272 (2011), doi:10.1109/ICSensT.2011.6136979
4. Constrained Application Protocol (CoAP) draft,
https://datatracker.ietf.org/doc/draft-ietf-core-coap/
5. Zigduino, http://www.logos-electro.com/zigduino
6. Zolertia Z1, http://www.zolertia.com/products/z1
7. Oikonomou, G., Phillips, I.: Experiences from porting the Contiki operating system to a
popular hardware platform. In: 2011 International Conference on Distributed Computing
in Sensor Systems and Workshops (DCOSS), pp. 1–6 (2011), doi:10.1109/DCOSS.
2011.5982222
A Classable Indexing of Data Condensed Semantically
from Physically Massive Data Out of Sensor
Networks on the Rove
MinHwan Ok
1 Introduction
Smart devices are evolving to make human life more comfortable. The smart
phone is becoming a smart assistant thanks to its computation capability, and the
vehicle is being equipped with computation capability to become a smart car in the
near future. For the smart car, driving safety is one important aspect of this
smartness. As cars share the road while driving, a faulty car is liable to cause a
traffic accident. Such a car should be found early and assisted off the road. Neither
notification of a problem by the driver nor detection of most sorts of faultiness
within the car is an appropriate solution: notification by the driver might come too
late, and detecting most faults would be too heavy and expensive a functionality for
one car. In this work, the sensor data generated by a vehicle are transmitted to a
system of distributed databases for malfunction detection and health monitoring of
the vehicle.
As the data are aggregated in the system, the reduction of the enormous amount of data is
performed by two methods that aggregate the data. The topmost server becomes the
head index to the sensor data of a faulty car.
The in-vehicle network delivers data on driving status for active safety functions
such as driver assistance and accident avoidance. Various vehicle states are checked
by the vehicle dynamics management system, adaptive cruise control, and lane
keeping. To avoid a crash, electronic stability control, warning-and-brake assist,
and/or automatic braking/steering are employed [1]. This smart functionality focuses
on safe driving on the side of an individual vehicle. Since the road is a resource
shared by multiple vehicles, another smart functionality is required that focuses on
safety of the driving group on the side of a multitude of vehicles.
A platform named BeTelGeuse was proposed for gathering and processing situational
data [2]. BeTelGeuse is an extensible data collection platform for mobile devices that
automatically infers higher-level context from sensor data, and it is able to provide
context data from mobile phones. This work is similar to BeTelGeuse in its system
organization for the Internet of Things, but differs in its perspective toward Cyber-
Physical Systems. GAMPS [3] was proposed to reduce the amount of data in such
systems. It is a general framework for data reduction such that the data can be
efficiently reconstructed within a given maximum error. While GAMPS aims at
systems like BeTelGeuse and the system proposed in this work, in the proposed
system the complete required data are delivered between databases, and the
polynomial time approximation schemes of GAMPS are not adequate for timely
processing of the sensor data.
The vehicle traverses a number of areas to arrive at its destination. Monitoring and
detecting an abnormal state of the vehicle could be achieved by tracking the
vehicles; however, privacy would be easily violated. Thus, in this work, the servers
covering the areas gather sensor data from individual vehicles, identifying each one
only by an ID registered to the system. The traversal is not traced unless someone
collects the recorded data from all the area servers. The system organization is
shown in Fig. 1.
Once the vehicle, in its time slot, transmits a dataset from a set of sensors in a form
reduced by data abbreviation [4], the area server receives the abbreviated data,
restores the complete data and records the dataset. The area server then examines
the dataset for values outside the boundary, which indicate that a sensor has
detected an abnormal state in the vehicle. The complete data are condensed
semantically according to a predefined classification [5], and the classificatory data,
in a reduced amount, are transmitted, together with the data indicating the abnormal
state if found, to a server covering the region constituted by areas. The region server
aggregates the condensed data of the vehicle from the area servers. The procedure
when an abnormal state is found is described in detail in the next section.
Fig. 1. The system organization with the process and flow of the sensor data originated from
the vehicle. The rightmost console can access the data at either server of its coverage.
Data generated at a vehicle are transmitted to the server of the area, and data
abbreviation reduces the amount to be transmitted. The internal temperature of a
tyre, for instance, is digitized into Temperature data. The internal temperature is
checked several times within one collection period in the vehicle, and thus the
values of the Temperature data change discretely. Before the next data collection,
the values are split into two groups: one group of displacements above the average
and the other group of displacements below the average. Merely the differences from
the previous displacements are recorded in either group, split by the average of
Temperature, together with the average of Temperature in that period. This
technique reduces the data in bits and is called Scale among the abbreviation
techniques [4]. The condensed data are restored into the complete data, the digitized
Temperature data, at an area server.
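A toy sketch of this abbreviation and of its restoration at the area server follows. The group layout and the delta encoding are our interpretation of the Scale technique in [4], not the authors' exact format.

```python
def scale_encode(values):
    """Store the period average and, per sample, the difference from the
    previous displacement, kept in separate above/below-average groups."""
    avg = sum(values) / len(values)
    above, below = [], []          # lists of (sample index, delta) pairs
    prev_above = prev_below = 0.0
    for idx, v in enumerate(values):
        disp = v - avg             # displacement from the period average
        if disp >= 0:
            above.append((idx, disp - prev_above)); prev_above = disp
        else:
            below.append((idx, disp - prev_below)); prev_below = disp
    return avg, above, below

def scale_decode(avg, above, below):
    """Restore the complete digitized values from the abbreviated form."""
    out = {}
    disp = 0.0
    for idx, delta in above:
        disp += delta; out[idx] = avg + disp
    disp = 0.0
    for idx, delta in below:
        disp += delta; out[idx] = avg + disp
    return [out[i] for i in sorted(out)]
```

A round trip (encode then decode) recovers the original discretized values, which is the restoration performed at the area server.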
From the area servers to the nation server, the complete data are condensed to be
aggregated in the server at a higher level. Fig. 2 shows the hierarchy of servers
managing (condensed) datasets. The essence of semantic condensing is to change data
with wider bits into data with narrower bits, exploiting the semantic meaning of the
value. Suppose the original Temperature data range from 0 to 250 (8 bits) in an area
server of the congregational level. To be aggregated in a region server of the regional
level, the data are changed into ones of {Under the Extent, Near the Low-
boundary, Low, Below the Average, Above the Average, High, Near
the High-boundary, Over the Extent} (3 bits). If the vehicle of these data
is found to be noticeable, the condensed data are further condensed into ones of
{Normal, Abnormal} (1 bit). This condensed dataset is aggregated in the nation
server of the central level, together with an appended complete datum (8 bits) relayed
from the area server to record the abnormal state with the corresponding original
datum. A client could trace a noticeable vehicle from the central level, as the servers
of the central level and the regional level maintain indexes to the complete data at
the congregational level.
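The 8-bit to 3-bit to 1-bit condensing can be sketched as follows. The class boundaries (equal-width bands inside the extent) are an assumption on our part, since the text only names the classes:

```python
CLASSES_3BIT = [
    "Under the Extent", "Near the Low-boundary", "Low", "Below the Average",
    "Above the Average", "High", "Near the High-boundary", "Over the Extent",
]

def condense_3bit(value, low=0, high=250):
    """Map an 8-bit reading to one of 8 semantic classes (3 bits).
    Readings outside [low, high] map to the two 'Extent' classes; inside,
    six equal-width bands are assumed."""
    if value < low:
        return 0
    if value > high:
        return 7
    band = (high - low) / 6
    return 1 + min(5, int((value - low) / band))

def condense_1bit(class_3bit):
    """Further condense to {Normal, Abnormal} (1 bit); only readings
    outside the extent are considered abnormal here (assumed rule)."""
    return 1 if class_3bit in (0, 7) else 0
```

Each level of the hierarchy thus stores a narrower code that still acts as an index back to the complete 8-bit data at the congregational level.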
Fig. 2. A hierarchical representation of sensor datasets; the number of bits is reduced along
the hierarchy, and data in fewer bits are indexes to the sensor dataset at the congregational
level, at which the complete data are stored
Fig. 3. The valid extent of the other property changes in accordance with the change of one
property
A direct relation between two properties is one-directional. Fig. 3 illustrates the change
of the valid extent of the other property according to one property. While the value of
one property is d, the valid extent of the other property is predefined as a range from
LB to HB. When the value of one property changes to d', the valid extent of the
other property changes to a range from LB' to HB', as predefined. In the example of
the brake pedal, a brake pedal position sensor sends a couple of signals to the Brake
System Control Module (BSCM), which determines the brake force requested. The
hydraulic pressure to actuate the brakes is generated by an electric pump internal to
the BSCM. The hydraulic pressure should be a value within a reasonable range
corresponding to the sensor signals. If the electronic braking system fails, however,
the mechanical system performs braking instead; this backup to the electronic
system is activated automatically within the vehicle.
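The valid-extent check can be sketched as below. The lookup table mapping one property's value to the other's predefined range [LB, HB] is hypothetical; real extents would come from the vehicle's specification.

```python
# Hypothetical predefined extents: brake-pedal position (%) -> valid
# hydraulic-pressure range (LB, HB) in bar; illustrative values only.
PRESSURE_EXTENT = {0: (0.0, 5.0), 50: (80.0, 120.0), 100: (180.0, 240.0)}

def in_valid_extent(pedal_pos, pressure, table=PRESSURE_EXTENT):
    """True if the measured pressure lies in the predefined valid extent
    for the tabulated pedal position nearest to the measured one."""
    nearest = min(table, key=lambda k: abs(k - pedal_pos))
    lb, hb = table[nearest]
    return lb <= pressure <= hb
```

A reading that falls outside the extent for the current value of the paired property is exactly the kind of datum the area server flags as indicating an abnormal state.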
(b) The recorded data are collected from the area servers the vehicle passed by.
Fig. 4. The region server collects data to decide whether the noticeable vehicle is a faulty car
Data negating their valid extent are appended to the condensed data transmitted
from the area servers. These appended data are named the noticeable data. The
region server that receives the noticeable data marks the vehicle as noticeable and
starts collecting the data of those properties. As route tracking is not conducted, for
privacy reasons, the region server requests the recorded data of the vehicle from all
the area servers under the region server, as depicted in Fig. 4. The collected data are
reported to the monitoring person, who responds to this report, e.g., informs the cars
on the same road about the noticeable vehicle. The region server transmits the
further condensed dataset with the noticeable data to the nation server.
As a faulty car is liable to become the cause of a possible traffic accident, the
objective of the system is distinctive: keeping cars in normal states safe by taking
the faulty car off the road, and then letting the faulty car receive professional
support. Although not explained in this work, the system could also trace a car that
is probably noticeable. For RPM in the example, if the classificatory value Near
the High-boundary is continuous for some duration, the vehicle could be sorted
into 'probably noticeable'. Yet much work remains to be done on this.
There are pairs of properties for which it is complicated to clarify direct (or indirect)
relations. For the relation of engine RPM to the velocity of the vehicle, the velocity
increases with RPM in proportion to the level of the automatic transmission. The
level of the automatic transmission is changed according to the velocity. Not only
does the level of the automatic transmission intervene, however; circumstantial
conditions such as the slope of the road also intervene between RPM and the
velocity. For this reason, when the circumstantial condition is also relevant, as with
RPM and the velocity, the area server compares the pair of properties with data
previously recorded from another vehicle of the identical vehicle model: the same
pair from another vehicle that passed this location on that road recently, with a
similar RPM. Since the comparison is meant to find pairs discordant in their values,
historically averaged values could be used to select a valid extent from the
candidates. This would be our future work.
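The comparison against other vehicles of the same model could look like the following sketch; the tolerance values and the use of a historical average are assumptions on our part, in line with the future work described above:

```python
def pair_discordant(rpm, velocity, history, rpm_tol=200.0, vel_tol=10.0):
    """Compare an (RPM, velocity) pair against pairs recently recorded at
    the same location from other vehicles of the identical model. The pair
    is discordant if the velocity deviates too far from the historical
    average of velocities seen at a similar RPM."""
    similar = [v for r, v in history if abs(r - rpm) <= rpm_tol]
    if not similar:
        return False  # no comparable history; cannot judge
    avg_velocity = sum(similar) / len(similar)
    return abs(velocity - avg_velocity) > vel_tol
```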
References
1. Aoyama, M.: Computing for the Next-Generation Automobile. IEEE Computer 45(6), 32–
37 (2012)
2. Kukkonen, J., Lagerspetz, E., Nurmi, P., Andersson, M.: BeTelGeuse: A Platform
for Gathering and Processing Situational Data. IEEE Pervasive Computing 8(2), 49–56
(2009)
3. Gandhi, S., Nath, S., Suri, S., Liu, J.: GAMPS: compressing multi sensor data by grouping
and amplitude scaling. In: 35th SIGMOD International Conference on Management of Data,
pp. 771–784. ACM, New York (2009)
4. Ok, M.: A Hierarchical Representation for Recording Semantically Condensed Data from
Physically Massive Data Out of Sensor Networks Geographically Dispersed. In: Meersman,
R., Herrero, P., Dillon, T. (eds.) OTM 2009 Workshops. LNCS, vol. 5872, pp. 69–76.
Springer, Heidelberg (2009)
5. Ok, M.: An Abbreviate Representation for Semantically Indexing of Physically Massive
Data Out of Sensor Networks on the Rove. In: Chiu, D.K.W., Bellatreche, L., Sasaki, H.,
Leung, H.-f., Cheung, S.-C., Hu, H., Shao, J. (eds.) WISE Workshops 2010. LNCS,
vol. 6724, pp. 343–350. Springer, Heidelberg (2011)
A Time-Triggered Middleware Architecture
for Ubiquitous Cyber Physical System Applications
1 Introduction
The term “Cyber-Physical Systems” (CPS) was coined around 2006 by researchers
from different disciplines, predominantly real-time systems, hybrid systems and
control systems, to describe the increasingly important area at the interface of the
cyber and the physical worlds [2]. More specifically, CPS are integrations of compu-
tation and physical processes [1]. One of the most relevant characteristics of this kind
of system is that, since computers must interface with physical processes, the time
at which computations are performed is relevant and concurrency is intrinsic. More-
over, most CPS require the integration of different technologies from the computing,
communication and control fields, and involve several relevant research domains such
as networked control, hybrid systems, real-time computing, real-time networking,
wireless sensor networks, security and model-driven development [2].
Regarding the application domains, CPS are in demand in different sectors includ-
ing transportation systems, process control, factory automation, healthcare and
electrical power grids. These application domains impose on CPS special requirements that
74 A. Noguero, I. Calvo, and L. Almeida
have been analyzed by [1] and [3]. Some of them are the close integration of hetero-
geneous and limited embedded platforms, dealing with time, reconfiguration and
adaptation needs, fault tolerance and robustness.
Most CPS form ubiquitous applications built as a combination of several devices
connected through different wired and wireless network technologies. These devices
include embedded platforms, real-time systems, sensors and actuators as well as net-
work protocol implementations. Consequently, CPS have benefited from the continuous
advances and convergence in these fields. Nevertheless, customers have demanded new
functionalities as technology evolved, requiring more challenging characteristics. In
this context, reducing time to market and easing the development process becomes
crucial to ensure competitiveness.
CPS make intensive use of communication networks, with IP technologies being
increasingly used, as described in [19]. In this scenario, the use of middleware
solutions, such as CORBA [13], DDS [16], OPC [15] or Web Services, has been suc-
cessfully adopted to reduce the communication complexity. However, most of these
middleware solutions do not cope with certain issues, such as the synchronization of
the tasks of the system or mixing several types of traffic with different QoS parame-
ters. In addition, these specifications lack mechanisms to deal with non-functional
requirements and do not provide models for the system resources (e.g. CPU, memory
or battery), which is a key issue in CPS. These facts recommend the definition of new
layers that provide high-level services to CPS applications [19]. This work focuses on
CPS that require (1) the timely execution of their activities in an accurate way and (2)
flexibility to reconfigure the systems dynamically at run-time. Frequently, these sys-
tems require synchronous or cascaded measurement and actuation across several
devices, such as ambient sensors, cameras and robots. Examples of these systems may
be found in different domains that range from industrial applications to home automa-
tion. In some cases real-time requirements apply not only to the application tasks
themselves, but also to the messages exchanged among them, requiring some kind of
network management. In addition, the management of other resources, such as
memory, CPU usage or battery, may become a daunting task that increases the time
to market of the final application without providing any added value in terms of
functionality.
More specifically, the authors propose a time-triggered middleware architecture
that allows developers to focus on the functionality of the applications, liberating
programmers from synchronization and resource management issues. This architecture,
which is capable of addressing dynamic reconfiguration of the applications at run-
time, has been specially designed for soft real-time applications in a variety of appli-
cation domains, from industrial applications to home automation. Different types of
data traffic requiring different priorities may be involved, such as sensor data, video
streams and alarm messages. Finally, the proposed middleware architecture is also
capable of scheduling the execution of a set of replicated tasks and optimizing the use
of the resources of the distributed system in order to adapt to changes in functionality
and cope with temporary faults at run-time.
The current work follows an approach that is different from other works in the
field. In [4] and [5] a control loop oriented middleware architecture is proposed to
2 Architecture Description
Applications not only include the behavioural description of the tasks (i.e. the
directed graph) that compose the application, but also the time characteristics of the
applications and tasks (e.g. period and deadline). Furthermore, tasks that generate
and/or consume data must declare the data topics they exchange to allow the
architecture to allocate the needed resources. Fig. 1 shows three example application
descriptions, along with the expected execution timeline. It is important to note that,
since time is split into Elementary Cycles (ECs), the length of this time window
defines the minimum granularity of an FTT-MA system.
In FTT-MA, synchronization is achieved through the collaboration of a set of
services, organized in three layers, as depicted in Fig. 2.
A preliminary version of FTT-MA was proposed in [17] in the scope of real-time
Service-Oriented Architecture (SOA) applications, to orchestrate the invocation of
services implemented as CORBA methods. However, the architecture has since been
enriched with new components to manage the resources of the devices and with an
integrated distribution service for distributing different types of data at run-time, as
follows.
The proposed architecture is composed of three layers: (1) the System Manage-
ment Layer, which elaborates the execution plan that the FTT Dispatcher enforces;
(2) the FTT Layer, which triggers the operations of the distributed system and
manages access to the data distribution channel; and (3) the Application Services
Layer, which implements the services that provide the functionality of the system
(e.g. as CORBA methods).
The top layer of the architecture implements the Application Management Service
(AMS), the System Monitoring Service (SMS) and the Application Scheduling Service
(ASS). The AMS provides an interface (see Fig. 3) to interact with the middleware
architecture that may be used at run-time for reconfiguration purposes. The
functionalities provided by the AMS include (1) loading/unloading applications,
(2) monitoring the status of the system and (3) modifying the configuration
parameters of the FTT-MA.
Before loading any application or accepting any modification, the AMS evaluates
the use of the resources needed and notifies the solicitor whether it is feasible. Further
details on this resource-based admission test can be found in [9]. The SMS is devoted
to tracking the status of the resources of the distributed system. It gathers physical
information from the nodes; more specifically, CPU usage, available memory and
battery status data are collected. Additionally, the SMS monitors the gathered data
and detects unavailable nodes, triggering corrective actions. Finally, the ASS executes
the allocation and scheduling algorithms on the loaded application descriptions. First,
the allocation policy selects, from all the available task instances in the distributed
system, those that will be executed to fulfill the functional requirements of the
applications. Then, the scheduling algorithm calculates the priority of each task on
each distributed node. It is important to note that, for scheduling purposes, each node
is taken into account individually and that a fixed-priority-capable OS is expected to
run on each node. The results of the allocation and scheduling algorithms are placed
in a table containing the execution plan.
The synchronization mechanisms are implemented at the middle layer of
FTT-MA. The synchronization problem has been separated into execution and com-
munication. On the one hand, the synchronization of execution is enforced by the
FTT-Dispatcher and the FTT-Activators. The FTT-Dispatcher reads the task activa-
tion table created by the scheduling service and sends activation messages to
FTT-Activators over the network. Activation messages are sent using multicast
communications. Each activation message contains the information regarding which
tasks must start their execution during the current EC along with its local priority at
every node. FTT-Activators process activation messages and forward the activation
requests to the tasks when needed. On the other hand, communications are synchro-
nized through the FTT Event Channel and the Federated Event Channels in the
distributed nodes. The main goal of the FTT Event Channel is to ensure that data
communications among distributed tasks do not interfere with the activation
messages, since such interference would endanger the synchronism of the entire
system. To send or receive a message, a task connects to the Federated Event Channel
deployed in its node. Each federated channel queues all the messages to be sent until
the central event channel instructs it to send the data messages [10].
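The per-EC activation step can be illustrated with a minimal sketch (the names and plan layout are assumptions, not the FTT-MA implementation): for each elementary cycle, the dispatcher looks up which tasks start in that EC, together with their local priorities, and would multicast that list as the activation message.

```python
def activations_for_ec(ec, plan):
    """plan maps task name -> (period_in_ecs, local_priority). A task is
    activated in every EC that is a multiple of its period; the returned
    list is what the dispatcher would multicast for this EC."""
    return sorted(
        (task, prio) for task, (period, prio) in plan.items()
        if ec % period == 0
    )

# Hypothetical execution plan: periods expressed in elementary cycles.
plan = {"sensor": (1, 2), "camera": (2, 1), "robot": (4, 0)}
```

For instance, with this plan the sensor task is activated every EC, the camera every second EC, and the robot every fourth EC, each tagged with the priority the scheduling service assigned to it on its node.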
It must be noted that the FTT Event Channel is only effective in deployments
where the communication medium is shared among the distributed nodes. This is the
case with hub-based Ethernet or wireless communications.
Finally, at the bottom of the architecture, task instances are deployed on the distributed nodes. These tasks are linked to the end-points of the middleware and must
provide an interface to activate them.
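As a rough illustration of the execution-synchronization scheme described above, the following sketch models a dispatcher that sends one activation message per elementary cycle (EC) and per-node activators that forward only their local activations. All class names, fields, and the message format are illustrative assumptions, not the actual FTT-CORBA API.

```python
class Dispatcher:
    """Sends one activation message per elementary cycle (EC)."""

    def __init__(self, activation_table, activators):
        # activation_table: EC index -> [(task_id, node, priority), ...]
        self.table = activation_table
        self.activators = activators   # stand-in for multicast recipients

    def run_ec(self, ec):
        msg = self.table.get(ec, [])
        for act in self.activators:    # models the multicast send
            act.on_activation(ec, msg)


class Activator:
    """Per-node agent that forwards activations to its local tasks."""

    def __init__(self, node):
        self.node = node
        self.log = []                  # (ec, task_id, priority) activations

    def on_activation(self, ec, msg):
        for task_id, node, prio in msg:
            if node == self.node:      # only activate tasks on this node
                self.log.append((ec, task_id, prio))
```

Note that every activator receives the full plan for the EC and filters out the entries of other nodes, mirroring the single multicast activation message of the text.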
3 Implementation Considerations
To support this process, a tool called FTT-Modeler has been developed. It includes an editor for the design of
directed graph models of the applications, an editor for distributed systems modeling, and a code generation engine to support the task implementation process [18]. Additionally, FTT-Modeler includes an FTT-MA simulator that enables designers to predict how their applications will behave at runtime. FTT-Modeler has been implemented
using Model Driven Engineering (MDE) technology, including metamodels, editors,
transformations and constraints.
The tool implements a client for the FTT-CORBA implementation of the FTT-MA
AMS. The client enables the modification of the configuration parameters of the distributed system as well as the storage of the status data of the distributed nodes for
analysis purposes. FTT-Modeler is also part of the FTT-CORBA project.
References
1. Lee, E.A.: Cyber Physical Systems: Design Challenges. In: 11th IEEE Symp. Object
Oriented Real-Time Distributed Computing (ISORC 2008), pp. 363–369 (2008)
2. Kim, K.D., Kumar, P.R.: Cyber-Physical Systems: A Perspective at the Centennial. Proc.
of the IEEE 100, 1287–1308 (2012)
3. Shi, J., Wan, J., Yan, H., Suo, H.: A Survey of Cyber-Physical Systems. In: Intl. Conf. on
Wireless Communications and Signal Processing, WCSP (2011)
4. Wang, X., Lu, C., Gill, C.: FCS/nORB: A feedback control real-time scheduling service
for embedded ORB middleware. In: Microprocessors and Microsystems (June 2008)
5. Kalogeraki, V., Melliar-Smith, P.M., Moser, L.E., Drougas, Y.: Resource management us-
ing multiple feedback loops in soft real-time distributed object systems. The Journal of
Systems and Software 81, 1144–1162 (2008)
6. Noguero, A., Calvo, I.: A Framework with Proactive Nodes for Scheduling and Optimiz-
ing Distributed Embedded Systems. In: Aagesen, F.A., Knapskog, S.J. (eds.) EUNICE
2010. LNCS, vol. 6164, pp. 236–245. Springer, Heidelberg (2010)
7. Zhang, Y., Gill, C., Lu, C.: Configurable Middleware for Distributed Real-Time Systems
with Aperiodic and Periodic Tasks. IEEE Transactions on Parallel and Distributed Systems
(April 17, 2009)
8. Losert, T.: Extending CORBA for hard real-time systems, Ph.D. Thesis. Vienna University of Technology (2005)
9. Noguero, A., Calvo, I., Almeida, L., Gangoiti, U.: A Model for System Resources in Flex-
ible Time-Triggered Middleware Architectures. In: 18th EUNICE Conference on Informa-
tion and Communications Technologies, August 29-31 (2012)
10. Noguero, A., Calvo, I.: A Time-Triggered Data Distribution Service for FTT-CORBA. In:
Proc. of the Emerging Technologies and Factory Automation (ETFA 2012) (September 2012)
11. FTT-CORBA project webpage,
http://sourceforge.net/projects/fttcorba/
12. Pedreiras, P., Gai, P., Almeida, L., Buttazzo, G.C.: FTT-Ethernet: A Flexible Real-Time
Communication Protocol That Supports Dynamic QoS Management on Ethernet-Based
Systems. IEEE Transactions on Industrial Informatics 1 (2005)
13. OMG, Object Management Group, Common Object Request Broker Architecture: Core
Specification, Version 3.0.3 (March 2004)
14. Henning, M.: A new approach to object-oriented middleware. IEEE Internet Compu-
ting 8(1), 66–75 (2004)
15. OPC foundation, http://www.opcfoundation.org/
16. OMG, Object Management Group, Data Distribution Service for real-time systems, ver-
sion 1.2 (2007)
17. Calvo, I., Almeida, L., Perez, F., Noguero, A., Marcos, M.: Supporting a Reconfigurable
Real-Time Service Oriented Middleware with FTT-CORBA. In: 15th IEEE International
Conf. on Emerging Technologies and Factory Automation, ETFA 2010, Bilbao, Spain
(September 2010)
18. Noguero, A., Calvo, I.: FTT-Modeler: A support tool for FTT-CORBA. In: 7th Iberian
Conference on Information Systems and Technologies CISTI 2012 (2012)
19. Koubâa, A., Andersson, B.: A Vision of Cyber-Physical Internet. In: Proc. of the Workshop
of Real-Time Networks (RTN 2009), Satellite Workshop to (ECRTS 2009) (July 2009)
20. Pedreiras, P., Gai, P., Almeida, L., Buttazzo, G.C.: FTT-Ethernet: a flexible real-time
communication protocol that supports dynamic QoS management on Ethernet-based sys-
tems. IEEE Transactions on Industrial Informatics 1(3), 162–172 (2005)
21. Amoretti, M., Reggiani, M.: Architectural paradigms for robotics applications. Advanced
Engineering Informatics 24(1), 4–13 (2010)
22. Noguero, A., Calvo, I., Almeida, L.: The design of an Orchestrator for a middleware archi-
tecture based on FTT-CORBA. In: 6th Iberian Conference on Information Systems and
Technologies, CISTI 2011 (2011)
23. OCI, Object Computing, Inc. TAO Developer’s Guide, version 2.0a (2011)
A Message Omission Failure Approach to Detect
the Quality of Links in WSN
1 Introduction
Wireless Sensor Networks (WSN) are deployed in the real world, even in hostile environments, and work unattended. As sensor nodes are small and are powered with very limited batteries, it is common that battery depletion results in the crash of nodes. Therefore, WSN need a fault management system that detects sensor node faults and reacts by reconfiguring the system to face the detected failures and ensure the network quality of service [1]. Another important issue in WSN is the probability of message losses, which can be high depending on the environment the WSN operates in. We focus on failure detection from a network communication perspective, considering that nodes might crash and therefore stop sending and receiving messages, or just omit some messages while sending or while receiving. In this regard, messages omitted by a node and messages omitted by the network are indistinguishable. However, there are few failure detector proposals that consider message losses. In [2] a failure detector for a synchronous system is proposed that provides two lists: a list of suspected nodes and a list of mistakes made in previous suspicions due to message losses. In [3] a topology discovery failure detector mechanism is proposed that classifies network links as stable, unstable, or disconnected depending on the messages lost in the link. Both failure detection mechanisms [2,3] work on demand, and they do not
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 81–84, 2012.
c Springer-Verlag Berlin Heidelberg 2012
82 U. Burgos et al.
2 System Model
links, i.e., messages can be lost temporarily or permanently during their transmission over a link. Henceforth, nodes that have permanent message omissions in all their outgoing links will be considered crashed nodes. However, we constrain the message loss pattern of some links to follow a variant of the ADD (Average Delayed/Dropped) [6] pattern that we call AD (Average Dropped). An AD link allows an infinite number of messages to be lost, but guarantees that some subset of the messages sent on it will be received and that such messages are not too sparsely distributed in time. More precisely, there is a constant B such that for all intervals of time in which a node p sends at least B messages to another node q, at least one of these messages is received by q.
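The AD property can be expressed as a simple predicate over a link trace. The sketch below is purely illustrative; the trace representation (one boolean per sent message, True if it was received) is an assumption, not part of the model in the paper.

```python
def satisfies_ad(trace, B):
    """True if no run of B consecutive messages sent on the link is
    entirely lost, i.e., every window of B sends contains a reception."""
    lost_run = 0
    for received in trace:
        lost_run = 0 if received else lost_run + 1
        if lost_run >= B:           # B consecutive sends all dropped
            return False
    return True
```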
We assume that all the nodes of the system that do not crash and communicate through AD links are well-connected: they are able to receive and send messages along a path of AD links from every non-crashed node. This assumption imposes a maximum number of crash failures and permanent omission failures in the system, in such a way that the system does not get partitioned.
For this system a failure detector definition close to that of an Eventually
Perfect failure detector for the crash model, denoted ♦P [7], can be given. This
failure detector satisfies the following completeness and accuracy properties:
– Strong Completeness: eventually, every not well-connected node will be permanently considered as not well-connected by every well-connected node.
– Eventual Strong Accuracy: eventually, every well-connected node will be permanently considered as well-connected by every well-connected node.
3 Proposal
Based on the failure detector algorithm of [5], we propose a failure detector that uses periodic heartbeat messages and timers to detect failures. A sensor node uses heartbeat messages to test link quality with its neighbours. Each bidirectional link can be in one of three possible states: Active, Paused, and Blocked. Each node monitors all of its neighbours in such a way that, if a node does not receive a message from a neighbour within the expected time, the corresponding Active link becomes Blocked. If this transition corresponds to a false fault suspicion, and consequently a message later arrives on that link, the link becomes Active again. An Active link can be paused in order to reduce the communication cost and build a spanning tree. Reciprocally, a Paused link can be activated in order to increase the node connectivity when a fault occurs in another link. Observe that the algorithm works as a connectivity detector, where a node that has crashed is seen as a node that no longer sends messages and consequently has all its links Blocked.
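A minimal sketch of the per-link state machine just described; class and method names are assumptions for illustration, not the authors' implementation.

```python
class Link:
    """A bidirectional link in one of three states: Active, Paused, Blocked."""

    def __init__(self):
        self.state = "Active"
        self.omissions = 0            # time-outs observed on this link

    def timeout(self):
        # No heartbeat arrived in the expected time: count the omission
        # and suspect the neighbour if the link was Active.
        self.omissions += 1
        if self.state == "Active":
            self.state = "Blocked"

    def message(self):
        # A message arrived: a Blocked link was a false suspicion.
        if self.state == "Blocked":
            self.state = "Active"

    def pause(self):
        # Taken out of the spanning tree to reduce communication cost.
        if self.state == "Active":
            self.state = "Paused"

    def resume(self):
        # Re-activated to restore connectivity after a fault elsewhere.
        if self.state == "Paused":
            self.state = "Active"
```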
The algorithm builds a spanning tree by selecting the minimum set of non-Blocked links needed. To select the links with the best communication quality, each link has an associated counter that indicates the number of messages omitted on that link. This counter is incremented every time a time-out occurs on the link. Observe that links are assumed to be synchronous and, therefore, a time-out can only occur if a message is lost on the link. Observe also that in AD links the
number of time-outs is finite and the value of the associated counter will eventually stop increasing. Using these counter values, a minimum spanning tree can be obtained using, for example, Kruskal's algorithm [8]. This algorithm is applied using two matrices: the first matrix provides global connectivity information indicating the state of the links, and the second matrix stores the weights of the links.
The spanning tree will be used to route application-level messages following Active links. Whenever an Active link becomes Blocked, the spanning tree will be reconfigured and the routing will be adapted to follow a new path.
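The spanning-tree step can be sketched as follows, assuming the two matrices are given as a boolean state matrix (True for non-Blocked links) and an integer weight matrix of omission counters. This is a generic Kruskal implementation over those inputs, not the authors' code.

```python
def kruskal(state, weight):
    """Minimum spanning forest over the non-Blocked links.

    state[i][j]  -- True if the link between nodes i and j is not Blocked
    weight[i][j] -- omission counter (time-outs) of that link
    """
    n = len(state)
    parent = list(range(n))          # union-find structure

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    # Consider only usable links, lowest omission count first.
    edges = sorted((weight[i][j], i, j)
                   for i in range(n) for j in range(i + 1, n)
                   if state[i][j])
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                 # skip edges that would form a cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree
```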
References
1. Yu, M., Mokhtar, H., Merabti, M.: Fault management in wireless sensor networks.
IEEE Wireless Communications 14(6), 13–19 (2007)
2. Benhamida, F., Challal, Y., Koudil, M.: Efficient adaptive failure detection for
query/response based wireless sensor networks. In: 2011 IFIP Wireless Days (WD),
pp. 1–6 (October 2011)
3. Chandra, R., Fetzer, C., Högstedt, K.: Adaptive topology discovery in hybrid wire-
less networks. In: Proceedings of Informatics, 1st International Conference on Ad-
hoc Networks and Wireless, pp. 1–16 (2002)
4. Tai, A.T., Tso, K.S., Sanders, W.H.: Cluster-based failure detection service for large-
scale ad hoc wireless network applications. In: Proceedings of the 2004 International
Conference on Dependable Systems and Networks, DSN 2004, pp. 805–814. IEEE
Computer Society, Washington, DC (2004)
5. Soraluze, I., Cortiñas, R., Lafuente, A., Larrea, M., Freiling, F.C.: Communication-
efficient failure detection and consensus in omission environments. Inf. Process.
Lett. 111(6), 262–268 (2011)
6. Sastry, S., Pike, S.M.: Eventually Perfect Failure Detectors Using ADD Channels.
In: Stojmenovic, I., Thulasiram, R.K., Yang, L.T., Jia, W., Guo, M., de Mello, R.F.
(eds.) ISPA 2007. LNCS, vol. 4742, pp. 483–496. Springer, Heidelberg (2007)
7. Chandra, T.D., Toueg, S.: Unreliable failure detectors for reliable distributed sys-
tems. Journal of the ACM 43(2), 225–267 (1996)
8. Kruskal, J.B.: On the shortest spanning subtree of a graph and the traveling sales-
man problem. Proceedings of the American Mathematical Society 7, 48–50 (1956)
Sensor Network Integration by Means
of a Virtual Private Network Protocol
David Villa, Francisco Moya, Félix Jesús Villanueva Molina, Óscar Aceña,
and Juan Carlos López
1 Introduction
Integration of sensor networks1 in enterprise information systems is still troublesome. The interaction among applications and sensor services usually requires specific mechanisms, often centralized in application-level gateways. Sensor data comes through a single node with two interfaces, one for the sensor network (e.g., an 802.15.4 wireless interface) and another for the enterprise information network (e.g., Ethernet). In many situations this gateway runs custom software to adapt the protocol stack from the sensor domain to the enterprise domain. However, this kind of infrastructure has important drawbacks. From an engineering point of view, a single gateway implies a single point of failure and, depending on the specific application, it may force a difficult and inefficient network topology; ideally, the network topology between domains should be arbitrary and not dictated by the logical integration infrastructure.
At the logical level, it is inconvenient for sensor/actuator nodes and complicates, or simply prevents, free interaction among applications and sensor nodes, and particularly interaction among sensor nodes themselves.
1 This research was supported by the Spanish Ministry of Science and Innovation and CDTI through projects DREAMS (TEC2011-28666-C04-03), ENERGOS (CEN-20091048), PROMETEO (CEN-20101010), by the Regional Government of Castilla-La Mancha and ERDF under project SAND (PEII11-0227-0070), and by Cátedra Indra UCLM.
This paper introduces a novel approach to achieve a more flexible and decoupled way to provide and request sensor services, supporting several gateways between sensor and enterprise domains (sometimes called multi-sink). By means of a common application-level protocol, sensor nodes and applications can interact in any scenario, even among sensor nodes belonging to remote networks. Instead of designing a new application protocol from scratch, we focus our attention on the protocols used by object-oriented middlewares (e.g., CORBA or ZeroC Ice). These middlewares have traditionally been used in scalable and efficient distributed heterogeneous applications, so we start with well-known and tested protocols. The application protocols of these middlewares essentially marshal and unmarshal invocation messages between distributed objects. We have already integrated this type of protocol in wireless sensor networks [9], but always within a single sensor network domain. This allows the use of an object-oriented middleware, which turns sensor network integration into a case of distributed heterogeneous programming.
2 Related Work
In a way, [1] aims at similar goals: “Unlike application-level gateway, that require semantic knowledge of each application in order to make a routing decision, the overlay gateway routes based on sensor network layer information”. They propose an overlay network to interconnect applications with sensor nodes, extending the sensor network internal protocol over the Internet. It works as a virtual sensor network thanks to overlay gateways. In their words: “It is a sensor network overlaying IP”. The gateway encapsulates the sensor network protocol packets (including network, transport, and application headers) in TCP or UDP segments. As the sensor network stack is preserved, components at hosts (virtual sensors) need to process all of these extra headers at the application layer in order to maintain the illusion of a single flat network.
SenseWrap [3] takes the opposite approach. They refer to virtual sensors as the wrapped versions of the actual sensors. They focus on self-configuration, providing standard Zeroconf to discover and find sensor services. It is a middleware that overlays IP on the sensor network. SenseWrap uses a single application-level gateway, the model that we try to avoid. Tenet [10] is a more sophisticated network architecture that divides the sensor network into a set of tiers. Each tier has a master and several nodes. Most processing and application-specific tasks run in the masters. Hence, it is a multi-gateway approach that avoids a single point of failure but may significantly degrade the network performance if some of the masters fail.
The “all over IP” approach could solve the problem, but it introduces overhead even in low-footprint implementations (e.g., uIP [2]) and may not be affordable for some sensor domains. Other protocols, like the Message Queue Telemetry Transport protocol [5] (MQTT) from IBM, are intended for telemetry applications, so they do not support actuators and individual sensor-to-actuator interactions. Our approach uses the network and transport protocol stack most appropriate to each domain,
UVPN (Ubiquitous Virtual Private Network) uses the same VPN concept available in TCP/IP networks, although implemented at a higher abstraction layer. Each host may have one or more object adapters. An object adapter is responsible for exposing local objects to the network. Each object adapter is accessible through endpoints, i.e., logical network connection points. Emulating the conventional VPN model, we assign homogeneous addresses to all involved components, regardless of whether they are physical sensor nodes or PC applications. To achieve this, we need a new kind of endpoint (the UVPN endpoint). The UVPN endpoint, like its conventional VPN counterpart, uses the same underlying transports, i.e., TCP, UDP or SSL in the Internet case. We are using ZeroC Ice [4] in our current prototypes, although any other object-oriented middleware could be used instead. Sensor nodes with a minimal footprint are able to process application messages using the underlying protocol [9]. Of course, those virtual logical addresses need to be mapped to the corresponding underlying addresses (equivalent to neighbor discovery in conventional protocols). Because this kind of translation may be expensive and complex for a sensor network, our approach uses identifiers which may be directly mapped to the node physical addresses. UVPN endpoints encapsulate their own communication details. An example of a proxy to a remote sensor object using the UVPN endpoint would be “OBJ2 -d:uvpn -h 0x01”, where “0x01” is the native sensor node address. These virtual nodes may hold sensors and actuators fully indistinguishable from their actual
2 GENI Project http://www.geni.net/
For better transparency, the UVPN endpoint (on the conventional computer side) performs the registration on behalf of the adapter. When an endpoint is instantiated, it invokes the Switch.add() method in the designated remote switch to bind a sensor network address (addr) to a callback object provided by the UVPN endpoint. Later, the switch can resolve remote virtual sensor nodes using these associations between addresses and endpoints. This mechanism is functionally equivalent to the creation of a tunnel in a conventional VPN.
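The registration step might look as follows. Switch.add() and Switch.find() are named in the text; everything else (the class layout, the deliver() callback, holding the association in a plain dictionary) is an assumption for illustration, not the actual ZeroC Ice based implementation.

```python
class Switch:
    """Resolves sensor network addresses to registered callback endpoints."""

    def __init__(self):
        self.table = {}                 # sensor address -> callback endpoint

    def add(self, addr, callback):
        self.table[addr] = callback     # equivalent to opening a VPN tunnel

    def find(self, addr):
        return self.table.get(addr)     # None if no virtual node is bound


class UVPNEndpoint:
    """Registers itself at the switch on behalf of its object adapter."""

    def __init__(self, addr, switch):
        self.addr = addr
        self.inbox = []                 # messages handed to the adapter
        switch.add(addr, self)          # registration on instantiation

    def deliver(self, message):
        self.inbox.append(message)
```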
This section discusses the different communication scenarios among node services
and applications, or among nodes themselves.
Application to node. In the simplest scenario, a client application requests the status of a remote sensor. In the application-level bridge-based approaches [7,11,8], the sensor network specific protocols store the last measured sensor values in the bridge. Later, the client explicitly queries the bridge with some sensor identifier. This is the cause of several important problems: bridge complexity, lack of node autonomy, a single point of failure, etc.
With UVPN, the client (in the trunk network) performs a conventional remote object invocation on a proxy representing the remote sensor. Under the hood, the UVPN endpoint knows (by configuration) where the switch is and invokes Transceiver.send(), passing the whole client invocation message as an argument. The switch receives the message and tries to find (by means of the Switch.find() method) a transceiver (virtual node) for the node address specified as the first argument of the send() invocation. In this case, the address is not found, so the message is sent to the sensor network physical interface, directly connected to the computer running the switch service. The message goes over the air and should be received by the target node.
Node to application. UVPN allows sensor nodes to behave as clients, that is, nodes can transparently invoke remote objects in the trunk network. This is a very rare feature in sensor network middlewares, previously handled with ad hoc, non-generic solutions.
In this case, the sensor node just sends a conventional invocation preceded
by the destination node address. The switch receives the message through the
radio interface and looks for the destination address (with Switch.find()). In
this scenario, the method returns a virtual node proxy. The switch uses it to
forward the invocation to the computer. The UVPN endpoint in the computer
application receives the message and gives it to the object adapter. Finally, the
corresponding servant method is executed.
Node to neighbor node. Any node may send method invocations to any other neighbor in the same physical network using exactly the same mechanism described in the previous section. This means the invocation mechanism is location-transparent, i.e., the sender is not aware of the exact location of the destination object (sensor node or computer application).
In this case, the switch will not find a remote virtual node and it will send the message to the radio interface again. This is also useful when the sensor network does not implement multi-hop routing, because the switch will forward messages automatically. If destination nodes receive replicated messages through different paths, the duplicates are automatically discarded by simply checking the sequence number in the header.
Node to non-neighbor node. There is a more interesting use case, also very rare in previous works: two or more distant sensor networks (with their respective UVPN switches) connected to the same trunk network (which may be the Internet). In this situation, a sensor node may invoke another remote sensor node (in a different network) using the switches to forward the message towards the trunk network. As explained before, this requires that the local switch knows whether the remote object is accessible through itself. This implies the registration of all sensor nodes in a central switch (the root switch); we are, in fact, describing a hierarchical switching protocol. Local switches have a fallback switch (a default path) that knows where each sensor node is. Figure 1 illustrates the described scenario.
Figure 1 makes clear that UVPN works like a tunneling protocol, although it is built on the application layer. The switch operation is a bit more complex when more than one sensor network participates in the communication. When a sensor node (as a client) wants to send a message to another node, it builds the message as if the destination node were a neighbor, even though it is in a different physical network. The local switch will receive the message and check whether the destination address is registered in its table. If so, it sends the message using the associated Transceiver. Otherwise, it will use the fallback switch to send the message (using the method Transceiver.send()).
When this fallback switch (or root switch, as stated above) receives a message, it will check if the destination was already registered. If so, it sends the message again, using the matching transceiver. If the destination is unknown, it will deliver the message to all the remaining configured transceivers, one on each port (flooding), discarding only the arrival port in order to avoid loops.
When the last switch receives the message, it will check its table again. If the destination is registered as a virtual node, it will send the message using the given transceiver. If not, the message will be sent to the other interfaces (i.e., the radio interface or other registered switches, provided they are different from the arrival one).
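The forwarding rules of this hierarchical switching protocol can be sketched as below. All names are hypothetical; in the real system the entries are middleware transceivers rather than Python lists, and only Switch.find() and Transceiver.send() come from the text.

```python
class FwdSwitch:
    """Table lookup, then fallback to the parent switch, then flooding."""

    def __init__(self, name, fallback=None):
        self.name = name
        self.table = {}        # dest address -> delivery queue (transceiver)
        self.ports = {}        # port name -> delivery queue (other ifaces)
        self.fallback = fallback

    def send(self, dest, msg, arrival=None):
        if dest in self.table:                 # registered virtual node
            self.table[dest].append(msg)
        elif self.fallback is not None:        # default path towards root
            self.fallback.send(dest, msg, arrival=self.name)
        else:                                  # root with unknown dest:
            for port, queue in self.ports.items():
                if port != arrival:            # flood all ports but the
                    queue.append(msg)          # arrival one (avoid loops)
```

Running one local switch under a root reproduces the two cases above: a registered destination is delivered directly, while an unknown one is flooded by the root on every port except the one it arrived on.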
Application to application. It is also possible to communicate non-sensor clients and objects using UVPN. However, given that the middleware supports several transports at the same time, it is more convenient to use TCP/IP endpoints. Without any loss of generality, this means that UVPN is used only when needed, i.e., only when a sensor node is involved either as client or object.
subscribers) receive the message; those of them without any device plugged in immediately cut the line and turn on a red LED to visually warn the users and prevent an overload condition; 5) when some of the appliances are unplugged (or turned off) the loadMonitor can set the load state to lower states, allowing new appliances to be plugged in or activated.
Fig. 2. UVPN simulation: direct remote communication among sensor nodes in distant
networks
This example illustrates UVPN direct and duplex communication among sensor/actuator nodes and application objects running on conventional computers in the trunk network. Communication may be initiated by any party, and all parties may act as clients or objects. Besides, sensor nodes may act as publishers or subscribers of event channels.
There is a more complete simulation involving two sensor networks connected to the same IP network through their corresponding UVPN switches (see Figure 2). It represents the same smart-grid application, introducing bulbs and electrical switches as new electrical loads. To illustrate the communication among remote sensor/actuator nodes, the switch node5 sends set() messages with true/false to turn the bulb node1 on/off. As in the previous case, nodes 1-4 send load messages to the loadMonitor service and receive load state messages from the event channel. There are other nodes (e.g., user) which receive notifications from the loadMonitor service; in this case, user will alert people about an overload.
The source code of the simulations, brief documentation and screencasts are available for download at http://arco.esi.uclm.es/uvpn. UVPN has been implemented with the ZeroC Ice middleware in a demonstration kit called Motebox. This kit is used to show different features of access and interconnection among PC applications, middleware services and sensor nodes. More information may be found at http://arco.esi.uclm.es/motebox.
sensor network protocol stack at virtual nodes. With UVPN, it does not matter which protocol stack is used as long as all peers use the same inter-ORB protocol (at the application layer) and the same addressing scheme, that is, the typical requirements of an inter-network protocol. Furthermore, the middleware also provides valuable common services such as object persistence, indirect binding, location transparency, server deployment and many other advanced features.
As far as we know, UVPN is the first solution able to transparently communicate sensor or actuator nodes among themselves and with software objects running on conventional computers, without application-specific delegates, application bridges or ad hoc protocols. However, there is a constraint: all sensor nodes must use the same physical addressing scheme. As ongoing work, we are interested in removing that limitation by generalizing the UVPN approach using a global addressing scheme (see Section 3) and providing homogeneous dynamic routing mechanisms through massively heterogeneous networks.
References
1. Dai, H., Han, R.: Unifying micro sensor networks with the internet via overlay
networking. In: Intl. Conf. on Local Computer Networks, LCN 2004 (2004)
2. Dunkels, A., Alonso, J., Voigt, T., Ritter, H., Schiller, J.: Connecting Wireless Sen-
sornets with TCP/IP Networks. In: Langendoerfer, P., Liu, M., Matta, I., Tsaous-
sidis, V. (eds.) WWIC 2004. LNCS, vol. 2957, pp. 143–152. Springer, Heidelberg
(2004)
3. Evensen, P., Meling, H.: SenseWrap: A service oriented middleware with sensor
virtualization and self-configuration. In: Intelligent Sensors, Sensor Networks and
Information Processing (ISSNIP), pp. 261–266 (December 2009)
4. Henning, M., Spruiell, M.: Distributed Programming with Ice. Revision 3.3.0. Ze-
roC Inc. (May 2008)
5. Hunkeler, U., Truong, H.L., Stanford-Clark, A.: MQTT-S - A publish/subscribe
protocol for wireless sensor networks. In: COMSWARE, pp. 791–798. IEEE (2008)
6. Jayasumana, A.P., Han, Q., Illangasekare, T.: Virtual sensor networks a resource
efficient approach for concurrent applications. In: International Conference on In-
formation Technology: New Generations (2007)
7. Levis, P., Culler, D.: Mate: A tiny virtual machine for sensor networks. In: In-
ternational Conference on Architectural Support for Programming Languages and
Operating Systems, San Jose, CA, USA (October 2002)
8. Madden, S.R., Franklin, M.J., Hellerstein, J.M., Hong, W.: Tinydb: an acquisitional
query processing system for sensor networks. ACM Trans. Database Syst. 30(1),
122–173 (2005)
9. Moya, F., Villa, D., Villanueva, F.J., Barba, J., Rincón, F., López, J.C.: Embed-
ding standard distributed object-oriented middlewares in wireless sensor networks.
Wireless Communications and Mobile Computing 9(3), 335–345 (2009)
10. Paek, J., Greenstein, B., Gnawali, O., Jang, K.-Y., Joki, A., Vieira, M., Hicks,
J., Estrin, D., Govindan, R., Kohler, E.: The tenet architecture for tiered sensor
networks. ACM Transactions on Sensor Networks (TOSN) 6(4) (2010)
11. Yao, Y., Gehrke, J.: The cougar approach to in-network query processing in sensor
networks. SIGMOD Record 31 (2002)
Design of a MAC Protocol for e-Emergency WSNs
1 Introduction
Wireless sensor networks (WSNs) are being deployed in diverse application fields,
including healthcare. Small biomedical sensor nodes are placed on the body of patients to allow monitoring diverse physiological signals and actions. These body sensor networks (BSNs) can be grouped to form an e-health WSN.
E-health WSNs for emergency and intensive medical care (e-emergency) exhibit particular characteristics and constraints. In such WSNs, sensor nodes may regularly deliver heterogeneous traffic flows to a sink without significant temporal variability. Moreover, direct data transmission from sensor nodes to a base station (BS) may be neither feasible nor energy-efficient in healthcare WSNs, because of the high path loss around the human body and the transmission energy cost. Thus, it is desirable that e-emergency WSNs operate in two-tier network structures.
An e-emergency WSN should assure controlled delays to provide a real-time service, as well as guaranteed bandwidth, high reliability and fairness. Since multiple patients may be present in an emergency or intensive care unit, it should be capable of coexisting with close-neighbor BSNs. It should also comprise autonomous reconfiguration mechanisms to allow a fast adaptive response of the network to new monitoring scenarios, such as changes in patients' clinical state.
To assure those properties, QoS techniques must be deployed in e-emergency
WSNs. The link layer plays a special role in QoS support, as the access to and reliability of the communication channel directly impact the performance of upper-layer protocols. After discussing the unsuitability of current deterministic MAC
protocols for e-emergency WSNs in Section 2, a new MAC protocol to cover this
gap is presented in Section 3, and preliminary tests are shown in Section 4.
94 Ó. Gama, P. Carvalho, and P.M. Mendes
Many MAC protocols available for WSNs use contention- or reservation-based techniques. Contention-based protocols work well under low traffic loads, but they degrade drastically under higher loads [1]. Reservation-based MAC protocols are preferable for networks requiring significant traffic loads and low latency, because QoS is more easily assured in a collision-free environment. Deterministic MAC protocols have been proposed for WSNs, namely VTS [2], LMAC [3], PEDAMACS [4], I-EDF [5], Dual-mode [6], CR-SLF [7], RRMAC [8], LPRT [9], CICADA [10], GinMAC [11], TSMP [12], Bluetooth, and IEEE 802.15.4/GTS.
To allow a comparative analysis, Fig. 1 illustrates the relative positions of those
protocols with respect to parameters relevant for e-emergency applications. The left
diagram relates: (i) adaptability - the capacity to reconfigure the network operating
parameters promptly; (ii) coexistence capacity - the capacity of close-neighbor BSNs
to coexist in the same channel; (iii) robustness - the capacity to protect data
transmission against communication failures. The right diagram relates: (iv) bandwidth
efficiency - the capacity to allocate just the required bandwidth to a node; (v) power
efficiency - the capacity to control energy consumption; (vi) two-tier operability -
the capacity of the WSN to operate in two-tier network structures. The label TRUE
means that the protocol provides the specific attribute.
Preferably, MAC protocols for e-emergency WSNs should lie within the gray
cube of both diagrams in order to fulfill the e-emergency requisites. As observed, none
of the surveyed protocols accomplishes this goal. This gap motivated the design
of the new Adaptive and Robust MAC (AR-MAC) protocol.
3 AR-MAC Protocol
AR-MAC is conceived so that the BS can have a global view of the network, which is
essential for WSN reconfiguration purposes and to ensure fairness.
AR-MAC inherits some concepts from IEEE 802.15.4 and LPRT, namely the
contention access period (CAP), the contention-free period (CFP), the normal
transmission period (NTP), the retransmission period (RP), the non-active period (NAP),
and the NTP acknowledgment (ACK) bitmap. However, AR-MAC introduces novel
concepts and features to meet the e-emergency requisites.
Beacon Period. As shown in Fig. 2, the high-grained superframe starts with the
Beacon Period (BP). The BS broadcasts a new beacon frame in every BP. Beacon frames
are used for sending data to sensor nodes, synchronizing, and announcing the WSN.
To improve the probability of a sensor node receiving the beacon, the BS transmits a
sequence of redundant beacon frames b1…bn, equally spaced in time, with consecutive
beacon numbers. Since a superframe always starts with the transmission of the first
beacon, the beacon number of a received beacon allows a sensor node to resynchronize
easily with the WSN. As the patients of an e-health WSN are normally monitored
using the same number and type of sensor nodes, beacons only carry the data
essential for the proper operation of the WSN during its steady state. The use of
short beacons improves power saving in each BSN and increases the beacon
delivery probability, because the beacon frame is less exposed to interference, so
the performance of the WSN improves too.
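The resynchronization enabled by consecutive beacon numbers can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the millisecond time base are assumptions.

```python
def superframe_start(rx_time_ms, beacon_number, beacon_spacing_ms):
    """Recover the superframe start time from any redundant beacon.

    Beacon b1 is sent at the superframe start; beacon bi is sent
    (i - 1) * beacon_spacing_ms later, so a node that receives any
    one of the redundant beacons can resynchronize immediately,
    without waiting for the first beacon of the next superframe.
    """
    return rx_time_ms - (beacon_number - 1) * beacon_spacing_ms

# A node receiving beacon b3 at t = 1006 ms, with beacons spaced 3 ms
# apart, infers that the superframe started at t = 1000 ms.
```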
CAP. This period follows the BP and may be used for sending MAC commands
and responses, as well as to convey low transmission duty-cycle traffic. The last
time-slot of the CAP is announced in all beacon frames.
CFP. This period uses TDMA and is composed of the NTP and the Retransmission
Period (RP). The NTP is used by sensor nodes to transmit new data. Lost data are
retransmitted in the RP, which is composed of the Normal RP (NRP) and the Extra
RP (ERP). Data packets transmitted to the BS during the NTP are acknowledged through
the NTP ACK bitmap present in the beacon of the next superframe. The BS sends the
NTP ACK bitmap only if one or more packets failed to be transmitted in the NTP of
the last superframe. Packets unacknowledged in the NTP ACK bitmap are retransmitted
in the NRP of the current superframe. Data packets sent in the NRP are acknowledged
through the NRP ACK bitmap broadcast in the next superframe, as described
in the following topic. Data packets not acknowledged by the NRP ACK bitmap are
retransmitted once in the ERP. The NRP and/or ERP are present in a superframe only if
retransmissions are required in the respective periods. As the RP size varies across
superframes according to the number of required retransmissions, the CAP size
varies from a predefined minimum to a maximum imposed by the NTP size. If a
sensor node does not receive any beacon during the BP, it may continue to send its
new data in the NTP, since a clock drift in the order of microseconds allows a sensor
node to remain synchronized with the WSN for a few consecutive beacon intervals.
However, such a sensor node cannot retransmit data in the RP, because the ACK
bitmaps are not available and so it does not know how the RP time-slots are being
allocated to the other sensor nodes. As the timers of the sensor nodes are imprecise, a
small number of safeguard slots is required to avoid the superposition of adjacent
transmissions.
NRP ACK Bitmap. To illustrate the use of the NRP ACK bitmap, let us consider a
superframe without retransmission requests from the BS, in which some critical data
packets were lost during the NTP. The lost packets are identified through the NTP ACK
bitmap sent in the beacon of the next superframe. According to this bitmap, sensor
nodes retransmit the lost data packets once in the NRP, regardless of whether they are
critical. Then, critical data packets are retransmitted as many times as possible in the
remaining available NRP slots. These available slots must be fairly distributed
among the sensor nodes with critical data packets to retransmit. A sensor node stops
the retransmission trials after receiving the ACK frame. Only critical data packets are
acknowledged, except in the last retransmission. If critical data packets fail to be
retransmitted in the NRP of the superframe, the BS includes the NRP ACK bitmap
in the beacon of the following superframe, so critical data packets may be retransmitted
once again in the ERP, improving their delivery probability. The BS sends the NRP
ACK bitmap only if one or more critical packets failed to be retransmitted in the NRP
of the last superframe.
The use of the NRP ACK bitmap mechanism may contribute to out-of-order packet
delivery and increase the delay, although the delay remains controlled and bounded:
the maximum packet delay is always below twice the superframe duration.
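The bitmap-driven retransmission decision described above can be sketched as follows. This is illustrative only; the bitmap encoding (one acknowledgment flag per NTP time-slot) is an assumption, since the paper does not give the frame layout.

```python
def packets_to_retransmit(ack_bitmap, my_slots):
    """Decide which packets a node must retransmit in the NRP.

    ack_bitmap: sequence of 0/1 flags, one per NTP time-slot of the
                previous superframe (1 = packet acknowledged by the BS).
    my_slots:   indices of the slots this node transmitted in.

    Returns the slot indices whose packets were not acknowledged and
    therefore must be retransmitted in the NRP of the current superframe.
    """
    return [slot for slot in my_slots if ack_bitmap[slot] == 0]

# If the beacon carries bitmap [1, 0, 1, 0] and the node used slots
# 1, 2 and 3, only the packets of slots 1 and 3 are resent.
```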
Criticality and Activity Bitmaps. During the reconfiguration of a WSN, the beacon
frames announce the superframe specifications and the ACK bitmaps, as well as the
criticality bitmap, the activity bitmap, and the new operational parameters of some
sensor nodes, along with other relevant information. The criticality bitmap informs the
WSN about the signals considered critical by the BS, in order to improve or protect the
QoS of such signals, such as the packet delivery ratio. The activity bitmap allows the
BS to inform the WSN about the activity state of all sensor nodes, so that sensor nodes
can optimize the time-slot utilization without wasting bandwidth. The BS considers a
sensor node inactive if it does not receive data from that node for a number of
consecutive superframes. The BS also uses the activity bitmap to instruct specific
sensor nodes not to transmit data. A sensor node can only transmit data when its
activity flag is set.
Design of a MAC Protocol for e-Emergency WSNs 97
Coloring Scheme. Sensor nodes with low sampling rates and flexible time delays
should not transmit in every superframe, in order to save energy and free time-slots for
retransmissions, thus contributing to improve data delivery robustness. To implement
this strategy, a coloring scheme of C colors is applied to sensor nodes and superframes.
In this scheme, each color assumes a value 2^k (with 0 ≤ k < C) that represents
a reference threshold for transmission purposes. The color of each sensor node is
constant during steady-state operation and can only be changed through a
reconfiguration procedure. If C > 1, the color of the superframes changes successively
over time in a round-robin fashion: if a superframe is of color 2^k', the next superframe
will be of color 2^k with k = k'+1, and when k reaches C, k becomes zero.
When the BS sends a beacon of color 2^k, all sensor nodes with a color not above 2^k
may transmit in the current superframe. So, sensor nodes can only transmit in
superframes with the same or a superior color. Regarding retransmissions, a sensor node
of color 2^k may resend a lost packet in the NRP of the superframe of color 2^(k+1) or
in the ERP of the superframe of color 2^(k+2), respecting the round-robin color
scheme. The color c of a superframe is identified in the beacon header, as seen in
Fig. 2, where beacons of two colors are represented.
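The color rule can be stated compactly. The sketch below works on the exponents k rather than the values 2^k; the helper names are hypothetical, not part of the AR-MAC specification.

```python
def may_transmit(node_k, superframe_k):
    """A node of color 2^node_k may transmit in a superframe of color
    2^superframe_k only if its color is not above the superframe color,
    i.e. node_k <= superframe_k."""
    return node_k <= superframe_k

def next_superframe_color(k, C):
    """Superframe colors advance round-robin over the C colors:
    after 2^(C-1), the next superframe is of color 2^0."""
    return (k + 1) % C

# With C = 3 colors: a node of color 2^0 may transmit in every
# superframe, while a node of color 2^2 transmits only in superframes
# of color 2^2, i.e. in one superframe out of three.
```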
Time-Slots Assignment. The BS only sends the superframe specifications and the
ACK bitmaps during the steady state of the network. As the BS does not assign the
time-slots directly to the sensor nodes, these must run a distributed algorithm to
compute which time-slots should be used to (re)transmit data without mutual
interference, in accordance with a predefined ordering schema. An algorithm for this
goal is proposed in [13].
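The essence of such a distributed computation can be illustrated with a deliberately simple stand-in (this is not the algorithm of [13]): every node derives the same schedule from shared information, so no per-slot signaling from the BS is needed.

```python
def assign_slots(node_ids, slots_needed):
    """Deterministic slot assignment run identically on every node.

    node_ids:     the set of active node identifiers, known to all nodes
                  (e.g. from the activity bitmap in the beacon).
    slots_needed: dict node_id -> number of NTP slots that node requires.

    Each node sorts the ids the same way (the predefined ordering
    schema) and packs slots contiguously, so all nodes compute the
    same non-overlapping schedule without exchanging messages.
    """
    schedule, next_slot = {}, 0
    for nid in sorted(node_ids):
        n = slots_needed[nid]
        schedule[nid] = list(range(next_slot, next_slot + n))
        next_slot += n
    return schedule
```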
Cluster Mode. In clustered WSNs, AR-MAC uses a combination of TDMA and
FDMA techniques to avoid inter-cluster interference. Sensor nodes communicate
with the cluster-head, and cluster-heads communicate with the BS using AR-MAC for
one-hop networks. Clusters should use superframes with a CAP to allow the
(dis)association of sensor nodes. The active period of a cluster superframe must
occur during the CAP of the ascendant cluster superframe. As both superframes occur
on distinct channel frequencies, the transmissions in these time periods do not
interfere with each other. Clusters must contain only a few sensor nodes to guarantee
that the cluster-head can receive data from all of them within the limited active period.
Also, the superframe active period of a cluster becomes shorter as its level in the tree
decreases.
Let us consider a network with the clusters grouped hierarchically in a two-level
tree. The second level of the tree holds the leaf sensor nodes monitoring the physical
signals. The first level is composed of the sink sensor nodes of the leaf sensor nodes.
A sink sensor node may also monitor a physical signal. After receiving a beacon from
the BS during the BP, the cluster-head switches the radio to the channel frequency of
its cluster, as shown in Fig. 3. For simplicity, only one beacon frame is represented in
the BP of the first- and second-level superframes. Cyan (dark)-colored packets are sent
in channel k1 and green (light)-colored packets are sent in channel k2. Transmitted
and received packets are depicted above and below the time axis, respectively. During
the CAP of the first-level superframe, the cluster-head sends a beacon at the start of
the second-level superframe. During the NTP of the second-level superframe, the
cluster-head collects the data packets from the leaf sensor nodes. In Fig. 3, the
cluster-head received new data from sensor nodes A, B, and C. Once the CAP of the
first-level superframe finishes, the cluster-head switches the radio frequency and
delivers the aggregated data to the BS. If the aggregated data cannot be held in a single
packet, the cluster-head sends two or more data packets to the BS in the NTP of the
first level, as shown in Fig. 3.
Each aggregated data packet sent by the cluster-head must be individually
acknowledged by the NTP (and NRP) ACK bitmap. New data are transmitted to the BS
in the NTP of the first-level superframe. If the cluster-head fails to deliver an
aggregated data packet to the BS in the NTP of the last first-level superframe,
retransmission trials should occur in the NRP of the current first-level superframe. If a
leaf sensor node fails to deliver a data packet to the cluster-head, the cluster-head
should recover the lost data during the NRP of the next second-level superframe, as
shown in Fig. 3 with packet C1. Then, the cluster-head retransmits the recovered data
in the NRP of the current first-level superframe.
4 Preliminary Tests
The single-hop WSN testbed contains four BSNs, each containing sensor
nodes to monitor the ECG, blood pressure (ART), oximetry (OXI), respiratory
rate (RR), and temperature (TEMP), as shown in Fig. 4. Sensor nodes sample
at the rates recommended for good signal quality, originating data packets with
average payloads of 90, 60, 30, and 10 bytes, respectively, considering a beacon
interval of 250 ms. Temperature traffic is ignored, due to its very low transmission
duty-cycle. The period of 250 ms was chosen to guarantee a maximum delivery delay
of 500 ms, cf. IEEE 1073. The maximum number of guaranteed time-slots in the
IEEE 802.15.4 superframe was increased so that one GTS was allocated to each
sensor. The real behavior of the software and hardware components within the WSN
devices was considered; the software performance of the BS is identical to that of a
sensor node. To test the data transmission robustness, IEEE 802.15.4 interference
packets were sent regularly every 25 ms.
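As a side calculation under the stated figures (ECG, ART, OXI and RR payloads of 90, 60, 30 and 10 bytes per 250 ms beacon interval), the offered application-layer load of one BSN can be estimated as follows; the helper name is illustrative.

```python
def offered_load_bps(payloads_bytes, beacon_interval_s):
    """Aggregate application-layer load of one BSN, in bits per second,
    assuming each sensor delivers one packet per beacon interval."""
    return sum(payloads_bytes) * 8 / beacon_interval_s

# With the payloads of the testbed, one BSN offers
# 190 bytes every 250 ms, i.e. 6080 bit/s of payload traffic.
```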
The metrics considered were the average packet loss ratio in the WSN and the
average power consumed per BSN. In all tests, nodes enter sleep mode after
transmitting. To better evince the impact of the MAC protocols on the energy cost, only
the radio (AT86RF230) consumption was considered. As IEEE 802.15.4 is mostly
deployed in non-beacon-enabled networks, tests were also run using the CSMA-CA
algorithm with its default values.
The results in Fig. 5 reveal that AR-MAC provides remarkably robust data delivery
compared with its competitors. This good performance was achieved without
significantly increasing the power consumption.
5 Conclusions
AR-MAC was designed to target relevant characteristics of e-emergency WSNs,
namely to provide QoS guarantees regarding reliable and timely data delivery, power
efficiency, network reconfiguration mechanisms, two-tier operability, and coexistence
capacity. These goals cannot be simultaneously accomplished using the deterministic
MAC protocols currently available for WSNs. Preliminary tests revealed that
AR-MAC delivers data robustly without significantly increasing the power consumption.
References
1. Chevrollier, N., et al.: On the Use of Wireless Network Technologies in Healthcare
Environments. In: Proc. 5th Workshop on Applications and Services in Wireless Netws.,
France (2005)
2. Egea-López, E., et al.: A Wireless Sensor Networks MAC Protocol for Real-Time
Applications. Journal of Personal and Ubiquitous Computing 12(2) (February 2008)
3. Hoesel, L., Havinga, P.: A Lightweight Medium Access Protocol (LMAC) for Wireless
Sensor Networks. In: Proc. 1st Inter. Conf. on Networked Sensing Systems, Japan (2004)
4. Ergen, S., Varaiya, P.: PEDAMACS: Power Efficient and Delay Aware Medium Access
Protocol for Sensor Networks. IEEE Trans. on Mobile Computing 5(7), 920–930 (2006)
5. Caccamo, M., Zhang, L., Sha, L., Buttazzo, G.: An Implicit Prioritized Access Protocol for
Wireless Sensor Networks. In: 23rd IEEE Real-Time Systems Symp. (December 2002)
6. Watteyne, T., et al.: Dual-Mode Real-Time MAC Protocol for WSNs: a Validation /
Simulation Approach. In: Proc. 1st Conf. Integrated Internet Ad Hoc and Sensor Netws.,
France (2006)
7. Li, H., et al.: Scheduling Messages with Deadlines in Multi-Hop Real-Time Sensor
Networks. In: Proc. 11th Real-time and Embedded Techn. and Applications Symp., U.S.A.
(2005)
8. Kim, J., et al.: RRMAC: A Sensor Network MAC for Realtime and Reliable Packet
Transmission. In: Proc. Intern. Symposium Consumer Electronics, Portugal (April 2008)
9. Afonso, J.A., et al.: MAC Protocol for Low-Power Real-Time Wireless Sensing and
Actuation. In: Proc. 11th IEEE Conf. on Electronics, Circuits and Systems, France
(December 2006)
10. Latré, B., et al.: A Low-delay Protocol for Multihop Wireless Body Area Networks. In:
Proc. 4th Annual Conf. on Mobile Ubiquitous Systems Networks and Services, U.S.A.
(August 2007)
11. Suriyachai, P., Brown, J., Roedig, U.: Time-Critical Data Delivery in Wireless Sensor
Networks. In: Rajaraman, R., Moscibroda, T., Dunkels, A., Scaglione, A. (eds.) DCOSS
2010. LNCS, vol. 6131, pp. 216–229. Springer, Heidelberg (2010)
12. Pister, K., Doherty, L.: TSMP: Time Synchronized Mesh Protocol. In: Proc. of Interna-
tional Symposium on Distributed Sensor Networks, Orlando, Florida, U.S.A. (November
2008)
13. Gama, O., et al.: An Improved MAC Protocol with a Reconfiguration Scheme for Wireless
e-Health Systems Requiring Quality of Service. In: Proc. 1st Wireless Vitae, Denmark
(May 2009)
Discount Vouchers and Loyalty Cards Using NFC
Abstract. This paper describes a framework that uses Near Field Communication
(NFC) technology for the full management of mobile coupons. The system is
responsible for the dissemination, distribution, sourcing, validation, and management
of vouchers, loyalty cards and all kinds of coupons using NFC technology. Voucher
security is ensured by the system through synchronization procedures and a secure
encryption algorithm.
1 Introduction
The current global crisis is pushing citizens to save money and to seek cheaper prices
when they purchase products and services. Thus, marketing and loyalty techniques are
changing, helped by the spread of smartphones over traditional mobile phones, which
has created the concept of m-coupons, or mobile coupons.
Several mobile coupon applications can already be found on the market. Hsueh and
Chen [1] propose a mobile coupon sharing scheme that uses mobile devices to
distribute coupons among existing social networks, increasing mobile coupon
exchange. In this scheme, the company first selects targeted members and then sends
a virtual coupon book and a virtual sharable coupon book to each of these targeted
members.
Systems such as Groupon [2] offer deals in most markets around the world. This
type of offering is known as the "deal of the day". The user pays first and then
receives the coupons, which can be redeemed at the indicated establishment. These
deals are redeemed either as traditional paper coupons or through the mobile
application using QR codes [3], which the partner establishment reads with a QR
reader, or by checking and writing down the coupon code.
Although some existing systems aim to eliminate the traditional paper voucher
using new technologies and mobile devices, several challenges remain unsolved.
Valuable aspects required by pervasive systems such as mobile applications are not
considered, including:
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 101–108, 2012.
© Springer-Verlag Berlin Heidelberg 2012
102 F.M. Borrego-Jaraba et al.
─ Security: vouchers are managed in the mobile device as real coupons. Once
the voucher is bought on the provider website, security can be checked only by the
commerce system. Aspects regarding the loss of vouchers, errors in the
redemption process, errors in manipulation by the user or device, fraudulent
copying, etc., are not well solved.
─ Voucher type: current systems usually consider just one type of voucher: discount
coupons that users pay for in advance and later redeem at a given establishment.
However, voucher providers and commerces need a wider range of voucher
types.
─ Architecture: websites are in charge of voucher publicity, provisioning and
payment, while the commerce infrastructure, if any, is in charge of checking and
redemption. Thus, current proposals are not open to provider requirements in
which the provider's system can take charge of one or several links of the process
chain.
─ Loyalty cards: because of the problem mentioned above, loyalty cards belonging
to providers and/or commerces are not supported. Thus, these systems do not
integrate all types of vouchers and loyalty products into a single
product.
─ Adaptive vouchers: current systems do not consider user preferences when
offering and supplying vouchers. Because of the "deal-of-the-day" paradigm
mentioned above, they offer only a few vouchers each day, matched not to the
user's preferences but only to the city selected for the redemption process.
─ Getting vouchers: as mentioned above, current systems supply vouchers only
through the website. This does not suit a pervasive system; the process of getting
vouchers should be distributed, so that they can be obtained easily anywhere with
a smartphone.
This paper presents a framework aimed at solving all the problems discussed above.
The framework, called WingBonus [4], is not only an application devoted to mobile
coupon management: it is an ecosystem that can be fully tailored to the requirements
of providers, commerces and users. Thus, voucher publicity, sourcing and redemption
can be performed by infrastructure belonging to any of the actors. On the other hand,
voucher security is always guaranteed through unique secure keys, information
encryption, and double or triple secure validation using the WingBonus, provider
and/or commerce infrastructure.
Finally, WingBonus integrates NFC (and QR) technology, allowing the sourcing
and redemption of any kind of voucher using the available technology: smart posters,
NFC readers and QR readers for commerces, the environment and user devices.
Three different types of vouchers are managed by the system: a) Coupons: a coupon
allows the end customer to acquire an offer with a certain discount; b) Bonuses: a
bonus is a set of coupons, so the user can take advantage of a bulk purchase of
coupons, obtaining a notable discount; and c) Chits: a "chit" is a small reward for the
purchase of a product or service. Moreover, WingBonus manages loyalty cards: each
loyalty card is handled as an independent folder allowing deposit and withdrawal
movements.
Vouchers pass through different statuses depending on the processes performed on
them by the actors participating in the system. Figure 1 shows a sequence diagram of
the voucher lifecycle and the processes in charge of moving a voucher from one
status to another.
When a voucher is marked by a user, it passes to the "Voucher marked" status and
a unique identification is generated for it, derived from the user and voucher
identifiers. Voucher sourcing can be performed automatically or on user demand.
When a voucher is sourced to the user's phone, it passes to the "Voucher sourced"
status; sourcing is performed by a synchronization process belonging to the
WingBonus mobile application. Finally, the voucher passes to the "Voucher
exchanged" status when it is redeemed by the user in one of the accepted shops. In
the redemption process, synchronization can be carried out in several ways depending
on the user and shop infrastructure.
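The lifecycle described above can be modeled as a small state machine. The sketch below uses illustrative status and action names, not the actual WingBonus identifiers.

```python
# Voucher lifecycle transitions, following the description above
# (status and action names are illustrative):
TRANSITIONS = {
    ("available", "mark"):   "marked",     # user marks a voucher
    ("marked",    "source"): "sourced",    # synchronized to the phone
    ("sourced",   "redeem"): "exchanged",  # redeemed at an accepted shop
}

def advance(status, action):
    """Move a voucher to its next status; invalid moves are rejected,
    which is what prevents e.g. redeeming a voucher never sourced."""
    try:
        return TRANSITIONS[(status, action)]
    except KeyError:
        raise ValueError(f"cannot {action} a voucher in status {status}")
```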
Fig. 2. Flow of information between the system actors for secure and unsecure vouchers
The security defined for a voucher determines at which stages of the voucher
lifecycle the system guarantees voucher authentication, and therefore the system
stores all information about the voucher, the process and the participating actors.
Vouchers can be defined as unsecure for sourcing, so both identified and
unidentified users can get them. Vouchers selected and sourced from the WingBonus
website are stored in the system database. When users access a provider website, they
can select and download vouchers; in this process the provider website sends a request
to a WingBonus server service, and the WingBonus system takes charge of sourcing
the voucher to the user and storing the information in the system database.
On the other hand, users can get vouchers from Smart Posters. Smart Posters can
store full or partial voucher information, depending on the sourcing security property.
If the voucher is secure for sourcing, the partial information obtained from the Smart
Poster is sent to the system and the full voucher information is sent back to the user.
These vouchers cannot be sourced on the mobile device until synchronization with the
system is performed, although temporary information about the selected voucher is
stored on the phone. If the voucher is unsecure for sourcing, the mobile application
generates a unique voucher identification, which is temporary until the next
synchronization is performed. Shops are treated as Smart Posters even when
infrastructure such as NFC readers is used.
The redemption process (see Fig. 2) can also be performed by identified and
unidentified users. In this process the user presents the voucher at the shop to
exchange it for a product or service. If no security is defined for redemption, the user
can redeem a sourced voucher at an allowed shop. For that, the information on the
shops allowed to redeem the voucher must be stored on the mobile phone; otherwise
synchronization is needed, and at that moment the voucher is validated (or not),
updated on the mobile phone and stored on the server.
In addition, vouchers can also be acquired using Smart Posters. The storage
capacity of an RFID tag is enough to store a complete electronic voucher. The most
important feature of WingBonus is the secure exchange of vouchers using NFC (Fig. 3f).
A voucher that has already been provisioned and stored in the mobile phone database
can be exchanged by selecting the appropriate option from the detailed view of that
voucher. The mobile application supports two ways of redeeming vouchers:
─ Using the NDEF Push Protocol (NPP) to send the voucher information to the
NFC reader, then waiting for a response sent from the server to the mobile phone
through the NFC reader in order to complete the P2P communication.
─ Although the ideal is to exchange via NFC, redemption can also be performed
through QR codes and traditional barcodes. WingBonus uses the ZXing core to
generate a QR code that contains all the information necessary to redeem the
voucher. Furthermore, to facilitate the integration of the application in the
market, although this is strongly discouraged, WingBonus mobile also supports
traditional barcodes such as EAN-13.
References
1. Chen, J.W., Luo, D.S., Hsieh, C.C.: An Institutional Analysis on the Vicissitudes of a
Micro-Payment Platform. In: 2008 IEEE Asia-Pacific Services Computing Conference,
vol. 1-3, pp. 975–980 (2008)
2. Groupon, http://www.groupon.com (access date: June 2012)
3. Al-Khalifa, H.S.: Utilizing QR Code and Mobile Phones for Blinds and Visually Impaired
People. In: Miesenberger, K., Klaus, J., Zagler, W.L., Karshmer, A.I. (eds.) ICCHP 2008.
LNCS, vol. 5105, pp. 1065–1069. Springer, Heidelberg (2008)
4. FiveWapps SL., WingBonus, http://www.wingbonus.es/ (access date: June 2012)
5. Backbone, http://backbonejs.org/ (access date: June 2012)
6. JSON, http://www.json.org/ (access date: June 2012)
7. BlowFish algorithm, http://www.design-reuse.com/articles/5922/
encrypting-data-with-the-blowfish-algorithm.html (access date:
October 2011)
8. ACR122u, http://www.acs.com.hk/index.php?pid=product&id=ACR122U
(access date: June 2012)
Extending Near Field Communications to Enable
Continuous Data Transmission in Clinical Environments
Antonio J. Jara, Pablo López, David Fernández, Benito Úbeda, Miguel A. Zamora,
and Antonio F.G. Skarmeta
Abstract. Several communication technologies make the integration of things
into the Internet feasible. NFC, Bluetooth, WiFi Low Power and 6LoWPAN are
some of the most widespread technologies enabling wireless transmission for
sensor network communication and the integration of these networks with the
Internet, in order to reach an Internet of Things (IoT). These networks allow us
to collect large amounts of information, to process and understand it, and to act
effectively according to the situation. Our research work focuses on the
integration of NFC to monitor the heart status of a patient continuously. This
work analyzes the capability of NFC to support continuous data transmission.
The main goal of this continuous data transmission is to enable a new generation
of mobile health tools that make it feasible to analyze, process and make a
preliminary diagnosis of the state of a patient's heart anytime and anywhere. For
this purpose, we present how an electrocardiogram clinical device is integrated
with an NFC-enabled mobile phone. The performance of the sensor's native
communication model is evaluated, concluding that some preprocessing is
necessary for proper real-time operation. For that reason, a preprocessing
module is also proposed, evaluated and prototyped to solve the mentioned
communication challenges for NFC in clinical environments.
1 Introduction
Currently, the Internet of Things (IoT) is still the focus of research in many fields;
this has resulted in several branches, such as Ambient Assisted Living (AAL) [5],
home automation [3] and intelligent transportation systems [4]. The flexibility
offered, in conjunction with the processing and computation capabilities of the new
devices, is allowing the design and development of services and autonomous applications to
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 109–116, 2012.
© Springer-Verlag Berlin Heidelberg 2012
110 A.J. Jara et al.
make people's lives more comfortable. For example, older people can be monitored
at home with the help of a caregiver, or may even be able to check their health status
easily with NFC.
Flexibility, ubiquity, global access capabilities and mobility support are the
features of the Future Internet. The sensors integrated in the IoT are highly
heterogeneous, ranging from sensors that transmit a discrete value every few hours
or even days to sensors with demanding requirements for continuous data
transmission. Moreover, the Future Internet is complemented by increasingly
advanced mobile computing: we can have multiple applications to make purchases,
control the home environment, check the status of our friends, and monitor our state
of health, as shown later.
Specifically, this work presents an analysis of the performance of the
communication capabilities offered by NFC, with a communication protocol defined
over the NFC Data Exchange Format (NDEF) and the NDEF Push Protocol (NPP), to
transmit data continuously from an RFID/NFC reader connected via USB (ACR 122
from ACS [7]) to a smartphone with NFC support (Google Nexus S from Samsung).
We present the real-time communication capabilities of NFC technologies and the
requirements of the continuous transmission of data from an electrocardiogram.
Comparing the capabilities and requirements, it is concluded that a pre-processing
technique is required. For that reason, a proposal is also presented to make these
communications feasible through a pre-processing and data aggregation module.
Finally, this work also presents a comparison and evaluation of sending raw versus
pre-processed ECG data.
Clinical sensors present native communication protocols, which provide anything
from raw data to data formatted following a standard such as Health Level 7 (HL7) or
IEEE 1073 (X73). All of them require pre-processing from their original protocol into
NDEF records in order to make data transmission via NFC feasible. These
pre-processing and adaptation tasks allow complex data analysis for anomaly
detection, data compression and the application of security techniques. For example,
a Cyclic Redundancy Code (CRC) can be included for integrity, digests and digital
signatures for authentication, and encryption for protection [6].
The sensor considered is an electrocardiogram (ECG). Specifically, the ECG module chosen is the EG 01000 from Medlab (see Fig. 2). It provides a continuous data channel through a serial interface, transmitting the wave trace known in cardiology as V2. The original protocol has a sampling rate of 300 samples per second (Hz) and a high-resolution mode with an accuracy of 150 values per mV.
Thus, let ω be the sampling frequency, with a value of 300 Hz, and let β be the beats per minute (bpm). In total, χ bytes are required for each pulse, equal to 236 bytes for the case of 76 bpm, following Equation 1. Fig. 1 shows how many bytes are transmitted according to the bpm.

60·ω / β = χ (1)
In addition, it is important to determine how much time each byte represents, in order to calculate the relevant medical intervals, which can be used for a pre-diagnosis analysis. The time per byte is determined following Equation 2.

1 / (300 bytes/s) = 3.3 ms/byte (2)
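As an illustration, Equations 1 and 2 can be checked with a short script; the function name is ours, and the constants come from the EG 01000 protocol described above:

```python
SAMPLING_HZ = 300  # samples (one byte each) per second from the EG 01000

def frame_bytes(bpm: float) -> int:
    """Eq. 1: chi = 60 * omega / beta, bytes transmitted per heartbeat."""
    return int(60 * SAMPLING_HZ / bpm)

# Eq. 2: each byte of the trace covers 1/300 s, i.e. about 3.3 ms.
MS_PER_BYTE = 1000 / SAMPLING_HZ

print(frame_bytes(76))        # -> 236 bytes for a heartbeat at 76 bpm
print(round(MS_PER_BYTE, 1))  # -> 3.3
```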
112 A.J. Jara et al.
Fig. 1. Size of the frame according to the Beats Per Minute (BPM)
The YOAPY module is based on detecting the maximum and minimum values in each wave of the PQRST complex. In addition, the most descriptive segments are processed and considered, which allows us to transport and redraw the PQRST complex. The segments are differentiated in Fig. 3.
The YOAPY format contains five maximum/minimum values, six segments, the heart beats per minute, and one byte describing anomalies detected by a simple analysis of the length of some segments; in total, 13 bytes ordered as shown in Table 2. The five maximum/minimum values describe the difference between the beginning of the wave and the maximum/minimum, e.g., (P - init_P) = 137 - 127 = 10. The six segments indicate the length of the segments in bytes, from which the medical intervals are obtained by adding some of these segments and multiplying their length by the byte time. For a better understanding of the medical intervals, see our previous work [10]. BPM represents the beats per minute. The Diagnostic Byte indicates some diagnostics through its bits.
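A minimal sketch of this 13-byte frame can be written as follows. The field names are taken from Table 2; the exact byte packing (ten signed bytes followed by three unsigned bytes for S_TP, BPM and the diagnostic byte) is our assumption:

```python
import struct
from collections import namedtuple

# Field names follow Table 2; the packing layout is an assumption, since the
# paper gives the field order and signedness but not a byte-level format.
Fields = namedtuple("Fields",
                    "P Q R S T S_P S_PQ S_QS S_ST S_T S_TP BPM DIAG")
YOAPY_FMT = "10b3B"  # 10 signed bytes + 3 unsigned bytes = 13 bytes

def pack_frame(f: Fields) -> bytes:
    return struct.pack(YOAPY_FMT, *f)

def unpack_frame(raw: bytes) -> Fields:
    return Fields._make(struct.unpack(YOAPY_FMT, raw))

# Illustrative values only; S_TP and BPM may exceed 127, hence unsigned.
frame = Fields(P=10, Q=-5, R=90, S=-20, T=15,
               S_P=8, S_PQ=12, S_QS=6, S_ST=20, S_T=25,
               S_TP=140, BPM=76, DIAG=0)
assert len(pack_frame(frame)) == 13
assert unpack_frame(pack_frame(frame)) == frame
```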
Extending Near Field Communications to Enable Continuous Data Transmission
Fig. 3. Trace representation of the pre-processed ECG, with the wave trace of the reference ECG curve. In the upper left, the reference wave is presented. The points are P: green, Q: yellow, R: pink, S: blue, and T: dark blue.
All values except the TP segment (S_TP) and BPM are represented by signed integers because their values never overflow the limit of 127.
Table 2. Pre-processed format (real values)

Byte:   0  1  2  3  4  5    6     7     8     9    10    11   12
Field:  P  Q  R  S  T  S_P  S_PQ  S_QS  S_ST  S_T  S_TP  BPM  DIAG
6 Performance Evaluation
An evaluation has been carried out to determine the time and delay for pushing NDEF messages for continuous monitoring from the USB RFID/NFC reader to the smart phone. A version of the solution based on the full wave trace transmission, i.e., the 250-336 bytes per heartbeat from the RAW mode, has been compared against the YOAPY pre-processed mode, which sends only 13 bytes per heartbeat.
It has been found that sending the 250-336 bytes received in RAW mode from the ECG requires 2 to 3 frames, each of them a partition of one complete raw trace.
All comparisons shown below have been made in "T=0" mode at 106 kb/s (the mode allowed by low-cost devices). Twenty samples were taken in order to obtain an average that better approximates the delivery time, which is used below to perform certain calculations. The average transmission times measured are 2372.25 ms (about 2 seconds) for RAW mode and 22.5 ms (0.02 seconds) for the solution
¹ DEMO video of the monitoring system: http://www.clitech.eu/ECG_continuous.mp4
based on the YOAPY mode. The high delivery time in RAW mode is due to the need to reconnect the session in order to send an NDEF message, which causes a significant delay because two packets must be sent to recover the session.
Fig. 4. ECG Wave reconstruction from Yoapy fields and Gauss function
In conclusion, the RAW mode transmission produces a delay incompatible with real-time, continuous monitoring of vital signs, since it requires more than 2 seconds to deliver a sample that is obtained in less than 1 second (76 bpm means a heartbeat every 0.79 seconds). For example, when the patient is monitored for 1 hour, the sample displayed corresponds to 40 minutes earlier. These values are obtained from the following equations (3 and 4):

3600 s / 2.37225 s per frame = 1517.54 frames (3)
0.79 s per heartbeat * 1517.54 = 1198.85 s (4)
That is, with 2.37225 seconds being the average time to send a complete frame in RAW mode, in the 3600 seconds of one hour the number of frames sent for plotting equals 1518. Thereby, frame 1518 corresponds to the pulse generated at second 1198.85 (minute 19.98), the heartbeat time being 0.79 seconds at 76 bpm. An example of this accumulative delay is shown below in Fig. 6 and 7.
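The accumulated delay of Equations 3 and 4 can be reproduced with a short calculation; the function is ours, and the transmission times are the measured averages reported above:

```python
def lag_after(monitor_s: float, tx_s: float, beat_s: float = 0.79) -> float:
    """Seconds between real time and the beat currently displayed."""
    frames_sent = monitor_s / tx_s              # Eq. 3
    displayed_beat_time = beat_s * frames_sent  # Eq. 4
    return monitor_s - displayed_beat_time

raw_lag = lag_after(3600, 2.37225)   # RAW mode: 2372.25 ms per frame
print(round(raw_lag / 60))           # -> 40 (minutes behind after one hour)
print(lag_after(3600, 0.0225) < 0)   # YOAPY (22.5 ms) keeps up: True
```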
Therefore, the use of RAW mode is not feasible, since it produces an accumulative delay. However, the use of YOAPY mode and its compression allows a short delay, around 0.02 seconds, which is well under the threshold of 0.79 seconds. This delay can be recovered, if necessary, by sending 2 or more frames (up to 9) in a single NDEF message.
YOAPY needs two ECG frames, the current and the previous one. Thus, 15 ms are spent on average to initialize a heartbeat when there is no previous heartbeat (i.e., the program has just started, or frames were erroneous due to patient movement), and 0.1 ms during normal operation, since the previous heartbeat pattern is available.
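The recovery strategy above, grouping up to 9 pending 13-byte frames into a single NDEF message, can be sketched as follows; the helper and constants are illustrative, not part of NPP:

```python
MAX_FRAMES_PER_MESSAGE = 9   # limit mentioned above
FRAME_LEN = 13               # one pre-processed YOAPY frame

def batch_frames(pending: list) -> list:
    """Group pending frames into payloads of at most 9 frames each."""
    return [b"".join(pending[i:i + MAX_FRAMES_PER_MESSAGE])
            for i in range(0, len(pending), MAX_FRAMES_PER_MESSAGE)]

pending = [bytes(FRAME_LEN)] * 20    # 20 heartbeats waiting after a stall
sizes = [len(p) // FRAME_LEN for p in batch_frames(pending)]
print(sizes)  # -> [9, 9, 2]
```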
Fig. 6. Delay produced between generated and delivered frame in RAW mode
Fig. 7. Delay introduced (generated + delivery) according to the frame (heartbeat)
Regarding the small delay produced by the pre-processing module (YOAPY), it makes real-time transmission feasible and additionally enables extra analysis for diagnosis, which can be helpful for caregivers and nursing homes.
7 Discussion
The evaluation has been carried out via NFC with the NPP library from Android OS. However, NPP is not suitable for continuous data transmission, since the bandwidth requirement is over 127 bytes per second, while around 2 seconds are needed to transfer a complete heartbeat of approximately 250 bytes. This payload limitation, together with the short duration of the NPP connection, makes NFC not viable for sending data in large quantities and continuously, compared to other technologies such as Bluetooth 2.1. However, it can be interesting for sending a few fragments of data asynchronously, and above all it is highly intuitive for interaction with devices and sensors. There is another hardware problem, because low-cost devices do not support payload extension. In any case, YOAPY still offers benefits even once these problems are solved.
8 Conclusions and Future Work

This work presents the integration of a clinical device with continuous data transmission requirements into NFC technology. It has been concluded that the direct transmission of the data collected from an electrocardiogram is not feasible, because of the delay introduced by the transmission of multiple NDEF messages.
Acknowledgments. This work has been made possible by the Excellence Researching Group Program (04552/GERM/06) from Foundation Seneca, the FPU program (AP2009-3981) from the Spanish Ministry of Education and Science, and funds from the IoT6 European Project (STREP) of the 7th Framework Programme (Grant 288445).
References
[1] Martínez, C.S., Jara, A.J., Skarmeta, A.F.G.: Real-Time Monitoring System for Water-
course Improvement and Flood Forecast. In: Lee, G., Howard, D., Ślęzak, D. (eds.)
ICHIT 2011. CCIS, vol. 206, pp. 311–319. Springer, Heidelberg (2011)
[2] Atzori, L., Iera, A., Morabito, G.: The Internet of Things: A survey. Computer
Networks 54(15), 2787–2805 (2010)
[3] Zamora, M.A., Santa, J., Skarmeta, A.F.G.: Integral and networked home automation
solution towards indoor ambient intelligence. In: Pervasive Computing (2010)
[4] Castro, M., Jara, A.J., Skarmeta, A.F.G.: Extending Terrestrial Logistics Solutions Using New-age Wireless Communications based on SunSPOT. In: V International Symposium on Ubiquitous Computing and Ambient Intelligence, UCAmI 2011 (2011)
[5] Istepanian, R.S.H., Jara, A.J., Sungoor, A., Philips, N.: Internet of Things for M-health
Applications (IoMT). In: AMA IEEE Medical Tech. Conference on Individualized
Healthcare, Washington (2010)
[6] Jara, A.J., Marin, L., Zamora, M.A., Skarmeta, A.F.G.: Evaluation of 6LoWPAN Capabil-
ities for Secure Integration of Sensors for Continuous Vital Monitoring. In: V Internation-
al Symposium on Ubiquitous Computing and Ambient Intelligence, UCAmI 2011 (2011)
[7] ACS ACR122 NFC Reader Specification (2011), http://acs.com.hk/drivers/
eng/API_ACR122U.pdf
[8] NFC Forum, Innovision, Near Field Communication in the real world - Turning the NFC
promise into profitable, every day application, Near Field Communication in the
real world-Using the right NFC tag type for the right NFC application, and Logical Link
Control Protocol (2011)
[9] NXP Forum, PN532 transmission module for contactless communication at 13.56
MHz used in ACR122, http://www.nxp.com/documents/user_manual/
141520.pdf, and PN532 Application Note, http://www.adafruit.com/
datasheets/PN532C106_Application%20Note_v1.2.pdf (2011)
[10] Jara, A.J., Blaya, F.J., Zamora, M.A., Skarmeta, A.: An ontology and rule-based intelligent information system to detect and predict myocardial diseases. In: IEEE Inf. Tech. App. in Biomedicine, ITAB (2009)
[11] Istepanian, R.S.H., Petrosian, A.A.: Optimal zonal wavelet-based ECG data compression for a mobile telecardiology system. IEEE Trans. Infor. Tech. in Biomedicine 4(3), 200–211 (2000)
[12] Jara, A.J., Zamora, M.A., Skarmeta, A.: An Internet of Things – based personal device for
diabetes therapy management in AAL. Personal & Ubiquitous Computing 15(4), 431–440
(2011)
Tailoring User Visibility and Privacy in a Context-Aware
Mobile Social Network
Keywords: Mobile social networks, instant messaging, NFC, address phone book.
1 Introduction
Lately, interest in mobile applications and web portals oriented to the exchange of messages and talks among users has experienced a great increase, resulting in the development of a large number of applications oriented to social networks and instant messaging. In fact, a social network is a new way of sharing content between people located anywhere in the world. Besides, the rise of mobile technologies, with new smart devices able to support these applications, allows users to share information and experiences anywhere and anytime.
Users can use social network applications like Facebook¹, Twitter² and so on with their smart phones anywhere through 3G/WiFi technology, exchanging any comment or lived situation, images, files, etc. with selected contacts, or simply discussing other users' comments. In other applications, like Whatsapp³, these wireless technologies are oriented to the communication between users, establishing conversations between pairs or groups of users stored in the user's address book.
Thus, applications such as Smart Contacts4, Contapps5 or Youlu address book6,
available for the most common mobile operating systems, allow users to manage their
* Corresponding author.
¹ Facebook url: http://www.facebook.com
² Twitter url: http://twitter.com
³ Whatsapp url: http://www.whatsapp.com
⁴ Smart Contacts url: https://play.google.com/store/apps/details?id=com.bondaii&hl=es
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 117–124, 2012.
© Springer-Verlag Berlin Heidelberg 2012
118 F.M. Borrego-Jaraba et al.
address book, already synchronized in the cloud, to display and search the most frequent contacts and to link each contact with social networks, so they can publish contents, create notifications, chat with users or groups, etc.
Nowadays, WhatsApp Messenger is the most used application for message exchange. It is a free and multiplatform application that allows users to manage contacts, groups and conversations, and to send messages including text, images, video and audio. This application uses instant messaging (push/pull), avoiding the loss of information even when the application is not running.
Although these systems allow a certain degree of tailoring, the customization is restricted to the management of the contacts from the phone address book and to personalizing the user's status, displaying some characteristics or information about the user.
This trivial and basic personalization of user communication does not take into account some of the most important characteristics of context-aware systems, for instance:
─ In these systems, the communication between users is always user guided.
─ Contact visibility or contact status has a merely informative purpose.
─ User privacy is lost once other users have stored his/her data.
─ Context is never considered, so users receive messages from other users independently of the sender and receiver status, the type of message or the message content.
2 Related Works
Mobile social networking has been an active field of research in the past years, and a wide variety of proposed systems have been developed. Sadeh et al. [1, 2] describe
⁵ Contapps url: http://www.contapps.com
⁶ Youlu address book url: https://play.google.com/store/apps/details?id=com.youlu&hl=es
an application that enables cell phone and laptop users to selectively share their locations with others, such as friends, family and colleagues.
Ankolekar et al. [3] have designed and implemented Friendlee, a mobile social networking application for close relationships. Friendlee analyzes the user's call and messaging activity to form an intimate network of the user's closest social contacts, while providing ambient awareness of the user's social network in a compelling, yet non-intrusive manner.
Veneta [4] is a mobile social network platform able to explore the social neighbourhood of a user by detecting common friends of friends who are in the user's current proximity, taking into account the user's phone contact entries.
Similarly, WhozThat [5] is a system that ties together online social networks with mobile smart phones to answer the common and essential social question "Who's that?" WhozThat offers an entire ecosystem on which increasingly complex context-aware applications can be built; that is, once the environment knows who one is, it can adapt its content based on the individual's identity or even the collective tastes of a local group.
Ziv and Mulloth [6] describe Dodgeball, a case study of a mobile social network. Dodgeball is a New York City-based service that merges location-based services with social networks, helping users to connect with the people and places around them. Dodgeball is a mix of social networking tools, simple cell phone messaging, and mapping software.
In PeopleTones [7], users get an alert whenever a friend or buddy is in close proximity. The goal is simply to be informed about such a "nice to know" situation and to be able to contact the other user directly, enabling a kind of spontaneous interaction not possible before (e.g., having a cup of coffee together right now). As the goal is to support spontaneous activities whenever there is time for them, the alert can be unobtrusive (e.g., by phone vibration), not requiring the user to look at the phone constantly or to be disturbed by alerts or notifications.
A remarkable contribution from the academic world is the Social Serendipity pro-
ject [8]. The Serendipity system senses a social environment and cues informal inter-
actions between nearby users who don’t know each other, but probably should. The
system uses Bluetooth hardware addresses to detect and identify proximate people
and matches them from a database of user profiles.
Finally, Li et al. [9] propose FindU, the first privacy-preserving personal profile matching schemes for mobile social networks. In FindU, an initiating user can find, from a group of users, the one whose profile best matches his/her own; to limit the risk of privacy exposure, only necessary and minimal information about the private attributes of the participating users is exchanged.
3 WingContacts Architecture
The server side is responsible for the management of all information about users and the communication among them. As will be described in the next section, the mobile application does not store the full information about the user's contacts, but only the information that can be collected from the contacts depending on the role the user has defined for them.
Figure 1 shows some of the main database objects of the system. The User class stores all personal user data: information about the user's social network, identification, phone, registration, and so on. A user's personal data can therefore be hidden at any time from other users/contacts, depending on the visibility and privacy defined by each user.
The preferences defined by the user are managed by the Preferences class: communication and synchronization functionalities, phone device, etc.
Users can define a set of contact groups. By default, there is always a default group, maintained by the Groups class, which includes all the contacts defined by the user. Each group has a default user status assigned. Users can define as many statuses as needed (see the Status class). A status represents a user context, for instance, 'at work', 'sleeping', 'looking for party', etc. This personalization is improved by means of the definition of roles. The Role class allows users to define a set of pre-established roles. Therefore, for each user status, a different role can be assigned to each group, thanks to the ContextGroup class.
The relationships between groups, roles and statuses allow users to manage their visibility and privacy preferences for each contact or group. Hence, the user context is personalized in real time.
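The relationships described above can be sketched with a few data classes. The class names (Status, Role, Group, ContextGroup) follow the paper's data model; the attributes and the projection logic are our assumptions for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass(frozen=True)
class Status:
    name: str                  # e.g. "at work", "sleeping"

@dataclass
class Role:
    name: str
    visible_fields: frozenset  # which personal fields this role may see

@dataclass
class Group:
    name: str
    contacts: list = field(default_factory=list)
    default_status: Optional[Status] = None

@dataclass
class ContextGroup:
    """Binds a (status, group) pair to the role its members get."""
    status: Status
    group: Group
    role: Role

def visible_info(user_data: dict, bindings: list,
                 status: Status, group: Group) -> dict:
    """Project the user's personal data through the role active for this context."""
    for b in bindings:
        if b.status == status and b.group is group:
            return {k: v for k, v in user_data.items()
                    if k in b.role.visible_fields}
    return {}  # no binding for this context: nothing is visible

at_work = Status("at work")
colleagues = Group("colleagues", contacts=["Bea", "Carl"])
minimal = Role("minimal", frozenset({"name"}))
bindings = [ContextGroup(at_work, colleagues, minimal)]
print(visible_info({"name": "Ana", "phone": "555-0100"},
                   bindings, at_work, colleagues))  # -> {'name': 'Ana'}
```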
Fig. 2. Preference management used for tailoring user’s visibility and privacy
In the synchronization process with the server, updated information about contacts is sent back to the users, updating their phone address books so that the contacts' information is modified with the newly established values. Therefore, the current context of a contact is set by the currently defined status and the role assigned for other users, determining the visibility and privacy of the user for any other system user.
Users must synchronize with the server side before executing some functionalities of the mobile application, for instance, looking for contacts. Furthermore, when synchronization is not required for some mobile functionality (for instance, sending a message), the information stored in the server database prevails over that existing in the mobile database.
Figure 3 shows some snapshots of the mobile application. The main menu (Fig. 3a) gathers the main functionalities: access to the address book, chats, definition of statuses and roles and their assignment to contacts and contact groups, as well as contact search.
Fig. 3b shows the functionality related to contacts' management. The contacts' information stored in the user's phone address book is always dynamic: it changes when a contact's status changes, thereby modifying the visibility and privacy of the contact. Users can define contact groups (Fig. 3c) and assign a defined status to each group (Fig. 3d). Although the system includes many predefined statuses and roles, the user can define as many statuses as desired or modify the existing ones. Fig. 3e and Fig. 3f show the user-friendly interface developed for these functionalities. The search functionality allows users to view information about known (contacts) and unknown users of the system on a map. Users can select search criteria based on user status, resulting in a map that contains the user locations.
The information shown on the map (see Fig. 3g) about the users depends on the users'/contacts' current status and, therefore, on the current privacy and visibility preferences.
Through the map, users can establish chats with known and unknown users using the contacts list or the chat functionality, provided the status of the receiver allows it. Fig. 3h shows a snapshot of a chat, which is similar to other well-known
mobile instant messaging applications. The main difference with other applications is that the management of the chat depends on the visibility and privacy settings defined by the status of both the receiver and the sender.
Hence, a user can select a set of contacts as receivers of a chat, and: a) different receivers can receive different personal information about the sender, depending on the role defined by the sender for the receiver or for the group assigned to the receiver; b) different receivers can manage the sender's information in different ways, for instance, deciding whether or not to store it in the phone address book; c) receivers can use the list of users in the conversation in very different ways, depending on the status defined for each user in the list; and so on.
The application uses geo-located augmented objects to automatically update the user's context information. Thus, when a user touches a tag, for instance, at the university library, a predefined status "at the library" is set, updating the visibility and privacy of the user for the other users, as defined in the roles assigned to that status.
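The tag-triggered status update can be sketched as a simple lookup; the tag identifiers and status names here are invented for illustration:

```python
# Maps a touched tag to the predefined status it activates (hypothetical IDs).
TAG_TO_STATUS = {
    "tag:library": "at the library",
    "tag:cafeteria": "having coffee",
}

def on_tag_touch(tag_id: str, user: dict) -> dict:
    """Return the user with the status bound to the touched tag, if any."""
    status = TAG_TO_STATUS.get(tag_id)
    if status is None:
        return user                     # unknown tag: context unchanged
    return {**user, "status": status}   # roles bound to this status now apply

print(on_tag_touch("tag:library", {"name": "Ana", "status": "idle"})["status"])
# -> at the library
```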
Managing user visibility and privacy in instant messaging or mobile social network applications is a problem not yet solved by the current, most used applications. These applications have not taken into account that users need to personalize their private information depending on the current context, that is, depending on aspects related to the scenario, current location, contact characteristics, activity, mood and wishes, information transmitted in the chat, etc.
In this paper we have described a system proposal, called WingContacts, oriented to the context-aware management of the user's information in instant messaging applications. The proposal is based on the definition of user statuses and roles as preferences assigned to the user's contacts or contact groups. Visibility and user privacy are specified by the different roles defined. The user statuses defined for other known or unknown users are updated with the context, changing the information about a user that can be known by other users. Contact management information is stored in a virtual phone address book. Thus, this information is updated when the status defined by the contact is also updated.
Currently, WingContacts is an operating prototype whose deployment is under analysis.
References
1. Sadeh, N., Hong, J., Cranor, L., Fette, I., Kellet, P., Prabaker, M., Rao, J.: Understanding
and Capturing People’s Privacy Policies in a Mobile Social Networking Application.
Personal and Ubiquitous Computing 13, 401–412 (2009)
2. Consolvo, S., Smith, I.E., Matthews, T., LaMarca, A., Tabert, J., Powledge, P.: Location
Disclosure to Social Relations. Why, When and What People Want to Share. In: Proceed-
ings of Conference on Human Factors in Computing Systems (Chi 2005), pp. 82–90. ACM
Press (2005)
3. Ankolekar, A., Szabo, G., Luon, Y., Huberman, B.A., Wilkinson, D., Wu, F.: Friendlee: A Mobile Application for your Social Life. In: Proceedings of the 11th International Conference on Human Computer Interaction with Mobile Devices and Services (MobileHCI 2009), pp. 27:1–27:4. ACM Press (2009)
4. Von Arb, M., Bader, M., Kuhn, M., Wattenhofer, R.: VENETA. Serverless Friend-of-Friend
Detection in Mobile Social Networking. In: Proceedings of 4th IEEE International Confer-
ence on Wireless and Mobile Computing Networking and Communications (WiMob 2008),
pp. 184–189. ACM Press (2008)
5. Beach, A., Garthell, M., Akkala, S., et al.: Whozthat? Evolving an Ecosystem for Context-
Aware Mobile Social Networks. IEEE Network 22, 50–55 (2008)
6. Ziv, N.D., Mulloth, B.: An Exploration on Mobile Social Networking: Dodgeball as a Case in Point. In: Proceedings of the International Conference on Mobile Business (ICMB 2006), p. 21. ACM Press (2006)
7. Li, K.A., Sohn, T.Y., Huang, S., Griswold, W.G.: PeopleTones: A System for the Detection and Notification of Buddy Proximity on Mobile Phones. In: Proceedings of the 6th International Conference on Mobile Systems Applications and Services (MobiSys 2008), pp. 160–173. ACM Press (2008)
8. Eagle, N., Pentland, A.: Social Serendipity, Mobilizing Social Software. IEEE Pervasive
Computing 4, 28–34 (2005)
9. Li, M., Cao, N., Yu, S., Lou, W.: FindU: Privacy-Preserving Personal Profile Matching in Mobile Social Networks. In: 30th IEEE International Conference on Computer Communications (INFOCOM 2011), pp. 2435–2443. IEEE Computer Society (2011)
RFID and NFC in Hospital Environments:
Reaching a Sustainable Approach
1 Introduction
Nowadays, one of the greatest obstacles to ensuring patient safety in a healthcare environment is the appearance of adverse events during the care process. Various studies show that 38% of adverse events occur during the process of prescribing-validation-dispensing-administration of drugs [1].
Furthermore, patient traceability represents a monitoring process at the hospital, enabling information acquisition about real-time situations as well as the routes followed during the stay at the hospital. For example, this allows measuring the time that the patient waits for a diagnosis or for a test, or simply the time spent waiting. Traceability can also be applied to the identification of patients, to the dosage units of medication, and to the binomial patient/drug prescribed.
This paper presents the application of RFID¹ technology for obtaining the traceability of patients and medicines in the Emergency and Pharmacy departments. Besides, we also propose a more sustainable complement to the RFID system by using NFC² technology on patients, nurses and drugs for identification and control of medication administration, preserving the main functionalities and keeping the same user interaction effort.
¹ http://www.rfid.org/
² http://www.nfc-forum.org/home/
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 125–128, 2012.
© Springer-Verlag Berlin Heidelberg 2012
126 M. Martínez et al.
2 Related Work
– Patient tracking. Knowing with great accuracy the location of the patient in the Emergency Department, and analysing the course of a patient from entry until leaving the hospital, allows us to increase the efficiency of the service. In this case, the patient has an active RFID WiFi tag on a wrist bracelet, and the RFID locating engine calculates its position with an accuracy of between 1 and 4 meters. An advantage of the selected architecture is that it is possible to use the WiFi network available in the hospital, and the manufacturer's software components enable the configuration of hardware devices, among other features.
Fig. 1. Medication administration through: (a) Handheld with RFID capabilities, (b)
NFC-enabled mobile phone
Features compared between the RFID and NFC deployments:

Development resources: NFC capabilities have a reduced manufacturing cost; in fact, mobile phones can integrate NFC technology. NFC application development depends on the operating system of the mobile device and does not depend on specific hardware according to the manufacturer's specifications, as with RFID.

Infrastructure: Deployment of the RFID platform needs more architectural elements, such as antennas, reader/writer devices, passive and active tags, and the related software components. NFC uses only the mobile phone, passive tags and the software application.

Cost: The general cost of the RFID system is higher than with NFC technology because of the infrastructure components. With NFC, we need a mobile device with an NFC reader (e.g., an NFC mobile phone) and passive tags.

User-Device Interaction: In both cases, the system needs a simple interaction between the user device and the tag to deploy services. Otherwise, deployment of the patient location service depends on the RFID signal, without requiring an explicit interaction. The user interacts in a natural way by "tag touching" actions.
References
1. Aranaz Andrés, J.M., Aibar Remón, C., Vitaller Burillo, J., Ruiz López, P.: National Study on Adverse Events linked to Hospitalisation: ENEAS 2005. Ministry of Health, Madrid (2011), consulted on December 17,
http://www.seguridaddelpaciente.es/contenidos/castellano/2006/ENEAS.pdf
2. Darianian, M., Michael, M.P.: A low power pervasive RFID identification system for
medication safety in hospital or home Telecare. In: Proceedings on 3rd International
Symposium on Wireless Pervasive Computing (ISWPC 2008), pp. 143–146 (2008)
3. Benelli, G., Pozzebon, A.: Near Field Communication and health: Turning a mobile phone into an interactive multipurpose assistant in health scenarios. CCIS, vol. 52, pp. 356–368 (2010)
4. Marcus, G., Law, D., Verma, N., Fletcher, R., Khan, A., Sarmenta, L.: Using NFC-Enabled Mobile Phones for Public Health in Developing Countries. In: First International Workshop on NFC, Hagenberg, Austria (2009)
5. Fontecha, J., Hervás, R., Bravo, J., Villarreal, V.: An NFC Approach for Nursing
Care Training. In: Third International Workshop on Near Field Communication
(IEEE), Hagenberg, Austria, pp. 38–43 (2011)
6. Lahtela, A., Hassinen, M., Jylhä, V.: RFID and NFC in healthcare: Safety of hospitals medication care. In: PervasiveHealth, pp. 241–244 (2008)
Delay-Tolerant Positioning for Location-Based Logging
with Mobile Devices
1 Introduction
Sensing is critically important for Ambient Intelligence, which relies to a large extent
on the ability to perceive the physical environment and the activities taking place in it.
Mobile phones, with their already substantial data capture and connectivity capabili-
ties, have a unique potential to become powerful sensing devices for uncovering new
knowledge about the realities of our world and the patterns of Human behaviour [1].
In particular, considering their widespread use and their continuous presence in people's lives, they represent a major resource for location-based data collection. For example, to study mobility patterns within cities, there is a need to collect traces of users moving across the city in their daily life. In Experience Sampling studies [2, 3], there is a need to register, either implicitly or as part of an explicit user action, events as they occur in people's daily lives and annotate them with location information that will normally be crucial for their interpretation.
Location-based logging is fundamentally shaped by the need to combine frequent device positioning with the consequences that the process can have for users. These processes normally involve recruiting people to run the data collection applications on their own mobile phones as part of their normal daily activities. This is crucial for generating realistic data and enabling larger-scale studies. However, if the data collection implies significant energy, communication or privacy costs for users, it will
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 129–136, 2012.
© Springer-Verlag Berlin Heidelberg 2012
130 A. Coelho, F. Meneses, and R. José
become a severe obstacle to large-scale use and volunteer recruitment. Therefore,
our location logging processes follow two important design principles. The first is
to avoid the use of GPS. An alternative solution would be needed in any case to
cover indoor locations, but the key issue is that continuous use of GPS would have
a very high cost in terms of power consumption [6]. The second is to avoid
depending on connectivity. In part this also saves energy, but since many people do
not have a flat-rate data plan, they may not accept the potential costs associated
with data communications. Independence from connectivity also allows positioning
to proceed without waiting for a network connection to become available.
To comply with these principles, we introduce the concept of delay-tolerant posi-
tioning. In most location-based services, location is part of an interactive feature
and thus needs to be immediately available. In contrast, in location-based logging,
location information is needed only to annotate an event. It is therefore possible to
store just the information needed to determine location and leave the actual location
calculation to some later point in time. Our data collection application stores on
the device the radio and Wi-Fi data that the location API uses to determine location;
when a connection becomes available, a batch of GSM and Wi-Fi information is
sent to a server, which then uses that information to calculate the positions. This
approach makes no use of GPS and works well with only occasional connectivity.
For location-based logging applications, this means that frequent positioning records
can be generated without forcing the device owner to incur significant power or
network costs.
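In outline, the scheme reduces to queuing raw observations on the device and resolving them into positions server-side. The following sketch illustrates the idea; the record fields, the queue, and the upload hook are illustrative assumptions rather than the paper's actual implementation:

```python
import json
import time
from collections import deque

class DelayTolerantLogger:
    """Stores raw GSM/Wi-Fi observations locally; the actual position
    calculation is deferred to a server, once connectivity exists."""

    def __init__(self):
        self.pending = deque()  # records awaiting upload

    def log_event(self, gsm_cells, wifi_scan, annotation=None):
        # Capture only what is needed to compute a position later:
        # no GPS fix, no immediate network call.
        self.pending.append({
            "timestamp": time.time(),
            "gsm": gsm_cells,    # e.g. [(mcc, mnc, lac, cid, rssi), ...]
            "wifi": wifi_scan,   # e.g. [(bssid, rssi), ...]
            "annotation": annotation,
        })

    def flush(self, upload):
        # When a connection becomes available, ship the whole batch;
        # the server turns the raw observations into positions.
        batch = list(self.pending)
        if batch and upload(json.dumps(batch)):
            self.pending.clear()
        return len(batch)

# Usage: log while offline, upload opportunistically.
logger = DelayTolerantLogger()
logger.log_event([(268, 1, 4021, 77, -71)], [("aa:bb:cc:dd:ee:ff", -60)], "event A")
sent = logger.flush(upload=lambda payload: True)  # stand-in for a real upload
```

Deferring the position calculation this way is what decouples logging frequency from both energy and data-plan costs.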
In this paper, we assess the viability of this delay-tolerant approach for location-
based logging. In particular, we aim to compare its level of accuracy with what can
be obtained directly on the device, and to study the effect of other variables on that
accuracy, such as the time spent at the location before the location estimation and
the nature of the Wi-Fi landscape at the point where the location was estimated.
After reviewing related work in the next section, we describe in section 3 the essen-
tial steps of our research methodology. In section 4 we present the main results
regarding each of the proposed objectives of the study and, finally, in section 5 we
provide our concluding remarks.
2 Related Work
Yoshida et al. detail the creation and deployment of a localization system based on
Wi-Fi fingerprinting [7]. The paper focuses on the effect and efficiency of the method
used for acquisition of fingerprint data and the influence that it could have on the
accuracy of the localization results produced. Even though this work has different
objectives, it has informed us about the need to devise a data collection protocol that
could accommodate for the variation in the network landscape.
Zandbergen [8] describes a study to estimate the accuracy of positioning techniques,
using a similar methodology. A mobile phone (iPhone 3G) was used to collect
location data (A-GPS, Wi-Fi and cellular positioning) at several distinct metropolitan
locations and to test the accuracy of each of the iPhone's positioning methods against
a benchmark location (ground truth). We aimed to test the accuracy of a delay-tolerant
location calculation (with Wi-Fi and GSM data), against the location data provided by
the device on-site, and also in relation to ground truth.
PlaceLab [9] is a positioning system that allows users to locally (on their device)
calculate a location, based on the BSSIDs and signal levels of nearby Wi-Fi access
points. This location is calculated by crossing the data gathered in real-time by the
device with information stored locally on a database, to which the user previously
subscribed. While this may address some of the connectivity issues we have identi-
fied, it would have important disadvantages in terms of deployment, given the need to
install additional client-side software and a database.
Herecast [10] allows users to determine their symbolic location, e.g. building floor.
Information about locations is kept in a database that is maintained and accessed by
the community. While this is an alternative for cases in which information only needs
to be generated occasionally and in familiar locations, it is not suitable for frequent
location logging.
BikeNet [11] supports the collection of data related to performance, environmental
and health aspects of cycling, and provides an example of a sensing system that dem-
onstrates the effectiveness of the overall approach of relaying sensing data back to a
server to address specific requirements of the sensing process.
position is returned. In our application, we disabled the GPS provider and used only
data from the network provider. When a position was to be estimated, the procedure
involved the following steps: 1) obtain the data generated by the network provider;
2) determine the location using the LocationManager API; 3) generate a position record to
be stored on the mobile device and uploaded to the server when appropriate. Each
record comprises the following information:
d = 2r \arcsin\left( \sqrt{ \sin^2\left(\frac{\phi_2 - \phi_1}{2}\right) + \cos\phi_1 \cos\phi_2 \sin^2\left(\frac{\psi_2 - \psi_1}{2}\right) } \right)    (1)

In formula (1), d is the distance and r is the radius (the radius of the Earth in our
particular case); \phi_1, \phi_2 and \psi_1, \psi_2 are, respectively, the latitudes and
longitudes of the two relevant points. We solve the haversine formula for d and
obtain the distance. Afterwards,
having performed the distance calculations, we chose a vertex from each square, from
which we would travel the previously determined distance along a fixed bearing,
according to the following formulas for latitude and longitude:
\phi_2 = \arcsin\left( \sin\phi_1 \cos(d/r) + \cos\phi_1 \sin(d/r) \cos\theta \right)    (2)

\psi_2 = \psi_1 + \operatorname{atan2}\left( \sin\theta \sin(d/r) \cos\phi_1,\; \cos(d/r) - \sin\phi_1 \sin\phi_2 \right)    (3)
In formulas (2) and (3), d and r are again the distance and radius (Earth's radius)
respectively, \theta is the bearing (in radians, clockwise from north, i.e., North = 0,
East = 90, South = 180 and West = 270 degrees, converted to radians), and the rest
of the variables are self-explanatory. In formula (3), atan2(Y, X) is a variant of the
arctangent function that returns the arctangent of Y/X in the range −π to π.
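The haversine and destination-point calculations above follow the standard great-circle formulas and can be implemented directly; the sketch below uses a mean Earth radius:

```python
import math

R_EARTH = 6371000.0  # mean Earth radius in metres

def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance d between two points, formula (1)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dpsi = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dpsi / 2) ** 2)
    return 2 * R_EARTH * math.asin(math.sqrt(a))

def destination_point(lat1, lon1, d, bearing_deg):
    """Point reached after travelling distance d along a fixed bearing
    (clockwise from north), formulas (2) and (3)."""
    phi1, psi1 = math.radians(lat1), math.radians(lon1)
    theta = math.radians(bearing_deg)
    delta = d / R_EARTH  # angular distance
    phi2 = math.asin(math.sin(phi1) * math.cos(delta)
                     + math.cos(phi1) * math.sin(delta) * math.cos(theta))
    psi2 = psi1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(phi1),
                             math.cos(delta) - math.sin(phi1) * math.sin(phi2))
    return math.degrees(phi2), math.degrees(psi2)

# Sanity check: one degree of longitude along the equator, travelled due east.
d = haversine_distance(0.0, 0.0, 0.0, 1.0)
lat2, lon2 = destination_point(0.0, 0.0, d, 90.0)
```

Travelling the computed distance along the corresponding bearing recovers the target point, which is the round-trip property the vertex construction relies on.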
4 Results
Using our data collection application, we generated position records at the 11 chosen
locations during a three-week period, with three weekly observations at each of the
1 http://www.google.com/loc/json
reference locations, two times a day. This resulted in a total of 54 readings per location,
or 594 readings in total. These were processed at the server at different moments,
generating a total of 2816 delay-tolerant positions.
To assess the accuracy of the delay-tolerant positions, we have used two different
types of reference data: the estimated error (reported) given by the Google location
API at the server and at the device; and our own estimation of that error (real) based
on the ground truth positioning data calculated for each of the locations. Having
determined all the ground truth points, we then calculated for each location the
distance from this reference position to each of the position records, including the
delay-tolerant positions.
Fig. 1. Real-time (RT) vs delay-tolerant (DT) error results in reading 1 (left – all locations
(AL), right – outdoor locations (OL) and bottom – indoor locations (IL))
The main conclusion from Fig. 1 is that, when considering the error in relation to
the ground truth, there are no observed differences between the location determined in
real-time at the device and the location determined later at the server. This is the most
fundamental observation of this study in the sense that it backs up our initial hypothe-
ses regarding the viability of delay-tolerant positioning. It is also relevant to note that
despite the differences in accuracy, the similarity between real-time and delay-tolerant
locations existed for both indoor and outdoor environments. We can observe, how-
ever, that the error estimated by the Google Location API is shown to be lower in
real-time when compared to the error reported by the delay-tolerant estimation proc-
ess, for all cases but one. We have no explanation for this behaviour of the Google
API, but since the effective accuracy is not affected, it should not constitute any sort
of problem for location data logging.
Fig. 2. Left – Real-time (RT) error results for all locations (AL) in readings 1, 2 and 3. Right –
Delay-tolerant (DT) error results for all locations (AL) in readings 1, 2, and 3.
Fig. 2 reveals the errors obtained for all locations in all three readings, for the real-
time and the delay-tolerant processes. For all scenarios the first reading is always the
best, with the third reading coming a close second and the second reading always
being the worst. In regard to the time when the location was estimated on the server,
we have not observed any meaningful effect.
The main conclusion of this work is that a delay-tolerant positioning process is a per-
fectly viable alternative approach for our usage scenario, especially considering that
in terms of real error (in relation to ground truth) both methods achieve the same per-
formance. This is a very important contribution to inform the design of any sort of
location-based logging tools in which, as in our case, the position information is not
needed at the moment of logging. The second conclusion is that time spent at the
target location does not improve the accuracy of the positioning process. Moreover,
there seems to be no gain whatsoever in staying more than 2 minutes at a given loca-
tion. Together these two observations suggest that this process will perform well for
the generation of location traces in high mobility scenarios. The only observed differ-
ence is in the estimated error that is associated with the location estimations, but this
should not have any impact for most application domains.
A limitation of this study is our lack of knowledge about the internals of the
Google location API. There are no public details about how it uses the radio and Wi-
Fi information to calculate position, and whatever the current approach might be, it
may suddenly change without any prior announcement. Such changes could possibly
affect the results obtained in this study and lead to potentially different conclusions.
Also, the internals of specific devices and particularly their support for capturing ra-
dio and Wi-Fi information may also vary and lead to potentially different results in
specific types of mobile phones. As such, one possible future direction of research
would be to conduct the experiment with several different devices and analyse the
results obtained to get a grasp on what sort of variability can be expected from differ-
ent devices in terms of positioning error. Another line of research would be to under-
stand the maximum validity of the network information stored by the mobile device
and sent later to the server. As network information associated with a position
evolves, observations made in the past will eventually become unsuitable for an ade-
quate location determination. Understanding the timescale in which this effect may
become relevant would help in defining upload policies for location logging tools.
References
1. Reichenbacher, T.: Geographic relevance in mobile services. In: Proceedings of the 2nd
International Workshop on Location and the Web - LOCWEB 2009, pp. 1–4. ACM Press,
New York (2009)
2. Hektner, J.M., Schmidt, J.A., Csikszentmihalyi, M.: Experience sampling method:
Measuring the quality of everyday life. Sage Publications, Inc. (2006)
3. Consolvo, S., Walker, M.: Using the experience sampling method to evaluate ubicomp
applications. IEEE Pervasive Computing 2, 24–31 (2003)
4. Google: obtaining-user-location @ developer.android.com, http://developer.
android.com/guide/topics/location/obtaining-user-location.html
5. Google: LocationManager @ developer.android.com, http://developer.
android.com/reference/android/location/LocationManager.html
6. Kjærgaard, M.B., Langdal, J., Godsk, T., Toftkjær, T.: EnTracked: energy-efficient robust
position tracking for mobile devices. In: Proceedings of the 7th International Conference
on Mobile Systems, Applications, and Services - Mobisys 2009, p. 221. ACM Press, New
York (2009)
7. Yoshida, H., Ito, S., Kawaguchi, N.: Evaluation of pre-acquisition methods for position
estimation system using wireless LAN. In: Third International Conference on Mobile
Computing and Ubiquitous Networking (ICMU 2006), pp. 148–155 (2006)
8. Zandbergen, P.A.: Accuracy of iPhone locations: A comparison of assisted GPS, WiFi and
cellular positioning. Transactions in GIS 13, 5–25 (2009)
9. Schilit, B.N., LaMarca, A., Borriello, G., Griswold, W.G., McDonald, D., Lazowska, E.,
Balachandran, A., Hong, J., Iverson, V.: Challenge: Ubiquitous Location-Aware Compu-
ting and the “Place Lab” Initiative. In: Proceedings of the 1st ACM International Work-
shop on Wireless Mobile Applications and Services on WLAN Hotspots - WMASH 2003,
p. 29. ACM Press, New York (2003)
10. Paciga, M., Lutfiyya, H.: Herecast: An open infrastructure for location-based services
using WiFi. In: IEEE International Conference on Wireless And Mobile Computing,
Networking And Communications, WiMob 2005, pp. 21–28. IEEE (2005)
11. Eisenman, S.B., Miluzzo, E., Lane, N.D., Peterson, R.A., Ahn, G.-S., Campbell, A.T.: The
BikeNet mobile sensing system for cyclist experience mapping. In: Proceedings of the 5th
International Conference on Embedded Networked Sensor systems - SenSys 2007, p. 87.
ACM Press, New York (2007)
12. Robusto, C.C.: The cosine-haversine formula. The American Mathematical Monthly 64,
38–40 (1957)
A Friendly Navigation-System Based on Points
of Interest, Augmented Reality and Context-Awareness
Abstract. This paper presents a system to supply spatial orientation and support
for daily activities from a friendly and understandable perspective. We propose
a model based on points of interest or well-known places, generating friendly
routes to a destination based on user context instead of classical street names
and quantitative distances. Moreover, the system offers augmented reality views
that include contextual information. This philosophy of navigation is closer to
the needs of users in their usual environments as well as in unknown places
(e.g. tourism, business trips, etc.). The proposal is aimed at all kinds of users,
but it is especially useful for those who are not accustomed to using new
technologies, people with disorientation problems and, in general, people with
some slight cognitive deficit. The system also includes mechanisms that allow
relatives or friends to know the user's contextual situation, supporting the
social requirements that are desirable in today's applications.
1 Introduction
Nowadays, technological support is commonly required in people's daily activities;
every day, people use software systems to receive reminders about their tasks or
events, and software for getting around the places they need to go, in their usual
environment as well as in unknown places. In this sense, scientific areas such as
Ubiquitous Computing and Ambient Intelligence promote the use of new technologies
focused on the specific situation of users to support them on their needs, improving
the performance of any daily activity. The use of mobile technology has become a
reference because it can be adapted to the user's environment with a minimum impact,
in a non-intrusive way. However, classical navigation systems are not designed to be
friendly. In particular, thinking about how people typically get guidance indications, it
is friendlier to follow instructions based on well-known places (buildings, business,
monuments, etc.) instead of instructions based on street names and absolute distances
(like traditional navigators). These issues are more critical in elderly people or those
with some slight cognitive deficit. For this reason, and taking into account that they
may find it difficult to use a mobile device, technological solutions must focus on
usability, intuitive interaction and easy learning. Additionally, classical navigation
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 137–144, 2012.
© Springer-Verlag Berlin Heidelberg 2012
138 J. Mª Luna et al.
systems are not aware of user preferences, environment and activities. In general,
applications tend to forget an important concept: the user context.
We propose a system based on “well-known points” to suggest different routes ac-
cording to the user context, moving through these points to a given destination. The
information can be displayed on the screen through an augmented reality view, show-
ing the indications of the chosen path along the way, and also relevant contextual
information related to the user, e.g. relatives' locations, tasks to perform, etc.
The system is not only a solution to support activities and guide the user, but also a
social system for close relatives and friends, who can check the user's activity, learn
about situations where he/she gets lost, establish additional well-known points or
include events or activities at specific locations.
This paper is organized into 6 sections. Section 2 presents the related work. Section
3 shows the system and the proposed model. Section 4 describes a particular
implementation based on Android mobile platform. Finally, Section 5 analyzes the
conclusions of this work.
2 Related Work
Evolution of technology promotes the creation of intelligent environments and the
adaptation of devices and communication networks to solve many problems in our
lives. Today, these solutions must be transparent to users and distributed in the envi-
ronment, an idea known as Ubiquitous Computing. Particularly, there are many appli-
cations in the market that can display relevant information about the environment
where the user is, using augmented reality or other kinds of user interfaces. However,
mobile assistive systems remain an area to explore, because most of these applications
do not have personalization mechanisms or intuitive interfaces, and only use a small
portion of the user context.
A few recent publications have discussed these topics in depth. In this sense, Telemaco [1]
is a context-aware system based on Web 3.0 technology that matches user
social profiles with semantic-marked information on the Web to obtain a personalized
route in touristic trips. A similar approach is described in [2] using semantic rule en-
gines. Another related proposal is MUGGES [3], a mobile platform to enable services
from mobile phones, integrating the system outputs with heterogeneous location
methods. In [4], the authors studied a mechanism to incorporate user preferences into vehicu-
lar navigation systems through a fuzzy genetic approach.
Focusing on guidance for people with special needs, there are also interesting pro-
posals. In [5], the authors propose a general and adaptive model to transform physical
information from objects in an augmented reality view to guide elderly people in their
daily activities.
3 System Overview
This section is focused on the system functionality, describing the main elements of
the context model, the algorithms and the designed metrics to get the best route for a
given destination based on well-known places or points of interest.
• Point of interest (POI) or well-known point: a known place in the user's environ-
ment that can be used as an orientation reference. POIs can be defined and included
in the system by the user or by their relatives and friends.
• Task: an action that the user has to do at a specific place on the map. It can be defined
by the user him/herself or by another person such as a relative or friend. It is also
possible to associate multimedia resources with a task.
• Contact: a person (typically a friend or relative) with a determined physical location
on the map. This person can share information with the user.
In Figure 1, these contextual items are specified. They are represented on the map
and in the augmented reality view by characteristic icons. Each of these entities
has a specific physical location consisting of geographic coordinates of latitude,
longitude and altitude.
calculated paths will be 2^n). For each route, a quantitative value reflecting the suit-
ability of the path is obtained.
• The proximity of a known environment (e.g. the user's home and workplace, among
others). If the user is far away from these usual environments, it is easier to get lost.
• The number of identified points corresponding to known user locations in which
the user can be easily guided. More points means an easier route.
• The distance to the destination point. A large distance to the destination point
means more possibilities of getting lost.
According to the previous assumptions, we will obtain a value to determine the best
route. For this, we consider the path to follow, how far the user is from his known
environment and the total number of well-known points along the path. The obtained
value (equation 1) is in the range [0, 1], where 0 is the worst value and 1 the best:
(1)
With:
Since what may be considered near or far away is imprecise and depends on each
user, mechanisms to handle this problem are needed. We propose the use of fuzzy
sets to represent this knowledge. The system takes into account the previously
described assumptions related to the α factor and, consequently, the following
linguistic variables are used in our fuzzy rules: (a) Route Distance [RD]; (b) Home
Distance [HD], related to the closest familiar environment; and (c) Importance of
visiting well-known points [α].
Every linguistic variable has associated borderline values that define its fuzzy sets.
There are three values: small (S), normal (N) and big (B). The system offers mechanisms
to configure these variables, adapting the system to specific user features, prefe-
rences or physical and cognitive situation. For example, a user could consider a dis-
tance of 500 meters very large due to some physical impairment, while other users
may consider the same distance short. Fuzzy sets are well suited to these subjective
characteristics and they provide the needed flexibility. Besides, the bound values of
the fuzzy sets can be adapted to each user through the application’s personal configu-
ration. The fuzzy sets defined for each linguistic variable are described in Figure 3.
Fig. 3. Fuzzy sets defined to route distance (a), home distance (b) and known points (c)
Through the definition of the fuzzy sets and taking into account the main assump-
tions it is possible to define the following fuzzy inference rules:
The rules are composed using the Mamdani [6] method as the reasoning mechanism,
which is used in many automatic control problems. Firstly, the system calculates the
membership weights (see Equation 2). Secondly, each partial conclusion is built
through the aggregation method, and the final conclusion is obtained (see Equation 3).
Finally, a defuzzification process is applied to get the α value, calculating the
result through the center-of-gravity technique over the resulting fuzzy set.
With:
rd: route distance to be followed by the user.
hd: distance from the current location to the user's familiar environment.
μS(x): membership degree of the value x in the fuzzy set S.
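The inference pipeline just described (membership weights, aggregation, centre-of-gravity defuzzification) can be sketched as follows. The membership breakpoints and the rule base are illustrative assumptions; the paper's actual rules are not reproduced in this excerpt, and the breakpoints are configurable per user:

```python
# Membership functions for the three fuzzy values: small (S), normal (N), big (B).
def mu_small(x, a, b):       # 1 below a, falling linearly to 0 at b
    return max(0.0, min(1.0, (b - x) / (b - a)))

def mu_big(x, a, b):         # 0 below a, rising linearly to 1 at b
    return max(0.0, min(1.0, (x - a) / (b - a)))

def mu_normal(x, a, b, c):   # triangle peaking at b
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Output sets for the α factor on [0, 1].
ALPHA = {
    "S": lambda v: mu_small(v, 0.2, 0.5),
    "N": lambda v: mu_normal(v, 0.2, 0.5, 0.8),
    "B": lambda v: mu_big(v, 0.5, 0.8),
}

def infer_alpha(rd, hd):
    """Mamdani inference: rule weights by min, aggregation by max,
    defuzzification by centre of gravity over a sampled α axis."""
    rd_mu = {"S": mu_small(rd, 500, 1500), "B": mu_big(rd, 500, 1500)}
    hd_mu = {"S": mu_small(hd, 500, 2000), "B": mu_big(hd, 500, 2000)}

    # Illustrative rule base: being far from home on a long route makes
    # visiting well-known points more important (α big).
    rules = [
        (min(rd_mu["S"], hd_mu["S"]), "S"),
        (min(rd_mu["S"], hd_mu["B"]), "N"),
        (min(rd_mu["B"], hd_mu["S"]), "N"),
        (min(rd_mu["B"], hd_mu["B"]), "B"),
    ]
    xs = [i / 100 for i in range(101)]
    agg = [max(min(w, ALPHA[s](x)) for w, s in rules) for x in xs]
    total = sum(agg)
    return sum(x * m for x, m in zip(xs, agg)) / total if total else 0.5

alpha_near = infer_alpha(rd=200, hd=100)    # short route, near home
alpha_far = infer_alpha(rd=3000, hd=5000)   # long route, far from home
```

The clipped-consequent aggregation and centroid step are the standard Mamdani construction; only the breakpoints and rules would change per user.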
the angle between the user location and the entity location. We have applied the
WGS84 model [7] to get the initial bearing, measured in degrees, between two different
locations, which is used to draw the object in the augmented reality view based on
azimuth (the angular difference between the horizontal orientation of the device and
the entity location) and pitch (the angular difference in elevation above sea level
between the user and the entity). Equation 4 shows the metric applied to obtain the
entity position on the screen based on the display size and the azimuth-pitch values.
(4).
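The content of Equation 4 is not reproduced in this excerpt. One common way to map azimuth and pitch differences to screen coordinates is a linear projection scaled by the camera's field of view; the function name and field-of-view values below are assumptions, not the paper's actual metric:

```python
def screen_position(azimuth_diff, pitch_diff, width, height,
                    h_fov=60.0, v_fov=45.0):
    """Map the angular differences (in degrees) between the device's
    orientation and an entity's bearing/elevation to pixel coordinates.
    Entities outside the field of view fall off-screen."""
    x = width / 2 + (azimuth_diff / h_fov) * width
    y = height / 2 - (pitch_diff / v_fov) * height
    return x, y

# An entity dead ahead at the same elevation appears at the screen centre;
# one 30 degrees to the right (with a 60-degree FOV) reaches the right edge.
cx, cy = screen_position(0.0, 0.0, 800, 480)
edge_x, _ = screen_position(30.0, 0.0, 800, 480)
```

A real implementation would read the field-of-view values from the device camera parameters rather than hard-coding them.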
4 Implementation
Fig. 4. Destination Selection (a), Best route (b), and Monitoring Screen (c)
obtaining the route based on points of interest (b), as shown in Sections 3.2
and 3.3.
The user can interact with the augmented reality view by touching an entity. In that
case, the description of the area is shown, making it possible to open the multimedia
resources related to that entity's task or location.
In this paper we have described a mobile assistive system to guide and support daily
activities. We have proposed a generic model for obtaining routes or paths based on
contextual information, particularly users' tasks, social relationships, points of interest
and well-known places. Thus, it is more useful to offer indications based on contex-
tual information than using street names and absolute distances. Additionally, the
proposed system includes an augmented reality mode allowing the user to know the
position of each relevant element or entity in the real environment. Through the well-
known points or reference points, the system generates the route to be followed by the
user to a specific destination. We use a fuzzy logic approach to adapt the best-route
calculation to the user's specific characteristics or preferences. Also, the system allows
relatives or friends to know the user situation and tasks, increasing the social interac-
tion between users and enabling monitoring tasks for people with special needs.
A future step of this work is to formalize contextual information using Semantic
Web principles based on the COIVA architecture [8] and to automatically generate
POIs based on user context, as we previously explored in [9].
References
1. Martín-Serrano, D., Hervás, R., Bravo, J.: Telemaco: Context-aware System for Tourism
Guiding based on Web 3.0 Technology. In: 1st Workshop on Contextual Computing and
Ambient Intelligence in Tourism, Riviera Maya, Mexico (2011) ISBN: 978-84-694-9677-0
2. López-De-Ipiña, D., Klein, B., Pérez, J., Guggenmos, C., Gil, G.: User-Aware Semantic Lo-
cation Models for Service Provision. In: International Symposium on Ubiquitous Computing
and Ambient Intelligence, Riviera Maya, Mexico (2011) ISBN: 978-84-694-9677-0
3. Lamsfus, C., Martín, D., Alzua-Sorzabal, A., López-de-Ipiña, D., Torres-Manzanera, E.:
Context-Based Tourism Information Filtering with a Semantic Rule Engine. J. Sen-
sors 12(5), 5273–5289 (2012), doi:10.3390/s120505273
4. Chakraborty, B., Ching Chen, R.: Fuzzy-genetic approach for incorporation of driver’s re-
quirement for route selection in a car navigation system. In: IEEE International Conference
on Fuzzy Systems, Jeju Island, Korea (2009)
5. Hervás, R., Garcia-Lillo, A., Bravo, J.: Mobile Augmented Reality Based on the Semantic
Web Applied to Ambient Assisted Living. In: Bravo, J., Hervás, R., Villarreal, V. (eds.)
IWAAL 2011. LNCS, vol. 6693, pp. 17–24. Springer, Heidelberg (2011)
6. Mamdani, E.H.: Applications of fuzzy algorithms for control of simple dynamic plant. Pro-
ceedings of the Institution of Electrical Engineers 121(12), 1585–1588 (1974)
7. Kaplan, E.D.: Understanding GPS Principles and Applications. Artech House Mobile Com-
munications (1996)
8. Hervás, R., Bravo, J.: COIVA: Context-aware and Ontology-powered Information Visuali-
zation Architecture. Software Pract. Exper. J. 41(4), 403–426 (2011)
9. Hervas, R., Bravo, J.: Towards the ubiquitous visualization: Adaptive user-interfaces based
on the Semantic Web. Interact. Comput. J. 23(1), 40–56 (2011)
Evaluating a Crowdsourced System
Development Model for Ambient Intelligence
1 Introduction
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 145–152, 2012.
c Springer-Verlag Berlin Heidelberg 2012
146 A. Santos, H. Rodrigues, and R. José
range of alternatives on how to design and organise such software has been
proposed and experimented with. Multiple application types have been prototyped in
various settings, e.g. at home, at work, in shopping environments, or in hospitals.
Despite this significant effort and significant advances on areas such as wireless
communications, personal devices, global computing, sensors technology, com-
putation and storage power, we have not yet reached this vision [4, 12]. There
seem to be two prevailing problems that cut across existing approaches to sys-
tem support for Ambient Intelligence. The first one is concerned with the exact
definition of the appropriate type of system support to be offered to applica-
tions. Without well-established applications and reference scenarios, it is very
difficult to identify and prioritise system requirements for system support for
Ambient Intelligence. Without a rich and operational infrastructure it is very
hard to create an integrated environment where meaningful applications may
emerge [12, 5].
The second problem is concerned with the inherent challenges posed by eval-
uating systems that are designed to be seamlessly integrated into our everyday
lives [5]. Human activity is very dynamic and subtle, and most physical environ-
ments are also highly dynamic and support a vast range of social practices that
do not map directly into any immediate service needs. In those cases, identifying
what is valuable to people is very hard and obviously leads to great uncertainty
regarding the type of support needed and the type of resources needed to create
such support [12]. Moreover, traditional evaluation techniques such as labora-
tory studies allow researchers to study specific aspects of a system, but are not
satisfactory for evaluating the use of technology in real contexts over time [5]. This
is also pointed out by [6], for whom the real problem is not identifying research
issues or presenting solutions, but testing those solutions outside the laboratory
environment.
In this paper we address the issues of system development and evaluation in
an integrated manner. Firstly, we have adopted a crowdsourced system software
development model [10], introduced in [12] and briefly described in section 2. With
this model we want to provide an open and flexible infrastructure for system sup-
port for Ambient Intelligence that is based on a balanced combination of
global services and situated devices: global services provide functionality that
can be relevant anywhere, thus obviating the need to create dedicated services
on a case-by-case basis; situated devices, such as displays, networks, and mobile
phones, provide context and enable meaningful links between global services and
the physical environment.
Secondly, we have built two Ambient Intelligence environments in real contexts
using the proposed infrastructure. All the main components of the infrastructure,
both local and global, have been reused in both settings. Adopting a crowd-
sourced system software development approach promotes the right setting for
the continuous evolution of the proposed environments and the integration of con-
stantly changing requirements, thereby also promoting an alternative path
for system evaluation. This approach mainly promotes sustainability, flexibility
and incremental development, which we advocate as important ingredients
Evaluating Ambient Intelligence Deployments 147
for promoting the right context for large-scale deployments and system evaluation in real contexts, and for decreasing the cost of developing and deploying Ambient Intelligence systems.
In particular, the development process of the provided case studies and the subsequent observation of users' activity have allowed us to identify an important set of system design dimensions that remain challenges for system support for Ambient Intelligence.
Anywhere Places is a web platform that coordinates, under the unifying concept of activities [2, 7], interaction from local resources and application logic from applications. The concept of activity mainly focuses on defining the execution context for applications and interactions. This resembles the traditional operating-system concept of a process, which defines the execution context of an application. An activity generates content, such as activity sessions, resources and active applications. Additionally, we also include common content types found in classical Ambient Intelligence environments, such as documents, photos, messages, presences, location and interaction information, obtained through a wide set of resources with different characteristics and providing different stimuli. This focus on data provides the main path towards interoperability between the functionality offered by multiple applications.
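The activity-centric data model described above can be sketched as a simple container of typed content items. The class and field names below are illustrative assumptions, not the platform's actual API:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any

# Hypothetical content types mirroring those listed in the text.
CONTENT_TYPES = {"document", "photo", "message", "presence", "location"}

@dataclass
class ContentItem:
    kind: str      # one of CONTENT_TYPES
    payload: Any   # the data itself (text, image URL, device address, ...)
    source: str    # resource that produced it (e.g. "bt-scanner-1")
    created: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class Activity:
    """Execution context for applications and interactions."""
    name: str
    resources: list[str] = field(default_factory=list)     # situated devices
    applications: list[str] = field(default_factory=list)  # subscribed apps
    content: list[ContentItem] = field(default_factory=list)

    def publish(self, item: ContentItem) -> None:
        if item.kind not in CONTENT_TYPES:
            raise ValueError(f"unknown content type: {item.kind}")
        self.content.append(item)

    def feed(self, kind: str) -> list[ContentItem]:
        """Data-centric view: any subscribed application reads the same feed."""
        return [c for c in self.content if c.kind == kind]

visit = Activity("campus-visit", resources=["display-1", "bt-scanner-1"])
visit.publish(ContentItem("presence", "00:11:22:33:44:55", "bt-scanner-1"))
print(len(visit.feed("presence")))  # 1
```

Because applications interoperate through this shared data space rather than through each other's interfaces, any application subscribed to the activity can consume content produced by any other.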
Figure 1 provides an overview of the Anywhere Places platform. This architecture supports the adoption of a crowdsourced system software development model, which brings together system architects, physical resource owners, application developers, activity creators and end-users into an open collaboration that is able to generate new added value for all the parties involved. These actors operate on the different constructs of the model, explained below: Kernel services and Peripheral services.
Kernel Services: Kernel services, instantiated in the Service and Content layers, are responsible for managing all the information about activities and the associated applications, and they are the element that glues the other elements into an integrated execution. Kernel services handle the sensing and interaction information associated with activities and enable the development of situated applications based on that data space. Situation creators, or Pervasive Computing authors, can attach an open-ended set of applications to activities, enabling a broad range of activity-centric content to be generated and exchanged as part of the usage of that activity. The requirements for Kernel services are not so much related to end-user-perceived functions; they are normally determined by the system architects who propose the system and its abstractions.
Peripheral Services: Peripheral services deliver the majority of end-user value. In Ambient Intelligence, examples may include sending an SMS to the environment, being detected by a Bluetooth scanner, and visualising the context
148 A. Santos, H. Rodrigues, and R. José
3 Evaluation Deployments
In this paper, our objective is to report our findings regarding the system support offered for building Ambient Intelligence environments and applications, as explained in Section 1. The evaluation of system support for Ambient Intelligence is very difficult, both because there are no globally accepted metrics and because it is very hard to evaluate large-scale, long-term Ambient Intelligence environments in real contexts. We therefore focus on building a few evaluation deployments and report the main challenges we have faced in integrating the underlying Ambient Intelligence computing infrastructure with physical spaces, mapping it to patterns of situated interaction in everyday life. To support this study, we have created two Anywhere Places-based activity deployments, one located at our University and the other at five middle schools in our region.
Activity 1: Middle schools. The first activity ran at the five middle schools for a period of one month. The objective was to welcome students and promote a set of collaborative tasks which would inform students about different events happening at the schools. This activity spanned five different physical spaces, each equipped with three physical resources: a public display, a Bluetooth OBEX service and a Bluetooth discovery service. Additionally, we also set up a virtual resource in the form of an SMS gateway. Students and academic staff were free to interact explicitly with the system, either by sending messages through the SMS gateway or the Bluetooth OBEX service, or by sending photos through the Bluetooth OBEX service. Users with Bluetooth-enabled devices could also interact with the system implicitly, as they were detected by the Bluetooth discovery service and registered as presences in the activity.
We also built a set of applications which could run on the public display or on personal mobile devices, and subscribed them to this activity. Specifically, we built an application that reported on the users associated with the activity (and detected by the Bluetooth discovery service), an application that showed the photos and messages shared in the context of the activity, and an application that built a collaborative story from specific messages also shared in the context of the activity.
During the one-month period, we registered 1107 entities (mobile phones), around 160 thousand implicit interactions through Bluetooth service discovery, and 70 explicit interactions corresponding to the sending of messages and photos through the Bluetooth OBEX service and the SMS gateway service.
Activity 2: University Campus visit. The second activity ran at the University of Minho for two days, in the context of organised visits by students attending middle schools in our region. The technological setting was very similar to the one described in Activity 1. Here, the objective was to promote the sharing of information about University departments among students. As in the first deployment, students interacted with the system explicitly, by sending messages and photos, and implicitly, by being "discoverable" by the Bluetooth discovery services.
The activity organisers associated with this activity the photo, message and presence display applications, which were available for use across different activities. Moreover, the visit organisers intended to provide the students with a historical view of their visit, at the end of the visit and in the following days, for which purpose we developed an application able to provide a summary of the students' visits. The process of subscribing applications to activities gives applications access to users' interaction events and activity properties. This also enables applications to generate content that is specific to that particular activity. Associating the public display with this activity also makes application content available on the situated display. Applications can be created by any third-party developer and published for use across activities.
During the two-day period, we registered 197 entities (mobile phones), around 30 thousand implicit interactions through Bluetooth service discovery, and 81 explicit interactions corresponding to the sending of messages and photos through the Bluetooth OBEX service and the SMS gateway service.
4 Analysis
In this section we report our main findings concerning system support for Am-
bient Intelligence. We have organised our analysis in three main dimensions:
physical integration, the relation between infrastructure and applications and
user association with the environment.
Infrastructure vs. Applications
Defining the exact type of system support to provide is very difficult in Ambient Intelligence, as human activity is too dynamic and subtle to be captured in the infrastructure of any physical space. The application model we have suggested reduces the distance between resources and applications and also strongly blurs the distinction between applications and infrastructure. Instead of defining contracts for resource usage, as in PCOM [3], we define an activity, and consequently the application execution context, in terms of people, resources and interaction data [12]. The application model provides a flexible mechanism for bringing into an activity the most appropriate combination of functionality, possibly without having to create any specific applications.
We have developed a set of applications based on consuming content produced in the context of an activity, and have successfully reused a set of applications across different activities. However, we need a more advanced application framework, addressing the concepts, protocols, application development interfaces and tools needed to automate and control the life cycle, sharing, and execution of applications. Such a framework is the basis for third-party developers to experiment and contribute applications to our infrastructure.
5 Conclusion
In this paper we have addressed the problem of designing and evaluating Ambient Intelligence systems in an integrated manner. First, we adopted a crowdsourced system software development model; second, we built two Ambient Intelligence environments in real contexts. Our system software development model offers a balanced trade-off between development effort and the variety of everyday-life Ambient Intelligence environments we can provide. Building on our development tasks and deployment observations, we presented a brief analysis of system support for Ambient Intelligence and some open challenges. We expect that this type of system software development model will enable many new types of support for everyday-life situations to be built and evaluated, and will contribute to evaluating the user acceptance of Ambient Intelligence environments and applications over time.
References
1. Ballesteros, F.J., Soriano, E., Guardiola, G., Leal, K.: The Plan B OS for ubiquitous computing. Voice control, security, and terminals as case studies. Pervasive and Mobile Computing 2(4), 472–488 (2006)
2. Bardram, J.E., Christensen, H.B.: Pervasive computing support for hospitals: An overview of the activity-based computing project. IEEE Pervasive Computing 6(1), 44–51 (2007)
3. Becker, C., Handte, M., Schiele, G., Rothermel, K.: PCOM - a component system for pervasive computing. In: Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications, PerCom 2004, pp. 67–76 (March 2004)
4. Cáceres, R., Friday, A.: Ubicomp systems at 20: Progress, opportunities, and challenges. IEEE Pervasive Computing 11(1), 14–21 (2012)
1 Introduction
Ambient Intelligence (AmI) is a research area that has attracted a lot of effort from the scientific community in recent years [1]. Its aim is to create environments in which users are able to interact naturally and transparently with systems that help them carry out their daily leisure and work activities [2]. Nowadays, AmI environments are heterogeneous and complex, as they contain different devices interconnected through multiple networking technologies. To ease user interaction in these environments, this paper proposes using Octopus to provide a homogeneous and easily extensible distributed computing environment over the Internet. In addition, we use the Mayordomo system as the software layer that uses Octopus to provide multimodal access to appliances in an Ambient Intelligence environment. The paper is organised as follows: Sections 2 and 3 discuss the Mayordomo and Octopus systems. Section 4 briefly describes the ongoing interconnection of both systems. Finally, Section 5 presents the conclusions and possibilities for future work.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 153–160, 2012.
© Springer-Verlag Berlin Heidelberg 2012
154 N. Ábalos et al.
To find the room element, the system searches the recognised sentence for the names of the existing rooms in the house. The system is also able to detect when the user refers to all the rooms in the environment.
To search for the appliance element, the system proceeds in a similar way, looking for the names of the existing appliances in the home. Moreover, since the user can refer to an appliance in an abbreviated way (e.g., "music" instead of "piped music"), the system also searches for fragments of appliance names in the recognised sentence.
Mayordomo proceeds in a similar way to find the attribute element, that is, it tries to find in the recognised sentence each attribute corresponding to the appliances in the house. If an attribute is found, the system then checks whether an appliance has been mentioned. If none has, the system assumes that the user has omitted it and looks for appliances which have that attribute. If just one appliance is found, the action is performed on that appliance. However, if the attribute is associated with several appliances, the system prompts the user for the target appliance. For example, the volume attribute can refer to the television or to the piped music.
Finally, the system searches for the value element following the same procedure, that is, it tries to find in the recognised sentence the names of possible values associated with attributes. For example, since some attributes are numeric (in the range 0–10), the system tries to find numbers in that range within the recognised sentence.
We have observed that users often utter sentences that omit attributes and only pronounce values associated with those attributes. For example, in the sentence "Switch on the lights", the attribute is "State" but it does not appear explicitly. To analyze such utterances, the system looks for verbs which represent omitted values and/or attributes; in the previous example, "Switch on" is related to "State".
To determine whether the user is asking a question or ordering an action to be carried out on an appliance, the system analyzes the recognised sentence. If it finds any word beginning with "wh-" or any present-tense conjugation of the verb "to be", it is assumed that the user has asked a question. More specifically, if the word is "what" or "which", it is assumed that the user is asking for the value of an attribute of a particular appliance in a particular room. If the word is "where", it is assumed that the user is requesting information about the places where appliances are located.
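The slot-filling procedure described above can be sketched as follows. The vocabulary tables, function names and the simplified question test (first token only) are illustrative assumptions, not Mayordomo's actual implementation:

```python
# Hypothetical vocabulary tables; a real deployment would load these
# from the environment and appliance configuration files.
ROOMS = {"kitchen", "living room", "bedroom"}
APPLIANCES = {"light", "television", "piped music"}
ATTRIBUTES = {"light": ["state"],
              "television": ["state", "volume", "channel"],
              "piped music": ["state", "volume"]}
# Verbs that stand in for an omitted attribute and its value.
VERB_MAP = {"switch on": ("state", "on"), "switch off": ("state", "off")}

def understand(sentence: str) -> dict:
    s = sentence.lower()
    first = s.split()[0]
    slots = {"room": None, "appliance": None, "attribute": None, "value": None,
             # Simplification: only the first token is tested for wh-/to-be.
             "is_question": first.startswith("wh") or first in ("is", "are")}
    slots["room"] = next((r for r in ROOMS if r in s), None)
    slots["appliance"] = next((a for a in APPLIANCES if a in s), None)
    # Attribute: scan every attribute of every appliance in the house.
    for attrs in ATTRIBUTES.values():
        for attr in attrs:
            if attr in s:
                slots["attribute"] = attr
    # Verbs that imply an omitted attribute/value pair.
    for verb, (attr, val) in VERB_MAP.items():
        if verb in s:
            slots["attribute"], slots["value"] = attr, val
    # Numeric values in the 0-10 range.
    for tok in s.replace(",", " ").split():
        if tok.isdigit() and 0 <= int(tok) <= 10:
            slots["value"] = int(tok)
    return slots

print(understand("Switch on the light in the kitchen"))
```

Running the example fills all four slots (room "kitchen", appliance "light", attribute "state", value "on") even though neither the attribute nor its value is named explicitly.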
To operate, Mayordomo must have information about the environment, for example, the number of rooms and the appliances installed in each one. This information is provided to the dialogue system through two kinds of configuration files.
The file for the environment (in our experiments, a house) contains a generic description of it, including the number of rooms and the name of each room. These names are recognised by the ASR and interpreted by the speech understanding process. Each room has an associated set of appliances installed in it, represented by means of identifiers handled by the dialogue system.
The configuration file for each appliance contains information about the functionality of the appliance, as well as details about its attributes or characteristics and the possible values of each attribute. For example, for the TV the system uses a configuration file that contains attributes such as volume and channel, as well as the possible values for these attributes.
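The two kinds of configuration files might, for illustration, carry information along these lines. The exact file format and field names used by Mayordomo are not specified in the text, so this structure is an assumption:

```python
# Hypothetical environment description (one file for the whole house):
# room names plus the appliance identifiers installed in each room.
environment = {
    "rooms": {
        "kitchen": ["light-1"],
        "living room": ["tv-1", "music-1"],
    }
}

# Hypothetical per-appliance descriptions (one file per appliance):
# attributes and the possible values of each attribute.
appliances = {
    "tv-1": {"name": "television",
             "attributes": {"state": ["on", "off"],
                            "volume": list(range(11)),      # numeric, 0-10
                            "channel": list(range(1, 100))}},
    "light-1": {"name": "light",
                "attributes": {"state": ["on", "off"]}},
}

def valid(appliance_id: str, attribute: str, value) -> bool:
    """Check a requested action against the appliance's declared attributes."""
    attrs = appliances[appliance_id]["attributes"]
    return attribute in attrs and value in attrs[attribute]

print(valid("tv-1", "volume", 7))    # True
print(valid("light-1", "volume", 7)) # False: lights have no volume attribute
```

Keeping the vocabulary in configuration files like these is what lets the speech understanding process recognise room and appliance names without any change to the dialogue system's code.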
Once the analysis of the sentence is finished, the dialogue manager must decide the answer to be generated by the system. In particular, it must determine whether to provide the information requested by the user or to perform a specific action on an appliance. To do this, it checks whether there is any lack of information in the recognised sentence; missing information can concern at least one of the four types of data related to the action concept mentioned above. If no data are missing and the user is requesting information, the dialogue manager calls the Provide Information module, which organises the information into sentences to be provided to the user.
If the user wants to carry out an action, the dialogue manager calls the Perform Action module, which executes the action. If there is any lack of information, the dialogue manager decides the appropriate question to generate in order to obtain the missing data from the user. Whenever an action is performed, an entry is made in the log file containing the date and time of the action.
The Perform Action module changes, if necessary, the four types of data discussed above (room, appliance, attribute and value). This change can affect just a specific room or all the rooms in the house. For example, if the user turns on the light in the kitchen, the room field is filled with the value "kitchen", the appliance field with "light", the attribute field with "state" and the value field with "on". These changes are also made in the system's memory.
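The decision logic described above can be summarised in a short sketch. The module names come from the text, while the slot representation and the stand-in functions are illustrative assumptions:

```python
def provide_information(slots: dict) -> str:
    # Stand-in for the Provide Information module: would look up the
    # current value and phrase it as a sentence for the user.
    return f"Looking up the {slots['attribute']} of the {slots['appliance']}."

def perform_action(slots: dict) -> str:
    # Stand-in for the Perform Action module; the real module also logs
    # the action with its date and time and updates the system's memory.
    return f"Setting {slots['appliance']} {slots['attribute']} to {slots['value']}."

def dialogue_manager(slots: dict) -> str:
    # Questions ask for a value, so they do not need the value slot filled;
    # actions need all four types of data.
    needed = ("room", "appliance", "attribute") if slots.get("is_question") \
        else ("room", "appliance", "attribute", "value")
    missing = [k for k in needed if slots.get(k) is None]
    if missing:
        # Generate a question to obtain the missing data from the user.
        return f"Which {missing[0]} do you mean?"
    if slots.get("is_question"):
        return provide_information(slots)
    return perform_action(slots)

print(dialogue_manager({"room": "kitchen", "appliance": "light",
                        "attribute": "state", "value": "on",
                        "is_question": False}))
```

With a fully filled action request this prints "Setting light state to on."; dropping any required slot instead yields a clarification question back to the user.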
3 Octopus
Octopus is an operating system designed to enable AmI applications and personal pervasive environments; it is fully described elsewhere [4,5,6,7,8].
Octopus permits access to the services of a heterogeneous set of devices and machines in a portable and distributed manner. The key idea behind the system is to provide a unified name space and a small set of operations for accessing resources. It achieves this by using the file abstraction: all resources are exported as files through the network, providing a unique name for each one.
Octopus can run natively or hosted on different operating systems. The resources exported by Octopus can also be accessed from the native operating system, because they are re-exported as files. We call this approach, in which resources are re-exported as low-level abstractions so that the whole operating system and its applications can benefit from them, Upperware. Any language, platform and operating system is capable of accessing files, so this approach can serve as glue for any application, service or system which needs to access any of the resources exported by Octopus.
A client imports the exported name space and interacts with it using the standard set of open, read, write and close operations. This should not be a surprise; indeed, it can be seen as an evolution from object-based systems. Their underlying idea is to focus on the data, and not on the particular algorithms. Files are just that, but with a clean and universal interface.
System calls targeted at files are translated by the runtime system into one or more request messages, which are sent to the server associated with the file. The corresponding replies are used to deliver the result of the operations to the client that invoked them. This interaction is transparent to the client, even though the server may reside on a remote machine.
The file structure seen by the client can be synthesized. In other words, the server may report names and metadata for files that are not just a set of blocks on a storage device. The file server can handle file operations and process them according to its own semantics: every file operation can be associated with (implicit) processing functionality, e.g., controlling a resource, accessing it to push/retrieve data, or performing a computation. In this way, the file server provides the illusion of talking to a traditional file system.
4 Working Together
4.1 Resource Abstraction for Mayordomo
As far as Octopus is concerned, the only requirement on the rest of the software is that it must be able to read and write files, just like any other application. If Mayordomo wants to switch off a light, it only has to know how to use write() to write the text "off" to the file representing the light switch.
To discover which sensors and actuators are present in the environment, it suffices to read a directory exported by the Octopus software. File names carry a description of the device; file contents can be written to operate actuators and read to retrieve the status of sensors and actuators.
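Under this file abstraction, controlling the environment reduces to ordinary file I/O. The sketch below assumes a hypothetical directory layout for the exported name space (e.g. a mount point such as /mnt/octopus/devices); the real names depend on the Octopus deployment:

```python
import os

def discover(root: str) -> list[str]:
    """List available sensors/actuators: one file per device,
    with the file name describing the device."""
    return sorted(os.listdir(root))

def actuate(root: str, device: str, command: str) -> None:
    """Operate an actuator by writing plain text to its file."""
    with open(os.path.join(root, device), "w") as f:
        f.write(command)

def read_status(root: str, device: str) -> str:
    """Retrieve the current status of a sensor or actuator."""
    with open(os.path.join(root, device)) as f:
        return f.read().strip()

# Example: Mayordomo switching off a light through a hypothetical mount
# point where Octopus exports the house's devices.
# actuate("/mnt/octopus/devices", "kitchen-light-switch", "off")
```

Because nothing here is Octopus-specific on the client side, the same three functions work against any file server that synthesizes its name space, which is exactly the glue property the Upperware approach aims for.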
Such interaction is so simple that it does not deserve further explanation. See
[8] for further examples and more details.
References
1. IST Advisory Group (ISTAG): Scenarios for Ambient Intelligence in 2010. European Commission (2001)
2. Amoretti, M., Wientapper, F., Furfari, F., Lenzi, S., Chessa, S.: Sensor Data Fu-
sion for Activity Monitoring in Ambient Assisted Living Environments. In: Hailes,
S., Sicari, S., Roussos, G. (eds.) S-CUBE 2009. LNICST, vol. 24, pp. 206–221.
Springer, Heidelberg (2010)
3. Ábalos, N., Espejo, G., López-Cózar, R., Callejas, Z., Griol, D.: A Multimodal
Dialogue System for an Ambient Intelligent Application in Home Environments.
In: Sojka, P., Horák, A., Kopeček, I., Pala, K. (eds.) TSD 2010. LNCS (LNAI),
vol. 6231, pp. 491–498. Springer, Heidelberg (2010)
4. Ballesteros, F.J., Soriano, E., Guardiola, G.: Octopus: An upperware based sys-
tem for building personal pervasive environments. Journal of Systems and Soft-
ware 85(7), 1637–1649 (2012)
5. Ballesteros, F.J., Guardiola, G., Soriano, E.: Upperware: Bringing resources back
to the system. In: IEEE Middleware Support for Pervasive Computing Workshop
2010, Proceedings of the PerCom 2010 Workshops (2010)
6. Ballesteros, F.J., Soriano, E., Algara, K.L., Guardiola, G.: Plan B: using files in-
stead of middleware abstractions for pervasive computing environments. IEEE Per-
vasive Computing 6(3), 58–65 (2007)
7. Ballesteros, F.J., Guardiola, G., Soriano, E.: Personal pervasive environments:
Practice and experience. Sensors 12(6), 7109–7125 (2012)
8. Ballesteros, F.J., Guardiola, G., Soriano, E., Leal, K.: Traditional systems can work well for pervasive applications. A case study: Plan 9 from Bell Labs becomes ubiquitous. In: Proceedings of the Third IEEE International Conference on Pervasive Computing and Communications, PerCom 2005, pp. 295–299. IEEE Computer Society, Washington, DC (2005)
9. Sun's Java Remote Method Invocation home page, http://java.sun.com/javase/technologies/core/basic/rmi/index.jsp
10. W3C SOAP specifications, http://www.w3.org/TR/soap/
11. Lalis, S., Savidis, A., Karypidis, A., Gutknecht, J., Stephanides, C.: Towards Dy-
namic and Cooperative Multi-device Personal Computing. In: Yuan, F., Kameas,
A.D., Mavrommati, I. (eds.) The Disappearing Computer. LNCS, vol. 4500, pp.
182–204. Springer, Heidelberg (2007)
12. Pike, R., Ritchie, D.M.: The Styx architecture for distributed systems. Bell Labs Technical Journal 4(2), 146–152 (1999)
Dandelion: Decoupled Distributed User Interfaces
in the HI3 Ambient Intelligence Platform
Gervasio Varela, Alejandro Paz-Lopez, Jose Antonio Becerra, and Richard J. Duro
1 Introduction
Ubiquitous Computing (UC) and Ambient Intelligence (AmI) systems rely heavily on the information they can extract from the environment and its users. This, in turn, makes these systems highly dependent on their capacity to use the available devices to interact with users and the environment, forcing developers of UC and AmI systems to deal with a wide variety of environments, users and devices in order to cover as many scenarios as possible. This introduces a great deal of complexity into AmI and UC systems. For example, it is complicated to predict the kinds of devices that will be available for interaction. Environmental conditions, like visibility or noise, may also change, and they may impact the operation of the interaction system. Furthermore, users are also an important source of heterogeneity: they have different abilities and different preferences, and even the number of users may vary.
During the last few years we have been working on the development of the HI3 architecture [1], which provides a common framework and runtime platform for the development of AmI and UC systems, and on UniDA [2], which provides homogeneous access to a network of heterogeneous devices. Supported by these solutions, we are building Dandelion, a framework for the development of user interfaces for AmI and UC systems. This paper presents a solution for building distributed UIs within Dandelion. It uses a model-driven approach that allows developers to build user interfaces by defining a series of high-level declarative models. These models provide abstract components which, at runtime, are transparently and dynamically connected to end devices and user interaction software.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 161–164, 2012.
© Springer-Verlag Berlin Heidelberg 2012
162 G. Varela et al.
In the next section, an overview of Dandelion and its underlying supporting software architecture is given. Section 3 presents a brief analysis of the main contributions of Dandelion. Finally, in Section 4 some conclusions are drawn.
Inside the HI3 platform, Dandelion is integrated within the application layer and makes use of the capabilities provided by the low-level HI3 layers to free developers from many of the complexities of interacting with the environment and users. It uses a model-driven approach in which a series of models are used to build, at runtime, a user interface that is appropriate for the proposed models and the devices available in each environment.
Fig. 1 shows the main components of the Dandelion system. The system is designed around a series of user interface models that describe the application domain, its interaction requirements, the user and the environment. The information available in those models is used by the User Interface Manager (UIM) to select the interaction elements (FIOs), among those available in the environment, that best suit the requirements of the application, the user and the environment. Applications are connected to the FIOs through a set of abstract interaction units (Abstract Widgets) that provide a generic view of any interaction technology. This connection, which is distributed and dynamic, is managed by the UIM, and it can be modified at runtime without impact on the application.
The application and its models are the only elements provided and known by the developers. In the current implementation, only the Abstract User Interface model (AUIm) and the domain model are used. The AUIm is implemented using the UsiXML [4] user interface definition language, which introduces the concept of an Abstract Interaction Unit (AIU) as an abstract representation of any interaction between a human and a computer. These abstract elements are realized by the Final Interaction Objects (FIOs), which do the real user interaction work of the system. They are elements capable of interacting with a user, representing physical devices, GUI components, gesture or voice recognition software, etc.
FIOs are implemented as distributed HI3 services that comply with a specific communication interface called the Generic Interaction Protocol (GIP). It is an agent communication protocol that supports the interaction actions proposed by the AUIm model of UsiXML: Input, Output, Selection, Navigation, and Control triggering.
Looking at Fig. 2 and bearing in mind this architecture, the operation of the Dandelion system can be summarized as follows:
1. The developers provide the application logic and a series of models that specify meta-information about the application. Currently two UsiXML models are supported, the Abstract User Interface model (AUIm) and the Domain model.
2. When an application is started, its models are sent to a UIM (User Interface Manager) instance. It builds an abstract UI using the Abstract Widgets toolkit, creates a new Domain Model Controller, and connects the application logic to the AUI by linking domain objects with Abstract Widgets defined in the AUIm.
3. The UIM selects, among the available interaction resources (FIOs), those that best match the Abstract Widgets used in the AUI. Currently, the system installer specifies this selection manually in an XML configuration file.
4. The selected FIOs are connected to their corresponding Abstract Widgets, which can talk remotely to the FIOs using the Generic Interaction Protocol (GIP). This protocol supports the generic interaction actions defined by the AUIm.
5. When a change is made to a domain object, the Domain Model Controller sends the change to its associated Abstract Widget, which redirects it to a FIO using the GIP. The FIO contains the concrete logic necessary to interact with the user through a hardware device or a graphical interface.
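The decoupling in the last two steps can be illustrated with a toy sketch. The names follow the text (Abstract Widget, FIO, GIP), but the interfaces themselves are invented for illustration; the real GIP is a distributed agent communication protocol, not an in-process bus:

```python
from typing import Callable

class GIP:
    """Toy stand-in for the Generic Interaction Protocol: routes
    interaction actions (output, input, ...) between endpoints."""
    def __init__(self):
        self.handlers: dict[str, Callable[[str, str], None]] = {}
    def register(self, fio_id: str, handler: Callable[[str, str], None]) -> None:
        self.handlers[fio_id] = handler
    def send(self, fio_id: str, action: str, data: str) -> None:
        self.handlers[fio_id](action, data)

class ConsoleFIO:
    """A Final Interaction Object that 'renders' output on the console.
    It holds all the technology-specific logic."""
    def __init__(self, fio_id: str, gip: GIP):
        self.rendered: list[str] = []
        gip.register(fio_id, self.on_message)
    def on_message(self, action: str, data: str) -> None:
        if action == "output":
            self.rendered.append(data)

class AbstractWidget:
    """Generic output widget: knows nothing about the final technology,
    only how to forward changes over the GIP to its bound FIO."""
    def __init__(self, gip: GIP, fio_id: str):
        self.gip, self.fio_id = gip, fio_id
    def show(self, value: str) -> None:
        # A domain-object change arrives here and is redirected to the FIO.
        self.gip.send(self.fio_id, "output", value)

gip = GIP()
fio = ConsoleFIO("console-1", gip)
widget = AbstractWidget(gip, "console-1")
widget.show("Temperature: 21C")
print(fio.rendered)  # ['Temperature: 21C']
```

Rebinding the widget to a different FIO id would change the final interaction technology without touching the application logic, which is the essence of the decoupling Dandelion pursues.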
As can be seen, the developer is completely decoupled from the final devices and technologies used for the user interface. Moreover, the user interface can be changed dynamically, deployed remotely and physically distributed in the environment.
Following the principles of the Cameleon framework, UsiXML [4] provides a user interface definition language that supports the different models proposed by that framework. It also provides methods and tools to build user interfaces by progressively transforming models, at design time, from higher levels of abstraction down to the final design of the UI.
More specifically related to ubiquitous computing is the MASP [5] project. It follows the specifications of the Cameleon framework, but it uses the models at runtime, using their information to build the UI with the locally available resources. Dandelion uses an approach similar to MASP: it follows the guidelines established by the Cameleon framework and uses the models at runtime. Nevertheless, MASP requires developers to provide a Concrete User Interface (CUI) specifying the modalities of the user interaction, while in Dandelion that selection is performed by the system or, in the current implementation, by the system administrator.
The final objective of Dandelion is to provide a UI development solution for AmI and UC systems in which the developer is abstracted from the final shape of the UI, so that the system is able to adapt the interaction to any technologies or situations.
The selection of modalities at runtime will allow for more flexible UIs, as they can be better adapted to the current context by selecting those available FIOs whose modalities are more adequate to the current users or environment state. In the current implementation, the selection of FIOs, and therefore of the modalities and the final UI, is the responsibility of the system administrator, who knows the users and the environment better than the developers and can therefore perform a better selection.
4 Conclusions
This article has presented the first stages of development of the Dandelion framework, a system that leverages the advances of model-driven approaches in order to decouple AmI and UC application logic from the user interaction system.
Models allow developers to design the user interaction decoupled from the application logic, the environment state and the users' conditions. This characteristic will allow applications to change and adapt their user interfaces dynamically and more easily.
References
1. Paz-Lopez, A., et al.: Some Issues and Extensions of JADE to Cope with Multi-agent Operation
in the Context of Ambient Intelligence. In: PAAMS, Salamanca, Spain, pp. 607–614 (2010)
2. Varela, G., et al.: UniDA: Uniform Device Access Framework for Human Interaction
Environments. Sensors 11(10), 9361–9392 (2011)
3. Balme, L., Demeure, A., Barralon, N., Calvary, G.: CAMELEON-RT: A Software Architec-
ture Reference Model for Distributed, Migratable, and Plastic User Interfaces. In: Marko-
poulos, P., Eggen, B., Aarts, E., Crowley, J.L., et al. (eds.) EUSAI 2004. LNCS, vol. 3295,
pp. 291–302. Springer, Heidelberg (2004)
4. Vanderdonckt, J., et al.: UsiXML - User interface extensible markup language. Reference
manual. Université catholique de Louvain (2007)
5. Blumendorf, M., Lehmann, G., Albayrak, S.: Bridging models and systems at runtime to
build adaptive user interfaces. In: 2nd ACM SIGCHI Symposium on Engineering Interactive
Computing Systems, EICS 2010. ACM Press, New York (2010)
Selection and Control of Applications
in Pervasive Displays
1 Introduction
Urban spaces are increasingly embedded with ubiquitous computing technologies and,
in particular, with various types of public digital displays. This is leading to the
emergence of pervasive display systems that can be described as perch/chain-sized
ecosystems for many-many interaction, composed of displays of various sizes (from
handheld devices to medium/large wall-mounted displays), and where "many people can
interact with the same public screens simultaneously" [1]. Displays in pervasive
display systems are inherently multi-purpose, and our vision is to move away from a
world of closed display networks towards a world of open display networks in which
large-scale networks of pervasive public displays and associated sensors are open to
applications and content from many sources [2].
In this type of environment, content shown on public displays should not correspond
to pre-defined schedules, as is the case with today's conventional digital signage
systems. Instead, there will be many applications running concurrently, able to react
at any moment to input from users. The display environment may thus be seen as a
mixed-initiative scenario [3] in which the system and its associated applications are
always showing content that is not only of high relevance for the current display
settings, but also content that reflects the current user interactions in that
environment.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 165–172, 2012.
© Springer-Verlag Berlin Heidelberg 2012
166 C. Taivan, R. José, and I. Elhart
Consequently, a pervasive display environment should manage the temporal and
spatial allocation of the displays between applications, as well as support selection
and control techniques that would allow people to drive the way applications are
shown and used in that environment. This should all be done considering the need to
accept, acknowledge, and resolve concurrent input from multiple users in a way that
is fair and clear for all those users.
In this study, we aim to inform the definition of novel techniques for application
selection and control in pervasive display environments that can address the above
challenges. These should enable multiple users to concurrently drive the selection of
the applications being shown and the control of their behavior. We avoid, as much as
possible, assumptions about particular application models. For example, we are not
considering whether applications run on the cloud, on the displays themselves, or any
combination in between. Similarly, we are not considering how different systems may
address the implications of multi-user concurrent interaction, only that such
concurrent interaction will exist and will expose the challenges approached in our
study.
Pervasive display environments may be seen as a mixture between traditional GUI
systems and sensing systems. Therefore we chose a methodology that is based on the
study of well-known selection and control concepts from GUI systems and on the
analysis of their applicability to the specific characteristics of pervasive displays.
After presenting related work in Section 2, we present in Section 3 a study of concepts in traditional GUI systems. In Section 4, we analyze the case of GUI concepts for public displays. The paper ends with conclusions in Section 5.
2 Related Work
Researchers in the area of public displays have largely investigated how to appropriate
displays by creating rich interactive applications and services [4][5][6]. Displays
as multi-user and shared resources have also been explored, mainly from two
perspectives: time-based queuing [7] and explicit [8] or implicit space partitioning
[9]. Morales-Aranda et al. [10] describe a display prototype as an implicit
space-partitioning system that can dynamically adapt content layouts, allowing two
users to visualize their personal information. Vogel et al. [9] provide insights on
how public ambient displays can be either individually or collaboratively shared.
This functionality reflects the same assumptions that we are considering in our
research, i.e., the display is a shared resource and is not solely appropriated by
one user at a time.
The study by Peltonen et al. [11] describes CityWall, a large multi-touch display
installed in a central location in Helsinki, Finland. The system visualizes images
retrieved from Flickr, and users can interact with the content using one- or
two-handed gestures, e.g., resize, rotate, etc. This research is relevant for us
because it discusses how people succeed in appropriating a large display, resolving
potential conflicts, and finding the right moment to take a turn. Their results show
that the decision about when a person should interact does not depend only on the
available space at the display, but rather on a set of complex social interactions of
reasoning and negotiation between participants.
Sacks et al. [12] investigate a turn-taking system for casual conversation, seeking
an answer to the question of how participants in a conversation "select" the next
speaker. They build a set of rules that could help organize speaker selection. This
work is analogous to ours, and we acknowledge its relevance for conveying
place-specific interactions. To the best of our knowledge, the design of generic
techniques for application selection and control in multi-user, multi-application
scenarios has not been addressed.
Controlling Application Life Cycle. The application life cycle embodies the sequence
of events that occur between the launch and termination of an application. During
its life cycle, an application may run in different states, e.g., background
execution, suspended, inactive, foreground execution. An important group of
techniques is primarily aimed at enabling control of the transitions between the
various phases of the application life cycle.
For instance, in iOS the application life cycle is composed of five distinct states:
not running – the state of a rebooted device; active – the application is displayed
on the screen and receives input; background – the application may execute code
without receiving input or updating the screen; suspended – the application is frozen
and its state is stored in RAM; and inactive – a temporary rest between two other
states, e.g., caused by incoming calls or by the user locking the screen. While in
the active state, an application may require visual and input resources; in
background execution the application runs with constrained behavior, without
requiring any display or user-input resources. In a UNIX environment, a process,
i.e., a program in execution, may run behind the scenes fulfilling various activities
such as logging, system monitoring, scheduling, and user notification. Supporting
multitasking, i.e., the ability of an OS to execute multiple applications at once,
concerns not only the operating system on which the applications are installed but
also the hardware platform. For instance, background code execution is limited to
certain types of devices, e.g., the third-generation iPod and the iPhone 4. An
application can also enter background mode when users switch between applications.
The GUI concepts are published at: http://ubicomp.algoritmi.uminho.pt/research/apps4publicdisplays/, 26.06.2012
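The five states above form a small state machine. The following is an illustrative sketch of such a life-cycle model (the `ALLOWED` transition table is our own simplification, not Apple's documented API):

```python
# Illustrative sketch: the five iOS-style life-cycle states described above,
# modelled as a tiny state machine. The allowed transitions below are a
# simplification for illustration, not Apple's specification.

ALLOWED = {
    "not_running": {"inactive"},
    "inactive":    {"active", "background", "not_running"},
    "active":      {"inactive"},
    "background":  {"suspended", "inactive"},
    "suspended":   {"background", "not_running"},
}

class App:
    def __init__(self):
        self.state = "not_running"

    def transition(self, target):
        if target not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {target}")
        self.state = target

# Launch, show on screen, then move to the background and freeze:
app = App()
for s in ["inactive", "active", "inactive", "background", "suspended"]:
    app.transition(s)
print(app.state)  # → suspended
```

Modelling the transitions explicitly makes illegal moves (e.g. jumping straight from active to suspended) detectable.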
Visual Layers. The concept of visual layers describes the graphical appearance of
applications in the screen layout. In desktop OSs it is common to distinguish between
applications using the criterion of the most active application window, i.e.,
always-on-top. Although an application might have multiple windows, the user's
attention is focused only on the one in the foreground. An application may feature
additional visual layers such as splash screens, or preliminary windows in which
users may optimize the application execution.
Particular types of visual layers are notifications. A notification is a typical
application feature. In a traditional OS, a notification message warns users about
application data updates or about system-level issues. Computer notifications fall
mainly into two classes: a) those that call for user attention, e.g., pop-ups, and
b) those that do not call for explicit user attention, e.g., pop-unders. A pop-under
notification contains non-intrusive content that resides behind the scene. In Windows
environments, non-intrusive notifications are shown in the notification area situated
on the right side of the Taskbar.
This section analyses the main challenges involved in supporting different facets of
application selection and control in the context of applications for pervasive
display systems. The methodological approach is inspired by the work of Bellotti et
al. on a framework for sensing systems [13]. We used the categories identified in the
previous section as the framework for the major challenges in application selection
and control. For each of those categories, we review the traditional GUI solutions
and analyze the new challenges raised by the specificities of public display
applications.
When considering those specificities, we assume in particular that there can be many
concurrently interacting users in the environment, and also that the execution
environment of the applications is not a single display but an ecosystem of displays
with multiple distributed user interfaces that span multiple devices, e.g., public
displays, mobile phones, touch-enabled surfaces.
With regard to the control of the application life cycle, the main implication is a
separation between the execution state in the environment and the execution state on
any particular device. While applications may be expected to be always available and
ready to produce content on any display, their normal execution mode may be a waiting
mode in which they are ready to receive input signals and, at appropriate moments,
generate content for presentation on the displays. The main challenge will be how to
model this combination of execution states and presentation state in a way that
people can easily perceive and learn to control. It is also necessary to separate
application availability in the environment from its presentation on the displays and
from its execution on any particular device in that environment.
Identifying applications is important so that people may associate the content they
see on the displays with the application generating that content. Adapted versions of
GUI concepts, such as application titles, may be used in some cases, but may be
inappropriate in others because they may interfere with the rich visualization
requirements of public displays. Alternative approaches may include a list of the
applications that are currently available to be shown on the displays. This list may
include the application id and a summary of its content, e.g., a live tile, and may
be available through mobile devices or occasionally shown on the display to prompt
interaction.
Implicit application activation allows the presentation of a particular application
on the displays to be triggered by an external event. In a mixed-initiative model,
the system would need to implicitly call for specific applications, even if there is
no activity from users. Additionally, some applications may only be relevant when
particular contextual conditions occur. In such cases the system may at any moment
make selections based on the interpretation of the respective context, e.g., the
people present and their preferences. Therefore, a challenge for pervasive displays
is the ability to integrate this dynamic application selection into the application
execution mode.
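One way to picture implicit activation is as a set of context predicates, one per application, evaluated by a scheduler whenever the sensed context changes. All names and rules below are hypothetical, chosen purely to illustrate the idea:

```python
# Illustrative sketch of implicit application activation: each application
# declares a predicate over the sensed context, and the scheduler selects
# whichever applications are currently relevant. All rules are made up.

apps = {
    "bus-times":   lambda ctx: ctx.get("time_of_day") == "morning",
    "event-ads":   lambda ctx: ctx["crowd"] > 5,
    "art-gallery": lambda ctx: True,  # always-eligible filler content
}

def implicit_select(ctx):
    """Return the (sorted) names of applications relevant to this context."""
    return sorted(name for name, relevant in apps.items() if relevant(ctx))

print(implicit_select({"time_of_day": "morning", "crowd": 2}))
# → ['art-gallery', 'bus-times']
```

A real system would also rank the matching applications, since more may be relevant than the displays can show.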
Explicit application selection in public displays is based on viewers' explicit
requests for applications. Firstly, viewers need to identify the applications to be
displayed. Afterwards, using various interaction techniques, e.g., a mobile phone
over a Bluetooth connection or gestures, they can call for a particular application
to be shown. The main challenge is mediating between possibly conflicting requests
from multiple users, or even between users and system goals.
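As one hypothetical mediation policy (not proposed by this paper, merely a sketch): grant explicit requests in arrival order, but cap the number of granted requests per user so that no single viewer monopolises the schedule.

```python
# Hedged sketch of one possible mediation policy for conflicting explicit
# requests: first-come-first-served, with a per-user cap. Illustrative only.

def mediate(requests, max_per_user=1):
    """requests: list of (user, app) pairs in arrival order.
    Returns the apps granted a slot, preserving arrival order."""
    granted, counts = [], {}
    for user, app in requests:
        if counts.get(user, 0) < max_per_user:
            granted.append(app)
            counts[user] = counts.get(user, 0) + 1
    return granted

# Ana's second request is deferred so Bob's request is also served:
print(mediate([("ana", "news"), ("ana", "maps"), ("bob", "transit")]))
# → ['news', 'transit']
```

Fairness policies like this address user-user conflicts; arbitrating between user requests and system goals would need an additional priority scheme.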
Multiple visual layers are an important feature, mainly in desktop systems, because
they allow a more sophisticated management of the interaction with people. However,
in public displays, especially with multiple people sharing the display, it becomes
much more challenging to achieve a balanced combination of multiple layers and a good
interaction experience. Still, well-designed notification layers that choose the best
time to present themselves may provide an important alternative channel for
presenting contextually relevant content outside the normal presentation cycles of
the applications. In particular, these alternative visual layers may be important in
generating feedback for users trying to interact with the system, and in supporting
progressive interaction modes in which users and displays are increasingly aligned
while minimizing accidental interactions, such as in gesture-based interfaces.
In Table 1, we summarize these basic questions on application selection and control and the specific challenges they raise for public display systems.
Table 1. GUI solutions and public display challenges for application selection and control
5 Conclusion
In this paper, we analyzed traditional GUI concepts for application selection and
control and discussed how they could serve as the basis for addressing similar
challenges in multi-application display environments. The results highlight that
there are many similarities, and therefore many common solutions that could be
adapted for this new application domain, but they also identify a number of unique
challenges that may need more than simple adaptation. Further work will use these
design considerations for the implementation and validation of the envisioned
application selection and control techniques.
Acknowledgments. This research has received funding from the PD-NET project, which
acknowledges the financial support of the Future and Emerging Technologies (FET)
programme within the Seventh Framework Programme for Research of the European
Commission, under FET-Open grant number 244011, and from "Fundação para a Ciência e
a Tecnologia" under research grant SFRH/BD/75868/2011.
References
1. Terrenghi, L., Quigley, A., Dix, A.: A taxonomy for and analysis of multi-person-display
ecosystems. Journal of Personal and Ubiquitous Computing 13, 583–598 (2009)
2. Davies, N., Langheinrich, M., Jose, R., Schmidt, A.: Open Display Networks: A Communications Medium for the 21st Century. IEEE Computer 45, 58–64 (2012)
3. Horvitz, E.: Principles of mixed-initiative user interfaces. In: Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems the CHI is the Limit, CHI 1999, pp.
159–166. ACM Press, New York (1999)
4. Ojala, T., Kostakos, V., Kukka, H., Heikkinen, T., Linden, T., Jurmu, M., Hosio, S.,
Kruger, F., Zanni, D.: Multipurpose interactive public displays in the wild: Three years
later. IEEE Computer 45, 29–42 (2012)
5. Hosio, S., Jurmu, M., Kukka, H., Riekki, J., Ojala, T.: Supporting distributed private and
public user interfaces in urban environments. In: Proc. HotMobile 2010, pp. 25–30. ACM,
New York (2010)
6. Davies, N., Friday, A., Newman, P., Rutlidge, S., Storz, O.: Using bluetooth device names
to support interaction in smart environments. In: International Conference on Mobile
Systems Applications and Services, Mobisys 2009, pp. 151–164. ACM, Kraków (2009)
7. Churchill, E., Girgensohn, A., Nelson, L., Lee, A.: Blending digital and physical spaces for
ubiquitous community participation. Communications of the ACM 47, 38 (2004)
8. Izadi, S., Brignull, H., Rodden, T., Rogers, Y., Underwood, M.: Dynamo: a public interactive surface supporting the cooperative sharing and exchange of media. In: Proceedings of
the 16th Annual ACM Symposium on User Interface Software and Technology, UIST
2003, pp. 159–168. ACM Press, New York (2003)
9. Vogel, D., Balakrishnan, R.: Interactive public ambient Displays: Transitioning from
Implicit to Explicit, Public to Personal, Interaction with Multiple Users. In: Proceedings of
the 17th Annual ACM Symposium on User Interface Software and Technology, UIST
2004, p. 137. ACM Press, New York (2004)
10. Morales-Aranda, A.H., Mayora-Ibarra, O.: A Context Sensitive Public Display for
Adaptive Multi-User Information Visualization. In: Third International Conference on
Autonomic and Autonomous Systems (ICAS 2007), p. 63 (2007)
11. Peltonen, P., Kurvinen, E., Salovaara, A.: It’s Mine, Don’t Touch!: interactions at a large
multi-touch display in a city centre. In: Proceedings of CHI 2008, pp. 1285–1294 (2008)
12. Sacks, H., Schegloff, E.A., Jefferson, G.: A Simplest Systematics for the Organization of
Turn-Taking for Conversation. Language 50, 696–735 (1974)
13. Bellotti, V., Back, M., Edwards, W.K., Grinter, R.E., Henderson, A., Lopes, C.: Making
sense of sensing systems. In: Proceedings of the SIGCHI Conference on Human Factors in
Computing Systems Changing Our World, Changing Ourselves, CHI 2002, p. 415. ACM
Press, New York (2002)
Multimodal Interfaces for the Smart Home: Findings in
the Process from Architectural Design to User Evaluation
Abstract. Smart Environments have specific natural interaction needs that can
be provided for with multimodal interfaces. There are still challenges to face,
such as the adaptability of the interaction and an evaluation of the proposed systems. This work focuses on these problems and proposes an architectural design
evaluated in the domain of Smart Homes. The architectural approach is based
on the Model View Presenter Pattern and the Service Oriented paradigm. The
evaluation was conducted with a laboratory deployment of a prototype of the
system and usability tests were carried out with a usability questionnaire.
Results show the technical feasibility of the proposed design and positive user
acceptance of the multimodal interface as compared to mono-modal interfaces.
1 Introduction
Smart Environments and Smart Homes are important research domains in the field of
Ambient Intelligence, and several research works focus on the development of new
interaction techniques within these domains [1-2].
Interaction mechanisms for smart environments should be natural and intuitive,
adaptable to user abilities, and dynamic in the number of functionalities. Multimodal
interfaces are a good solution, as they interpret information from various
communication channels [3]. Multimodal interfaces have not yet been widely deployed
in real environments due to some unresolved issues: modality integration
architectures, interaction modelling, adaptability to changes in context and user
abilities, and the lack of evaluation studies in real environments [4].
This work focuses on these problems in the smart home domain. A novel multimodal
control system for smart homes, based on a service-oriented multimodal architecture,
is proposed. The underlying architecture supports the inclusion of any number of
input and output interaction modes, dynamic deployment and reconfiguration of the
modes, and simultaneous execution of multiple multimodal applications. The multimodal
control system allows the control of digital devices in a smart home.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 173–180, 2012.
© Springer-Verlag Berlin Heidelberg 2012
174 M.Á. Fernández et al.
2 Related Work
In recent years the improvement of certain interaction technologies, such as gesture
recognition or voice technologies, has driven the development of new multimodal
systems [1], [7]. Despite the advances made, multimodal interfaces are still not a
commonly deployed technology due to some unresolved issues [4][5].
The use of multimodal interfaces for smart home interaction is another focus of
research. Although some general solutions to the problems addressed in this work are
proposed in [1], no evaluation with real users was carried out. In [2] the
adaptability to user abilities and preferences is not clearly addressed, and although
an evaluation methodology was proposed, it was not widely accepted.
One of the open problems in research into multimodal interfaces is the lack of
standard methodologies or metrics for their evaluation. Methodologies based on
usability evaluation techniques and user perception have been proposed as a method of
general system evaluation [6]. Several methodologies following this idea [7-8] have
been proposed for multimodal interfaces, but none of them has been widely accepted.
Other general-purpose usability tools have been applied to multimodal interface
systems, but with the same lack of acceptance [10]. In spite of this lack of standard
methods, most evaluations are based on user test sessions and usability
questionnaires. The objective is to study the usability of the system by analyzing
the relation between objective parameters (task times, delays, number of tasks
accomplished, etc.) and subjective user satisfaction.
• the use of the model-view-presenter (MVP) software pattern to solve the synchronization problem
• the use of the service-oriented paradigm to allow the management of all the entities in the system (modalities, fusion/fission strategies, interaction models, etc.)
The MVP pattern is used in software applications to synchronize the different views
associated with the application model by means of the presenter element. The MVP
pattern is a derivative of the well-known model-view-controller (MVC) pattern.
Following this approach, each interaction modality is a view in the MVP pattern and
is synchronized by a centralized presenter element, as in other works ([11], [12] or
[1]). The presenter element combines information coming from input interaction
modalities (fusion), updates the model representing the interaction of the user, and
distributes information among output interaction modalities according to the
interaction model (fission).
Another important aspect is the communication between the synchronization element
(the presenter) and the different modality components (the views). For flexibility,
the architecture presented in this paper is based on an event-driven approach, with
simple objects transporting information in the events.
Fig. 1 shows the blocks of the proposed architecture following the basic concepts
presented. The architecture is designed to support a variable number of interaction
modalities coordinated by the presenter. Communication between the modules is
enabled by an event communication channel.
The presenter module is made up of functional blocks: the fusion block; the fission
block; the view-model, which represents the state of the interaction; the domain
model, with the particularities of the application; and the communication adapter,
used to send and receive events. The presenter communicates with the modalities using
the event communication channel and communicates with the domain logic of the
application. Although the presenter is independent of the application domain, its
internal domain model is bound to a specific domain logic.
Each modality component is made up of three blocks: the communication adapter, used
to send and receive events; the view-model, used to represent the state of the
interaction for this view; and the form. Forms present the interaction state held in
the view-model to the user through a specific interaction mode, such as voice or
visual elements. The user can also interact with these forms.
The interaction state is represented by a model, and every action of the user on the
modalities is represented as a modification of the model. Every view has its own
model, called a view-model, which is synchronized with the main view-model in the
presenter. When the user interacts with a modality component, the component updates
its internal view-model and sends the appropriate events through the communication
channel. These events are received by the presenter, which updates its internal
view-model by applying the fusion strategy. This view-model update triggers the
execution of the fission strategy, which decides which events should be sent to the
rest of the modality components. These events are received by the other modalities,
which update their internal view-models. This dynamic behaviour makes it possible to
synchronize the interaction modules.
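This synchronization loop can be sketched as follows. The sketch is illustrative (our own class names, a trivial dictionary view-model, and a broadcast fission), not the authors' implementation:

```python
# Minimal sketch of the MVP synchronization loop described above. Each modality
# view keeps its own view-model; user input updates it and emits an event; the
# presenter fuses the event into its central view-model and broadcasts the
# update ("fission") to the other views. Illustrative names throughout.

class View:
    def __init__(self, name, presenter):
        self.name, self.model, self.presenter = name, {}, presenter
        presenter.views.append(self)

    def user_input(self, key, value):   # e.g. a voice command or a touch
        self.model[key] = value
        self.presenter.on_event(self, key, value)

    def receive(self, key, value):      # event pushed by the presenter
        self.model[key] = value

class Presenter:
    def __init__(self):
        self.views, self.model = [], {}

    def on_event(self, source, key, value):
        self.model[key] = value         # trivial "fusion" into the main model
        for v in self.views:            # broadcast "fission" to other views
            if v is not source:
                v.receive(key, value)

p = Presenter()
gui, voice = View("gui", p), View("voice", p)
voice.user_input("light", "off")        # spoken command...
print(gui.model)                        # → {'light': 'off'} (GUI view updated)
```

The event channel is collapsed here into a direct method call; in the paper's architecture it is an asynchronous event bus.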
• Visual and haptic mode. A visual interface showing the elements which can be
controlled is shown to the user. The interface shows the state of the interaction and
the user can select a specific device and execute an action. The user interacts with
the system using a touch screen such as a Tablet PC.
• Voice mode. An Automatic Speech Recognition (ASR) Service is available to
allow users to give commands to the system. The ASR is run on a PDA. A Text to
Speech (TTS) service gives feedback to the user about the state of the interaction
and the actions executed. The output of the TTS is played over ambient speakers.
• Gestural mode. The system allows users to interact with hand and arm gestures.
Gestures are recognized by using the accelerometers of small devices held by the
user. The accelerometer measures the direction and speed of user movements.
The fusion mechanism implemented is based on a First In/First Out strategy, in which
the messages coming from the input modalities are used without taking into account
previous or subsequent messages. The fission module is based on a broadcast strategy,
in which every output modality receives the information and shows it to the user. The
view-model is composed of abstract representations of interaction components such as
forms, texts, lists of items, etc.
The prototype was implemented using the OSGi service platform due to its special
dynamic service management characteristics. Each entity in the architecture is an
OSGi service factory. The communication channel is implemented using the Event
Admin OSGi standard service.
The validation of the proposed architecture and the multimodal application to control
smart homes was divided into two stages. Firstly, a completely functional prototype
was implemented to test technical feasibility. The execution of multiple instances of
the smart home control application was verified by simulating multiple users with
different configurations. Secondly, the application was validated with real users in a
laboratory setup. The aim of this stage was to evaluate the usability of the multimodal
interface application in a realistic environment. An important partial objective was to
analyse user preferences of the different modality components in the home.
The prototype was deployed in a laboratory with all the common elements of a living
room: on/off light switches, dimmers, a blind, a TV set, a thermostat, etc. All the
elements are connected to a home automation network with a TCP/IP gateway.
The user evaluation tests were organized around a realistic scenario in which users
had to configure the light level in the living room to watch television. In this
scenario each user had to complete eight tasks: switch off two on/off lights, dim two
dimmers, switch the two lights back on, and dim the two dimmers again. Each user had
to repeat the tasks with each individual modality (voice, visual and haptic, and
gestural) and once again with all the modalities freely combined. Each user had five
minutes to finish the scenario.
While users were interacting with the system and completing tasks, data regarding
task times, number of tasks completed, number of errors, etc. were collected. These
parameters were selected from among the most commonly used parameters [10]. They were
collected using automatic logs, direct observations, and reviews of the recordings of
the tests. In order to measure user satisfaction, users completed a questionnaire
after interacting with the system in each modality. The questionnaire used was
AttrakDiff [9]; previous studies deemed it the most suitable for multimodal systems
[10].
Sixteen people with similar technological skills were recruited to test the system.
Participants’ ages ranged from 22 to 32. Most of them were researchers working in
the same group as the authors but not directly involved in this work.
Table 1 shows a summary of the interaction parameters collected during the evaluation
tests. It shows the time needed by users to complete each individual task in the
scenario using the different interaction modes. For every modality, the distribution
of task times fits a LogNormal distribution defined by the parameters shown in the
table (parameters were calculated using Maximum Likelihood Estimation, MLE). The
table also presents the average number of tasks completed within the five minutes of
the scenario with each interaction mode. The last column shows the average number of
system errors (speech recognition errors, gestures not recognized, etc.).
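For log-normally distributed data, the MLE of the parameters (µ, σ) reduces to the mean and (population) standard deviation of the log-transformed samples. A minimal sketch, using made-up task times rather than the study's data:

```python
# Sketch of how LogNormal parameters like those in Table 1 are obtained by
# Maximum Likelihood Estimation: for log-normal data, the MLE of (mu, sigma)
# is the mean and std. dev. of the log-transformed observations.
# The sample below is invented for illustration; it is not the study's data.

import math

def lognormal_mle(times):
    logs = [math.log(t) for t in times]
    mu = sum(logs) / len(logs)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in logs) / len(logs))
    return mu, sigma

times = [10.0, 15.0, 20.0, 30.0, 45.0]   # illustrative task times in seconds
mu, sigma = lognormal_mle(times)
print(round(mu, 2), round(sigma, 2))     # → 3.04 0.52
```

The fit quality reported in Table 1 (the r column) would additionally require comparing the empirical distribution against the fitted LogNormal, e.g. via a quantile-quantile correlation.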
Table 2 shows the results of the AttrakDiff questionnaire (PQ, HQ-I, HQ-S and ATT),
as well as the global mark given to the modes by users after completing the scenario
with all the modalities. The last column shows the average percentage of usage of
each mode in the multimodal scenario.
According to the AttrakDiff questionnaire results, the multimodal interaction
obtained the best results in the hedonic qualities (HQ-I and HQ-S indexes) and in
attractiveness (ATT index), but not in pragmatic quality (PQ index). This indicates
that although users consider multimodality innovative and interesting, it is not the
best option for achieving their goals (time to complete tasks).
Table 1. Parameters (MMI: Multimodal, GUI: Visual/haptic, TUI: Gestural, VUI: Voice)
Task Execution (sec.) LogNormal Fitting (sec) Completed tasks avg. Errors avg.
MMI x̄ =20.58 s=17.20 µ=2.57 σ=0.81 r=0.98 7.81 3.25
GUI x̄ =17.49 s=17.62 µ=2.51 σ=0.89 r=0.98 8.00 2.65
TUI x̄ =22.89 s=15.55 µ=2.88 σ=0.74 r=0.97 6.31 5.87
VUI x̄ =21.09 s=14.14 µ=2.89 σ=0.55 r=0.99 7.06 6.62
Table 2. Results (MMI: Multimodal, GUI: Visual/haptic, TUI: Gestural, VUI: Voice)
PQ (x̄ , s) HQ-I (x̄ , s) HQ-S (x̄ , s) ATT (x̄ , s) Global (x̄ , s) % Usage
MMI 1.01 0.86 1.38 0.59 1.48 0.73 1.83 0.63 0.59 0.80 -
GUI 1.49 0.76 0.91 0.65 0.38 0.69 1.81 0.77 0.59 1.12 46
TUI -0.67 1.20 0.39 0.50 1.12 0.88 0.08 1.28 0.24 1.15 18
VUI 0.79 1.20 0.66 0.56 0.59 0.56 0.96 1.01 -0.53 1.01 36
Table 3 shows the relation between the attractiveness index of the AttrakDiff
questionnaire and the other interaction parameters and usability indexes. The study
was done using Multiple Linear Regression, with the attractiveness index selected as
the global result of the AttrakDiff questionnaire. In the first case, the relation
between the attractiveness index of the multimodal system and the attractiveness
indexes of the mono-modalities was studied. According to the obtained model, the
attractiveness of the multimodal system can be explained by the attractiveness of the
visual and haptic modality, which is the most attractive modality in Table 2.
Finally, the relation between the attractiveness index and the interaction parameters was analysed. According to these results, the interaction parameters with the most influence on the attractiveness index are the number of completed tasks and the number of system errors. This indicates that users prefer reliable modes that allow tasks to be completed easily.
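The multiple-linear-regression analysis described above can be sketched as ordinary least squares over the interaction parameters. The data below are illustrative, not the study's measurements, and the pure-Python solver merely stands in for whatever statistics package was actually used:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X^T X) b = X^T y,
    solved with Gaussian elimination. Each row of X starts with a 1 (intercept)."""
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    for col in range(n):                      # forward elimination with pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef

# Illustrative (not measured) data: attractiveness as a function of
# completed tasks and errors, chosen so that the fit is exact.
rows = [(8.0, 2.0), (8.0, 6.0), (6.0, 2.0), (6.0, 6.0)]   # (tasks, errors)
y = [2.0, 1.2, 1.0, 0.2]                                  # attractiveness
X = [[1.0, t, e] for t, e in rows]
intercept, b_tasks, b_errors = ols(X, y)
print(b_tasks, b_errors)  # completed tasks raise attractiveness, errors lower it
```

With these synthetic points the recovered coefficients are positive for completed tasks and negative for errors, mirroring the qualitative finding in the text.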
Several challenges have yet to be overcome in the design and deployment of multimodal interfaces. This work contributes to research on more flexible and adaptable interaction mechanisms and to the establishment of methodologies for evaluating multimodal systems.
Some future research areas have been identified. The instantiation policy can lead to excessive fragmentation of multimodal instances, which will require a complex meta-coordination strategy between instances in some multiuser environments. In addition, evaluation methodologies should be investigated that enable the objective comparison of this kind of system, given the difficulty of finding correlations between usability perceptions and multimodal interface parameters. It would be especially useful to develop methods for evaluating multimodal interfaces in multiuser environments.
Web Browser HTML5 Enabled for FI Services
ETSI Telecomunicación
Technical University of Madrid (UPM)
Madrid, Spain
{trobles,miranda,ralcarria,amorales}@dit.upm.es
1 Introduction
Common applications and services are now delivered over the Web, and the proliferation of smartphones and tablets is creating new service interfaces for users with mobile and pervasive requirements. In this context, the European Commission has launched the Future Internet Public Private Partnership (FI-PPP), which aims to advance Europe's competitiveness in Future Internet technologies and systems and to support the emergence of Future Internet-enhanced applications of public and social relevance [1] [4]. This initiative intends to build the Core Platform of the Future Internet [3], which will be open and based upon Generic Enablers (GEs) offering reusable and commonly shared functions that serve a multiplicity of usage areas across various sectors, bringing together demand and supply and involving users early in the research lifecycle [3]. The solution presented in this paper is aligned with the Lively Kernel approach [2] and with the model proposed by the FI-PPP, and is part of the SmartAgriFood project's smart food awareness use case [5], which focuses on serving the information needs of the final customer in the food value chain.
In order to meet the overall requirements (i.e., web-based service delivery, a cloud terminal based on a web browser, and integration with FIA), we designed a system architecture and developed a first version of the system. It helps consumers become more aware of the food they buy in the supermarket, relying on data management and provision and collecting information that matches their interests, so that tailored product information is delivered.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 181–184, 2012.
© Springer-Verlag Berlin Heidelberg 2012
182 T. Robles et al.
The design of the system architecture for the smart food awareness use case [5] identifies three domains: the personal domain contains the mobile phone and other devices that are close to the user; the community domain integrates the particular functionalities of a specific setting (in this case a supermarket); and the public domain provides generic enablers available to all community domains. The personal domain comprises a service execution environment with an execution engine, a personal repository, and proxies for interacting with services and service parts from other domains. Access to basic functionality is provided either by the mobile terminal (local capabilities or enablers) or by remote enablers from the community and public domains (managed by the communication support module). The design also identifies a set of proxies that either allow direct requests to the community enablers or delegate the execution of these functions to the public domain (to be managed by GEs). The public domain is presented as a set of GEs in line with the FI-WARE proposal. The community domain (supermarket) includes proxies that make it possible to use community enablers to implement the functionality of the use cases.
The traditional limitations of typical client approaches, where interfaces are tailored to fit specific deployment conditions (a myriad of mobile and desktop environments), are tackled in the proposed solution: a web-style client provides service access through a solid, standards-compliant client framework that can be invoked from any user equipment with an HTML5-capable web browser.
Fig. 1 shows the high-level client terminal architecture, in which the lower layers are specific to the device and the underlying operating system. The solution stays on top of any particular OS kernel and any set of capabilities available in the terminal. The web interface becomes the single interaction tool for the user, as it is responsible for invoking local capabilities and clustering cloud-based data in order to render the composite service on the terminal's screen. The Service Renderer and the Service Execution Engine are based on HTML5 and JavaScript.
Fig. 2 illustrates how HTML5 is used to define the functional elements of the service interface and how this service is presented by a standard browser to the end user. The prototyped web interface offers the following features: local capabilities accessible from the web; outbound communication fully relying on a Smart Web Proxy (SWP); and a zero-configuration communication tool. The SWP integrated in the prototype supports local caching of SDL and other relevant data; applies service policies by distributing tasks to specific local capabilities; and performs synchronization and up-to-date information retrieval from cloud-allocated services. The invocation of a cloud-allocated service is described in Fig. 3. First, HTML pages are fetched by a local proxy and parsed as a service description; then local composition of services is triggered and local capabilities are collected.
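The SWP behaviour described above, local caching plus parsing fetched pages as service descriptions, can be sketched as follows. The `data-service` attribute and the injected fetcher are assumptions for illustration, not part of the actual prototype:

```python
from html.parser import HTMLParser

class ServiceDescriptionParser(HTMLParser):
    """Collect elements tagged with a (hypothetical) data-service attribute,
    standing in for the functional elements of the HTML5 service interface."""
    def __init__(self):
        super().__init__()
        self.services = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name == "data-service":
                self.services.append(value)

class SmartWebProxy:
    """Minimal stand-in for the Smart Web Proxy: caches fetched pages
    locally and parses them as service descriptions."""
    def __init__(self, fetch):
        self.fetch = fetch          # injected fetcher, e.g. urllib-based
        self.cache = {}             # local cache of raw pages

    def services_for(self, url):
        if url not in self.cache:   # fetch only on a cache miss
            self.cache[url] = self.fetch(url)
        parser = ServiceDescriptionParser()
        parser.feed(self.cache[url])
        return parser.services

# Offline usage with a stubbed fetcher instead of a real network call.
page = '<div data-service="product-info"></div><div data-service="allergens"></div>'
proxy = SmartWebProxy(lambda url: page)
print(proxy.services_for("http://example.org/app"))  # ['product-info', 'allergens']
```

A real deployment would replace the stubbed fetcher with an HTTP client and add the synchronization and policy-distribution steps mentioned above.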
References
1. European Research – FI-PPP, http://ec.europa.eu/information_society/activities/foi/lead/fippp
2. Taivalsaari, A., Mikkonen, T., Ingalls, D., Palacz, K.: Web Browser as an Application Platform: The Lively Kernel Experience. Tech. Report, Sun Microsystems, Mountain View, CA, USA (2008)
3. D2.2 FI-WARE High-level Description, Future Internet Core Platform (November 15, 2011)
4. Future Internet PPP Project Abstracts and Contact Details, http://www.fi-ppp.eu/projects/
5. D400.1: Smart Food Awareness Specification for Experimentation, SmartAgriFood
6. Taivalsaari, A., Systa, K.: HTML5 Cloud Phone Platform for Mobile Devices. IEEE Software (99)
Interacting with a Robot: A Guide Robot
Understanding Natural Language Instructions
IK4-TEKNIKER
{lsusperregi,ifernandez,afgonzalez,sfernandez,imaurtua,ivallejo}@tekniker.es
http://www.tekniker.es/
Abstract. A guide robot has to capture and process the orders given
by humans in an accurate and robust way to generate the correct answer
to the task requested. The proposed solution aims at creating a guide
robot that is able to interact with humans, interpret and understand
their guiding requirements, and plan the path to the destination. To perform natural human-robot interaction automatically, we propose a solution combining natural language processing techniques, semantic technologies and autonomous navigation techniques. We have evaluated the solution in a real scenario with different subjects, obtaining an accuracy of 71.7%.
1 Introduction
Robots performing tasks in human settings as assistants, guides, tutors, or social companions pose two main challenges: on the one hand, robots must be able to perform tasks in complex, unstructured environments and, on the other hand, they must interact naturally with humans.
A requirement for natural human-robot interaction is to endow the robot with the ability to capture, process and understand the human request accurately and robustly. A primary goal of this research is to analyse the natural ways by
which a human can interact and communicate with a robot. In this paper, the
robot provides a guide service capable of understanding the instructions given
naturally by the users.
We propose a solution, able to guide people in a laboratory environment, that combines natural language processing techniques for Spanish with semantic technologies and autonomous navigation techniques. We have evaluated the solution in a real scenario with different subjects.
The rest of the paper is organized as follows: Section 2 presents related work in the area of human-robot interaction, focusing mainly on work using robots in human-populated environments. Section 3 describes the proposed approach, Section 4 details the experimental setup and results, and Section 5 presents conclusions and future work.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 185–192, 2012.
© Springer-Verlag Berlin Heidelberg 2012
186 L. Susperregi et al.
2 Related Works
The approach proposed in this work aims at creating a guide robot that is capable of interacting with humans and understanding their requests in Spanish natural language. When the request is feasible, the robot must plan the necessary path to the desired destination and go there. To carry out all these tasks automatically, we propose a prototype (KTBOT) consisting of three main modules: a knowledge-base module, an order interpretation module and a navigation module. These modules are explained in detail in the following subsections.
3.1 Knowledge-Base
In our approach, we describe a lab environment and the possible actions related to robot guidance that may occur there. We need a representation to explicitly define concepts and relationships, such as actions, rooms, objects in the rooms, and the people who are usually in those rooms. For this logical description, we have used a semantic knowledge representation approach that has been successfully applied to similar tasks, as seen in Section 2.
Specifically, we have described the environment using the Resource Description Framework (RDF), reusing and extending existing vocabularies such as GEO1 or FOAF2, to create a repository that is as reusable as possible.
Following this representation, we have described the whole experimental environment. To make this semantic repository accessible, we have used OWLIM [15], a high-performance Storage and Inference Layer (SAIL) for Sesame that performs reasoning. By loading all this data into OWLIM as an RDF repository, we obtain an endpoint from which information in the KB can easily be accessed and inferred using the standard SPARQL3 query language for RDF. This endpoint is the connection point between the KB and the order interpretation module.
destination, concretely the corresponding coordinates and angle for the extracted
destination.
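The lookup sketched below illustrates the idea of resolving an extracted destination to coordinates and an angle through the KB. A toy in-memory triple store stands in for the OWLIM/Sesame SPARQL endpoint, and all predicate names and values are illustrative:

```python
# A toy in-memory triple store; predicate names loosely follow GEO-style
# vocabulary but are illustrative, not the repository's actual schema.
triples = [
    ("lab:meeting_room", "rdf:type", "lab:Room"),
    ("lab:meeting_room", "rdfs:label", "meeting room"),
    ("lab:meeting_room", "geo:x", 4.2),
    ("lab:meeting_room", "geo:y", -1.7),
    ("lab:meeting_room", "lab:entryAngle", 90.0),
]

def query(subject=None, predicate=None, obj=None):
    """Match triples against an optional pattern (None = wildcard),
    mimicking a single SPARQL basic graph pattern."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

def destination_pose(label):
    """Resolve a spoken destination label to (x, y, angle) for the planner."""
    matches = query(predicate="rdfs:label", obj=label)
    if not matches:
        return None
    room = matches[0][0]
    value = lambda p: query(subject=room, predicate=p)[0][2]
    return value("geo:x"), value("geo:y"), value("lab:entryAngle")

print(destination_pose("meeting room"))  # (4.2, -1.7, 90.0)
```

In the real system the equivalent lookup would be a SPARQL query against the OWLIM endpoint, with the result handed to the navigation module.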
4 Experiments
4.1 Setup
Experiments were conducted to show the feasibility of the natural guide robot. The experimental scenario, located at IK4-TEKNIKER, is a real laboratory where machines and humans share the space. The laboratory area can be characterized as a set of rooms that are described by their name or functionality, the people who usually work in each room, and the equipment available. The robot is able to autonomously reach any entry point (door) on the map.
4.2 Evaluation
The speech recognition system transcribes the sentences in real time, and the generated text feeds the semantic interpreter of the system, starting the automatic process.
We carried out the experiments with 10 subjects, each one requesting 5–6 different actions. Table 1 shows the experimental results, describing for each participant: the number of tries (Request Num.), the number of times the robot successfully performed the guide task (Correct), and the cases in which the behaviour of the robot was not the expected one. We divided the error cases according to three main possible causes: errors caused by the extracted verb (Action), by the destination (Destination), or by some problem during navigation (Navigation).
Navigation errors were mainly due to the fact that there was no room to move in the presence of too many people in the robot's path. The rest of the errors were caused equally by verb and destination problems (46.7% each).
References
1. Burgard, W., Cremers, A.B., Fox, D., Hähnel, D., Lakemeyer, G., Schulz, D.,
Steiner, W., Thrun, S.: The interactive museum tour-guide robot. In: Proceedings
of the Fifteenth National/Tenth Conference on Artificial Intelligence/Innovative
Applications of Artificial Intelligence, pp. 11–18 (1998)
2. Thrun, S., Bennewitz, M., Burgard, W., Cremers, A.B., Dellaert, F., Fox, D.,
Hähnel, D., Rosenberg, C., Roy, N., Schulte, J., Schulz, D.: MINERVA: A Second-
Generation Museum Tour-Guide Robot. In: Proceedings 1999 IEEE International
Conference on Robotics and Automation Cat No99CH36288C, pp. 1999–2005
(1999)
3. Tomatis, N., Philippsen, R., Jensen, B., Arras, K.O., Terrien, G., Piguet, R., Sieg-
wart, R.: Building a Fully Autonomous Tour Guide Robot: Where Academic Re-
search Meets Industry. In: 33rd International Symposium on Robotics (2002)
4. Fong, T., Illah, R.N., Dautenhahn, K.: A survey of socially interactive robots.
Robotics and Autonomous Systems 42, 143–166 (2003)
1 Introduction
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 193–199, 2012.
© Springer-Verlag Berlin Heidelberg 2012
194 D. Pardo et al.
release of ‘queries’ to its website. It also requires the use of a parser to extract contents from HTTP packets. This parser has been developed in Perl. It runs through Pyperl, an extension for invoking Perl code from Python, by loading Perl modules and making direct calls to their functions. In addition, the invoked Perl code can in turn invoke Python code as needed. Since the original application has its content in English, and we are interested in Spanish-speaking volunteers, a module has been added between the parser and the robot. This module uses Google's API Translator1 to change the language of the contents before they are submitted.
Initially, a request is made to the database server to start a new 20Q game. The parser extracts the first question from the website, which is always the same: “Think about an animal, vegetable or something else”. Once the question and possible answers are obtained, they are translated and sent to the Nao robot. The robot utters the question and waits for the user to give one of the possible answers. Speech recognition provides the confidence of the word understood. When the confidence level of the answer is considered acceptable, the answer is translated and encapsulated according to the database's criteria. Finally, it is sent to the server to obtain the next question. The following questions are more concrete, such as “Is it bigger than a sofa?” or “Can you lift it?”, and the answers are of the form “Yes”, “Perhaps”, “Irrelevant”, “I do not know” or “Repeat”. After a few questions, the 20Q server may already have the answer, at which point it guesses what the user is thinking about. If it guesses within 20 questions, the robot is considered the winner; otherwise the user wins.
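The game loop just described can be sketched as follows. The confidence threshold value, the stubbed 20Q server, and the recognizer/translator interfaces are assumptions for illustration; the real system talks to 20q.net through the Perl parser and Google's translation service:

```python
CONFIDENCE_THRESHOLD = 0.5  # assumed value; the paper does not give one

def play(server, recognize, translate, say, max_questions=20):
    """One 20Q round: fetch a question, speak it, collect a confident
    answer, send it back; the robot wins if the server guesses in time."""
    question, choices = server.next_question(None)
    for _ in range(max_questions):
        say(translate(question))
        word, confidence = recognize(choices)
        while confidence < CONFIDENCE_THRESHOLD:   # re-ask until confident
            say(translate(question))
            word, confidence = recognize(choices)
        question, choices = server.next_question(word)
        if server.guessed():
            return "robot"
    return "user"

class FakeServer:
    """Stub for the 20Q web service: 'guesses' after three answers."""
    def __init__(self):
        self.answers = []

    def next_question(self, answer):
        if answer is not None:
            self.answers.append(answer)
        return "Is it bigger than a sofa?", ["yes", "no", "perhaps"]

    def guessed(self):
        return len(self.answers) >= 3

result = play(FakeServer(),
              recognize=lambda choices: ("yes", 0.9),  # stubbed ASR
              translate=lambda text: text,             # stubbed translator
              say=print)
print(result)  # prints "robot"
```

Swapping the stubs for the real HTTP parser, translator and NAO speech modules yields the pipeline described in the text.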
3 Experimentation
In order to test the implementation of the 20Q game, an activity with robots and children took place in a primary school. A fourth-grade class volunteered to participate in the gaming experience with the NAO robot. Participants were 23 students (14 girls and 9 boys) aged between 9 and 11. Prior consent for collaboration in this research was sought from the students' parents.
To carry out the experience with NAO, the students sat in the classroom in a ‘U’ shape and a volunteer was randomly selected to give spoken answers to the robot. The investigator, the robot and the participant were located in front of the other classmates to play the 20Q game, as shown in Figure 3. Thus, while one child participated in the game, the classmates watched the interaction unfold and tried to discover what the child was thinking about from the sequence of questions and answers. The session was recorded on video with the aim of exploring users' spontaneous behavior towards the operation of the system.
To analyze the behavioural reactions of the users, a series of behaviors was recorded that should provide information about the interest shown by users in the 20Q game activity. A search for relevant behaviors was carried out over time intervals of 30 seconds, and behaviors were recorded when found. Moreover, multifocal sampling was also performed, such that at each sampling point activity was recorded for all subjects in the following categories: emotions (happiness, sadness, surprise, fear, anger, neutral) and facial expression (smile, laugh, frown, eyebrows raised expectantly, expressionless). The total duration of the activity was approximately 10 minutes. As shown in Figure 4, the session was recorded on video and the behavior of the 10 participants sitting around the NAO robot was analyzed.
Regarding eye gaze, during the first 2 minutes of the game the 20Q game drew the gaze of between 60% and 100% of the participants. From the second minute on, only 10% to 40% of participants directed their gaze at the robot. Around minute 9, the robot recovered the gaze of 70% of participants, coinciding with NAO's attempt to discover the animal the volunteer was thinking about. Towards the end of the session only 50% of participants directed their gaze at the robot. Regarding facial expression, in most of the intervals (95%) most participants appeared neutral. However, 10–20% of participants
the perceived quality of it. Partners tend to adjust the length of the pauses between conversational turns, with pauses of over 1 second considered disruptive [4]. This aspect is of utmost importance for understanding the results of this test with non-expert users, as their responses (lack of directed gaze, neutral facial expression and neutral emotional expression) are surely related to the response time of the platform.
Acknowledgments. The authors wish to thank the Primary School “El Mar-
galló” in Vilanova i la Geltrú, Spain, especially the teachers who participated in
the activity.
References
1. Aldebaran (2012), http://www.aldebaran-robotics.com/
2. Branson, S., Wah, C., Schroff, F., Babenko, B., Welinder, P., Perona, P., Belongie,
S.: Visual Recognition with Humans in the Loop. In: Daniilidis, K., Maragos, P.,
Paragios, N. (eds.) ECCV 2010, Part IV. LNCS, vol. 6314, pp. 438–451. Springer,
Heidelberg (2010), http://dx.doi.org/10.1007/978-3-642-15561-1
3. Diaz, M., Nuno, N., Saez-Pons, J., Pardo, D.E., Angulo, C.: Building up
child-robot relationship for therapeutic purposes: From initial attraction to-
wards long-term social engagement. In: FG, pp. 927–932. IEEE (2011),
http://dblp.uni-trier.de/db/conf/fgr/fg2011.html#DiazNSPA11
4. Jaffe, J., Feldstein, S.: Rhythms of dialogue. Academic Press, New York (1970)
5. Li, X., MacDonald, B., Watson, C.I.: Expressive facial speech synthesis on a robotic
platform. In: Proceedings of the 2009 IEEE/RSJ International Conference on In-
telligent Robots and Systems, IROS 2009, pp. 5009–5014. IEEE Press, Piscataway
(2009), http://dl.acm.org/citation.cfm?id=1732643.1732863
6. Twenty Questions (2012), http://20q.net/
7. Tamagawa, R., Watson, C.I., Kuo, I.H., MacDonald, B.A., Broadbent, E.: The ef-
fects of synthesized voice accents on user perceptions of robots. International Journal
of Social Robotics 3(3), 253–262 (2011)
Achieving User Participation
for Adaptive Applications
1 Introduction
Context-aware and adaptive software has been widely adopted in ubiquitous and
pervasive computing scenarios. Adaptive applications are able to consider their
environmental context through information provided by sensors or other data
sources. Moreover they are able to dynamically adapt to changes that may occur
in highly volatile and heterogeneous environments. Unlike traditional applica-
tions, adaptive applications are able to change their state and behaviour at run-
time. Application adaptation can be seen from two different perspectives: from
autonomic computing research [16] and from usability engineering [1]. While the
aim of autonomic computing has been to achieve the best desirable service for
the user without incorporating the user in the machine’s decision, usability re-
search implies adaptation concepts to improve the usability of an application by
adapting the user interface. Simply speaking, adaptivity either focuses on the
user or on the application. For this work we stay with the definition emerged
from the autonomic computing field. There have been various approaches to in-
corporate adaptive behaviour in software. The MAPE-K loop [12] is the base for
many of them. It consists of four basic phases: Monitoring, Analyzing, Planning
and Executing. A research challenge is to integrate the user in this loop when
user influence is desired or required [2]. Moreover it is not always preferable to
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 200–207, 2012.
c Springer-Verlag Berlin Heidelberg 2012
Achieving User Participation for Adaptive Applications 201
have such applications operating completely autonomously. Users may not trust
applications that work completely autonomously and opaque their actions [23].
Van der Heijden argues that transferring control from the user to the system re-
sults in increased user anxiety [9]. Also users want to customize the application
in different degrees to match their preferences. Salehi et al. state that human
involvement in general is quite valuable for improving the manageability and
trustworthiness of adaptive software [21]. According to Weyns et al. the little
support of user interaction might be one of the reasons why adaptive applications
lack of success [25].
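The MAPE-K cycle mentioned above can be sketched with a user-in-the-loop extension; the phase interfaces and the veto-style `ask_user` hook are illustrative assumptions, not the MUSIC middleware's actual API:

```python
def mape_k_step(monitor, analyze, plan, execute, knowledge, ask_user=None):
    """One iteration of the MAPE-K loop. If ask_user is given, planned
    changes are offered to the user before execution (user in the loop)."""
    symptoms = analyze(monitor(), knowledge)
    if not symptoms:
        return False              # nothing to adapt this cycle
    change_plan = plan(symptoms, knowledge)
    if ask_user is not None and not ask_user(change_plan):
        return False              # user vetoed the adaptation
    execute(change_plan)
    return True

# Stubbed phases: adapt when the measured latency exceeds a threshold.
knowledge = {"latency_limit_ms": 200}
executed = []
adapted = mape_k_step(
    monitor=lambda: {"latency_ms": 350},
    analyze=lambda ctx, k: ["slow"] if ctx["latency_ms"] > k["latency_limit_ms"] else [],
    plan=lambda symptoms, k: "switch-to-local-component",
    execute=executed.append,
    knowledge=knowledge,
    ask_user=lambda change_plan: True,   # explicit user approval
)
print(adapted, executed)  # True ['switch-to-local-component']
```

Passing `ask_user=None` recovers the fully autonomous loop, while a hook that consults the user implements the explicit control discussed later in the paper.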
Application functionality and the corresponding user interface are closely coupled. Adapting to context changes and substituting components that offer better value may entail substituting the corresponding user interface. It is evident that changing the user interface without a request from the user would result in irritation and a degraded user experience. There are three ways in which a user's interactive behaviour relates to the behaviour of an adaptive application: 1) a user might become distracted by the autonomic behaviour of an application; 2) a user might want to change the pre-defined behaviour if unhappy with the behaviour implemented by the developers; 3) a user might want to be notified of on-going autonomic actions within the application.
In this paper we show how the user can be modelled in an adaptive application so as to respect his or her interaction practices. We identify the elements of component-based applications that the user is able to influence, and we suggest an approach for integrating the user into the MAPE-K adaptation feedback loop so that changes to the pre-defined application behaviour are possible on behalf of the user. The remainder of the paper is structured as follows: in Section 2 we first present our view on adaptive software and the corresponding usability requirements; Section 3 illustrates our approach to integrating the user in the adaptation loop; we discuss related work in Section 4 and finish with a conclusion.
a certain system-specific context, which is not always congruent with the context the user perceives, users should be able to recognize changes and to understand why a certain change has been made. Unpredictable adaptation behaviours that cannot be understood and controlled are likely to confuse users [24]. In addition, transparency is mentioned by several authors as a basis for the usability, trustworthiness and acceptability of adaptive systems and applications [14] [19]. In particular, controllability is a user requirement that has been shown to be relevant for content-related adaptations of maps. The study by Ringbauer et al. [20] on a navigational aid for pedestrians showed that users want to accept the change from outdoor to indoor maps manually and that an automatic zoom is not accepted. Designing adaptation in a transparent and controllable way preserves user trust, keeps interaction comfort high, and gives the user a feeling of control. We closely followed the concept of weak and strong adaptations [21] and identified the following three levels of user distraction with regard to the types of adaptation described in Section 2.1.
1. Strong user interface related adaptations: services/components directly related to the user interface are modified. As a result, the functionality and the interaction flow are very likely to change.
2. Low user interface related adaptations: services/components are modified according to context changes. The underlying functionality may change, but this is less likely than with strong adaptations.
3. No user interface related adaptations: services/components are replaced to achieve a better quality of service. The actual functionality of the components does not change, so the user will not notice any change in use.
Adaptations can occur while users are engaged in their actual tasks. In order to inform users, their attention has to be drawn to the adaptation. However, attention is a limited resource in human-computer interaction [11]. Therefore, the so-called IRC framework for designing notifications [15] is a fruitful source for designing adaptations, as it considers attention cost and utility benefit.
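The three distraction levels and the attention-cost trade-off can be sketched as a simple notification policy; the mapping from level to notification style below is illustrative, not prescribed by the IRC framework:

```python
from enum import Enum

class AdaptationLevel(Enum):
    """The three levels of user distraction described above."""
    STRONG_UI = 3   # UI components replaced; interaction flow changes
    LOW_UI = 2      # functionality may change, UI largely stable
    NO_UI = 1       # QoS-only substitution; invisible to the user

def notification_for(level, user_busy):
    """Pick a notification style by trading attention cost against
    utility benefit; the concrete mapping is an illustrative assumption."""
    if level is AdaptationLevel.STRONG_UI:
        return "dialog"                         # always interrupt: UI will change
    if level is AdaptationLevel.LOW_UI:
        return "log" if user_busy else "toast"  # defer when attention is scarce
    return "log"                                # silent record only

print(notification_for(AdaptationLevel.STRONG_UI, user_busy=True))  # dialog
```

A production policy would additionally weigh the utility of the adaptation itself, as the IRC framework suggests, rather than only the adaptation level.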
3 User Participation
To achieve better user experience and acceptance, we integrate the user in the
adaptive feedback loop. Integration refers to the type of influence/control a user
has on the application. We differentiate between implicit and explicit control.
(Fig. 1: UI = user interface, UF = utility function.)
the middleware. Hence it is not trivial to facilitate ad-hoc changes, and several modifications are required. Figure 1 illustrates the proposed extensions to the MUSIC middleware with both an interaction manager and a preferences manager.
4 Related Work
Improving usability by adapting the user interface of an application has been in
focus of researchers for a long time [1]. User interface adaptation at run-time aims
at adjusting the interaction interface as much as possible to the requirements,
limitations, and expectations of the individual user [19].
Unlike the previously mentioned work, we do not use adaptation solely to improve usability; rather, we want to make adaptive applications comply with user expectations and interactive behaviour in general. User interaction in adaptive applications has long been considered important [2] [21], but only little work actually addresses this topic. Henricksen et al. [10] as well as Fong et al. [6] present approaches to incorporating user preferences into the adaptation decision. Their work focuses on the intelligibility of adaptation by modelling context and context information, revealing context information to the user, and allowing the user to modify the adaptation behaviour based on preferences.
206 C. Evers et al.
5 Conclusion
Ubiquitous computing applications are closely coupled with the individual be-
haviour and bias of the user. While developing such applications one can argue
that requirements analysis is responsible for deriving requirements that would fit
the user’s needs and respect the user’s interactive behaviour in the application.
But at design time a requirements analyst respectively developer cannot foresee
the concrete execution of the application at run-time.
In this paper we presented concepts for achieving user participation in adaptive software. The user is integrated in the MAPE-K loop by considering the user's interactive behaviour both implicitly and explicitly. We are currently implementing the extension to the MUSIC middleware (Figure 1) and will then apply it to the Meet-U case study [4] [3]. The design of the middleware extension and of Meet-U incorporates the presented usability requirements for adaptive applications. We will then evaluate the Meet-U application on the Android platform with the target user group. In parallel we will analyze the performance impact of the additional processing of the interaction models.
References
1. Benyon, D.: Adaptive systems: A solution to usability problems. User Modeling
and User-Adapted Interaction 3(1), 65–87 (1993)
2. Cheng, B.H.C., et al.: Software engineering for self-adaptive systems: A research
roadmap. LNCS, pp. 1–26. Springer (2009)
3. Comes, D., Evers, C., Geihs, K., Hoffmann, A., Kniewel, R., Leimeister, J.M.,
Niemczyk, S., Roßnagel, A., Schmidt, L., Schulz, T., Söllner, M., Witsch, A.:
Designing socio-technical applications for ubiquitous computing - results from a
multidisciplinary case study. In: 12th IFIP International Conference on Distributed
Applications and Interoperable Systems (DAIS), Stockholm, Sweden (2012)
4. Comes, D., Evers, C., Geihs, K., Saur, D., Witsch, A., Zapf, M.: Adaptive appli-
cations are smart applications. In: 1st International Workshop on Smart Mobile
Applications, San Francisco, CA, USA (2011)
5. Floch, J., et al.: Playing MUSIC — building context-aware and self-adaptive mobile
applications. In: Software: Practice and Experience. Wiley (2012)
6. Fong, J., Indulska, J., Robinson, R.: A preference modelling approach to support
intelligibility in pervasive applications. In: International Conference on Pervasive
Computing and Communications Workshops, pp. 409–414 (2011)
7. FP6 IST MUSIC Project (May 31, 2012), http://ist-music.berlios.de/
8. Geihs, K., et al.: A comprehensive solution for application-level adaptation. Softw.
Pract. Exper. 39, 385–422 (2009)
9. van der Heijden, H.: Ubiquitous computing, user control, and user performance:
conceptual model and preliminary experimental design. In: Proceedings of the 10th
Research Symposium on Emerging Electronic Markets, Bremen, pp. 107–112 (2003)
10. Henricksen, K., Indulska, J., Rakotonirainy, A.: Using context and preferences to
implement self-adapting pervasive computing applications: Experiences with auto-
adaptive and reconfigurable systems. Softw. Pract. Exper. 36, 1307–1330 (2006)
11. Horvitz, E.: Principles of mixed-initiative user interfaces. In: Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems, pp. 159–166. ACM,
New York (1999)
12. Kephart, J., Chess, D.: The vision of autonomic computing. Computer 36(1), 41–50
(2003)
13. Krause, A., Smailagic, A., Siewiorek, D.P.: Context-aware mobile computing:
Learning context-dependent personal preferences from a wearable sensor array.
IEEE Transactions on Mobile Computing 5(2), 113–127 (2006)
14. Kurdyukova, E.: Designing Trustworthy Adaptation on Public Displays. In:
Konstan, J.A., Conejo, R., Marzo, J.L., Oliver, N. (eds.) UMAP 2011. LNCS,
vol. 6787, pp. 442–445. Springer, Heidelberg (2011)
15. McCrickard, D.S., Chewar, C.M.: Attuning notification design to user goals and
attention costs. Communications of the ACM 46(3), 67 (2003)
16. Murch, R.: Autonomic Computing. IBM Press (2004)
17. Nielsen, J.: Ten usability heuristics (2005)
18. Norman, D.A.: The Design of Everyday Things, 1st Basic paperback edn. Basic Books, New York (2002), ©1988
19. Peissner, M., Sellner, T.: Transparency and controllability in user interfaces that
adapt during run-time. In: Workshop on End-user Interactions with Intelligent and
Autonomous Systems. ACM (2012)
20. Ringbauer, B., Kniewel, R., Hipp, C.: Fußgänger sind keine Autos: Benutzerzen-
trierte Entwicklung eines Fußgängernavigationssystems. In: Usability Professionals
2009: Berichtband des siebten Workshops des German Chapters der Usability Pro-
fessionals Association e.V, pp. 18–22. Fraunhofer IRB Verlag, Stuttgart (2009)
21. Salehie, M., Tahvildari, L.: Self-adaptive software: Landscape and research chal-
lenges. Transactions on Autonomous and Adaptive Systems 4(2), 1–42 (2009)
22. Shneiderman, B., Plaisant, C.: Designing the user interface: Strategies for effective
human-computer interaction, 4th edn. Addison Wesley (2005)
23. Söllner, M., Hoffmann, A., Hoffmann, H., Leimeister, J.M.: Towards a theory of
explanation and prediction for the formation of trust in it artifacts. In: 10th Annual
Workshop on HCI Research in MIS, Shanghai, China, pp. 1–6 (2011)
24. Weld, D.S., Anderson, C., Domingos, P., Etzioni, O., Gajos, K., Lau, T., Wolf, S.:
Automatically personalizing user interfaces. In: IJCAI 2003, pp. 1613–1619 (2003)
25. Weyns, D., Iftikhar, M.U., Malek, S., Andersson, J.: Claims and supporting evi-
dence for self-adaptive systems: A literature study. In: SEAMS 2012 (2012)
Extending Social Networking Services
toward a Physical Interaction Scenario
1 Introduction
Social Networking Services (SNS) allow users to connect and interact with others who have different beliefs or interests [8]. These systems deepen the relationship with already known people, reinforcing and strengthening existing social ties [13]. Mobile devices provide ubiquitous access to SNS, allowing virtual interaction among users. However, most communities are partially virtual, so they require support for interacting not only in the virtual space, but also in the physical one [5]. This paper proposes an interaction paradigm to support this hybrid social scenario, making current SNS more ubiquitous. System availability and privacy are important issues to consider in the design of these systems [1]. In order to evaluate this hybrid interaction paradigm, a mobile ubiquitous application was implemented. The tool, named Lukap, was evaluated, and the preliminary results indicate the system would be appropriate to support partially virtual communities (PVC).
The extension of the current SNS interaction paradigm takes advantage of the physical location of members. Based on that information and on the users' privacy preferences, the application promotes face-to-face encounters. This functionality is highly available, since Lukap requires only the communication support of a Mobile Ad hoc Network (MANET) [3] and the social data each user keeps locally on his/her device.
The next section reviews related work on supporting systems for partially virtual communities. Section 3 introduces the hybrid social interaction space. Section 4 describes Lukap and its main components. Section 5 presents the evaluation settings and the obtained results. Finally, Section 6 presents the conclusions and future work.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 208–215, 2012.
© Springer-Verlag Berlin Heidelberg 2012
own information from the SNS s/he belongs to, including the privacy preferences. Therefore, the services supporting physical interactions do not require real-time access to the Internet or to an SNS server. In other words, the local information about the SNS and the communication capabilities of the users' mobile devices are enough to allow Lukap to promote interactions in the physical scenario.
Lukap can automatically detect other members among the contacts of a particular user. This requires no supplementary action, so this service can be considered an extension (or complement) of the user's senses. Users can import their contacts from other SNS, allowing people to interact through any of the communication channels provided by Lukap. A user can set his/her status as available, unavailable, busy or disconnected. Figure 2a shows the main user interface of Lukap. In this case, the user set his status as available to his family, but busy to his friends. Figure 2b shows how a user can import his/her contacts from an SNS.
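The per-group status described above can be sketched with a small data structure. This is an illustrative Python sketch, not Lukap's C# implementation; all class and method names are assumptions:

```python
from enum import Enum

class Status(Enum):
    AVAILABLE = "available"
    UNAVAILABLE = "unavailable"
    BUSY = "busy"
    DISCONNECTED = "disconnected"

class Presence:
    """Per-group presence, as in the example where a user is
    available to family but busy to friends."""
    def __init__(self, default=Status.AVAILABLE):
        self.default = default   # status shown to groups with no override
        self.by_group = {}       # group name -> Status override

    def set_status(self, group, status):
        self.by_group[group] = status

    def status_for(self, group):
        return self.by_group.get(group, self.default)

p = Presence()
p.set_status("family", Status.AVAILABLE)
p.set_status("friends", Status.BUSY)
```

A contact in the "friends" group would then see the user as busy, while a family member would see him as available.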
Lukap was developed in C# using the Microsoft .NET 4 framework and is currently available on Windows 7. The application uses the HLMP API [16] as the main infrastructure to create and manage the MANET. The API also guarantees that communication among users is secure and that a user's personal information will not be accessible to other members through the network.
Fig. 2a. Lukap Main User Interface. Fig. 2b. Importing Contacts from other SNS.
Notifications are triggered each time a gap is detected. Most of them are invisible to end-users, because their goal is to keep the presence information updated in a physical environment. Figure 3 presents the Lukap architecture, which considers the interaction between the virtual and the physical spaces. The virtual space is represented by the SNS that interact with Lukap using their public APIs [7]. Since each API is different, there is a communication layer composed of several controllers, each one specifically designed to interact with a particular SNS. An event manager acts as an intermediary between the SNS controllers and the Lukap services. Typically, it manages the input/output of the invoked services, thus providing a single interface to the SNS, to other mobile nodes, and to any external application that wants to collaborate on demand with Lukap.
The application's main services are grouped into four categories: social information, supporting services, user interface, and communication system. The last one is implemented through the HLMP API, which is responsible for creating and maintaining the MANET and for providing message exchange among the nodes available in the physical scenario [12]. The user interface must expose the services and awareness mechanisms required to perform each activity. The supporting services perform functions that are used by other services to expose complex and rich functionality. This involves six services: a context discovery, a context-aware self-adaptor, an activity estimator, a resources handler, a user detector, and a positioning system. Some of these components come from the HLMP API coordination services.
The context discovery service is responsible for identifying and notifying context changes to other Lukap services, and also for recording and retrieving context variables. The context-aware self-adaptor is in charge of self-adapting the application when the environment changes considerably.
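The controller-per-SNS layer with a single event manager in front of it follows a common adapter pattern. A hypothetical Python sketch (class names, methods, and the returned contacts are assumptions, not Lukap's actual API):

```python
class SNSController:
    """Base class for SNS-specific controllers; each subclass would
    wrap one public SNS API."""
    def fetch_contacts(self):
        raise NotImplementedError

class FacebookController(SNSController):
    def fetch_contacts(self):
        return ["alice", "bob"]      # stand-in for a real API call

class TwitterController(SNSController):
    def fetch_contacts(self):
        return ["bob", "carol"]      # stand-in for a real API call

class EventManager:
    """Single entry point between the SNS controllers and the
    application services."""
    def __init__(self):
        self.controllers = {}

    def register(self, name, controller):
        self.controllers[name] = controller

    def import_contacts(self, name):
        # a unique interface, whatever the underlying SNS API looks like
        return self.controllers[name].fetch_contacts()

em = EventManager()
em.register("facebook", FacebookController())
em.register("twitter", TwitterController())
```

The services above never talk to an SNS API directly; adding support for a new SNS only requires registering one more controller.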
5 Preliminary Results
The usefulness and usability of Lukap were evaluated through a focus group with six software designers. Its performance was also evaluated in a real scenario. The next sections present the evaluation processes and the obtained results.
Table 1. Hypothetical situations used to evaluate the usability and usefulness of Lukap
Situation 1: It is Friday evening and Alice is sharing a drink with Bob at a bohemian quarter. Alice has to leave early, and Bob evaluates the possibility of staying for a while if he manages to find a friend nearby. Since the area is small, he walks around and tries to find some friends.
Situation 2: Alice and Bob took the same course at college, and they are working together. Their deadline is in a couple of hours, and they have not finished yet. Alice does not know where Bob is. She tries to reach him by phone, email, Facebook and Twitter, but he is not accessible.
Afterwards, we presented the general features and the user interface of Lukap as a possible solution to address both situations. The designers analyzed Lukap and could ask questions and discuss among themselves. Five participants praised the usefulness of the application as a way to enhance physical interaction among SNS members in a ubiquitous way. Moreover, it was widely appreciated that Lukap runs over a MANET, since no access to infrastructure-based communication networks is required.
The participants filled in a 5-point Likert scale survey that helped us understand how suitable the user interfaces and the services provided by Lukap are. Figure 4 shows the items of the survey and the median score of the participants' answers.
The designers stated that the Lukap services are easy to use and cover a broad range of functionalities. Some elements of the user interface have to be redesigned in order to fit the end-users' mental model, as Lukap currently looks like a chat room. There were also suggestions to include location awareness features, such as displaying contacts on a map or in a radar view indicating how far and in which direction a user is from his/her contacts.
All nodes remained stationary during the tests to ensure the comparability of the obtained results. The user detection process involved four scenarios using the same experimentation setting. The first scenario considered the detection of nodes one hop away, the second one two hops away, and so on. Five rounds were performed per scenario. Two variables were observed: (1) the time elapsed between the creation of the MANET and the detection of the target user, and (2) the percentage of unsuccessful detections. The obtained results are shown in Table 2.
The Lukap detection process is acceptable when there are 3 hops between the target nodes, although with a 20% failure rate. If the distance between nodes is 4 hops, the solution tends to be inappropriate, mainly because the user detection failure rate grows to 60%. In order to measure the available bandwidth between users in each scenario, we transferred a file of 4.2 MB. The results indicate that the network throughput, when nodes are 1 or 2 hops apart, is enough to exchange text and also voice messages among them. Beyond that distance, only text messages are recommended. This ensures a certain system performance that does not impact negatively on the usability of Lukap.
As future work, we will redesign the Lukap user interface to include more sophisticated location awareness services.
Acknowledgments. This work has been partially supported by Fondecyt (Chile), grant Nº 1120207, and by LACCIR, grant Nº R1210LAC002.
References
1. Barkhuus, L., Brown, B., Bell, M., Sherwood, S., Hall, M., Chalmers, M.: From Awareness
to Repartee: Sharing Location within Social Groups. In: Proc. of CHI 2007 (2007)
2. D’Aprix, R.: The Face-to-Face Communication Toolkit: Creating an Engaged Workforce.
IABC Press, San Francisco (2009)
3. De Rosa, F., Malizia, A., Mecella, M.: Disconnection Prediction in Mobile Adhoc Net-
works for Supporting Cooperative Work. IEEE Perv. Computing 4(3), 62–70 (2005)
4. Goffman, E.: The Presentation of Self in Everyday Life. Anchor Books, New York (1959)
5. Gutierrez, F., Baloian, N., Ochoa, S.F., Zurita, G.: Designing the Software Support for Par-
tially Virtual Communities. In: Herskovic, V., Hoppe, H.U., Jansen, M., Ziegler, J. (eds.)
CRIWG 2012. LNCS, vol. 7493, pp. 73–88. Springer, Heidelberg (2012)
6. Jones, Q., Grandhi, S.A.: P3 Systems: Putting the Place Back into Social Networks. IEEE
Internet Computing 9(5), 38–46 (2005)
7. Ko, M.N., Cheek, G.P., Shehab, M., Sandhu, R.: Social-Networks Connect Services. IEEE
Computer 43(8), 37–43 (2010)
8. Lampe, C., Ellison, N., Steinfield, C.: A Face(book) in the Crowd: Social Searching vs.
Social Browsing. In: Proc. of CSCW 2006 (2006)
9. Lindqvist, J., Cranshaw, J., Wiese, J., Hong, J., Zimmerman, J.: I’m the Mayor of My
House: Examining Why People Use Foursquare. In: Proc. of CHI 2011 (2011)
10. Lübke, R., Schuster, D., Schill, A.: MobilisGroups: Location-Based Group Formation in
Mobile Social Networks. In: PerCol 2011, Seattle, United States (2011)
11. Moran, A., Rodríguez-Covili, J., Mejia, D., Favela, J., Ochoa, S.: Supporting Informal In-
teraction in a Hospital through Impromptu Social Networking. In: Kolfschoten, G.,
Herrmann, T., Lukosch, S. (eds.) CRIWG 2010. LNCS, vol. 6257, pp. 305–320. Springer,
Heidelberg (2010)
12. Neyem, A., Ochoa, S.F., Pino, J.A.: Communication Patterns to Support Mobile Collabo-
ration. In: Proc. of the 15th Intl. Workshop on Groupware, Douro, Portugal (2009)
13. Norris, P.: The Bridging and Bonding Role of Online Communities. Press/Politics 7(3),
3–13 (2002)
14. Rheingold, H.: The Virtual Community. Addison-Wesley, Massachusetts (1993)
15. Ridings, C., Gefen, D., Arinze, B.: Some antecedents and effects of trust in virtual com-
munities. Journal of Strategic Information Systems 11, 271–295 (2002)
16. Rodríguez-Covili, J., Ochoa, S.F., Pino, J.A., Messeguer, R., Medina, E., Royo, D.: A
Communication Infrastructure to Ease the Development of Mobile Collaborative Applica-
tions. Journal of Network and Computer Applications 34(6), 1883–1893 (2011)
17. Schuster, D., Springer, T., Schill, A.: Service-Based Development of Mobile Real-Time
Collaboration Applications for Social Networks. In: PerCol 2010, Germany (2010)
18. Westerlund, M., Rajala, R., Nykänen, K., Järvensivu, T.: Trust and commitment in social
networking – Lessons learned from two empirical studies. In: Proc. of 25th IMP 2009
(2009)
Improving Cooperativity in a Workflow Coordination
Model over a Pub/Sub Network
1 Introduction
A workflow management system (WfMS) is a piece of software that provides an infrastructure to set up, execute, and monitor scientific workflows. These systems enable the "extraction" of process management from the application software, in order to achieve flexibility, system integration, process optimization, improved maintainability, etc. A key issue in WfMS research is the optimization of the coordination of distributed workflows for a more efficient message exchange. Workflow scalability is the property whereby a system can acquire more complexity without significantly affecting performance. This property depends on the degree of optimization in the coordination model used for workflow execution. In a WfMS, dividing workflow execution into several levels (e.g. communication and service logic management) allows a decoupling of functionalities that separates activity communication and data handling from the conditions for branch activation and the transmission of control events. In addition, the transmission of control events (e.g. the completion of executed activities and decision-oriented messaging) can be decoupled from data events (e.g. the exchange of content among activities). This work proposes a new approach that uses cooperation techniques over a Publish/Subscribe (Pub/Sub) network to address scalability issues and the dynamic execution of distributed workflows. This is done by exploiting the characteristics of Pub/Sub networks and gossip-based algorithms for data transfer between brokers.
The structure of the paper is as follows: Section 2 describes related work on workflow scalability and Pub/Sub networks. Section 3 presents the proposed workflow service. Section 4 describes the proposed architecture, and Section 5 our validation.
2 Related Work
The problem of scalability in the distributed execution of workflows remains open today. Related works highlight the importance of the WfMS coordination model, not only in its nature (orchestration, choreography or mixed models) but also in the distribution and task delegation algorithms and in the optimization of inter-fragment communication.
Regarding the nature of the workflow, a single centralized workflow engine is often not the best solution for executing scalable workflows: large amounts of data are routed through the centralized engine, which can cause a bottleneck. There exist approaches that propose decentralized service orchestrators which optimize communication by placing each orchestration engine as close as possible to the component service it manages [10]. Others combine control-driven orchestration models with a choreography-based approach to manage dataflow connections between activities [12]. For an efficient distribution of tasks, which allows good scalability, a workflow needs to be fragmented so that communication between the workflow fragments is optimized, that is, so that the exchange of data is the minimum necessary for a successful workflow execution. Some works [13] define a cost model and apply it to the task distribution process so that data interchange between tasks is minimal. Other works approach task distribution from the perspective of workflow scheduling, that is, the process of finding an efficient mapping of the tasks in a workflow to suitable resources, elaborating P2P communication models between distributed workflow brokers [14]. In our work, a new cooperative approach enables the communication layer to perform the orchestration functions of branch enablement conditions and dynamic reconfiguration. The communication model used in our approach is Pub/Sub [4], which allows better scalability in distributed workflows.
Pub/Sub models have been used as one of the alternatives for facilitating information consumption and generation in several computing environments [4], including event dissemination [5] across services. Gossip-based algorithms [1] are a group of network protocols for information propagation in distributed systems. They offer advantages [6] for environments where simplicity, scalability and convergent consistency are crucial, such as distributed services [7]. Gossip-based mechanisms already exist [8] for predicting service workload; however, our solution is not limited to the workflow itself, but also addresses how the whole event dissemination is adapted at runtime, and the benefits of using cooperation mechanisms between distributed services and distributed Pub/Sub systems.
beyond the scope of the paper. We also assume that all the fragments are successfully placed on the mobile devices and that the information about fragment interaction is stored in the SDL (Service Description Language) document, which contains the necessary information to execute the service. A Task is the instantiation of a fragment that performs a unit of work. Tasks are arranged and initialized in the service bootstrapping process, which will be explained later. A task is composed of at least one Activity. An Activity is an atomic unit of a task. It manages the communication with an object, which can be physical or digital, in order to perform an operation. We classify operations according to their ability to produce data (sensors), consume data (actuators) and process data (processors). Activities trigger data and control events that are consumed by other activities in their own task scope or in external task scopes. We define a Limit Activity (AL) as any activity that communicates with another activity contained in a different fragment by means of data or control events.
We use logic gates, previously described in [10], to enable communication between service fragments. These logic gates follow the workflow patterns model defined by Van Der Aalst et al. [15], corresponding to the basic control flow patterns and to advanced branching and merging.
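The basic join semantics of such gates reduce to simple boolean conditions over the predecessor branches. The following Python fragment is only an illustrative sketch of those semantics, not the implementation from [10] or the full pattern catalogue of [15]:

```python
def and_gate(inputs):
    """AND-join: fires only when every predecessor branch has signalled."""
    return all(inputs)

def xor_gate(inputs):
    """XOR-join: fires when exactly one predecessor branch has signalled."""
    return sum(bool(i) for i in inputs) == 1

def or_gate(inputs):
    """OR-join: fires when at least one predecessor branch has signalled."""
    return any(inputs)
```

For example, a task waiting on an AND-join over three branches fires only after all three limit activities have reported completion.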
4 System Model
This section describes the design of our solution, whose overall architecture is depicted in Figure 2. We assume that a workflow has been fragmented and distributed, and that there exists a Service Orchestrator (SO) capable of invoking Activities and other modules for interacting with the Pub/Sub network. For more information about these modules, we refer the reader to our previous work [10]. Each service fragment can be represented as a producer and/or consumer which communicates with other fragments through a network of brokers. Hence, the cooperation mechanisms are supported by brokers in the form of internal subscriptions, which are directly related to the AL and the logic gate input/output they are waiting for.
Fig. 2. Serving broker architecture: service workflow participants (producers and consumers) in groups of terminals communicate over the Publish/Subscribe system through serving brokers with N-to-N links. Each broker holds a subscription (SI) container with the SEP/SES pairs and the logic gate type (sType), a matching function (verifySubEvent(), incomingEvent(), updateSub(), Subscribe()), gossiping algorithms, and the Information Dissemination Function (IDF) and Event Dispatching Function (EDF); the Coordination Module (CM) interacts with the brokers.
bootstrapping process. Thus, every time a service orchestrator detects a limit activity which requires external information, it makes use of the matching TID. The same happens when a limit activity finishes and its result has to be propagated.
We make use of two subscription models: external and internal. The external one establishes a relationship between the Limit Activity (AL), the callback network address (CB), and the control topic the activity is interested in, so it can be expressed as: SE = {AL, TP<Vl>, CB}. The second model is a composite brokers' representation of the subscriptions and the logic gates associated with them. This subscription encompasses the internal behavior of complex logic gates (ANDj, XORj, ORjS and ORjD) [15], since these include triggering points whose value can change at runtime; it is defined as: SI = {∪i..k(SEPk), ∪i..k SESk, sType}, where SEPk and SESk define the predecessor and successor subscriptions for triggering control events that directly reach AL, and sType encloses the gate type and identifier.
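The two subscription models can be sketched as plain data structures. The following Python dataclasses are a hypothetical illustration of the fields named above; the paper does not specify the brokers' internal representation, and the sType format shown is an assumption:

```python
from dataclasses import dataclass

@dataclass
class ExternalSub:
    """S_E = {A_L, T_P<V_l>, C_B}: a limit activity, the control topic
    (with its version) it is interested in, and a callback address."""
    activity: str   # A_L
    topic: str      # T_P
    version: int    # V_l
    callback: str   # C_B

@dataclass
class InternalSub:
    """S_I: the predecessor (S_EP) and successor (S_ES) subscription
    sets of a gate, plus the gate type and identifier (sType)."""
    predecessors: list  # S_EP_1 .. S_EP_k
    successors: list    # S_ES_1 .. S_ES_k
    s_type: str         # e.g. "ANDj#1" (format assumed)

se = ExternalSub(activity="AL1", topic="TP1", version=1,
                 callback="10.0.0.1:8080")
si = InternalSub(predecessors=[se], successors=["SES1"], s_type="ANDj#1")
```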
The bootstrapping process refers to the mechanisms that support the binding of the Limit Activities' subscriptions to Pub/Sub topics. As the SO detects the external ALs that make up part of the service, the specific topic identifiers for each activity and service instance have to be designated. For this task, we make use of a topology-independent module called the Coordination Module (CM). It assigns the topic identifiers that will later be used in the Pub/Sub system, using the TID as a seed. From here onwards, we model the CM as a unique naming entity, since the distributed considerations for naming systems are out of the scope of this article. The Orchestrator registers in the CM the references of the source and destination ALs, as well as the logic gate which binds them. Next, the CM generates the topic identifiers (TP) that will be used for each Limit Activity of the service instance and sends them to the Orchestrator in the form @Service_Id/Activity_Id. This is possible because service instances are previously subscribed to TID. Later, it gossips the new SI to the brokers. The registration method for the bootstrapping is defined as follows: register([] AL source, [] AL destination, G_Type, CVL), where G_Type is the gate pattern and CVL a condition value used in the case of ORjS/ORjD gates. After the bootstrapping, the Orchestrator initiates the runtime process and the service fragments are ready to publish/subscribe to the control events of the AL they support; hereinafter, they only have to invoke the API methods: publish(TP<Vl>, Content); subscribe(Service_Id/Activity_Id, TP<Vl>); unSubscribe(Service_Id/Activity_Id, TP).
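A minimal sketch of the CM as a single naming entity might look as follows. Only the register() signature and the @Service_Id/Activity_Id topic form come from the text; the class itself and the SHA-1-based derivation of a Service_Id from the TID seed are illustrative assumptions:

```python
import hashlib

class CoordinationModule:
    """Illustrative sketch of the CM as a unique naming entity."""

    def __init__(self, tid_seed):
        self.seed = tid_seed   # the TID used as a seed for topic names
        self.gates = []        # registered (sources, destinations, gate, CVL)

    def topic_for(self, activity):
        # derive a short, deterministic Service_Id from the TID seed
        sid = hashlib.sha1(self.seed.encode()).hexdigest()[:8]
        return f"@{sid}/{activity}"

    def register(self, sources, destinations, g_type, cvl=None):
        """register([] AL source, [] AL destination, G_Type, CVL):
        record the gate and hand back one topic per limit activity."""
        self.gates.append((sources, destinations, g_type, cvl))
        return {a: self.topic_for(a) for a in sources + destinations}

cm = CoordinationModule(tid_seed="service-42")
topics = cm.register(["AL1"], ["AL2"], g_type="ANDj")
```

After this registration, the orchestrator would hand each limit activity its topic and fragments could start invoking publish/subscribe on it.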
versus an existing SI. In the case of a match, it checks the logic gate pattern and decides, taking into consideration the gate's type and SI, to send the same event to the right subscriber using the SE and the Event Dispatching Function (EDF). b) It fetches/receives subscription information from the Subscription Container and decides, depending on the previous subscription information it has and the early matching events, to apply the right dissemination model. The Information Dissemination Function (IDF) replicates subscription information through other brokers. Finally, the EMF manages the network protocols that implement the publish/subscribe primitives triggered by AL.
Synchronization: brokers verify the status of the AL subscriptions they are currently serving. In case an AL intentionally or unintentionally disconnects, the SE state changes; so, if the output of the activity is mandatory for the next limit activity's execution, the broker can send a Sc.Ec event to other brokers. Once this event arrives at the broker that is currently instantiating the logic gate, the latter can update the runtime status of the subscription SI.
Multi Choice: when a broker receives a control event, it checks its version and compares it with the last cached version it has. In the event the broker finds the arriving event is newer, it verifies whether there are SIs that enclose a matching SES. Then, it updates the subscriptions with the new value, decrementing Ts while running the TCA algorithm. If the broker does not match any SE, it maintains the current Ts. Step 2 fetches the number of matching SES that correspond to the just-received event (ScT); afterwards, step 3 recovers the number of matching SES (ScE) which were exchanged in the last gossip round. Steps 4 and 5 compare ScE and ScT in order to choose the right Ts calculation, (6) or (13), according to the received events and their previous state (and that of their related subscriptions) in the broker. The variables Round and NBroker are the current value of a gossip round and the number of active peer-brokers, respectively. The algorithm includes a TSmax value, which is the maximum threshold of the gossip algorithm. In the case of a sharp increase in control events while the broker is inactive or busy, the TSmax threshold keeps the transmission delay tolerable. TSmin is the minimum time the algorithm should wait in order to prevent overloading the broker's resources. Finally, the variable IterationV stabilizes the algorithm for the current control event version.
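The exact Ts formulas, (6) and (13), are not reproduced in this excerpt; the following Python fragment only illustrates the general idea of an adaptive gossip period that shrinks when fresh matching events have arrived (ScT greater than ScE) and backs off otherwise, always clamped between TSmin and TSmax. The halving/doubling rule is an assumption, not the TCA itself:

```python
# Bounds as used in the evaluation section (100 ms and 1000 ms).
TS_MIN, TS_MAX = 100, 1000

def adjust_ts(ts, sc_e, sc_t):
    """Hypothetical stand-in for one TCA step: gossip sooner when new
    matching events arrived this round (sc_t > sc_e), back off otherwise,
    clamped to [TS_MIN, TS_MAX]."""
    ts = ts // 2 if sc_t > sc_e else ts * 2
    return max(TS_MIN, min(TS_MAX, ts))
```

The clamping reproduces the roles described above: TSmax bounds the transmission delay during bursts of control events, while TSmin prevents the broker's resources from being overloaded.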
Simple Merge: in the broker, simple merge patterns can lead to multiple subscription events that trigger the same event in service Activities. The strategy consists of maintaining a single Counting Bloom Filter (CBF) per SEPk the broker has to match; the broker stores the SES that match a single SEPk topic in this CBF. A CBF [3] is a special type of Bloom filter which allows a space-efficient probabilistic representation of a set that supports membership queries. Since CBFs are space-efficient entities, the broker can quickly verify multiple SES memberships in the same filter, as well as reuse the filters for other service instances. However, as fetching the exact number of subscriptions in a CBF requires large counters, which can increase the broker load, the TCA is not applicable here. Instead, we propose to modify Ts with a set of fixed values over time.
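A counting Bloom filter replaces the bit array of a plain Bloom filter with small counters, so elements can also be removed; membership tests may yield false positives but never false negatives. A minimal self-contained sketch (not the broker's actual implementation; sizes and the SHA-256 hashing scheme are assumptions):

```python
import hashlib

class CountingBloomFilter:
    """Minimal counting Bloom filter: counters instead of bits."""

    def __init__(self, size=1024, hashes=3):
        self.counters = [0] * size
        self.hashes = hashes
        self.size = size

    def _indexes(self, item):
        # k derived hash positions for the item
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, item):
        for idx in self._indexes(item):
            self.counters[idx] += 1

    def remove(self, item):
        for idx in self._indexes(item):
            if self.counters[idx] > 0:
                self.counters[idx] -= 1

    def __contains__(self, item):
        return all(self.counters[idx] > 0 for idx in self._indexes(item))

cbf = CountingBloomFilter()
cbf.add("SES-topic-1")
```

A broker could keep one such filter per SEPk topic, adding each matching SES on subscription and removing it on unsubscription.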
5 Evaluation
We have implemented a broker prototype on top of our previous developments [9]. We have integrated the MQTT protocol for inter-broker and publisher/subscriber-to-broker communication. Since we are not taking the discovery phase into consideration and the number of brokers is fixed at 4, we have used the SO callback address as the topic seed for the CM. Figure 4 shows the overall convergence of control events (Sc.Ec) in the Pub/Sub network, that is, the moment when all the brokers have received the expected information. In our case the expected information has been set as the last of 10 Sc.Ecs. This is because brokers can process Sc.Ecs in a different order and therefore acquire dissimilar states at runtime, so we previously inserted non-expected but valid events in order to initialize the gossiping process and later gather accurate data. We have simulated three service patterns (increasing, decreasing and random) where the ALs use a Factor Event for creating their increasing or decreasing events and the corresponding serving broker publishes them in a single Sc.Ec. In every case we run the CEDA algorithm for gossiping data. In case 1 the brokers do not run the TCA; therefore, even when different service patterns are tested, the convergence time remains the same; in this single case TS is fixed at 500 ms. In the following tests we enforce the TCA with TSmax set to 1000 ms and TSmin to 100 ms. The results show that in case 2 the TCA improves the overall convergence time by an average of 53.04%. In case 3 this improvement is about 31.92%, and in case 4 about 32.87%.
6 Conclusions
The fundamental idea of this work is to provide cooperation mechanisms between the service workflow layer and the event dissemination layer. As workflows, and especially CMS, can acquire long-standing complexity at runtime, our solutions are focused on providing simple, scalable and feasible integration strategies with the Pub/Sub network. Finally, we presented a proof of concept and validated the advantages of the proposed algorithms and models. Future work will focus on supporting more complex logic gates, extending our model to content routing, and large-scale deployment evaluation and testing. This work is supported by project CALISTA, number TEC2012-32457.
References
1. Eugster, P.T., Guerraoui, R., Kermarrec, A.-M., Massoulie, L.: Epidemic information dis-
semination in distributed systems. Computer 37(5), 60–67 (2004)
2. Pallickara, S., Gadgil, H., Fox, G.: On the Discovery of Brokers in Distributed Messaging
Infrastructures. In: IEEE International Cluster Computing, pp. 1–10 (2005)
3. Broder, A.: Network applications of bloom filters: A survey. Internet Mathematics (2002)
4. Fiege, L., Cilia, M., Muhl, G., Buchmann, A.: Publish-subscribe grows up: support for
management, visibility control, and heterogeneity. IEEE Internet Computing 10(1), 48–55
(2006)
5. Medjahed, B.: Dissemination Protocols for Event-Based Service-Oriented Architectures.
IEEE Transactions on Services Computing, 155–168 (July-September 2008)
6. Birman, K.: The promise, and limitations, of gossip protocols. SIGOPS Oper. Syst.
Rev. 41(5) (October 2007)
7. Campos, F., Pereira, J.: Gossip-based service coordination for scalability and resilience. In:
Workshop on Middleware for Service Oriented Computing. ACM (2008)
8. Song, W., Jiang, D., Chi, C.-H., Jia, P., Zhou, X., Zou, G.: Gossip-Based Workload Pre-
diction and Process Model for Composite Workflow Service. In: World Conference on
Services - I, July 6-10, pp. 607–614 (2009)
9. Morales, A., Novo, O., Wong, W., Alcarria, R.: Towards the Evolution of Pub-
lish/Subscribe Internetworking Mechanisms with PSIRP. International Journal of Comput-
er Information Systems and Industrial Management Applications (2013) ISSN: 2150-7988
10. Alcarria, R., Robles, T., Dominguez, A.M., Cedeno, E.: Resolving Coordination Chal-
lenges in Cooperative Mobile Services. In: Sixth International Conference on Innovative
Mobile and Internet Services in Ubiquitous Computing, IMIS (2012)
11. Fdhila, W., Dumas, M., Godart, C.: Optimized decentralization of composite web services.
In: 2010 6th International Conference on Collaborative Computing: Networking, Applica-
tions and Worksharing (CollaborateCom), October 9-12, pp. 1–10 (2010)
12. Fleuren, T., Gotze, J., Muller, P.: Workflow Skeletons: Increasing Scalability of Scientific
Workflows by Combining Orchestration and Choreography. In: 2011 Ninth IEEE Euro-
pean Conference on Web Services, ECOWS (2011)
13. Nanda, M.G., Chandra, S., Sarkar, V.: Decentralizing execution of composite web servic-
es. SIGPLAN Not. 39(10), 170–187 (2004)
14. Ranjan, R., Rahman, M., Buyya, R.: A Decentralized and Cooperative Workflow Schedul-
ing Algorithm. In: 8th IEEE International Symposium on Cluster Computing and the Grid,
CCGRID 2008, May 19-22, pp. 1–8 (2008)
15. van der Aalst, W.M.P., ter Hofstede, A.H.M., Kiepuszewski, B., Barros, A.P.: Workflow
Patterns. Distrib. Parallel Databases 14(1), 5–51 (2003)
An Approach for the Creation of Accessible
and Shared Datasets
1 Introduction
Studies have shown that many developing economies are experiencing a demographic
transition from a predominantly younger population to one with a much larger percent-
age of older people [1]. In addition to the increase in the percentage of older people,
average life expectancy has increased from 75.7 years in 1990 to 79.7 years in
2008 and is expected to continue to rise [2]. At the same time, declining rates of fertility
are also having an impact on global demographics, with the ratio of 15-year-olds to
65-year-olds expected to decrease from the current ratio of 9:1 to 4:1 by 2050 [3].
As people age the likelihood of suffering from an age-related impairment increases,
with people aged 85 and over being the most common sufferers of chronic diseases
and disabilities [1]. The impact of age-related impairments has resulted in an
increased strain on the economy as the demand for health and social care services
* Corresponding author.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 224–232, 2012.
© Springer-Verlag Berlin Heidelberg 2012
continues to rise [4]. One potential solution to alleviate the impact of these challenges
is to integrate health and social care provision into the home through the use of tech-
nology [1]. A multitude of solutions are being developed within the area of smart
home environments [5–8]. In addition to supporting people suffering from cognitive
and physical impairments, such solutions can also support people who may live alone
and require assistance performing activities of daily living (ADL).
As a direct result of the large amount of research in this area there is an abundance
of data being generated, largely heterogeneous in nature given that it is generated
from multiple sources and therefore stored in various formats. The main challenge
with the heterogeneity of data within this research domain is its lack of interoperabil-
ity, which creates difficulties when data is stored, exchanged and then processed.
This data is typically stored within a database where it can be processed further in
order to produce useful knowledge and allow behavioural patterns to be identified and
monitored [8]. This in turn can then be used to diagnose conditions, in addition to
monitoring and identifying cognitive or physical decline. From a research perspective, the
data collected can also be used to assist in the development of activity recognition
algorithms, which are subsequently deemed to be an essential component in support-
ing people as they perform ADL. Thoroughly testing the aforementioned develop-
ments can be time-consuming and expensive, although the use of a common approach
for storing and sharing datasets has been recognised as being beneficial in this proc-
ess. Ye et al. state "Data sets are essential to activity recognition research, since
they provide a basis for assessing activity recognition algorithms. ... The ability of
researchers to share and reuse data sets is therefore of paramount importance." [9].
2 Related Work
During a workshop at CHI'09 [10] the issues surrounding the development of
shared home behaviour datasets were discussed, with the aim of advancing human-
computer interaction and ubiquitous computing research [11]. Intille [12] has
continued this research and outlines a problem within an area in which he is currently
working, 'Portable In-Home Data Collection', whereby work related to context-
awareness within the home is being limited by the lack of large datasets available to
researchers to test their developments and discoveries [12]. Intille has proposed the
development of a community
resource containing six datasets consisting of high quality, synchronised data streams
recorded over a four month period from most sensor types currently being used within
smart home environments [12]. This resource has the potential to enable researchers
to focus on the development and testing of activity recognition models without being
stalled by the requirement for data collection.
In order to produce shareable datasets there are ethical issues that must be consid-
ered and addressed. Publicly available datasets have been limited in the past due to
ethical issues arising when data becomes open to public scrutiny [13].
Therefore, whilst the need has been identified for a reliable, open source data
resource, it is also essential to consider the ethical issues that might arise as a result.
226 H. McDonald et al.
2.1 homeML
homeML [14] has been proposed as a means of solving the interoperability issues
associated with collecting data from heterogeneous data sources. homeML was origi-
nally designed as an XML-based format for the exchange and storage of data within
smart home environments. Nevertheless, as technology and services continue to
evolve it is now possible and indeed desirable to monitor and support a person both
inside and outside of their home environment. As a result it was necessary to revise
the original homeML schema to support the storage of data generated within multiple
environments [15]. In addition, it was also necessary to extend the schema further to
incorporate a patient’s location, as well as experiment annotation details col-
lected when recording data. homeML has since evolved from version 1.0 to ver-
sion 2.2, as briefly described in Table 1. A detailed description of the homeML
schema and the evolution process is discussed within [15].
Table 1. A description of the evolution of homeML from version 1.0 to version 2.2
Version
1.0 When homeML version 1.0 [14] was proposed the schema was only
designed and not validated. The initial validation process identified minor
amendments, such as the addition of a 'data' tag to both the 'realTimeIn-
formation' and 'event' elements [15].
2.0 It is now increasingly common to monitor a person as they move between
environments. Therefore, a second evaluation process identified the need
to extend homeML to incorporate both location specific and mobile
devices, with the addition of the 'locationDevice' and 'mobileDevice'
elements [16].
2.2 Version 2.0 has since undergone two additional validation processes,
with homeML version 2.2 being the most recent version. An 'annotation-
Details' element is now included, to enable the categorisation of datasets.
3 Methods
The homeML Application has been developed to provide researchers with an intuitive
end-to-end system to assist them as they perform research in the area of smart envi-
ronments. It provides researchers with an approach for the creation of shareable data-
sets that can then be used to test and validate the developments they make in a timely
and efficient manner. This in turn means that researchers have the tools available
to design an XML schema tailored to the experiment they would like to perform,
populate the XML schema with data generated from any device and upload the
completed XML file to a central repository.
Functionality of the toolkit allows a registered user to store data generated both in-
side and outside of a smart home environment within a central repository. Before the
file upload is complete, the format of the file is validated to ensure it adheres to the
homeML version 2.2 schema. Once uploaded, the data can be retrieved at a later stage
and viewed using the homeML Repository section of the application.
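The upload-time check described above can be sketched as follows. This is a minimal illustration using Python's standard library, with hypothetical element names (`homeML`, `device`, `event`, `annotationDetails`) standing in for the real schema; the actual application validates files against the published homeML version 2.2 XSD, which would require a schema-aware XML library.

```python
import xml.etree.ElementTree as ET

# Hypothetical top-level elements for illustration only; the real check
# validates against the homeML 2.2 XSD.
REQUIRED_CHILDREN = {"device", "event", "annotationDetails"}

def validate_upload(xml_text):
    """Reject a dataset upload unless it is well-formed XML with the
    expected top-level structure."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError as err:
        return False, f"not well-formed XML: {err}"
    if root.tag != "homeML":
        return False, f"unexpected root element: {root.tag}"
    present = {child.tag for child in root}
    missing = REQUIRED_CHILDREN - present
    if missing:
        return False, "missing elements: " + ", ".join(sorted(missing))
    return True, "ok"

sample = """<homeML version="2.2">
  <device id="S5" type="contactSwitch" location="Kitchen"/>
  <event deviceId="S5" timestamp="2012-07-01T09:15:00">open</event>
  <annotationDetails category="ADL"/>
</homeML>"""
ok, reason = validate_upload(sample)
```

A file failing either the well-formedness or the structural check would be rejected before the upload completes, mirroring the behaviour described above.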
When uploading a dataset to the homeML Repository users have the option of
making the data 'public', i.e. viewable and downloadable by other researchers, or
'private', i.e. viewable only by the owner of the dataset. The opportunity for re-
searchers to share and compare data gathered within a smart home environment
therefore has the potential to be dramatically improved.
Table 2. Sensors available within SERG to participants for the purpose of completing the study
Sensor Sensor Type Description Location
S1 Contact Switch Board Room Door To Corridor Smart Meeting Room
S2 Contact Switch Board Room Door To Office Smart Meeting Room
S3 Contact Switch Kitchen Door to Office Kitchen
S4 Contact Switch Kitchen Cupboard Kitchen
S5 Contact Switch Fridge Kitchen
S6 Contact Switch Microwave Kitchen
S7 Tilt Switch Kettle Kitchen
S8 Contact Switch Board Room to Robotics Lab Robotics Lab
S9 Contact Switch Living Room to Kitchen Living Room
Fig. 2. Deployment map of the sensors used during the study within the SERG labs
4 Evaluation
The study documented within this paper focuses on validating the usability of the
homeML Application, including the homeML Repository and its corresponding suite
of tools, as an end-to-end system used by researchers performing experiments
within a smart home environment.
To complete the study five participants were recruited from within the SERG [17].
Prior to engaging with the homeML Application each participant completed a ques-
tionnaire. The questionnaire consisted of both quantitative and qualitative questions;
the purpose of which was to profile each individual researcher in relation to their level
of experience and area of research.
Although each participant had a different area of interest, they were all involved
within research activities in the general area of smart environments. Each participant
had experience with XML and four out of five were already aware of the homeML
concept or had previously used it. Whilst performing their research each participant
identified a requirement to undertake some form of experiment that would result in
the generation of heterogeneous data produced by multiple sensors/sources. When
asked if the methods they currently used for data sharing were sufficient, three out of
five participants said no, with one claiming that a major challenge they face on a
regular basis was ‘Obtaining a good quality dataset that is reliable and accurate in a
suitable format that is easy to integrate into already existing systems’.
Following completion of the questionnaires participants were able to log-on to the
homeML Application and complete the specified task. The task involved the partici-
pant designing an experiment using the homeML Toolkit. The participant was able to
select any of the nine sensors available within the SERG research labs and specify a
time period between one and fourteen days for their experiment to run. Table 3 docu-
ments the sensors chosen and the time specified by each participant during the study.
Table 3. Sensors chosen by each participant whilst performing the study, where S = Sensor and
P = Participant
S1 S2 S3 S4 S5 S6 S7 S8 S9 Time Period (Days)
P1 X X X X X 14
P2 X X 7
P3 X X 7
P4 X X 2
P5 X X X X X X X X X 13
Once the experiment was designed, the researcher saved the partially populated
XML schema. Over the time period specified by the user (between one and fourteen
days) the XML file was populated with real data generated within the SERG labs.
After the experiment was completed, the XML file was uploaded to the homeML
Repository. All users subsequently made their datasets publicly available.
Finally, each participant was asked to complete a post-experiment questionnaire,
again consisting of both quantitative and qualitative questions. The purpose of this
questionnaire was to allow the participant to rate their experience of the homeML
Application in addition to providing them with an opportunity to suggest comments
and recommendations for the homeML Application. The questionnaire also provided
useful information regarding the participants’ opinions on using the homeML
structure in the future.
All participants agreed that the homeML Application would be a useful tool to
have available within the research domain.
In order to validate and test the homeML Application further it is our intention to
secure a number of collaborators who can assist with the evaluation of homeML and
the homeML Application in order to identify in a truly objective manner any inade-
quacies and scope for further development. The developers of the homeML Applica-
tion plan to attend a range of smart home related conferences. This opportunity will
be used to demonstrate the latest version of the homeML Format in addition to the
features of the homeML Application, including the homeML Toolkit and Repository.
Members of the research domain will be approached and asked to participate in re-
viewing and validating the work. Through this method of acceptance testing it will be
possible to establish wider acceptance within the research community of both
homeML version 2.2 and the homeML Application1.
Acknowledgement. The authors of this paper would like to thank all participants
involved with the study for their time, feedback and support. This work was funded
by the Department of Employment and Learning Higher Education Innovation Fund.
References
1. Coughlin, J.F., Pope, J.: Innovations in Health, Wellness and Ageing-in-Place. IEEE Engi-
neering in Medicine and Biology Magazine, 47–52 (2008)
2. World Health Organisation, http://apps.who.int/ghodata/?vid=720
3. Chan, M., Campo, E., Esteve, D., Fourniols, Y.-Y.: Smart homes – Current features and
future perspectives. Maturitas 64(2), 90–97 (2009)
4. Knickman, J.R., Snell, E.K.: The 2030 Problem: Caring for Ageing Baby Boomers. Health
Services Research, 849–884 (2002)
5. Helal, S., Mann, W., El-Zabadani, H., King, J., Kaddoura, Y., Jansen, E.: The Gator Tech
Smart House: A Programmable Pervasive Space. Computer 38(3), 50–60 (2005)
6. Kientz, J.A., Patel, S.N., Jones, B., Price, E., Mynatt, E.D., Abowd, G.D.: The Georgia
Tech Aware Home. In: Proceedings of CHI (2008)
7. Intille, S.S., Larson, K., Beaudin, J.S., Tapia, E.M., Kaushik, P., Nawyn, J., McLeish, T.J.:
The PlaceLab: a live-in laboratory for pervasive computing research (Video). In: Proceed-
ings of the Pervasive 2005 Video Program (2005)
8. Cook, D.J., Das, S.K.: How smart are our environments? An updated look at the state of
the art. Pervasive and Mobile Computing 3(2), 53–73 (2007)
9. Ye, J., Coyle, L., McKeever, S., Dobson, S.: Dealing with activities with diffuse bounda-
ries. In: The Proceedings of the Pervasive 2010 Workshop (2010)
10. CHI 2009, http://www.chi2009.org/
11. Developing Shared Home Behaviour Datasets to Advance HCI and Ubiquitous Computing
Research, http://web.mit.edu/datasets/Home.html
12. Intille, S.: Making Home Activity Datasets a Shared Resource,
http://boxlab.wikispaces.com/Mission
1 Access to the homeML Application can be obtained using the following link: www.home-
ml.org/Browser
13. Coyle, L., et al.: Gathering datasets for activity identification. In: The Proceedings of the
Workshop on Developing Shared Home Behaviour Datasets to Advance HCI and Ubiqui-
tous Computing Research (2009)
14. Nugent, C.D., Finlay, D.D., Davies, R.J., Wang, H.Y., Zheng, H., Hallberg, J., Synnes, K.,
Mulvenna, M.D.: homeML – An Open Standard for the Exchange of Data Within Smart
Environments. In: Okadome, T., Yamazaki, T., Makhtari, M. (eds.) ICOST. LNCS,
vol. 4541, pp. 121–129. Springer, Heidelberg (2007)
15. McDonald, H.A., Nugent, C.D., Moore, G., Finlay, D.D.: An XML Based Format for the
Storage of Data Generated within Smart Home Environments. In: The 10th IEEE Interna-
tional Conference on Information Technology and Applications in Biomedicine, pp. 1–4
(2010)
16. McDonald, H.A., Nugent, C.D., Finlay, D.D., Moore, G., Hallberg, J.: A Web Based Tool
for the storing and visualising data generated within a smart home. In: Proceedings of the
IEEE Annual Conference of the Engineering in Medicine and Biology Society, pp. 5303–
5306 (2011)
17. Smart Environments Research Group, http://scm.ulster.ac.uk/
18. Nugent, C.D., Mulvenna, M., Hong, X., Devlin, S.: Experiences in the development of a
smart lab. International Journal of Biomedical Engineering and Technology 2(4), 319–331
(2009)
An Infrastructure to Provide Context-Aware
Information and Services to Mobile Users
Abstract. Mobile devices have undergone great changes in the last few
years, turning into powerful hand-held computers. This, together with
the numerous information sources they have access to, both in the form of
physical sensors and applications with access to the Internet and large
amounts of personal data, makes them an ideal platform for the deploy-
ment of context-aware applications. In this paper we present a context
management infrastructure for mobile environments, responsible for deal-
ing with the context information that will enable mobile applications
and services to adapt their behaviour to meet user needs. This infras-
tructure relies on semantic technologies to improve interoperability, and
is based on a central element, the context manager. This element acts as
a central context repository and takes on most of the computational burden
derived from dealing with this kind of information, thus relieving the more
resource-constrained devices in the system of these tasks.
1 Introduction
Nowadays mobile devices play a prominent role in our lives, with around six cel-
lular subscriptions for every seven people in the world [7]. At the same time, with
the emergence of the new and powerful mobile phone models, the smartphones,
the capabilities of these devices have increased remarkably. This way, smart-
phones are not only equipped with hardware equivalent to that of desktop computers
from a few years ago, but they also include other key features like permanent
network connectivity, the ability to execute a great diversity of applications and
numerous integrated sensors.
With these characteristics, the latest mobile devices are able to provide richer and
better context information than ever before, and are therefore a great platform
for the deployment of context-aware applications. By context we mean any infor-
mation that can be used to characterize the situation of an entity (where entity
is a person, place, or object) that is considered relevant to the interaction be-
tween a user and an application, including the user and applications themselves,
234 P. Curiel and A.B. Lago
2 Related Work
Numerous context-aware systems for mobile environments have been developed
in the last years.
The work in [9] proposes a Context Managing Framework whose aim is to
provide context information to mobile applications in order to adapt their be-
haviour according to that context. Like our system, it uses a blackboard approach,
based on a central context server, to decouple source and consumer communi-
cation. But in contrast to our solution, it places this central server inside the
mobile phone and limits the context information usage to each mobile phone’s
own applications.
The CoBrA project [3] proposes an architecture for supporting context-aware sys-
tems in smart spaces. It is based on a central agent, a broker which maintains
a shared model of the context representing all the entities in a given space. It
also uses a policy language to allow users to define rules to control the use and
the sharing of their private contextual information. However, the system is not
explicitly designed to operate in mobile environments.
In [5], CASS, a middleware for context-aware mobile applications, is intro-
duced. Its main goal is to give service to devices with low processing capabilities,
like mobile phones. For this purpose, as we do in our system, it delegates de-
manding tasks regarding context processing to external entities in the network. It
also resembles our solution in its use of a central context repository. But
whereas we adopt a semantic representation for that central repository, which
enables a more flexible context processing and sharing, CASS uses a database
for this purpose, which follows no semantic representation and relies mainly on
a rule-based engine to process that context.
Current Context. This element stores the context information which is valid at
each moment, that is, the information that represents the current status of the entities
which are considered to be part of the context. This information is stored follow-
ing the ontological model shared by the whole system, so access to it is
governed by this model too.
Context History. In contrast to the current context, which only stores currently
valid information, the context history keeps track of the changes that have taken place
in the context information. Depending on the strategy followed for the historical
information storage, this entity could require a great amount of space. Hence,
several policies have been established, such as the number of entries to store, an
information time-to-live, which entities should be stored and which should not, and
so on, to enable different configurations for each specific scenario.
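Two of the pruning policies mentioned above, an entry-count cap and a time-to-live, can be sketched as follows; the class and method names are illustrative, not the system's actual API:

```python
import time
from collections import deque

class ContextHistory:
    """Sketch of a history store that prunes entries by two configurable
    policies: a maximum number of entries and a time-to-live."""

    def __init__(self, max_entries=1000, ttl_seconds=3600.0):
        self.max_entries = max_entries
        self.ttl_seconds = ttl_seconds
        self._entries = deque()  # (timestamp, entity, old_value)

    def record(self, entity, old_value, now=None):
        """Log the value an entity had before a context update."""
        now = time.time() if now is None else now
        self._entries.append((now, entity, old_value))
        self._prune(now)

    def _prune(self, now):
        # Drop entries older than the time-to-live...
        while self._entries and now - self._entries[0][0] > self.ttl_seconds:
            self._entries.popleft()
        # ...and enforce the entry-count cap, evicting oldest first.
        while len(self._entries) > self.max_entries:
            self._entries.popleft()

    def entries_for(self, entity):
        return [e for e in self._entries if e[1] == entity]
```

Each deployment scenario would pick its own `max_entries` and `ttl_seconds` values, and a policy on which entities to track at all could be layered on top of `record`.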
The context manager offers a set of methods that enable context sources
and consumers to work with context information. Among those, the following
can be emphasized as being some of the most relevant ones:
– Add Context Info. This method enables a context source to add or update
context information in the current context space.
– Query. By calling this method, a context consumer can synchronously access
the current context space using SPARQL queries.
– Subscribe/Notify. The Subscribe method enables a context consumer to asyn-
chronously access the current context space. The consumer provides a
SPARQL query, which the context manager will register, as well as a call-
back address which the latter will use to notify the former when the query is
matched, using the Notify method.
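A minimal sketch of this blackboard-style interaction is shown below; for brevity, plain Python predicate functions over (subject, property, value) triples stand in for the SPARQL queries, and a callable stands in for the consumer's callback address:

```python
class ContextManager:
    """Blackboard sketch of the three operations described above:
    Add Context Info, Query, and Subscribe/Notify."""

    def __init__(self):
        self._triples = set()     # the current context space
        self._subscriptions = []  # (query predicate, callback) pairs

    def add_context_info(self, subject, prop, value):
        # Add or update: drop any previous value for (subject, prop).
        self._triples = {t for t in self._triples
                         if not (t[0] == subject and t[1] == prop)}
        triple = (subject, prop, value)
        self._triples.add(triple)
        # Notify: fire the callback of every matching subscription.
        for predicate, callback in self._subscriptions:
            if predicate(triple):
                callback(triple)

    def query(self, predicate):
        # Synchronous access to the current context space.
        return [t for t in self._triples if predicate(t)]

    def subscribe(self, predicate, callback):
        # Asynchronous access: the callback plays the role of the
        # consumer's callback address.
        self._subscriptions.append((predicate, callback))

# Usage sketch: a consumer subscribing to location updates.
seen = []
cm = ContextManager()
cm.subscribe(lambda t: t[1] == "locatedIn", seen.append)
cm.add_context_info("user42", "locatedIn", "kitchen")
```

The actual system registers SPARQL queries over the shared ontological model and notifies remote consumers over the network, but the decoupling between sources and consumers is the same.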
Fig. 1. CPU usage adding instances
Fig. 2. Memory usage adding instances
manager was executing on a PC with an Intel Core 2 Duo at 2.26 GHz and with
4 GB of memory, running Windows 7 Professional Edition and JDK 1.6. The CPU
and memory usage statistics were collected using the JConsole utility provided
by the JDK.
From these tests several conclusions were extracted. First of all, in Fig. 1 it
can be observed that an increase in the number of concurrent sources interacting
with the context manager greatly impacts system performance. With only
one concurrent source adding context information, the CPU demand is
very modest until the ontological instance count in the context manager exceeds
100 units, and reaches 100% at almost 200 units. In contrast, with
five concurrent sources interacting with the context manager, CPU demand is
significantly higher even when there is a low number of instances in the current
context, reaching 100% CPU demand with about 50 instances. Finally, with
ten concurrent sources, the CPU demand is even higher, reaching 100% with
about 20 instances in the current context.
At the same time, Fig. 1 shows, as do Figs. 2, 3 and 4, that the number
of instances in the context space also has a huge impact on system performance.
Apart from increasing the processing power needed, Fig. 2 shows
that memory consumption increases linearly as a function of the instance
number. The instance adding time (Fig. 3) and query times (Fig. 4)
follow a similar pattern, increasing according to an exponential function.
Finally, an excessive system overload was observed as the instance number
in the context space increased, so some checks were made. This led to the conclu-
sion that working with Jenabean is both more processor- and memory-demanding
than working directly with Jena, as Fig. 6 shows. It must be noted that this test
was not carried out by removing Jenabean completely from the prototype, as it is
deeply integrated in the implementation, but only by interacting directly with Jena
to perform the instance adding operation. Thus, reimplementing the prototype
without depending on Jenabean could improve system performance to a greater
extent than what Fig. 6 shows.
5 Conclusion
References
1. Abowd, G.D., Dey, A.K., Brown, P.J., Davies, N., Smith, M., Steggles, P.: Towards
a Better Understanding of Context and Context-Awareness. In: Gellersen, H.-W.
(ed.) HUC 1999. LNCS, vol. 1707, pp. 304–317. Springer, Heidelberg (1999)
2. Apache Jena: Java semantic web framework, http://jena.apache.org/
3. Chen, H., Finin, T., Joshi, A.: An intelligent broker for context-aware systems. In:
Proc. of Ubicomp, Seattle, Washington, USA, pp. 183–184 (2003)
4. Coppola, P., Della Mea, V., Di Gaspero, L., Mizzaro, S., Scagnetto, I., Selva, A.,
Vassena, L., Rizi, P.: MoBe: context-aware mobile applications on mobile devices
for mobile users. In: Proc. International Workshop on Exploiting Context Histories
in Smart Environments, ECHISE, Munich (2005)
5. Fahy, P., Clarke, S.: CASS-a middleware for mobile context-aware applications. In:
Proc. Workshop on Context Awareness, MobiSys, pp. 304–308 (2004)
6. Hayes-Roth, B.: A blackboard architecture for control. Artificial Intelligence 26(3),
251–321 (1985)
7. ITU: Key global telecom indicators for the world telecommunication service sector,
http://www.itu.int/ITU-D/ict/statistics/at_glance/KeyTelecom.html
8. jenabean: A library for persisting java beans to rdf,
http://code.google.com/p/jenabean/
9. Korpipää, P., Mäntyjärvi, J., Kela, J., Keränen, H., Malm, E.-J.: Managing context
information in mobile devices. IEEE Pervasive Computing 2(3), 42–51 (2003)
10. OSGi Alliance: The open services gateway initiative, http://www.osgi.org/
11. Richardson, L., Ruby, S.: RESTful web services. O’Reilly Media (2007)
12. Strang, T., Linnhoff-Popien, C.: A context modeling survey. In: Proc. Workshop on
Advanced Context Modelling, Reasoning and Management, Nottingham, England
Situation-Driven Development: A Methodology
for the Development of Context-Aware Systems
Abstract. Several toolkits have been proposed in order to ease the development
of context-aware systems, providing high-level programming interfaces to man-
age context data. One of the main tasks in the development of such systems is
the definition of user situations that have to be identified by the system in order
to adapt its behaviour. These situations are best defined by domain experts, but
usually they do not have programming skills. Apart from that, there is a lack of
methodologies to guide the development process. This paper presents a meth-
odology based on the definition of situations that is designed to involve domain
experts in the development process. This way, they can support programmers in
the definition of the required situations. Also, a web-based platform has been
implemented in order to manage context data without any programming skills.
This way, domain experts can also configure the situations to be detected by the
system.
1 Introduction
Context-aware systems provide personalized services to users at any time and place,
based on the identification of the situation of the user. Nevertheless, the development
of these systems is difficult for programmers. There are several technical challenges
that have to be faced: context sources have to be identified, information has to be
obtained from these distributed sources, data has to be modelled in order to be proc-
essed by computers, the situation of the user has to be identified, and finally, the
system has to be adapted to the identified situation.
Apart from that, it can be difficult for a programmer to identify the needed situa-
tions and the desired behaviours of the system to be developed once a situation is
detected, because they are usually dependent on the application domain. Domain ex-
perts, that is, people who are experts in the domain where the application is going to
be deployed, can provide the needed expertise regarding the above requirements.
There are several research works that propose toolkits in order to ease the devel-
opment of context-aware systems [1]. Most of them offer high-level application
242 D. Martín et al.
2 Related Work
There are several software development methodologies that can be used in order to
implement context-aware systems [2] (e.g. waterfall, iterative) but all these method-
ologies are designed to guide the development process of general software systems, so
they do not consider the specific issues that are related to the development of context-
aware systems.
Several authors have proposed development methodologies to be used in the
implementation of context-aware systems. Henricksen and Indulska [3] propose a
methodology and a modelling language called Context Modelling Language (CML).
This language was developed for conceptual modelling of databases that store context
data. It has different constructs for capturing the needed classes and context sources,
the quality of context data and the dependencies and constraints between context fact
types. This way, system designers can specify the requirements of context data
needed by the system. The methodology is divided into five stages: analysis,
design, implementation, configuration of the system and validation. Context-oriented
Programming (COP) [4] is a programming model that offers mechanisms to adapt the
system to be implemented according to the gathered context data.
The above mentioned approaches are focused on system designers and program-
mers and do not involve non-technical users. In that way, domain experts cannot take
part in the development process.
Situation-Driven Development: A Methodology for the Development of Context-Aware 243
On the other hand, several context-aware toolkits support the development of such systems. Most of these toolkits [5][6] can only be configured through high-level application programming interfaces, and therefore only people with technical skills can use them to develop context-aware systems.
Other authors propose visual approaches in which domain experts can take part in the development process. The iCAP toolkit [7] is a visual editor with which domain experts can prototype context-aware systems using rules. The DiaSuite toolkit [8] comprises a domain-specific design language, a compiler for this language and an editor for defining simulation scenarios. The OPEN framework [9] is a programming environment for the rapid prototyping of context-aware applications, based on the configuration of semantic rules that trigger predefined actions. All these toolkits are designed to involve non-technical users in the development process, but they have some drawbacks. For instance, the iCAP toolkit provides no functionality for defining a context data model and cannot be extended. The DiaSuite toolkit provides a modelling language that can be quite difficult for a non-technical user, because programming skills are required to define artefacts. The OPEN framework uses a semantic model to represent context data that can only be extended by skilled ontology experts.
3 Situation-Driven Development
To guide the development process of context-aware systems and involve non-technical domain experts in collaboration with programmers, a methodology has been designed. It is based on the definition of situations and on the following premise: programmers have context-aware toolkits that provide the functionality needed to detect user situations, and these toolkits can be configured without programming skills.
The aim of the methodology is to promote collaboration between domain experts and programmers in the development of context-aware systems. The methodology
considers the different user situations that are relevant for adapting the behaviour of the system to be developed. Accordingly, the toolkits used have to provide mechanisms to detect user situations and produce outputs based on them.
The methodology is divided into four stages: analysis, configuration, development and validation. Some stages involve only the programmer, because they require some kind of development.
• Analysis. In the analysis stage, domain experts and programmers have to identify all the situations that may be relevant for the system to be developed, specifying a name, a description and the desired behaviour of the system once the situation is detected. Each identified situation also has to be parameterized with the entities involved in it, the location where it can be identified and the interval of time in which it can be detected. The inputs of context data needed to detect the situation have to be specified as well, providing the objective, the conditions and the restrictions for each data type. Finally, the outputs needed once a situation is detected have to be specified. These outputs will be used by programmers to adapt the system's behaviour according to the previous specifications. To support the analysis stage, an Excel sheet has been designed in which domain experts and programmers can discuss the needed parameters of each identified situation.
• Configuration. Once the analysis stage is finished, the toolkit has to be configured with the specified parameters. In this stage, the programmer has to identify and configure the context sources that can provide the defined inputs of data, and configure or implement the providers that will obtain these data from the identified sources. The next step is the configuration of the areas where the situations can be detected, the context data model that will store context data, the mappings between the obtained data and the model, and the inference mechanisms needed to detect the situations. These configurations should be done by domain experts in collaboration with programmers, so the toolkit has to provide configuration mechanisms that avoid the use of programming languages.
• Development. The programmer has to implement the defined behaviours of the system to be developed, processing the outputs generated by the toolkit.
• Validation. Finally, the service has to be tested and validated by domain experts and programmers. The final user could also be involved in this stage.
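As a sketch, the parameters collected in the analysis stage could be captured in a structure like the following; the field names are ours, since the paper only specifies an Excel sheet:

```python
from dataclasses import dataclass, field

@dataclass
class ContextInput:
    """One input of context data needed to detect a situation."""
    data_type: str      # e.g. "location", "temperature"
    objective: str      # why this input is needed
    conditions: str     # conditions the data must satisfy
    restrictions: str   # restrictions on the data (e.g. accuracy)

@dataclass
class SituationSpec:
    """Parameters agreed on by domain experts and programmers."""
    name: str
    description: str
    behaviour: str                                 # desired behaviour on detection
    entities: list = field(default_factory=list)   # entities involved
    location: str = ""                             # where it can be identified
    time_interval: str = ""                        # when it can be detected
    inputs: list = field(default_factory=list)     # ContextInput items
    outputs: list = field(default_factory=list)    # outputs on detection

spec = SituationSpec(
    name="Waiting for the bus",
    description="A visitor waits for the bus at a bus stop",
    behaviour="Send an SMS with the estimated time of arrival",
    entities=["visitor", "bus"],
    location="Bus Stop A",
    time_interval="06:00-23:00",
    inputs=[ContextInput("location", "detect the visitor at the stop",
                         "within 10 m of the stop", "GPS accuracy < 10 m")],
    outputs=["SMS notification"],
)
```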
The first layer is where all the context sources reside (e.g. mobile devices, sensors, web services). The next layer contains the providers of the platform, the modules that obtain data from the identified sources. There are two types of providers: passive and active. Passive providers wait until any of the registered sources sends data to the platform and then process it. Active providers, on the other hand, obtain data from the source by making periodic GET requests. Context data has to be provided by sources using XML.
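The two provider types can be sketched as follows; the class and method names are our assumptions, not Context Cloud's actual API:

```python
import threading
import urllib.request

class PassiveProvider:
    """Waits until a registered source pushes XML data, then processes it."""
    def __init__(self, on_data):
        self.on_data = on_data

    def receive(self, xml_payload):
        # Called by the platform when a source sends data.
        self.on_data(xml_payload)

class ActiveProvider:
    """Polls a source with periodic GET requests."""
    def __init__(self, url, period_s, on_data):
        self.url, self.period_s, self.on_data = url, period_s, on_data
        self._timer = None

    def _poll(self):
        # Fetch the source's XML and hand it to the processing callback.
        with urllib.request.urlopen(self.url) as resp:
            self.on_data(resp.read())
        self.start()  # reschedule the next poll

    def start(self):
        self._timer = threading.Timer(self.period_s, self._poll)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer:
            self._timer.cancel()

# A passive provider simply reacts to pushed data:
received = []
PassiveProvider(received.append).receive("<context/>")
```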
1 https://vimeo.com/contextcloud
The third layer is where context data is managed and configured through the interface of the platform. This layer exposes a web front-end where, by means of dialogs, the user can create areas, rules, context entities, mappings between context sources and entities, and the context data gathering process from the identified context sources. The main component of the layer is the Context Manager, which is responsible for managing the context data life-cycle. Here, the defined context data model is transformed into Java Bean classes and stored in the Context Model Store. In this way, the platform allows the creation of a context model by defining entities with differently typed properties. Situations are modelled as Event Condition Action (ECA) rules [13] that are validated and inserted into the rule engine of the Knowledge Base. The platform provides dialogs to create areas, defined as polygons drawn over a Google Maps layer, and rules assigned to these areas. The Mapping Engine performs the mappings between data coming from the configured context sources and the defined context entities, according to the user-defined mappings. It also saves or updates context entity instances in the Knowledge Base. The GIS Service translates the coordinates of context entities into a registered area name; in this way, rules only take into account context entities that are in their associated area. Finally, the rule engine is responsible for firing all the defined rules. This component can also POST data in XML format to external services as a consequence of the defined rules.
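A minimal sketch of how situations modelled as ECA rules might be fired, including the area filtering performed by the GIS Service; all names are illustrative and do not reflect Context Cloud's internal interfaces:

```python
class EcaRule:
    """Event-Condition-Action rule: when an event arrives, evaluate the
    condition over the context entity and, if it holds, run the action."""
    def __init__(self, name, condition, action, area=None):
        self.name, self.condition, self.action, self.area = name, condition, action, area

class RuleEngine:
    def __init__(self):
        self.rules = []

    def add(self, rule):
        self.rules.append(rule)

    def fire(self, entity):
        """Fire every rule whose area matches the entity's area and whose
        condition holds; return the names of the rules fired."""
        fired = []
        for r in self.rules:
            if r.area is not None and entity.get("area") != r.area:
                continue  # rules only consider entities in their associated area
            if r.condition(entity):
                r.action(entity)
                fired.append(r.name)
        return fired

# Example: notify a visitor at the beach when the temperature exceeds 30 degrees.
posted = []
engine = RuleEngine()
engine.add(EcaRule(
    "sunbathing",
    condition=lambda e: e.get("temperature", 0) > 30,
    action=lambda e: posted.append(("SMS", e["name"])),
    area="beach",
))
engine.fire({"name": "visitor1", "area": "beach", "temperature": 32})
```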
Finally, the upper layer is where the context-aware system resides. This system
will adapt its behaviour based on the identified situations and the outputs generated by
the platform.
5 User Evaluation
The platform has been validated with 20 participants, who carried out the evaluation in pairs composed of a domain expert and a programmer. The evaluation was inspired by a tourism scenario, so the non-technical users were experts in the tourism domain.
5.1 Methodology
First of all, the users were introduced to the Context Cloud platform and to the experiment's objectives. They were instructed on how to configure the platform and were given an example of how to identify a situation using the methodology. The participants were then given a text document in which four different situations were described.
• Waiting for the bus (S1): the visitor waits for the bus at Bus Stop A and receives an SMS with the estimated time of arrival of the next bus.
• Sunbathing (S2): the visitor is at the beach and receives an SMS advising her to use sun cream because the temperature is higher than 30 ºC.
• Waiting for the bus (S3): the visitor waits for the bus at Bus Stop B and receives an SMS with the estimated time of arrival of the next bus.
• Arriving at the hotel room (S4): the air conditioning is activated when the visitor goes into the room.
The situations were simulated using the Siafu Context Simulator2, which was configured to send context data about the visitor on the move to the platform. The simulator also provided web services to obtain weather information and to access the air conditioning system of the hotel. The participants had to configure the platform to detect the above-mentioned situations. During the test, an external observer annotated all the problems that the participants encountered while using the platform. Once a situation was detected, the time spent in its configuration was also recorded.
After completing the test, each participant filled out a questionnaire based on a six-level Likert scale, with values from 1 (totally disagree) to 6 (totally agree). The survey was based on the Technology Acceptance Model (TAM) literature and, in particular, was adapted from Davis's studies [14]. Three constructs were considered: Perceived Ease of Use (PEOU), Perceived Usefulness (PU) and Behavioral Intention (BI).
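One simple way to aggregate such a questionnaire into per-construct agreement figures is sketched below; the threshold of 4 as "agreement" and the sample responses are our assumptions, since the paper does not describe its aggregation procedure:

```python
# Hypothetical responses on the 6-point Likert scale (1 = totally
# disagree, 6 = totally agree), one list per TAM construct.
responses = {
    "PEOU": [5, 6, 4, 5, 6],
    "PU":   [6, 6, 5, 6, 5],
    "BI":   [5, 4, 6, 5, 6],
}

def agreement(scores, threshold=4):
    """Share of responses at or above the agreement threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

for construct, scores in responses.items():
    print(f"{construct}: {agreement(scores):.0%} agree, "
          f"mean {sum(scores) / len(scores):.1f}")
```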
5.2 Results
95% of the participants found that learning the methodology is easy, and 100% of them stated that it eases collaborative work. All participants also found the methodology useful both for working with the platform and for developing context-aware systems.
95% of the participants found learning how to use the platform easy. 75% of the participants found it easy to get Context Cloud to do what they want; the remaining 25% disagreed, mainly because they were not used to working with these kinds of toolkits. 95% of the participants also found the interaction with Context Cloud clear and understandable, and 90% of the non-programmers stated that it would be easy for them to become skillful at using the platform.
The perceived usefulness of the platform is also strongly supported by the domain experts. All of them stated that using Context Cloud in their jobs would enable them to develop context-aware systems more quickly and would make such development easier. All of the participants would also recommend the platform to other users and would use it in future developments. In addition, 80% of them would pay for the system.
The average time each pair spent solving the evaluation test was 89 minutes. Notably, they spent an average of 37 minutes solving the first situation, while for the rest of the situations the average time was 17 minutes. This means that once they knew how to configure the platform to identify the first situation, configuring it for the remaining situations was easier; in other words, the participants learned how to use the platform successfully in a very short period of time.
2 http://siafusimulator.sourceforge.net/
6 Conclusions
References
1. Baldauf, M., Dustdar, S., Rosenberg, F.: A survey on context-aware systems. International
Journal of Ad Hoc and Ubiquitous Computing 2, 263 (2007)
2. Green, D., DiCaterino, A.: A Survey of System Development Process Models. Center for Technology in Government, University at Albany (1998)
3. Henricksen, K., Indulska, J.: Developing Context-Aware Pervasive Computing Applica-
tions: Models and Approach. Pervasive and Mobile Computing 2(1), 37–64 (2006)
4. Hirschfeld, R., Costanza, P.: Context-oriented Programming. Journal of Object Technology 7(3), 125–151 (2008)
5. Bardram, J.E.: The Java Context Awareness Framework (JCAF) – A Service Infrastructure
and Programming Framework for Context-Aware Applications. In: Gellersen, H.-W.,
Want, R., Schmidt, A. (eds.) PERVASIVE 2005. LNCS, vol. 3468, pp. 98–115. Springer,
Heidelberg (2005)
6. Gu, T., Pung, H., Zhang, D.: A service-oriented middleware for building context-aware
services. Journal of Network and Computer Applications 28, 1–18 (2005)
7. Sohn, T., Dey, A.: iCAP: An Informal Tool for Interactive Prototyping of Context-Aware
Applications. In: Extended Abstracts of CHI, pp. 974–975 (2003)
8. Cassou, D., Bruneau, J., Consel, C.: A tool suite to prototype pervasive computing applications. In: 2010 8th IEEE International Conference on Pervasive Computing and Communications Workshops (PERCOM Workshops), pp. 820–822 (2010)
9. Guo, B., Zhang, D., Imai, M.: Toward a cooperative programming framework for context-aware applications. Personal and Ubiquitous Computing 15(3), 221–233 (2012)
10. Dey, A., Abowd, G., Salber, D.: A Conceptual Framework and a Toolkit for Supporting
the Rapid Prototyping of Context-Aware Applications. Human-Computer Interaction 16,
97–166 (2001)
11. Yau, S.S., Huang, D.: Mobile Middleware for Situation-Aware Service Discovery and
Coordination. In: Bellavista, P., Corradi, A. (eds.) Handbook of Mobile Middleware
(2006)
12. Allen, J.: Maintaining knowledge about temporal intervals. Communications of the
ACM 26(11), 832–843 (1983)
13. López de Ipiña, D., Katsiri, E.: An ECA Rule-Matching Service for Simpler Development of Reactive Applications. Published as a supplement to the Proc. of Middleware 2001, IEEE Distributed Systems Online 2(7) (2001)
14. Davis, F.: Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly 13(3), 318–340 (1989)
Evolving Context-Unaware to Context-Aware Model
Using the ESC Ontology
1 Introduction
2 ESC Ontology
The ESC (Entity Situation Context) ontology defines the most important concepts of a context-aware model and their relationships. It was built according to the most common definition of context: “Context is any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves” [4].
Figure 1 illustrates the concepts and relationships of the ESC ontology.
Entity represents the relevant concepts that can be characterized by the context information available in the domain (e.g. person, room).
ContextInfo represents the information available in the domain that is useful for characterizing the situation of an entity (e.g. location, activity).
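As an illustrative analogue (the ontology itself is expressed in OWL and processed by DL reasoners, so this Python rendering is only a sketch), the ESC concepts and the hasEntity/hasSituation inverse pair can be modelled as:

```python
class Entity:
    """A concept characterized by context information (e.g. person, room)."""
    def __init__(self, name):
        self.name = name
        self.situations = []        # hasSituation

class Situation:
    """A characterization of the state of an entity."""
    def __init__(self, name):
        self.name = name
        self.entity = None          # hasEntity (inverse of hasSituation)
        self.context_info = []      # hasContextInfo

class ContextInfo:
    """Domain information used to characterize a situation (e.g. location)."""
    def __init__(self, name):
        self.name = name

def relate(situation, entity):
    """Maintain hasEntity and its inverse hasSituation consistently,
    mimicking what a reasoner infers from the inverse-property axiom."""
    situation.entity = entity
    entity.situations.append(situation)

room = Entity("room401")
in_class = Situation("inClass")
relate(in_class, room)
```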
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 249–252, 2012.
© Springer-Verlag Berlin Heidelberg 2012
250 H. Martins and N. Silva
Further, according to Dey [4], any information used in the characterization of the situation of an entity is context information. The ESC ontology captures this through corresponding axioms relating ContextInfo to the situations it characterizes.
3 Proposed Approach
This section presents the process for transforming a context-unaware model into a context-aware model. The process is composed of six steps.
To better convey the idea, a running example will be used. Consider the context-unaware model represented by the white-filled rectangles in Figure 2. The goal of the process is to provide the application with the competencies to “automatically turn on the heater when a classroom is in class and provide teacher material when the teacher is teaching”.
The first step of the process concerns the identification of new situations. Situations are domain- and application-dependent, and are used during the decision-making process. New situations are therefore concepts that are added to the resulting context-aware model. Considering the defined goal, two new situations are defined: Teaching and InClass (gray-filled rectangles).
In the second step, subClassOf relationships are created between the concepts and the Situation and Entity concepts (rounded rectangles). Based on the presented example, the following relationships are defined (thicker arrows):
(1) Teaching subClassOf Situation
(2) InClass subClassOf Situation
(3) Teacher subClassOf Entity
(4) ClassRoom subClassOf Entity
In the third step, Entities and Situations are related through the hasEntity relation. Note that, according to the ESC model, hasEntity is the inverse of hasSituation, so there is no need to (manually) create the hasSituation relation, as it is inferred by the reasoner. In the running example the following relations are created:
(1) Teaching hasEntity Teacher
(2) InClass hasEntity ClassRoom
This means that the Teaching situation characterizes the Teacher entity and InClass characterizes the ClassRoom.
In the fourth step, all the information that will be used to characterize each situation is related to the respective situation through hasContextInfo relationships. This information will later be used to describe the rules that characterize the situation. In the running example, three such axioms are defined.
The fifth step concerns the creation of rules that allow the automatic inference of finer-grained classifications of situations. The literature presents two ways to create such rules [5]: Description Logics axioms or Horn clause rules (e.g. SWRL), though their use is not mutually exclusive. In this work, description logic axioms were adopted. Two rules are defined in the running example; they state that (i) a Situation is characterized as InClass when the classroom has a busy schedule, and (ii) as Teaching when a teacher is lecturing at a classroom that has a busy schedule.
Notice that different characterization rules can be used for a single Situation.
The sixth step instantiates the model: situation individuals are created and related to the correct entity individuals through the hasEntity relation. The process takes as input the context-unaware knowledge base (model and individuals) and the context-aware model. Suppose the running example contains a ClassRoom individual (room401) and a Teacher individual; the corresponding situation individuals are then created and linked to the correct entities (e.g. an InClass individual linked to room401 through hasEntity, and a Teaching individual linked to the teacher).
Notice that an Entity can have more than one Situation individual; for example, the instance room401 may have different situations (e.g. currSituation, prevSituation).
The result of this process is a context-aware knowledge base that is ready to be used by the domain application and processed by generic DL reasoners.
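The six steps can be sketched on the running example as operations over a set of triples; the hasContextInfo target and the rule predicate below are our assumptions, standing in for the paper's DL axioms:

```python
# Context-unaware model, represented as (subject, relation, object) triples.
model = {("Teacher", "isA", "Concept"), ("ClassRoom", "isA", "Concept")}

# Step 1: identify new situations (Teaching, InClass).
# Step 2: subClassOf links to the ESC Situation and Entity concepts.
model |= {("Teaching", "subClassOf", "Situation"),
          ("InClass", "subClassOf", "Situation"),
          ("Teacher", "subClassOf", "Entity"),
          ("ClassRoom", "subClassOf", "Entity")}

# Step 3: relate situations and entities (hasSituation is inferred by the
# reasoner as the inverse of hasEntity).
model |= {("Teaching", "hasEntity", "Teacher"),
          ("InClass", "hasEntity", "ClassRoom")}

# Step 4: attach the context information used to characterize each
# situation (Schedule as the context info is our assumption).
model |= {("InClass", "hasContextInfo", "Schedule"),
          ("Teaching", "hasContextInfo", "Schedule")}

# Step 5: characterization rules; in the paper these are DL axioms, shown
# here only as an informal predicate.
def is_in_class(classroom_state):
    return classroom_state.get("schedule") == "busy"

# Step 6: instantiate situation individuals for concrete entities.
individuals = []
if is_in_class({"schedule": "busy"}):
    individuals.append(("inClass401", "hasEntity", "room401"))
```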
4 Conclusions
We have described a set of steps for transforming a common application model into a context-aware model. With the help of the ESC ontology, a new point of view on context modeling is proposed, and we believe that the most important aspects of the process can also be used or adapted when creating new context models from scratch or extending existing ones. Although the results are illustrated with a simple partial model, the most important aspects of the process and of the resulting context-aware model were presented. To the best of our knowledge, no other works address this topic.
References
1. Schilit, B., Adams, N., Want, R.: Context-Aware Computing Applications. In: Proceedings
of the 1994 First Workshop on Mobile Computing Systems and Applications, pp. 85–90.
IEEE Computer Society, Washington, DC (1994)
2. Gruber, T.R.: A translation approach to portable ontology specifications. Knowl. Acquis. 5,
199–220 (1993)
3. Wang, X., Zhang, D., Gu, T., Pung, H.: Ontology Based Context Modeling and Reasoning using OWL. In: IEEE International Conference on Pervasive Computing and Communications Workshops, pp. 18–22 (2004)
4. Dey, A.K.: Understanding and Using Context. Personal Ubiquitous Comput. 5, 4–7 (2001)
5. Martins, H., Silva, N.: Characterization, Comparison and Systematization of Context Ontologies. In: 2012 Sixth International Conference on Complex, Intelligent and Software Intensive Systems (CISIS), pp. 983–988 (2012)
TSACO: Extending a Context-Aware Recommendation
System with Allen Temporal Operators
1 Introduction
The research field of recommender systems has received much attention over the last two decades, with a great deal of work by academia and industry on the design and development of new and better recommender systems. Nevertheless, this area of research remains highly active, mainly because it has many practical applications that help users tackle the information overload of modern times and provide them with personalized recommendations.
Despite all the work done, there is room for improvements that make recommender systems more effective. In [1][2] the authors propose several extensions that can enhance their capabilities, among them scalability, flexibility, the incorporation of contextual data and multicriteria/multidimension support. We consider these key capabilities and therefore took them into account when designing and developing a recommender system based on a semantic multicriteria ant colony optimization algorithm. The system, called SACO (Semantic Ant Colony Optimization) and presented in [3], makes use of a user's past routes and their related context information to predict the semantically closest routes to the one the user is following, given his current location and context data. Consequently, only the most appropriate data (e.g. Points of Interest (POI), to-do list items, traffic problems, etc.) is shown to the user, taking into account its spatial and semantic proximity to the predicted route(s).
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 253–260, 2012.
© Springer-Verlag Berlin Heidelberg 2012
254 J.A. Mocholi et al.
2 Related Works
To the best of our knowledge, there is no other work that makes use of ant colony optimization and, to a certain extent, some kind of temporal logic to infer semantically-annotated routes for recommendation purposes. For this reason, we review works in the literature on recommendation systems that make use of temporal information and some temporal reasoning mechanism.
In [4] the authors present a movie recommender system that includes time among the dimensions it takes into account to make a recommendation. To query the database with the user preferences and movie-related data, the proposed system builds its queries using first-order temporal logic with temporal operators (e.g. always, until, future). Although this system has a multidimensional recommendation model (time, place and companion are part of the model, besides user and movie) and multicriteria support (it can make a recommendation given a user and any of the dimensions), it can only be used in that specific domain, unlike SACO [3], as its model definition is static and a change in the dimension set would require rebuilding the temporal database. Besides, the use of first-order logic to construct the queries would be cumbersome and error-prone for most potential users of the system.
The system proposed in [5] builds a time-aware user model offline by using user ratings, relative feature scoring over time and user demographic information. It can then make recommendations by using temporal information, the rating matrix and the neighbor set found via user similarity. Although temporal information is collected and taken into account in the recommendation process, this recommender system does not make use of temporal logic or temporal operators to carry out its recommendations.
TSACO: Extending a Context-Aware Recommendation System with Allen 255
In [6] a multiagent scheduling recommender system is proposed. The system works on graphs where each node represents an interval and directed edges are labeled with temporal interval relations [7] and an associated probability that can be used to gather user preferences. Each agent maintains a part of the graph and ensures the feasibility of its subgraph by modifying the labels of the edges connecting it to other subgraphs. When no more edges need to be modified, the graph represents a global solution and each agent can make a recommendation.
These temporal relations are intuitive and a convenient way of describing temporal relationships between any pair of events. For this reason, we chose these relations to allow the creation of temporal expressions and thereby increase the expressiveness of the queries the original SACO algorithm resolves.
Our approach, denoted here as TSACO (Temporal Semantic Ant Colony Optimization), adds time-related expressiveness to SACO [3], which carries out searches on route databases by utilizing a domain-specific ontology and comparing queries with learnt routes via conceptual distance measures (more information about this concept may be found in [10]). The TSACO algorithm may be formulated as follows:
TSACO
  Load ontology
  Read restriction set
  Apply restriction set (select and create nodes for the search space according to restriction set terms)
  Perform node score assignment process
  Apply ACO-OP algorithm
  Evaluate temporal restriction
END TSACO
TSACO differs from other ACO algorithms, such as ACO-OP [11], because it is based on SACO: it uses semantic descriptions of concepts to dynamically assign scores to nodes representing problem-domain entities, in a process called semantic score assignment. This is a dynamic process that considers the restrictions defined either automatically or by the user. Nodes represent problem-domain entities through the values of the ontology terms they contain. In the problem tackled here, nodes represent a certain contextual situation, and edges (the links between two nodes in the graph representing the problem search space) establish a sequentiality relation between two contextual situations. A sequential collection of contextual situations is called a contextual sequence; a more detailed explanation of how contextual sequences are gathered and combined can be found in [3].
Once the ontology of the problem domain has been loaded, the restriction set is read. In our approach, restriction sets are defined by assigning relevance factors to elements in the ontological space, i.e. a relevance factor is assigned to a context data value. The goal of the relevance factor is to adjust the importance of certain ontology term values over the rest within the ontological space.
All the ontology terms in the restriction set are used to create the search space and build the graph and its nodes. To create the nodes, we take the set of terms of the restriction set and select only those contextual sequences from the problem search space that have at least one of the restriction set terms among the metadata of their contextual situations. As a result of applying a restriction set to the items of a collection annotated with the terms of a given ontology, we obtain a graph that represents the search space to be used by TSACO.
Based on the relevance factors and restriction sets, the semantic score assignment process assigns scores to each node in the search space as follows:
• Any item in the collection is defined as a tuple (ID_i, O_i), where ID_i is a unique identifier and O_i is a set of pairs {(term_k, value_k) | term_k ∈ T, value_k ∈ Domain(term_k)} representing values associated to terms in a set T of a predefined ontology.
• A user restriction set R is defined as a set of term–value pairs, each with an associated relevance factor relevance_j.
• Given a user restriction set R, a collection of distance metrics over T, and the set M of matching pairs M = {(term_j, value_j) ∈ R | ∃ (term_j, value_i) ∈ O_i}, the score S_i associated to a node of the graph representing an item (ID_i, O_i) of the collection is calculated as follows:

    S_i = ( Σ_{j=1}^{|M|} d_{term_j}(value_j, value_i) · relevance_j ) / |M|    (1)
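Equation (1) can be sketched directly; the exact-match metric below is a placeholder for the conceptual distance measures of [10], and the dictionary representations of items and restriction sets are our assumptions:

```python
def semantic_score(item_terms, restriction_set, distance):
    """Score of a node representing an item, following Eq. (1): the mean,
    over matching (term, value) pairs, of d_term applied to the restriction
    value and the item's value, weighted by the term's relevance factor.
    `item_terms` maps term -> value, `restriction_set` maps
    term -> (value, relevance), `distance` maps term -> metric."""
    matches = [(t, v, rel) for t, (v, rel) in restriction_set.items()
               if t in item_terms]
    if not matches:
        return 0.0
    return sum(distance[t](v, item_terms[t]) * rel
               for t, v, rel in matches) / len(matches)

# Placeholder metric: 1 for equal term values, 0 otherwise.
exact = lambda a, b: 1.0 if a == b else 0.0

score = semantic_score(
    item_terms={"activity": "work", "transport": "bus"},
    restriction_set={"activity": ("work", 0.8), "weather": ("sunny", 0.5)},
    distance={"activity": exact, "weather": exact},
)
# One matching term ("activity"), metric 1.0, relevance 0.8 -> score 0.8
```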
where the section with the prefix definitions has been omitted and tsacotest is the prefix defined for our ontology URI. Each of Allen's temporal operators described in Section 3 has a corresponding SPARQL template, which is instantiated every time a temporal restriction uses the related temporal operator. Because of space limitations we cannot show each predefined SPARQL template. The evaluation of the temporal restriction reduces the result set of contextual sequences to those that fulfill the expression, and this reduced set is returned as the result of the algorithm.
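The interval relations underlying the temporal operators can be sketched as follows; the minute-based time representation and the route data are our own illustration, not the SPARQL templates actually used:

```python
def allen(a, b):
    """Return Allen's relation between closed intervals a=(a1,a2), b=(b1,b2).
    Only the relations in one direction plus 'equals' are named here; the
    inverse relations follow by swapping the arguments."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:
        return "before"
    if a2 == b1:
        return "meets"
    if a1 == b1 and a2 == b2:
        return "equals"
    if a1 == b1 and a2 < b2:
        return "starts"
    if a1 > b1 and a2 == b2:
        return "finishes"
    if a1 > b1 and a2 < b2:
        return "during"
    if a1 < b1 and b1 < a2 < b2:
        return "overlaps"
    return "inverse"  # one of the inverse relations

# Filtering contextual sequences taking place after 21:00, with times
# expressed in minutes from midnight:
routes = {"r1": (8 * 60, 9 * 60), "r2": (21 * 60 + 30, 22 * 60)}
after_21 = [r for r, iv in routes.items()
            if allen((21 * 60, 21 * 60), iv) in ("before", "meets")]
# after_21 == ["r2"]
```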
5 Experimental Results
The experiments described here were not designed to evaluate the performance of the TSACO algorithm (see [3] for a discussion of the good performance of SACO) but to analyze how the results improve in terms of the temporal requirements that can be expressed when SACO is combined with Allen's temporal operators. To interact with the algorithm and provide a temporal expression, a desktop application was developed that displays the returned matching routes on a map. The application, developed using the .NET framework, makes use of a Google Earth plug-in and allows the creation of a restriction set and a temporal expression. The restriction sets and temporal expressions were therefore established manually and supplied to the algorithm, although in a real case we expect a software agent to be responsible for supplying both, as well as the context data used to update the user's route repository. This agent would also personalize the information to be visualized according to the user's current context.
The experiments were carried out on a database of around 2000 automatically generated artificial routes. As the routes were generated in the form of contextual sequences, the following contextual data were assigned per node and hence represent the terms of the ontology used:
• Activity: the kind of activity the user was involved in when the data was captured (e.g. work or leisure time).
• Time: the exact time of day when the data was captured.
• GPS coordinates: the user's location when the contextual data was gathered.
In the rest of this section, both TSACO and SACO are used to obtain the semantically closest set of routes to the given context data. The expressiveness gained through the temporal expressions is demonstrated by means of examples: the results of queries using our temporal extension are compared with those that would be obtained without it, both using the same metadata. Due to space limitations we include just two query examples and compare the results in tabular form, with the returned routes ordered from higher to lower score.
           SACO                  TSACO
           Activity  Time        Activity  Time
Route 1    work      08:05:58    work      06:05:14
Route 2    work      08:13:41    work      06:10:32
Route 3    work      08:01:21    work      06:03:00
Route 4    work      08:08:21    work      05:58:05
Route 5    work      08:00:02    work      05:59:33
Fig. 1. Routes obtained without (left) and with (right) the temporal restriction
Query: Routes where the activity is leisure and which go through coordinates [39.45887, -0.34359] after 21:00.
This statement requests leisure-activity routes that visit a certain coordinate and take place after 21:00. As in the previous example, the result sets are quite different: the evaluation of the temporal expression allows us to obtain a result set that really fulfills what the statement requested.
Regardless of the problem domain, the main goal recommender systems strive to
achieve is to select the best recommendation. To do so, they have to make the best
possible use of the knowledge available, since the quality of the recommendations
they can make is directly related to the amount of information and, very significantly,
to the way this information is processed and assessed. To get closer to selecting the
best prediction, we opted to increase the expressiveness of the queries in SACO by
combining them with a temporal language that describes a condition that the metadata
of the returned routes should fulfill. In this paper we have presented Allen's temporal
operators as the basis on which to build our temporal expressions, which are
translated to SPARQL in order to assess which of the routes found by SACO comply
with the temporal restrictions. The results obtained in the experiments showed that the
possibility of stating a temporal relationship between metadata resulted in
recommended routes that are better tailored to the user request. Depending on the
query, a temporal expression can make some routes appear in the results that are absent
260 J.A. Mocholi et al.
when only the non-temporal query is used. This effect can occur for routes that
obtained an overall lower score but that do fulfill the new temporal restriction.
Our plans for future work in the research presented here include improving the
expressiveness of the temporal expressions that can be created by allowing the
composition of temporal expressions through the use of logical operators. We have
already started to work on it.
Acknowledgments. This work has been supported by the Centre for the Development
of Industrial Technology (CDTI) under the funding project CENIT-MIO! CENIT-
2008 1019.
References
1. Adomavicius, G., Tuzhilin, A.: Toward the next generation of recommender systems: A
survey of the state-of-the-art and possible extensions. IEEE Trans. Knowledge Data
Eng. 17(6), 734–749 (2005)
2. Picón, A., Rodríguez-Vaamonde, S., Jaén, J., Mocholi, J.A., García, D., Cadenas, A.: A
statistical recommendation model of mobile services based on contextual evidences.
Expert Systems with Applications 39(1), 647–653 (2012)
3. Mocholi, J.A., Jaen, J., Krynicki, K., Catala, A., Picón, A., Cadenas, A.: Learning semanti-
cally-annotated routes for context-aware recommendations on map navigation systems.
Applied Soft Computing 12(9), 3088–3098 (2012)
4. Linn, Z.Z., Hla, K.H.S.: Temporal Database Queries for Recommender System using
Temporal Logic. In: Intl. Symposium on Micro-NanoMechatronics and Human Science,
pp. 1–6 (2006)
5. Ullah, F., Sarwar, G., Lee, S.C., Park, Y.K., Moon, K.D., Kim, J.T.: Hybrid recommender
system with temporal information. In: Intl. Conf. on Information Networking, pp. 421–425
(2012)
6. Shakshuki, E., Trudel, A., Xu, Y., Li, B.: A Probabilistic Temporal Interval Algebra Based
Multi-agent Scheduling System. In: International Joint Conference on Artificial Intelli-
gence Workshop in Multi-Agent Information Retrieval and Recommender Systems,
pp. 62–69 (2005)
7. Allen, J.F.: Maintaining knowledge about temporal intervals. Commun. ACM 26(11),
832–843 (1983)
8. Dorigo, M., Maniezzo, V., Colorni, A.: The Ant System: Optimization by a Colony of
Cooperating Agents. IEEE Trans. Systems, Man and Cybernetics, Part B 26, 29–34 (1996)
9. Dorigo, M., Stützle, T.: The ant colony optimization metaheuristic: Algorithms, applica-
tions and advances. In: Glover, F., Kochenberger, G. (eds.) Handbook of Metaheuristics,
pp. 251–285. Kluwer Academic Publishers (2003)
10. Khan, L., McLeod, D.: Audio structuring and personalized retrieval using ontologies.
IEEE Advances in Digital Libraries (2000)
11. Liang, Y.C., Smith, A.E.: An Ant Colony Approach to the Orienteering Problem. Journal
of the Chinese Institute of Industrial Engineers 23(5), 403–414 (2003)
Ontological User Profile Modeling for Context-Aware
Application Personalization
1 Introduction
Ubiquitous computing systems are built upon relevant context information that is used
to respond and adapt to users' behaviors. To achieve this, context-awareness alone is
insufficient, pushing the need for some form of personalization. For example,
personalized assistive services aim to support the daily activities of users and enhance
their Quality of Life (QoL). With the emergence of mobile-based technologies, there is
a growing need to develop applications that are based around the user and can adapt to
user needs within mobile environments. This has sparked research into the areas of
user modeling, context modeling and human-computer interaction within context-
aware applications. A key challenge within pervasive computing systems is to provide
the "right" information for the "right" user at the "right" time and in the "right" way
[1]. This has led to the use of ontologies as a means to provide personalized services
through adaptable user models. User models have been developed for use within per-
sonalized web information retrieval systems, adaptive user interface design and for
public services such as digital museum guides or electronic, customized libraries [2].
Current context-aware adaptation techniques are limited in their support for user per-
sonalization. With the development of mobile technologies, people are becoming
more dependent on the use of technologies and these have been gradually integrated
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 261–268, 2012.
© Springer-Verlag Berlin Heidelberg 2012
262 K.-L. Skillen et al.
into their daily lives. As a result, there is a growing demand for better methods of
capturing and representing adaptable profiles within changing environments.
This paper describes an ontological user profile model for adaptive, context-aware
applications. Section 2 discusses existing work within the area of ontological user
profiling and Section 3 characterises users and their needs in context-aware applica-
tions within mobile environments. Section 4 provides a detailed description of the
ontological user profile modeling, while Section 5 demonstrates the use of the ontol-
ogy illustrated through a typical case study. Section 6 concludes the paper.
2 Related Work
Ontology-based user modeling within pervasive environments has been previously
studied within many research areas. Chen et al. [3] developed COBRA-ONT, an on-
tology developed as part of the Context Broker Architecture, an architecture that pro-
vides knowledge sharing and context-reasoning support for context-aware applica-
tions. This architecture was successfully used to enable sensors, devices, and agents to
share contextual knowledge and to provide relevant information to users based on
their contextual needs. Razmerita et al. [4] presented work on user modeling with a
generic ontology-based architecture called OntobUM. This architecture contained
three separate ontologies, namely user, domain and log ontologies. With advances in
mobile technologies and associated context-aware applications, user modeling needs
to address the dynamic lifestyles of different users within different environments. The
Unified User Context Model (UUCM) [5] was introduced aiming to define a com-
monly accepted ontology that can work within different changing environments.
UUCM suggested that a more complete model of the user is obtained through
modeling ‘dimensions’ (i.e. cognitive patterns, relationships and environments).
Sutterer et al. [6] introduced the notion of personalized user profiles with the creation
of the User Profile Ontology with Situation-Dependent Preferences Support (UPOS).
This ontology is claimed to support the situation-dependent personalization of
services within changing environments, but no further implementation has been
reported as of yet. The work in [7] describes a user model where semantic user
profiles are created and then used within peer-to-peer-style changing environments
(i.e., user profiles are stored within a mobile device). In general, user modeling and
personalization has focused mainly on improving a user's experience via web
information retrieval methods [8] or interactions via personalized user interfaces [1].
Nevertheless, research is emerging in the area of adaptable, mobile user modeling. In [9]
a framework (Imhotep) was developed which enables the creation of applications that
can be adapted to suit different user needs, wants or capabilities over time. This work
also presents a fuzzy-logic-based inference mechanism to allow new user capabilities
to be identified from old ones. The work in [10] focuses on the development of a
personalized, run-time approach to adaptation within context-aware applications. The
approach provides techniques for selecting user behavior histories, mining usage pat-
terns and for generating relevant adaptation behaviors within different mobile
environments. Our current study extends the aforementioned concepts to incorporate
dynamic components for use within adaptable, mobile applications.
User profile modeling is indispensable for any kind of personalization, whether
personalized information retrieval, adaptive user interfaces or personalized service
delivery in context-aware applications. A user profile is a digital representation of the
unique data concerning a particular user. User profile modeling involves the creation
of a data structure that can hold the characteristic attributes of a given type of user.
This can be done either by manually defining these attributes, e.g., habits, preferences
and interests, or by inferring unobservable information from observable data relating
to users' actions, thoughts or behaviours [11]. The data structure is usually referred to
as a user model and serves as a template for generating specific user profiles for
different individuals.
People in different environments normally have different behaviours. For example,
in a supermarket people are concerned with shopping, whereas in a residential setting
people will mainly carry out activities of daily living.
specific user within different settings, e.g., providing a help-on-demand service, a
system needs to understand the specific aspects of the user profile particularly relating
to its situated environment. User profiling can be used to achieve a model of different
users’ needs or wants and can facilitate the personalization of adaptable applications.
When modeling human users, their individual characteristics can be broken down into
various levels of granularity. An example of this would be the concept of a 'food'
preference, which could contain the sub-preferences 'Asian', 'European', 'Western' or
'Italian'. This in turn suggests that the user profile must also be broken down into
various levels to enable the development of a more complete and extensive user
model. When modeling the user, we need to take into account user, temporal and
environmental contexts in order to provide a more comprehensive model of dynamic
user attributes which change as they move between mobile environments. User con-
texts refer to user information such as user preferences, interests, activities, wants or
needs. Temporal contexts generally refer to entities such as time or location. For ex-
ample, if we model the location of the user at different times, we can determine if
they want to go to sleep or go shopping at some specific location. Environmental
contexts can be described as information related to the user’s surrounding environ-
ment. A user will naturally always have some activity to complete or have some
thought that needs to be actioned. Therefore the relationships between the user and
context are crucial to providing any form of personalized service.
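A toy example (not the paper's implementation) of how combining user, temporal and environmental context can disambiguate intent, in the spirit of the sleep/shopping example above; locations and rules are invented:

```python
from datetime import time

# Toy example, not the paper's implementation: the same user observed at
# different locations and times suggests different needs.
def likely_intent(location, at):
    if location == "home" and at >= time(22, 0):
        return "sleep"
    if location == "supermarket":
        return "shopping"
    return "unknown"
```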
reuse of system knowledge. This is particularly important when modeling user aspects
that can be remembered and reused later [12].
This study presents the development of a User Profile Ontology based on user cha-
racterization. The ontology is defined using the Protégé ontology editor, and repre-
sented in the standard Web Ontology Language (OWL). A top-down design approach
was used where top-level, general concepts concerning the user are selected (e.g.,
Education, Capability) and further broken down into more specialized concepts (e.g.,
Capability_Level). The ontology combines both static and dynamic user concepts
with more focus on the changing behaviors of the user. One compelling feature of the
ontology is its focus on the dynamic aspects of a user. This provides an adaptable
model that can be used across various application environments, thus improving user
model versatility. For example, a user’s mobile phone may need to be set to silent
mode during a meeting or to loud mode if walking through a busy shopping center.
To build the user profile ontology, key terms for describing a user are first identified
and modeled as ontological classes, as shown in Table 1. The central class "User"
represents any user involved in the running of the application. The "User_Profile"
class denotes the central concept within the ontology. Within this there are five key
profile classes. The "PreferenceProfile" is linked to the class "Preference_Domain",
which describes the high-level view of the type of preference a user may have at any
one time; "Preference_Specialism", a class detailing the specific wants of a user; and
the class "Preference" itself, which is used to link the user with their specific
preference. The "EducationProfile" and "HealthProfile" classes follow a similar
structure, with sub-classes relating to the user's health and education histories.
"Capability" is defined as the extent to which the user has a physical, emotional or
cognitive ability to perform some task or activity. By including different capability
levels, we can tailor certain activities to suit each user's own abilities.
Fig. 1. Extract of User Profile Ontology detailing the ontology hierarchical structure
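The hierarchy captured in Fig. 1 might be sketched in simplified form as follows; the class names follow the text, but the exact nesting in the OWL file is assumed:

```python
# Simplified, assumed rendering of the hierarchy in Fig. 1; class names follow
# the text, but the exact nesting in the OWL file may differ.
USER_PROFILE_ONTOLOGY = {
    "User": {
        "User_Profile": {
            "PreferenceProfile": ["Preference_Domain",
                                  "Preference_Specialism",
                                  "Preference"],
            "EducationProfile": [],
            "HealthProfile": [],
            "Capability": ["Capability_Level"],
        }
    }
}

profiles = USER_PROFILE_ONTOLOGY["User"]["User_Profile"]
```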
After all key concepts or classes are defined within the ontology, specific object
and data properties are defined for these classes, as shown in Table 2. These properties
are used to relate one class to another via objects or to specify relations via
data-types. They play an important role in relating key concepts and inferring new
information. For example, the "HealthCondition" class could contain a property
"hasHealthCondition" to determine the state of a user's health, and this could be linked
to the "Capability" class. This could therefore determine that a user has a particular
illness such as Dementia, and so infer that they have a low level of memory capability.
This way, relations between different users can be modeled and the application can
learn to adapt to different user patterns. Figure 2 presents a hierarchical view of the
key ontology classes, object-properties and data-type properties as displayed within
Protégé.
Table 2. An outline and description of the object and data-type properties currently used within
our User Profile Ontology
Object properties:
• hasUserProfile (inverse: isUserProfileOf): every user has one user profile.
• hasActivity: each user performs some activity at one point.
• hasContext: the current situation of the user.
• hasUser: each condition has a user.
• hasCapability (Type, Domain): each user has different capabilities and abilities.
• hasHealthCondition (Type, History, Symptoms, Status): every user has a specific
health status and associated conditions.
Data-type properties:
• hasName: every user has a name.
• hasAge: a user has one unique age.
• hasCapabilityLevel: every ability is defined in levels (string).
• hasSeverityLevel: as above, defined in levels (string).
• hasTime: every activity occurs at one time.
• livesIn: where the user is commonly situated.
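The health-condition inference described above (a condition such as Dementia implying a low memory capability) can be sketched as a simple rule table; the mapping and the property name are illustrative, not taken from the actual ontology:

```python
# Illustrative rule table: the mapping and the property name are assumptions,
# not taken from the actual OWL file.
CONDITION_TO_CAPABILITY = {"Dementia": ("Memory", "low")}

def infer_capabilities(user):
    """Derive capability levels from a user's hasHealthCondition values."""
    capabilities = {}
    for condition in user.get("hasHealthCondition", []):
        if condition in CONDITION_TO_CAPABILITY:
            domain, level = CONDITION_TO_CAPABILITY[condition]
            capabilities[domain] = level
    return capabilities
```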
Fig. 2. An overview of the User Profile Ontology classes, object properties and data properties
5 Case Study
1 MobileSage AAL Funded Research Project, available at:
http://www.mobilesage.eu/es
As illustrated within Figure 3, the core component within the architecture is the Pro-
filing Service, which is used to create and modify the user’s unique User Profile and
supports the functionality of other components. Within the Profiling Service, user
models will be created from user personas and the User Profile is based on the se-
lected User Model. The profile will contain information such as the user's preferred
form of feedback in addition to which services the user would like at that moment in
time. The user interacts with the smart-device’s Dialog Manager through the User
Interface and the User Profile is used to provide an interface for personalized requests
and adaptation of relevant content. The Dialog Manager requests relevant information
from other services and organizes this information to display it to the user in a more
understandable form. Within this Dialog Manager there is a Personalized Request
function, Adaptation Engine (via Rules) and a Personalized Content component. The
Personalized Request function takes user requests along with information from the
User Profile, and modifies these according to the context or preferences. When the
Dialog Manager is personalizing the request, the Adaptation Engine can add extra
rules to tailor the information to the user, depending on the user's situation. The
Personalized Content component transfers the adapted content to the smart-phone
user interface to be displayed to the user.
Fig. 3. Illustration of the flow of information within the MobileSage ‘help-on-demand’ service
Within the Personalization Service, the Usage Log records all user interactions
with the smart-phone, and data is stored and mined by the Data Mining component.
Once it is mined, both User Models and Profiles are created. The UI Adaptation En-
gine will use a set of rules to modify the interface of the application based on context.
The above framework can be used within a variety of use-case scenarios and
application domains. For example, such context-aware, personalized services can
support people with Dementia by providing real-time navigation to help them find
their way if they become lost. The services could also help a tourist find local
information regarding points of interest in a foreign country, or provide a translation
of a foreign place name into their own preferred language.
6 Conclusions
The User Profile Ontology that has been presented provides an extensible user profile
model that focuses on the modeling of dynamic and static user aspects, facilitating its
primary use in context-aware applications. One distinguishing feature of this model is
its focus on dynamic and temporal concepts to enable the personalization of applica-
tions as the user moves between mobile environments. The ontology model has been
adopted by the MobileSage AAL research project, where further evaluation will be
subject to the model being fully implemented and deployed within smart, adaptive
applications.
References
1. Fischer, G.: User Modelling in Human-Computer Interaction. User Modelling and User-
Adapted Interaction 11, 65–68 (2001)
2. Golemati, M., Katifori, A., Vassilakis, C., Lepouras, G., Halatsis, C.: Creating an Ontolo-
gy for the User Profile: Methods and Applications, Morocco (2007)
3. Chen, H., Finin, T.: An Ontology for a Context Aware Pervasive Computing Environment.
In: IJCAI Workshop on Ontologies and Distributed Systems, Acapulco (2005)
4. Razmerita, L., Angehrn, A., Maedche, A.: Ontology based user modeling for Knowledge
Management Systems. In: Proceedings of the User Modeling Conference, Pittsburgh
(2003)
5. Viviani, M., Bennani, N., Egyed-Zsigmond, E.: A Survey on User Modeling in Multi-
Application Environments. In: 3rd International Conference on Advances in Human-
Oriented and Personalized Mechanisms, Technologies and Services (2010)
6. Sutterer, M., Droegehorn, O., David, K.: Upos: User profile ontology with situation-
dependent preferences support. In: Proceedings of the 1st International Conference on Ad-
vances in Human-Computer Interaction (2008)
7. von Hessling, A., Kleemann, T., Sinner, A.: Semantic user profiles and their applications
in a mobile environment. In: Artificial Intelligence in Mobile Systems (2004)
8. Soui, M., Rojbi, S.: User modeling and web-based customization techniques: An examina-
tion of the published literature. In: 4th International Conference on Logistics
(LOGISTIQUA) (2011)
9. Almeida, A., Orduña, P., Castillejo, E., López-de-Ipiña, D., Sacristán, M.: Imhotep: an
approach to user and device conscious mobile applications. Personal and Ubiquitous
Computing 15(4), 419–429 (2011)
10. Tsang, S.L., Clarke, S.: Mining User Models for Effective Adaptation of Context-aware
Applications. In: The 2007 International Conference on Intelligent Pervasive Computing,
IP 2007 (2007)
11. Zukerman, I., Albrecht, D.: Predictive Statistical Models for User Modeling. User Model-
ing and User-Adapted Interaction 11(2), 5–18 (2001)
12. Josephson, J.R., Benjamins, R.V., Chandrasekaran, B.: What Are Ontologies, and Why Do
We Need Them? IEEE Intelligent Systems 14(1), 20–26 (1999)
13. Trajkova, J., Gauch, S.: Improving Ontology-based User Profiles. In: Proceedings of
RIAO. University of Avignon (2004)
Tagging-Awareness: Capturing Context-Awareness
through MARCado
Abstract. This article builds on the vision of Ambient Intelligence, which
proposes distributing various computing devices that react to the presence of
users. As a first approximation, we have modeled the context by identifying
users; to do this, we rely on the model of the five W's. The article then presents
a way to generate context-awareness, supported by the MARCado framework,
together with a mechanism that exploits contextual information to obtain
context-aware services. These elements have been implemented by adapting
sensorial technologies, which provide simple input into the system.
1 Introduction
Ubiquitous Computing [1] and Ambient Intelligence [2] are two related paradigms
that promote proactive, non-intrusive systems. Both paradigms aim at being aware of
people's habits and needs. For these paradigms to be applied, it is necessary to
minimize or, if possible, eliminate the explicit interaction between user and
environment.
Identifying the user and minimizing the interaction between the user and the
environment can make some activities transparent [3], [4]. In this sense, some authors
suggest representing the natural interaction with an “implicit interaction or embedded
interaction” [5]. This term is the nearest approach to the computational concept called
context-awareness. On the other hand, it is also important to know the elements that
surround the physical environment. Dey proposes the concept “Context” [6]. This
definition describes context-aware computing as a system that uses context to provide
the user with relevant information or services. To properly leverage what happens in
context, it is necessary to have models that identify relevant information and represent
the context as an information source. A theory that allows considering important
aspects to better understand the elements involved in a context-aware environment is
known as the five W’s: Who, Where, When, What and Why.
We consider the key areas of Ambient Intelligence and identification technologies,
which provide excellent sensory input. We use identification technologies as a means
for a more natural interaction and to be closer to the user. The following sections
270 S.W. Nava-Díaz et al.
explain the main idea of this work: tagging-awareness through MARCado. Finally,
we discuss the conclusions to our proposal.
Our design focuses on the identification process to create smart environments through
natural interaction. We also apply the idea of embedded interaction, which requires
embedding technology into everyday artifacts and the environment around us. This
can be achieved by modeling the context with the "Who" using "Where" and "When"
as a set of related elements from which services are obtained, representing the
"What"; that is: "Who (Where, When) What". By strategically placing these concepts
in the context, we can obtain a proactive environment that is transparent to the user
and not intrusive when interacting with these artifacts.
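The relation "Who (Where, When) What" can be sketched minimally as a lookup from identification and context to a service; the service table below is invented for illustration:

```python
# Minimal sketch of "Who (Where, When) -> What"; the service table is invented.
SERVICES = {
    ("alice", "lab-door", "morning"): "open door and show today's agenda",
}

def what(who, where, when):
    """Resolve the service (the What) from identification and context."""
    return SERVICES.get((who, where, when), "no service")
```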
We consider a new way to interact by adding an additional feature: tagging. We refer
to tagging artifacts and interactions as well as objects and activities. Here, the most
important aspect is the elimination of the "When" aspect, which the user now
manages, giving rise to the idea of context-awareness through tagging (Tagging-
Awareness) [7]. Similarly, "Where" is also tagged and simultaneously enriched.
Another important feature of this model is that the user manages the "What" aspect
whenever he/she wants, using a simple contact interaction. Thereby, a user requests a
required service and, without the user's knowledge, another service is obtained in an
implicit way. That is, the context knows who is requesting the service, is aware of
what is necessary, and delivers an additional, adaptive (implicit) service.
A first user enters the environment and interacts with it; he/she receives the requested
service and transparently stores dynamic data. A second person later interacts with
the same element and receives the required explicit service, while at the same time
acquiring an additional benefit: a context-aware service resulting from the subsequent
tagging. Any user can implicitly receive the stored dynamic data, even a user who
has not previously interacted with that element.
5 Implementation
This section describes context-awareness through tagging, using NFC technology and
Java ME-based services. These technologies allow the development of context-aware
services driven by dynamic data (marks). Although all data are stored in tags (of area,
device, object and user types) [8], the processing is realized on the NFC-enabled
mobile phone, which has the capacity for it.
Middleware Awareness (MiAwa) is a small application on the mobile device that
can control the behavior of actions based on marks. This application is responsible for
analyzing contextual information (static data and marks) stored in NFC tags and
launching services to users. Figure 1 shows the MiAwa structure. The MiAwa
execution process develops in several steps. (i) Touching Interaction: a user interacts
with any relevant entity in the environment (he/she touches an entity with mobile
phone). This action is performed in the interaction layer and is supported by NFC
technology. (ii) Data Extraction: contextual information (static data and marks) that
have been stored in tags are read and sent to the interpretation and analysis layer. (iii)
Generating Decision Trees: the information obtained from the tags is processed and
evaluated, and decision trees with their corresponding paths are created. This is
performed in the interpretation and analysis layer. (iv) Tagging-Awareness Model
Extraction: the model provides the conditions that must be evaluated in the decision
trees; this
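Steps (i) to (iv) can be sketched schematically; the tag payload, mark format and decision logic below are assumptions for illustration only:

```python
# Schematic rendering of MiAwa steps (i)-(iv); the tag payload, mark format
# and decision logic are assumptions for illustration only.
def touching_interaction(tag):       # (i) the user touches an entity (NFC read)
    return tag

def data_extraction(tag):            # (ii) static data and marks are extracted
    return tag["static"], tag["marks"]

def decide(static, marks):           # (iii)-(iv) evaluate a tiny decision tree
    if marks.get("last_user") and marks["last_user"] != static["user"]:
        return "context-aware service (derived from previous tagging)"
    return "explicit service"

tag = {"static": {"user": "bob"}, "marks": {"last_user": "alice"}}
static, marks = data_extraction(touching_interaction(tag))
service = decide(static, marks)
```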
6 Conclusions
This paper has presented a way to obtain and store context awareness through
tagging. To tag the context, we have distributed the required information, knowing
aspects of “Who”, “Where” and “What”, while avoiding “When”. This proposal
highlights that context-aware services are performed implicitly through tagging
(Tagging-Awareness). The ready availability of devices that manage processing,
storage and communication by simply touching the tags allows users to conduct their
daily activities transparently. The marks stored on each tag, together with the
information on the mobile phone, make it easy to provide new services by interpreting
awareness.
References
1. Weiser, M.: The Computer for the Twenty-First Century. Scientific American 265(3),
94–104 (1991)
2. ISTAG: Scenarios for Ambient Intelligence in 2010. IST Advisory Group, European
Commission (2001),
ftp://ftp.cordis.europa.eu/pub/ist/docs/istagscenarios2010.pdf
3. Nava-Díaz, S.W., Chavira, G., Hervás, R., Bravo, J.: Adaptabilidad de las tecnologías RFID
y NFC a un contexto educativo. IEEE-RITA 4(1), 17–24 (2009)
4. Chavira, G., Bravo, J., Nava-Díaz, S.W., Rolón, J.C.: PICTAC: A Model for Perceiving
Touch Interaction through Tagging Context. Journal of Universal Computer Science 16(12),
1577–1591 (2010)
5. Schmidt, A.: Implicit Human Computer Interaction Through Context. Personal
Technologies 4(2), 191–199 (2000)
6. Dey, A.K., Abowd, G.D.: Towards a Better Understanding of Context and Context-
Awareness. Georgia Institute of Technology (1999)
7. Nava Díaz, S.W.: Modelado de un Ambiente Inteligente: Un Entorno Consciente del
Contexto a través del Etiquetado. p. 281. Universidad de Castilla-La Mancha (2010)
8. Nava-Díaz, S.W., Chavira Juárez, G., Rolón, J.C., Orozco, J.: MARCado: A framework for
tagging context-awareness. In: 5th International Symposium on Ubiquitous Computing and
Ambient Intelligence, Riviera Maya, México (2011)
Exploiting User Feedback for Adapting Mobile
Interaction Obtrusiveness
1 Introduction
The ubiquity of mobile devices enables users to always be connected to the
services of the environment. However, with an ever greater number of services added
to our surroundings, users may be interrupted often. Usually, these interruptions
take the form of notifications that are received on the various devices that a user
possesses [1]. Since user attention is a valuable but limited resource, notifications
must behave in a considerate manner [2], demanding user attention only when
it is actually required according to user needs and context.
In a previous work [3], we developed a method for designing and personalizing
mobile services to regulate the interaction obtrusiveness (i.e., the extent to which
each interaction intrudes on the user's mind). However, it is difficult to define service
obtrusiveness behavior at the design stage to meet different user demands, since
user preferences and needs can change over time. Thus, in this work we go
a step further, adapting the decisions made at design time by learning users'
preferences from their feedback through experience.
The main contribution of this paper is a learning method for adapting the de-
signed service interaction (specified in an obtrusiveness adaptation space) based
on feedback given by the user. We use a reinforcement learning approach for au-
tomatically reconfiguring the interaction obtrusiveness in a way that maximizes
the user’s satisfaction for long-term use. An initial interaction obtrusiveness con-
figuration is designed to ensure a consistent initial behavior according to initial
user needs. This configuration is then adapted for each service to the individual
behavior and preferences of users based on the feedback given by them.
Fig. 2. (a) Interaction obtrusiveness adaptation. (b) Interface to set up the preferences.
For example, user preferences can change due to a change in his lifestyle (e.g.,
starting classes, starting a new job, etc.). Imagine I had defined the healthcare
service to be in the aware level of attention by means of speech feedback because
that service was very important for me and I was unemployed (illustrated as S1
in Fig. 2(a)). However, I have just started a new job working most of the day with
other people and I do not want other people to be aware of it. Thus, the service
obtrusiveness level would have to be adapted to another level that requires less
attention. Fig. 2(a) illustrates this obtrusiveness adaptation. To address this
problem, the system should at least allow users to set up their new preferences
explicitly by means of an end-user interface (see Fig. 2(b)). However, this could
be a tedious task if the number of services is high, and users often forget to
set and reset the configurations, resulting in unwanted interruptions [5]. Thus,
we propose to learn these preferences and adapt the initial design to maintain
user satisfaction. Our system achieves this by learning from the user's reaction after
receiving a notification at a specific obtrusiveness level.
In order to improve the initial obtrusiveness design and adapt the obtrusiveness
level for each service, we follow a Reinforcement Learning (RL) strategy [6] in
which our system learns from user feedback after receiving a notification at a
given obtrusiveness level (obtrusiveness state). Depending on the received feedback
(positive or negative), the current obtrusiveness state is rewarded or punished.
In this way, if negative feedback is received continuously for a given notification
in an obtrusiveness state, it means that the user is unsatisfied with the design
and the system will adjust it by changing the obtrusiveness level. Thus, in each
obtrusiveness state, the possible actions that the system can take are:
makes the system not learn anything, while a factor of 1 would make it consider
only the most recent feedback; r is the reward observed after performing the
action a in an obtrusiveness state s; γ is a discount parameter between 0 and 1
expressing the importance of future rewards; and Q(s', a') is the expected future
feedback. We have introduced a new function V to establish a limit on the value
that the quality function can take, in order to accelerate learning and obtain an
adaptation within few trials when the user changes his preferences. This new function is:
V(s, a) = \begin{cases} -\Delta & \text{if } Q(s, a) \le -\Delta \\ Q(s, a) & \text{if } -\Delta < Q(s, a) < \Delta \\ \Delta & \text{if } Q(s, a) \ge \Delta \end{cases}    (2)
where Δ is the limit value that the quality value can take. This value has to be
defined according to the reward and learning rate values.
Alg. 1 shows our modified version of Q-Learning for learning after receiving
a notification, taking into account the limit value function. Since we have an
a-priori designed model, we initialize s as the designed obtrusiveness state and
the values of the state-action pairs Q(s, a) and V(s, a) with the designed values,
giving more weight to the action do nothing in the initial obtrusiveness state.
This provides a consistent initial behavior. When a notification has to be sent,
the system chooses an action a in the state s following the behavior policy. Then,
the action is performed and the notification is sent in the new obtrusiveness state
s'. The system observes the reward obtained from the user, either explicitly by
means of a graphical interface or implicitly by recognizing the user's emotional
state. Based on the obtained reward, the system updates the value of the state-
action pair. Then, s is updated to the current obtrusiveness state s'. Finally, if
the received reward is negative, we save the kind of negative reward received.
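As a sketch (not a reproduction of Alg. 1), the core update step with the limit function V can be written as follows; the state/action encoding and all parameter values (α, γ, Δ, rewards) are illustrative assumptions, not the ones used in the paper:

```python
# Sketch of the modified Q-Learning update with the limit function V (Eq. 2).
# States, actions, and parameter values are illustrative assumptions.

ALPHA = 0.5   # learning rate (assumed value)
GAMMA = 0.1   # discount parameter (assumed value)
DELTA = 2.0   # limit value for the quality function (assumed value)

ACTIONS = ["do_nothing", "adjust"]
STATES = ["invisible", "aware"]

def clip(q, delta=DELTA):
    """Limit function V: bound the quality value to [-delta, delta]."""
    return max(-delta, min(delta, q))

def update(Q, V, s, a, reward, s_next):
    """One learning step after a notification was issued."""
    # Expected future feedback, taken from the limited values V(s', a').
    best_future = max(V[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] = Q[(s, a)] + ALPHA * (reward + GAMMA * best_future - Q[(s, a)])
    V[(s, a)] = clip(Q[(s, a)])

# Designed a-priori model: favor "do nothing" in the designed state.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
V = dict(Q)
Q[("aware", "do_nothing")] = V[("aware", "do_nothing")] = 1.0

# Negative feedback received for a notification in the "aware" state.
update(Q, V, "aware", "do_nothing", reward=-1.0, s_next="aware")
```

Because V is bounded by Δ, repeated negative rewards can flip the preferred action within a few trials instead of having to undo an arbitrarily large accumulated quality value.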
If the action to perform is to adjust the obtrusiveness state, the system has
to compute the initiative and attention level required for the adaptation to a
new obtrusiveness state. We introduce an ordering on the obtrusiveness axes:
invisible < aware (aware interactions require more attention than invisible
ones) and reactive < proactive (a reactive value provides a lower degree of
automation than a proactive one). In this way, we express changes in the
obtrusiveness level as increments and decrements along the different axes.
To calculate the changes, we process the kind of negative feedback. For example,
if the user puts the device into silent mode after receiving a notification,
this implies a decrement along the attention axis (decremental feedback). However,
if the user is not aware of a notification, the attention needs to be incremented
(incremental feedback). To process this feedback, we calculate the median of all
the increments and decrements. The rationale is to obtain the average
movement in the obtrusiveness space. Finally, the resulting action is applied.
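As a sketch of this adjustment step; the feedback encoding (+1 for incremental, -1 for decremental feedback) is an assumption for illustration:

```python
from statistics import median

# Ordered obtrusiveness axes: invisible < aware, reactive < proactive.
ATTENTION = ["invisible", "aware"]

def adjust_attention(level, feedback):
    """Move along the attention axis by the median of the stored
    negative-feedback signals (+1 = incremental, -1 = decremental)."""
    step = round(median(feedback))
    i = ATTENTION.index(level)
    # Clamp the movement to the valid range of the axis.
    i = max(0, min(len(ATTENTION) - 1, i + step))
    return ATTENTION[i]

# E.g. the user mostly silenced the device (decremental feedback):
new_level = adjust_attention("aware", [-1, -1, 1])
```

The same clamped-median movement would apply analogously to the initiative axis (reactive/proactive).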
4 Experimental Evaluation
rewards). The quality drops when the experimenter changes his mind about the
expected behavior (negative rewards) and fluctuates while the system is
learning. The figure shows an initial phase where the experimenter was satisfied
with the behavior (iterations 1 to 39). Then, the experimenter changed his mind and
the curve drops (iterations 40 to 45). The system quickly learned the new behavior
(iterations 46 to 49), and finally the user was satisfied once the new behavior
was achieved (iterations 50 to 90). Note that the value of Δ depends on α and r
because it largely determines the number of iterations needed to learn a new behavior.
If Δ is higher, the system will be less sensitive to anomalous rewards, but a
change in the user's mind will be slower to learn. Conversely, if Δ is lower,
the system will be more sensitive to anomalous rewards but will learn more quickly.
Thus, the choice of the Δ value could differ for each user following this criterion.
5 Related Work
Some studies have been conducted on context-aware mobile computing to
automatically adapt the modality configuration profile of mobile devices based on
context changes [10]. However, the focus of these studies has been on context
recognition, not on the obtrusiveness aspects of the interaction. Korpipää et
al. [11] also developed a prototype tool to let users customize interactions with
smart phones. However, Barkhuus and Dey [12] conducted a study showing that
users prefer an autonomous behavior despite feeling a lack of control.
In order to learn effective policies for autonomous behavior, RL has been
shown to be a promising approach [13]. Schiaffino and Amandi [14] developed
a personal agent to learn individual preferences and give personalized
assistance. Zaidenberg et al. [15] adapted a pre-defined set of actions to users
by rewarding the system for good decisions. We use RL with a different goal:
to adapt the initial design and reduce the burden of mobile notifications.
Towards building systems that adapt their level of intrusiveness to each user,
several studies focus on minimizing unnecessary interruptions [1] by learning
interruption models [5]. They have focused primarily on determining when to
interrupt, without paying much attention to the how, as we do in this work.
6 Conclusion
In this work, we provide a learning method for adapting interaction obtrusiveness
based on the user's reaction. Because users' needs and preferences can change over
time and the initial design may no longer satisfy the user, we introduce a system
that adapts this initial design through experience. We exploit user feedback after
receiving a notification at a certain obtrusiveness level in order to minimize the
burden of mobile notifications and maximize user satisfaction over long-term
use. During the experiment, we found that intelligibility can become an issue
that affects user satisfaction, as the adaptation is transparent to users. This
can cause uncertainty about what the system has learned. Offering users the
possibility to check the adaptations could reduce this uncertainty and the
perceived lack of control. Thus, further work is needed to overcome this problem.
Acknowledgments. This work has been developed with the support of MICINN
under the project EVERYWARE TIN2010-18011, co-financed by ERDF, and under
the FPU grants program.
References
1. Ramchurn, S.D., Deitch, B., Thompson, M.K., Roure, D.C.D., Jennings, N.R.,
Luck, M.: Minimising intrusiveness in pervasive computing environments using
multi-agent negotiation. In: MobiQuitous 2004, pp. 364–372 (2004)
2. Gibbs, W.W.: Considerate computing. Scientific American 292(1), 54–61 (2005)
3. Gil, M., Giner, P., Pelechano, V.: Personalization for unobtrusive service interac-
tion. Personal and Ubiquitous Computing 16(5), 543–561 (2012)
4. Ju, W., Leifer, L.: The design of implicit interactions: Making interactive systems
less obnoxious. Design Issues 24(3), 72–84 (2008)
5. Rosenthal, S., Dey, A.K., Veloso, M.: Using Decision-Theoretic Experience Sam-
pling to Build Personalized Mobile Phone Interruption Models. In: Lyons, K.,
Hightower, J., Huang, E.M. (eds.) Pervasive 2011. LNCS, vol. 6696, pp. 170–187.
Springer, Heidelberg (2011)
6. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. MIT Press
(1998)
7. Isbell, C., Shelton, C.R., Kearns, M., Singh, S., Stone, P.: A social reinforcement
learning agent. In: AGENTS 2001: Autonomous Agents, pp. 377–384. ACM (2001)
8. Watkins, C.J.C.H., Dayan, P.: Q-learning. Mach. Learning 8(3-4), 279–292 (1992)
9. Gil, M.: Prototype and case study, http://www.pros.upv.es/adaptio/learning
10. Valtonen, M., Vainio, A.M., Vanhala, J.: Proactive and adaptive fuzzy profile con-
trol for mobile phones. In: PerCom 2009, pp. 1–3 (2009)
11. Korpipaa, P., Malm, E.J., Rantakokko, T., Kyllonen, V., Kela, J., Mantyjarvi,
J., Hakkila, J., Kansala, I.: Customizing user interaction in smart phones. IEEE
Pervasive Computing 5, 82–90 (2006)
12. Barkhuus, L., Dey, A.K.: Is Context-Aware Computing Taking Control away from
the User? Three Levels of Interactivity Examined. In: Dey, A.K., Schmidt, A.,
McCarthy, J.F. (eds.) UbiComp 2003. LNCS, vol. 2864, pp. 149–156. Springer,
Heidelberg (2003)
13. Tesauro, G.: Reinforcement learning in autonomic computing: A manifesto and
case studies. IEEE Internet Computing 11(1), 22–30 (2007)
14. Schiaffino, S., Amandi, A.: Polite personal agents. IEEE Intelligent Systems 21(1),
12–19 (2006)
15. Zaidenberg, S., Reignier, P., Crowley, J.L.: Reinforcement learning of context mod-
els for a ubiquitous personal assistant. In: Corchado, J., Tapia, D., Bravo, J. (eds.)
UCAmI 2008, vol. 51, pp. 254–264. Springer, Heidelberg (2009)
Towards an Infrastructure Model for Composing
and Reconfiguring Cyber-Physical Systems
1 Introduction
Cyber-Physical Systems (CPS) put together computation and physical processes [1].
These systems are increasingly used in different domains such as
healthcare, transportation, process control, manufacturing or electric power grids. As
a consequence of their interaction with the physical world, they are required to operate
dependably, safely, securely, efficiently and, typically, in real-time.
CPS require an intensive use of communication networks and, as described in
[2], IP technologies are increasingly used. However, the complexity of dealing with a
large number of devices and heterogeneous platforms and, simultaneously, satisfying
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 282–289, 2012.
© Springer-Verlag Berlin Heidelberg 2012
Towards an Infrastructure Model for Composing and Reconfiguring CPS 283
This work focuses on the characterization of the infrastructure from the application
execution point of view. As a result, an abstract representation of the underlying
infrastructure based on two types of entities has been obtained. Namely, these entities
describe the application participant nodes and the connecting network segments. The
models for these entities are based on a resource set that is platform independent.
284 I. Calvo et al.
As part of the iLAND project, this generic model has been validated in three
different demonstrators (see Fig. 2) from different domains: (1) an intelligent remote
surveillance system; (2) a health care application; and (3) an early-environmental
detection application.
However, the generic model may need to be extended for a specific application
domain, as shown in Fig. 4. In fact, during the iLAND project three demonstrators
from different application domains were considered, and specific properties were added
to the generic device model. More specifically, in Fig. 4 the I.R.S. node provides the
specific properties needed to model Intelligent Remote Surveillance devices, the
CareNode represents specific properties needed in health care applications, and the
E.E.D. node represents specific properties of Early-Environmental Detection devices.
In the following, there is a brief discussion of the main attributes represented by the
iLAND generic model. Although during the iLAND project several modeling
languages were considered for characterizing the resources, namely UML,
SysML, AADL, and others based on XML technologies, the final model used in
iLAND is based on the UML profile for MARTE [9]. This profile provides a set of
attributes rich enough to model a large number of resources in detail and, at the same
time, offers different tools that help designers create models to represent
the systems. However, the use of the UML profile for MARTE presents some
inconveniences. On the one hand, this profile is mainly aimed at representing static
properties of the nodes, whereas the iLAND middleware must make decisions at run-
time based on the dynamic status of the system nodes. Consequently, iLAND requires
representing dynamic properties of the nodes. On the other hand, iLAND requires
deep knowledge about the network properties that goes well beyond what is provided
by the UML profile for MARTE.
As a consequence, the adopted solution has been to extend the UML profile for
MARTE with new properties, while keeping the MARTE syntax and types to
represent the new properties where possible.
From the previous discussion it may be inferred that the iLAND middleware
manages different types of properties depending on whether they are static or
operational (dynamic), and generic or specific to the application domain. Table 1
shows the combinations of the different types of properties used to represent the
iLAND devices, with some examples.
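The four combinations can be sketched as a simple data model; the concrete attribute names below (e.g. cpu_frequency_mhz, battery_level_pct) are illustrative assumptions, not the attributes listed in the paper's tables:

```python
from dataclasses import dataclass, field

@dataclass
class DeviceModel:
    """Sketch of the two orthogonal property classifications in iLAND:
    static vs. operational (dynamic), and generic vs. domain-specific.
    All concrete attribute names are illustrative assumptions."""
    static_generic: dict = field(default_factory=dict)    # e.g. CPU frequency
    dynamic_generic: dict = field(default_factory=dict)   # e.g. battery level
    static_specific: dict = field(default_factory=dict)   # domain extensions
    dynamic_specific: dict = field(default_factory=dict)  # domain run-time state

# Example: a hypothetical Intelligent Remote Surveillance (I.R.S.) node.
node = DeviceModel(
    static_generic={"cpu_frequency_mhz": 800, "memory_mb": 512},
    dynamic_generic={"battery_level_pct": 73, "cpu_load_pct": 40},
    static_specific={"camera_resolution": "640x480"},
    dynamic_specific={"camera_active": True},
)
```

Separating the operational dictionaries from the static ones mirrors the fact that only the former must be refreshed at run-time for reconfiguration decisions.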
However, the iLAND middleware has been designed to react to dynamic updates in
the state of the device resources as well as to provide dynamic reconfiguration of the
applications at run-time. This requires that some dynamic attributes be
represented in the iLAND device model so that the upper layer of the iLAND
middleware architecture can make decisions accordingly, ensuring that certain QoS
requirements are satisfied. These dynamic or operational attributes are shown in Table 3.
The iLAND middleware also requires information about the underlying network
infrastructure in order to determine whether the QoS requirements of the applications
can be satisfied. This section describes the proposed model to characterize the
different network segments that provide the communication infrastructure.
The complexity of the network infrastructure has an important effect on the cost of
the algorithms used to compose the service-oriented applications. As a result, the
proposed model aims to be generic enough to represent the infrastructure
with different degrees of complexity, while remaining capable of describing the
communication infrastructure in enough detail to make useful decisions. In fact, it allows
modeling network devices such as gateways, switches or even wireless access
points. Table 4 shows the main attributes of the model.
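A possible shape for such a network-segment entity, under the assumption of generic attributes such as bandwidth and latency (the actual attributes of Table 4 are not reproduced here), might be:

```python
from dataclasses import dataclass

@dataclass
class NetworkSegment:
    """Sketch of a network-segment entity; the attribute names are
    illustrative assumptions, not the actual Table 4 attributes."""
    segment_id: str
    technology: str          # e.g. "Ethernet", "802.11g"
    bandwidth_mbps: float    # nominal capacity
    latency_ms: float        # typical one-way latency
    attached_nodes: tuple    # node identifiers reachable on this segment

def can_satisfy(segment, required_mbps, max_latency_ms):
    """Simple QoS feasibility check over one segment."""
    return (segment.bandwidth_mbps >= required_mbps
            and segment.latency_ms <= max_latency_ms)

wlan = NetworkSegment("seg-1", "802.11g", 54.0, 5.0, ("cam-1", "gw-1"))
```

A reconfiguration layer could iterate such checks over every segment on a candidate application path before committing to a composition.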
5 Conclusions
This work proposes a platform-independent characterization of the resources used
by the iLAND middleware. This model is used by the nodes at the upper layer of the
iLAND architecture (T3 nodes) in order to make reconfiguration decisions. The
proposed model represents (1) the devices of the CPS according to four main types of
resources, namely CPU, memory, network interface and power supply, and (2) the
network infrastructure.
The proposed model for the devices has two types of attributes, static and
operational (dynamic), and can be extended for different application domains. The
model uses the UML profile for MARTE syntax where possible. However, it has
been extended, since MARTE represents neither the dynamic properties of the
devices nor the network infrastructure.
The proposed model allows performing both off-line and on-line analysis of the
CPS at the upper layer of the iLAND middleware, which is responsible for composing
the applications and making reconfiguration decisions while maintaining certain QoS
parameters.
References
1. Lee, E.A.: Cyber Physical Systems: Design Challenges. In: 11th IEEE Symp. Object
Oriented Real-Time Distributed Computing (ISORC 2008), pp. 363–369 (2008)
2. Koubâa, A., Andersson, B.: A Vision of Cyber-Physical Internet. In: Proc. of the Workshop
of Real-Time Networks (RTN 2009), Satellite Workshop to (ECRTS 2009) (July 2009)
3. RUNES project: Reconfigurable, Ubiquitous, Networked Embedded Systems,
http://www.ist-runes.org/
4. González, M., Tellería, M.: FRESCOR deliverable Architecture and contract model for
integrated resources II - D-AC2v2 (2008), http://www.frescor.org
5. García-Valls, M., Gómez-Molinero, F.: iLAND: mIddlLewAre for deterministic
dynamically reconfigurable NetworkeD embedded systems. In: The Embedded World
Conference, Nüremberg, Germany (March 2010)
6. iLAND, Project Web page, http://www.iland-artemis.org/ (last access June
2012)
7. Wikipedia, iLAND project, http://en.wikipedia.org/wiki/Iland_project
(last access June 2012)
8. Calvo, I., et al.: Resource Characterization for Platform Independence. iLAND Deliverable
D4.1 (November 2010), http://www.iland-artemis.org/doc/
deliverables/wp4/D4_1_ResourceCharacterizationv2.1-Final.pdf
9. UML Profile for MARTE: Modeling and Analysis of Real-Time Embedded Systems. OMG
Document: formal/2009-11-02, http://www.omg.org/spec/MARTE/1.0
Service Composition for Mobile Ad Hoc
Networks Using Distributed Matching
1 Introduction
In ubiquitous computing environments, users should be able to access not only
services provided by individual devices but also complex services
obtained by connecting simpler ones. The environment, constituted by the
available devices, should be capable of helping users to automate composition.
In this work we propose an architecture for service composition which solves
the problem of composition in a mobile ad-hoc network. A mobile ad-hoc network
is a type of infrastructure-less network in which devices communicate with each
other using wireless communication. Nodes in these networks communicate by
discovering available routes and performing one-hop transmission of messages.
Due to the mobility of the network, routes can appear or disappear during
the network's lifetime, which has an impact on the protocols and applications
constructed on top of them.
The proposed architecture solves the problems of service discovery and
composition search while providing mechanisms for message forwarding and a set of
functionalities that developers can use to construct their final applications.
Due to the nature of the mobile ad-hoc network, the solution is proposed in a
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 290–297, 2012.
c Springer-Verlag Berlin Heidelberg 2012
Service Composition for Mobile Ad Hoc Networks 291
way that all nodes take part in the different composition processes. This avoids
the problem of broker node selection and the use of central repositories, which
could become a single point of failure due to changes in the topology of the mobile
ad-hoc network.
The rest of the paper is organized as follows. Section 1.1 introduces some
questions about service composition in mobile ad-hoc networks. Section 2 presents
the proposed architecture, while Section 3 summarizes the evaluation performed
to test the proposal. Section 4 introduces related work in the area and,
finally, Section 5 concludes the paper and presents some future work.
2.1 Dissemination
The proposed solution for composition does not rely on the existence of central
repositories for service registration. However, information about the services
provided by each node must be propagated across the network for the composition
process to work. Service dissemination is performed by means of
292 U. Aguilera and D. López-de-Ipiña
Table Update messages, which are sent to neighbour nodes until the maximum
propagation distance is reached. Table Update messages originate at the nodes
that provide the services and are propagated through the network. Propagation
of update messages is triggered when a previously received message updates a
node's local information table, or when a new neighbour appears. The dissemination
of service information has been previously studied by the authors of this paper in [1].
In order to reduce the number of messages sent, nodes group the information
sent to neighbours thanks to the use of a shared taxonomy. This taxonomy
could be specified using an ontology language such as OWL or RDF [11]. Table
Update messages do not contain the whole description of each service but only
the types of its parameters (i.e., inputs and outputs) according to the defined
taxonomy. Nodes receiving input or output information group it according
to the shared taxonomy if the parameters are related. Only the most generic type
among related parameters is propagated to neighbours. According to the shared
taxonomy, parameters can be related in the following ways: equality, when two
parameters have exactly the same type; subsumption, when one parameter type is
more generic than the other; or no relationship among the received parameters.
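The grouping rule can be sketched with a toy taxonomy; the concept names and parent relations below are assumptions for illustration:

```python
# Toy taxonomy: child -> parent. A real deployment would derive these
# relations from the shared OWL/RDF ontology.
PARENT = {"ColorImage": "Image", "GrayImage": "Image", "Image": "Media"}

def is_ancestor(general, specific):
    """True if `general` subsumes `specific` in the taxonomy."""
    while specific in PARENT:
        specific = PARENT[specific]
        if specific == general:
            return True
    return False

def group_parameters(types):
    """Keep only the most generic type among related parameters,
    so that a single entry is propagated to neighbours."""
    kept = []
    for t in types:
        if any(t == k or is_ancestor(k, t) for k in kept):
            continue  # equal to, or subsumed by, an already-kept type
        # Drop previously kept types that the new type subsumes.
        kept = [k for k in kept if not is_ancestor(t, k)]
        kept.append(t)
    return kept

# "Image" subsumes both image subtypes; "Audio" is unrelated and kept.
grouped = group_parameters(["ColorImage", "GrayImage", "Image", "Audio"])
```

Grouping shrinks each Table Update message to one entry per unrelated concept, at the cost of advertising services less precisely.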
This layer uses the information propagated by the Dissemination layer to search
for available services. It offers functionality to search for services that have
specific input or output parameters. An application, or upper layer, trying to
locate a service in the network must specify the types of the input or output
parameters that the requested service should provide. Once a search starts, a
message containing the needed service description is propagated from the searching
node to its neighbours. Nodes receiving a search message check their local services
against the parameters contained in the search message. If a match is produced,
a message containing the network address of the node and the unique identifier
of the located service is sent back to the searching node.
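The matching step can be sketched as follows; the message fields and the service record layout are assumptions for illustration:

```python
# Each local service is described by the taxonomy types of its inputs
# and outputs; all identifiers below are illustrative assumptions.
LOCAL_SERVICES = {
    "svc-42": {"inputs": {"Location"}, "outputs": {"Temperature"}},
}

def match_search(message, node_address):
    """Check local services against a search message and build the
    reply (node address + service identifier) when a match exists."""
    for sid, desc in LOCAL_SERVICES.items():
        if (message["inputs"] <= desc["inputs"]        # set containment
                and message["outputs"] <= desc["outputs"]):
            return {"node": node_address, "service_id": sid}
    return None  # no local match: the message is only forwarded

reply = match_search({"inputs": {"Location"}, "outputs": {"Temperature"}},
                     "10.0.0.7")
```

A full implementation would also apply the subsumption relation of the shared taxonomy rather than exact type containment.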
Search messages are propagated through the network aided by the information
distributed by the dissemination layer. Every node that has received parameter
information from some service holds an entry that estimates the distance
to the node providing the service. This distance information, called estimated
distance, is used by the search mechanism, in conjunction with a TTL
counter, to determine whether a search message could reach the needed service. If the
TTL of a search message is lower than the distance to a compatible service, the
search message is dropped and no further propagation occurs. This
optimization decreases the number of propagated messages, which also reduces
the congestion of the mobile ad-hoc network.
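This drop rule can be sketched as follows; the message and table field names are assumptions:

```python
def should_forward(ttl, estimated_distance):
    """Forward a search message only if its remaining TTL could
    still reach the node providing a compatible service."""
    return ttl >= estimated_distance

def handle_search(message, table):
    """`table` maps a requested parameter type to the estimated hop
    distance of the nearest node providing it."""
    dist = table.get(message["wanted_type"])
    if dist is None or not should_forward(message["ttl"] - 1, dist):
        return None  # drop: no further propagation
    fwd = dict(message)
    fwd["ttl"] -= 1
    return fwd

# A message with TTL 3 can still cover the 2 estimated hops:
fwd = handle_search({"wanted_type": "Image", "ttl": 3}, {"Image": 2})
```

Dropping early in this way is what saves the flooding traffic that the evaluation in Section 3 measures against FBS.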
This layer also manages the creation and maintenance of communication
routes among the different nodes as the ad-hoc network moves. Every
time a search or a response message is received by a node, the layer creates or
updates the node's routing tables accordingly. Routing table entries are removed when
neighbours disappear and a valid communication route is broken. The layer also
provides functionality to send unicast or multicast messages to those destinations
discovered through previous message propagation.
3 Evaluation
The proposed protocol has been fully implemented and evaluated using the NS-2
network simulator¹ extended with AgentJ². An evaluation of the proposed
solution (DSM) was performed with the following simulation parameters: 30
nodes, a 300 m x 300 m area, a 100 m transmission range, the Random Waypoint model
with a uniform speed distribution of 0-5 m/s, IEEE 802.11 as the MAC protocol
with a transmission data rate of 54 Mb/s, and a maximum packet size of 1500
bytes. The relation between the transmission range and the area is intended to simulate
a small network, such as a conference or meeting room, where devices are carried by people
who move at walking speeds (equivalent to a 10 m transmission range in a 30 m
x 30 m area). 30% of the nodes provide services, and randomly selected nodes
start composition searches with a frequency of 1 search per second.
The obtained results have been compared with a flood-based composition. In the
flood-based search (FBS), nodes that need to compose a service start the
process by sending a message to all nodes in the network. Each time the
composition message is received by a network node, the current service composition
is checked against all the services provided by the node, and a service is added
if a match exists. Message propagation finishes when the END services receive the
composition search message. Figure 3 shows the results of the experiments when
finding compositions of increasing length (3, 5 and 7 services). As can be seen, the
proposed solution obtains compositions faster than the flood-based one because
the use of the service graph avoids sending search messages through
unconnected routes. However, it also reduces the ratio of found compositions
because graph maintenance has a cost under network mobility: the search process
may explore routes that are no longer valid, causing some composition search
messages to be lost.
Fig. 3. Comparison of proposed architecture for service composition with a flood based
solution for compositions of 3, 5 and 7 services
1 The Network Simulator - ns-2 - http://www.isi.edu/nsnam/ns/
2 AgentJ Java Network Simulations in NS-2 - http://cs.itd.nrl.navy.mil/work/agentj/
4 Related Work
The application of dynamic service composition to mobile networks has been
studied in [2], where the authors propose the use of a Hierarchical Task
Network to decompose the searched composed service into simpler parts, and in
[3], which proposes the integration of a group-based service discovery protocol
and selective forwarding of services based on discovered paths. The use of
an overlay network for service composition was first proposed in [8]. Another
solution is proposed in [7], where a specification of the required service as an
abstract graph is instantiated using the services of the network. A more complete
solution, covering service discovery, composition and substitution of broken
compositions, is presented in [14]. Other solutions for service composition in
mobile ad-hoc networks have also been proposed. For example, in [13] fuzzy logic is
applied to select among available services. Another solution is proposed in [10],
where a Distributed Constraint Satisfaction Problem is applied to
find connections among services. In [15] the authors study composition in
opportunistic networks by propagating composition services when connections
among neighbour nodes exist. In those works the process is performed by
instantiating a pre-constructed work-flow graph, while our proposal creates
the composition work-flow from scratch starting from the provided description.
The idea of performing dynamic service composition using a pre-computed
service graph was first proposed in [4] and [6]. However, those
solutions only apply to static networks, while our work is directed at mobile
ad-hoc networks. On the other hand, the use of taxonomies and semantic
technologies has been previously explored in [3] and [5]. However, these two
approaches treat services as a whole and do not enable searching for services
by the parameters they provide. In [9] the authors also propose a service
discovery mechanism that uses ontology information during service search. The
dissemination and search layers bear some resemblance to a pub/sub system.
The application of pub/sub systems to mobile ad hoc networks has been studied in
[16] and [12]. Our proposal includes the use of a taxonomy of concepts while
establishing and maintaining communication routes across the network among the
different nodes that provide services.
References
1. Aguilera, U., López-de-Ipiña, D.: A Parameter-Based Service Discovery Protocol
for Mobile Ad-Hoc Networks. In: Li, X.-Y., Papavassiliou, S., Ruehrup, S. (eds.)
ADHOC-NOW 2012. LNCS, vol. 7363, pp. 274–287. Springer, Heidelberg (2012)
2. Basu, P., Ke, W., Little, T.D.C.: Scalable service composition in mobile ad hoc
networks using hierarchical task graphs. In: Proc. 1st Annual Mediterranean Ad
Hoc Networking Workshop (2002)
3. Chakraborty, D., Yesha, Y., Joshi, A.: A distributed service composition protocol
for pervasive environments. In: 2004 IEEE Wireless Communications and Network-
ing Conf., WCNC, vol. 4, pp. 2575–2580. IEEE (2004)
4. Gu, Z., Li, J., Xu, B.: Automatic service composition based on enhanced ser-
vice dependency graph. In: Proc. of the 2008 IEEE Intl. Conf. on Web Services,
pp. 246–253. IEEE Computer Society (2008)
5. Helal, S., Desai, N., Verma, V., Lee, C.: Konark-a service discovery and deliv-
ery protocol for ad-hoc networks. Wireless Communications and Networking 3,
2107–2113 (2003)
6. Hu, S., Muthusamy, V., Li, G., Jacobsen, H.: Distributed automatic service com-
position in large-scale systems. In: Proc. of the Second International Conference
on Distributed Event-Based Systems, pp. 233–244. ACM, Rome (2008)
7. Hu, Z., Tang, X., Wang, X., Ji, Y.: A distributed algorithm for DAG-Form service
composition over MANET. In: Intl. Conf. on Wireless Communications, Network-
ing and Mobile Computing, WiCom 2007, pp. 1664–1667 (2007)
8. Huang, J., Bai, Y., Zhang, Z., Kong, J., Qian, D.: Service forest: Enabling dynamic
service composition in mobile ad hoc networks. In: Proc. of the 2007 Intl. Conf. on
Intelligent Pervasive Computing, pp. 174–177. IEEE Computer Society (2007)
9. Islam, N., Shaikh, Z.A.: Towards a robust and scalable semantic service discovery
scheme for mobile ad hoc network. Pak. J. Engg. & Appl. Sci. 10, 68–88 (2012)
10. Karmouch, E., Nayak, A.: Capability reconciliation for virtual device composition
in mobile ad hoc networks. In: 2010 IEEE 6th Intl. Conf. on Wireless and Mobile
Computing, Networking and Communications (WiMob), pp. 27–34 (2010)
11. McGuinness, D.L., Van Harmelen, F.: OWL web ontology language overview. W3C
Recommendation 10, 2004–03 (2004)
12. Paridel, K., Vanrompay, Y., Berbers, Y.: Fadip: Lightweight Publish/Subscribe for
Mobile Ad Hoc Networks. In: Meersman, R., Dillon, T., Herrero, P. (eds.) OTM
2010. LNCS, vol. 6427, pp. 798–810. Springer, Heidelberg (2010)
13. Prochart, G., Weiss, R., Schmid, R., Kaefer, G.: Fuzzy-based support for service
composition in mobile ad hoc networks. In: IEEE Intl. Conf. on Pervasive Services,
pp. 379–384 (2007)
14. Ruta, M., Zacheo, G., Grieco, L.A., Di Noia, T., Boggia, G., Tinelli, E., Camarda,
P., Di Sciascio, E.: Semantic-based resource discovery, composition and substitution
in IEEE 802.11 mobile ad hoc networks. Wireless Networks 16(5), 1223–1251 (2010)
15. Sadiq, U., Kumar, M., Passarella, A., Conti, M.: Modeling and simulation of service
composition in opportunistic networks. In: Proc. of the 14th ACM International
Conference on Modeling, Analysis and Simulation of Wireless and Mobile Systems,
MSWiM 2011, pp. 159–168. ACM, New York (2011)
16. Yoo, S., Son, J.H., Kim, M.H.: A scalable publish/subscribe system for large mobile
ad hoc networks. Journal of Systems and Software 82(7), 1152–1162 (2009)
Resource Recommendation for Intelligent Environments
Based on a Multi-aspect Metric
1 Introduction
Intelligent Environments host a diverse ecosystem of devices, services and multimedia
content. Users interact with these resources, either by using them directly or by
consuming them through a plethora of mobile devices. As environments become more
sophisticated, even more of these resources will be made available to the user. Such
an abundance can be overwhelming, making it difficult to find the resources best suited
to the current situation. To tackle this problem, Intelligent Environments must be able
to react to user needs in order to fulfill users' requests and desires. To do so, the
system must know the user's preferences, tastes and limitations, and it must be capable
of analyzing the different aspects that define a resource in order to offer the most
suitable one.
To do this, the recommendation system must be able to handle the heterogeneity of
the analyzed resources. To that end, we have identified a series of aspects that define them.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 298–305, 2012.
© Springer-Verlag Berlin Heidelberg 2012
Resource Recommendation for Intelligent Environments 299
2 Related Work
Since the mid-1990s recommender systems have become an important research area
attracting the attention of e-commerce companies. Amazon [1], Netflix and Yahoo!
Music [2] are widespread examples of making recommendations to their users based on
their tastes and previous purchases. Although these systems have evolved and become
more accurate, the main problem is still out there: estimating the rating of an item
which has not yet been seen by a user. This estimation is usually based on the rest of the
items rated by the current user, or on the ratings given by other users whose rating
pattern is similar. Therefore, the problem consists in extrapolating somehow
the utility function (which measures the usefulness of an item to a user) to the whole
rating space. This utility function is represented by all the ratings made by the user.
This way, recommendation engines have to be able to predict or estimate the ratings
of the not yet rated items for users. Content-based systems recommend items which
are similar to those that a user rated positively in the past [3]. Shardanand et al. [4]
state some of the problems of this approach, such as the vagueness in the description of an
item, which clearly affects the whole system. Items need to have enough descriptive
features to enable the recommendation engine to recommend them accurately. The
problem is that different items with the same features can be indistinguishable to the
system. Collaborative filtering techniques deal with the concept of similarity between
users. The utility of an item is predicted by those items which have been rated by
similar users. Sarwar et al. [5] defend this approach, describing collaborative filtering
as the most successful recommendation technique to date. In [4] a personalized music
recommendation system, Ringo, is presented: a social information filtering system
whose purpose is to advise users about music albums they might be interested in.
By building a profile for each user based on their ratings, it identifies similar users
so that it can predict whether a not yet rated artist or album may be to the user's
liking. LikeMinds [6] defines a closeness function based on the ratings for similar items
from different users to estimate the rating of these items for a specific user. It considers
a user who has not yet rated the item and a so-called mentor who has. Horting is a
graph-based technique that introduces two new concepts, horting and predictability:
users are represented as nodes and the edges between them indicate their similarity
(predictability) [7]. The idea is similar to nearest neighbor, but it
300 A. Almeida et al.
differs from it in that it explores transitive relationships between users who have rated
the item in question and those who have not. In order to reduce the limitations of the
previously reviewed methods, hybrid approaches combine both of them [8]. Others have
introduced new concepts to this area, such as semantics and context [9].
However, one of the most important improvements in the recommendation systems
field is the definition of measures (or aspects) to describe the utility and relevance of
the items. Aspects play an important role in data mining, regardless of the kind of
patterns being mined [10]. Users’ ratings are a good way to trace the interestingness
and the relevance of items. Beyond ratings, there are many measures that let us examine
these items through their use (their consumption) by the users. In other words, we look
into the behavior of users to measure their interest in these "items" (from now on we
will refer to items as resources). From our point of view a resource could be a product,
an application or any kind of service (e.g., multimedia, news and weather, or connectivity
infrastructure services). We have studied several measures from the literature to evaluate
those which best fit our recommender system, such as minimality [11,12], reliability [13],
novelty [14], horting, predictability and closeness [10], and utility [5].
To be able to evaluate the suitability of the resources for a given user we have identified
a series of aspects that define any given resource. These aspects must be generic
enough to describe any type of resource (services, content and so on) and expressive
enough to capture the different facets of the resources. In the current implementation
(see Figure 1) we have considered four of them, and we discuss the remaining ones in
the future work section. The four aspects that we currently take into account are the
following: Predictability, Accessibility, Relevancy and Offensiveness. Each of these
aspects is used in the calculation of the suitability value (see Formula 1). The weight
of each aspect on the final value can be modified to better adapt the recommendation
system to the specific domain of each smart environment (e.g. to the business plan of
a hotel, to prioritize those aspects demanded by the clients). The suitability value is
always personalized to a specific user and can change over time along with the user's
preferences.
S_r = Σ_i w_i · a_{i,r}    (1)

where S_r is the suitability value of resource r, w_i is the weight for aspect i, and
a_{i,r} is the value of aspect i for that resource. The values of the aspects are
normalized.
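As a minimal sketch, the weighted sum of Formula 1 might be computed as follows; the aspect names, weights and values below are illustrative, not taken from the paper's experiments.

```python
def suitability(weights, aspect_values):
    """Weighted sum of normalized aspect values (Formula 1)."""
    return sum(weights[a] * aspect_values[a] for a in weights)

# Hypothetical weights and normalized aspect values for one resource and user.
weights = {"predictability": 1.0, "accessibility": 0.5,
           "relevancy": 1.0, "offensiveness": 0.5}
aspects = {"predictability": 0.8, "accessibility": 1.0,
           "relevancy": 0.6, "offensiveness": 1.0}

print(round(suitability(weights, aspects), 3))  # 2.4
```

The resource with the highest suitability value would then be the one recommended to the user.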
3.1 Predictability
The first aspect we evaluate is predictability. This aspect reflects how likely a
resource is to be used, based on the resources consumed previously. This likeliness is
expressed as a probability value between 0 and 1. We use Markov Chains to model the
user's resource usage; this model allows us to ascertain patterns in the user's behavior.
For example, when one user stays at the hotel, his morning routine consists of using
the "Press Digest" to retrieve the headlines of the day, the "Room Service" to order
breakfast and the "Transport Service" to call a taxi. With the generated model we will
be able to predict that after using the "Room Service" the most probable service to be
consumed is the "Transport Service". To build the transition matrix for the Markov
Chain we use the previous history of the user's resource consumption as the training
set. This transition matrix can be retrained with the new data gathered from the user
on each visit to the hotel, adapting itself to changes in the user's preferences. As we
discuss in the future work section, one of the main limitations of Markov Chains is
that, due to the Markov property, only the last consumed resource is taken into account
to predict the next one.
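The transition-matrix training described above can be sketched as follows; the resource names mirror the hotel example, and the maximum-likelihood estimation from consecutive pairs in the log is an assumption about the unstated training procedure.

```python
from collections import defaultdict

def train_transitions(history):
    """Estimate a first-order Markov transition matrix from a usage log."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(history, history[1:]):
        counts[prev][nxt] += 1
    # Normalize counts into transition probabilities.
    return {prev: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for prev, nxts in counts.items()}

def predict_next(transitions, last_resource):
    """Most probable next resource given only the last one (Markov property)."""
    candidates = transitions.get(last_resource, {})
    return max(candidates, key=candidates.get) if candidates else None

# Hypothetical morning-routine log for one guest.
log = ["PressDigest", "RoomService", "TransportService",
       "PressDigest", "RoomService", "TransportService",
       "PressDigest", "RoomService", "MultimediaSystem"]

print(predict_next(train_transitions(log), "RoomService"))  # TransportService
```

Retraining after each visit simply means appending the new log entries and recomputing the matrix.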
3.2 Accessibility
One of the most important aspects is the accessibility features of the resource. Users
of intelligent environments possess a wide variety of abilities (sensorial, cognitive and
so on) that must be taken into account to assess the suitability of the resources. Whatever
the resource is, users must be able to consume it. We have used the user abilities
taxonomy proposed in [15], restricting the user abilities to three groups: 1) Sensorial
abilities: those related to the user's input; 2) Communicational abilities: those related
to the user's output; and 3) Physical abilities: those related to the user's capability to
move his extremities.
Each resource has two types of abilities associated with it: required and recommended
user abilities. If the user lacks one of the required abilities, the value of the aspect is
automatically set to 0, reflecting the fact that the user cannot consume the resource,
which makes it completely useless for that user. If the user lacks a recommended
ability, the accessibility value receives a penalization (see Formula 2).
A_r = 1 − p · |M|    (2)

where A_r is the accessibility value for the resource, p is the penalization weight,
and |M| is the number of recommended abilities not met by the user.
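A sketch of the required/recommended ability check; the penalization weight of 0.2, the ability names, and the clamping at 0 are all assumptions, since the paper does not give concrete values.

```python
def accessibility(user_abilities, required, recommended, penalty=0.2):
    """Formula 2: 0 if any required ability is missing; otherwise 1 minus
    a penalty per unmet recommended ability (clamping at 0 is an assumption)."""
    if not required <= user_abilities:   # some required ability is missing
        return 0.0                       # the resource is useless to this user
    unmet = len(recommended - user_abilities)
    return max(0.0, 1 - penalty * unmet)

# Hypothetical user who can see and speak but not hear.
user = {"sight", "speech", "upper_limbs"}
print(accessibility(user, required={"sight"}, recommended={"hearing", "speech"}))
# one recommended ability (hearing) unmet -> 1 - 0.2 = 0.8
```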
3.3 Relevancy
This aspect measures the importance of a given resource to the user’s current context.
For example, a user out jogging may be interested in the location of parks and running
routes, while a user having breakfast in the hotel may instead be interested in the public
transport available in the city. One of the main problems we encountered evaluating
this aspect was the selection of the context variables. The selected variables must be
significant enough to be applicable to any type of resource in any given domain. We
have identified three context variables that meet these requirements: 1) User location. In
the tourism domain we have considered the following locations: client’s room, hotel’s
lobby, hotel’s restaurant, hotel’s swimming pool, hotel’s gymnasium and outside the
hotel; 2) Time of the day. We have divided the day in twelve periods of two hours and
3) Current activity. In the tourism domain we have identified seven activities: sleep-
ing, morning routine, having breakfast, exercising, working, shopping and visiting
tourist attractions.
The context information is provided by other modules of the THOFU project that
are out of the scope of this paper. Using the usage data collected from the users we
have trained a soft classifier that, given those three context variables, calculates the
relevancy of a resource. The classifier performs a nearest neighbor search; to implement
it we have used the libraries included in the Weka framework, with LinearNNSearch
as the nearest neighbor search algorithm and the Euclidean distance as the distance
function.
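The relevancy classifier could be approximated in plain Python as a brute-force 1-nearest-neighbour search, which is essentially what Weka's LinearNNSearch performs; the numeric context encoding and the training tuples below are invented for illustration.

```python
import math

# Context encoded as (location_id, two-hour period 0-11, activity_id);
# training examples pair a context with a relevancy score for one resource.
# All values here are hypothetical.
training = [
    ((0, 4, 2), 0.9),   # room, 08-10h, having breakfast
    ((5, 4, 3), 0.2),   # outside the hotel, 08-10h, exercising
    ((1, 9, 6), 0.6),   # lobby, 18-20h, visiting tourist attractions
]

def relevancy(context):
    """1-nearest-neighbour estimate with Euclidean distance,
    mirroring LinearNNSearch's linear scan over the training set."""
    nearest = min(training, key=lambda t: math.dist(t[0], context))
    return nearest[1]

print(relevancy((0, 4, 2)))  # exact match in the training set -> 0.9
```

In practice one such model would be kept per resource, so that the same context can yield different relevancy values for different resources.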
3.4 Offensiveness
This aspect measures the suitability of a resource based on a rating system. We use the
age categories (3, 7, 12, 16 and 18) and the content descriptions (violence, bad language,
fear, sex, drugs, gambling, discrimination and online) developed for the PEGI (Pan
European Game Information) rating system. To evaluate it we use a system similar to
the one used in Section 3.2 to calculate accessibility, but taking the age categories as
required constraints and the content descriptions as recommended ones.
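Mirroring the accessibility logic, the offensiveness aspect might be sketched as follows; the penalty value and the interpretation of descriptors the user wants blocked are assumptions not stated in the paper.

```python
def offensiveness(user_age, blocked_descriptors, pegi_age, descriptors,
                  penalty=0.25):
    """PEGI age category acts as a required constraint; content descriptors
    act as recommended ones (the penalty value is an assumption)."""
    if user_age < pegi_age:
        return 0.0  # below the minimum age category: resource ruled out
    hits = len(descriptors & blocked_descriptors)
    return max(0.0, 1 - penalty * hits)

# A 6-year-old and a press digest rated PEGI 7, as in the use case:
print(offensiveness(6, set(), pegi_age=7, descriptors=set()))  # 0.0
```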
4 Use Case
To better illustrate how the developed system works we will walk through an example
with two different users. The first user is a 27-year-old male with a hearing impairment.
The second one is a 6-year-old child. The users have five resources available to them
in this example: the wake up service (R1), the room service (R2), the press digest (R3),
the multimedia system (R4) and the transport service (R5). For this example the weights
for the metric calculation are:
• predictability and relevancy have a weight of 1
• accessibility and offensiveness have a weight of 0.5
We assume that both users are in their rooms and that the wake up service has just
been activated by an alarm. The wake up service and the multimedia system both have
hearing requirements, but offer alternative means to use them. The first user has not
stated any content restriction. The results are shown in Table 1.
The second user does not have any disability, so every resource attains the maximum
score in accessibility. The press digest has a minimum age category of 7, so it receives
a score of 0 in offensiveness. The results are shown in Table 2.
Using Formula 1, the recommended resource for the first user will be the room
service (R2) in this scenario.
In the case of the second user the selected resource will be the multimedia system
(R4).
By adding these new aspects we aim to create more significant resource recommendations
that better meet the user's needs. Finally, we would like to include in the context
data information about the vagueness and uncertainty of the model. This will allow us
to model the context more realistically and will improve the overall precision of the
system.
References
1. Linden, G., Smith, B., York, J.: Amazon.com recommendations: Item-to-item collaborative
filtering. IEEE Internet Computing 7(1), 76–80 (2003)
2. Chen, P.L., et al.: A Linear Ensemble of Individual and Blended Models for Music Rating
Prediction. In: KDDCup 2011 Workshop (2011)
3. Pennock, D.M., Horvitz, E., Lawrence, S., Giles, C.L.: Collaborative filtering by personality
diagnosis: A hybrid memory- and model-based approach. In: Proceedings of the 16th
Conference on Uncertainty in Artificial Intelligence, pp. 473–480 (2000)
4. Shardanand, U., Maes, P.: Social Information Filtering: Algorithms for Automating ‘Word
of Mouth’ (1995)
5. Sarwar, B., Karypis, G., Konstan, J., Reidl, J.: Item-based collaborative filtering
recommendation algorithms. In: Proceedings of the 10th International Conference on World
Wide Web, pp. 285–295 (2001)
6. Greening, D.: Building consumer trust with accurate product recommendations. LikeMinds
White Paper LMWSWP-210-6966 (1997)
7. Aggarwal, C.C., Wolf, J.L., Wu, K.L., Yu, P.S.: Horting hatches an egg: A new
graph-theoretic approach to collaborative filtering. In: Proceedings of the Fifth ACM SIGKDD
International Conference on Knowledge Discovery and Data Mining, pp. 201–212 (1999)
8. Claypool, M., Gokhale, A., Miranda, T., Murnikov, P., Netes, D., Sartin, M.: Combining
content-based and collaborative filters in an online newspaper. In: Proceedings of ACM
SIGIR Workshop on Recommender Systems, p. 60 (1999)
9. Kim, S., Kwon, J.: Effective context-aware recommendation on the semantic web.
International Journal of Computer Science and Network Security 7(8), 154–159 (2007)
10. Geng, L., Hamilton, H.J.: Interestingness measures for data mining: A survey. ACM
Computing Surveys (CSUR) 38(3), 9 (2006)
11. Padmanabhan, B., Tuzhilin, A.: Small is beautiful: discovering the minimal set of
unexpected patterns. In: Proceedings of the Sixth ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, pp. 54–63 (2000)
12. Bastide, Y., Pasquier, N., Taouil, R., Stumme, G., Lakhal, L.: Mining Minimal
Non-redundant Association Rules Using Frequent Closed Itemsets. In: Palamidessi, C., Moniz
Pereira, L., Lloyd, J.W., Dahl, V., Furbach, U., Kerber, M., Lau, K.-K., Sagiv, Y.,
Stuckey, P.J. (eds.) CL 2000. LNCS (LNAI), vol. 1861, pp. 972–986. Springer, Heidelberg
(2000)
13. Tan, P.N., Kumar, V., Srivastava, J.: Selecting the right interestingness measure for
association patterns. In: Proceedings of the Eighth ACM SIGKDD International Conference on
Knowledge Discovery and Data Mining, pp. 32–41 (2002)
14. Sahar, S.: Interestingness via what is not interesting. In: Proceedings of the Fifth
ACM SIGKDD International Conference on Knowledge Discovery and Data Mining,
pp. 332–336 (1999)
15. Almeida, A., Orduña, P., Castillejo, E., López-de-Ipiña, D., Sacristán, M.: Imhotep: an
approach to user and device conscious mobile applications. Personal and Ubiquitous
Computing 15(4), 419–429 (2011)
Social Network Analysis Applied to Recommendation
Systems: Alleviating the Cold-User Problem
Abstract. Recommender systems have increased their impact on the Internet due
to the unmanageable amount of items that users can find on the Web. Consequently,
many algorithms have emerged to filter those items which best fit users'
tastes. Nevertheless, these systems suffer from the same shortcoming: the lack of
data about new users with which to recommend items based on their tastes. Social
relationships gathered from social networks and intelligent environments become a
challenging opportunity to retrieve data about users based on their relationships, and
social network analysis provides the techniques demanded to accomplish this objective.
In this paper we present a methodology which uses users' social network data
to generate first recommendations, alleviating the cold-user limitation. In addition,
we demonstrate that it is possible to reduce the cold-user problem by applying our
solution in a recommendation system environment.
1 Introduction
As the amount of information available on the Web increased [12], recommender systems
appeared as a tool to filter data for users based on their tastes. Taking advantage of
the opportunities the Web 2.0 provides, researchers started to work on algorithms
able to channel sets of items to users. These filtering algorithms came to be
known as recommender systems, taking the opinions of a community of users to
help individuals in that community to more effectively identify content of interest from
a potentially overwhelming set of choices [11]. Over the past years there has been
much progress in this area [1]. E-commerce companies, such as Amazon.com, use
every user's purchases, ratings and searches as inputs to their algorithms to generate
recommendations [8]. Others, such as YouTube, deal with explicit data (e.g. rating a video,
favoriting a video, subscribing to an uploader, etc.) and implicit data, generated
as a result of users watching and interacting with videos (e.g. the user started to watch
a video, the user watched a large portion of the video) [4,2]. Both solutions pursue
the same goal: to present to the user the items that best match their tastes. However,
these systems share the so-called cold-start problem. Cold-start affects users,
items, even whole systems, and concerns new entities entering a system for the first
time. The cold-user problem (the focus of our solution) appears when a new user has no
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 306–313, 2012.
© Springer-Verlag Berlin Heidelberg 2012
SNA: Extracting Cold-User Data 307
previous searches, ratings or purchases. In that case a recommender system will be unable
to find any interesting set of items; sometimes many ratings are needed before it is
able to provide a reasonable recommendation1. Therefore, we thought about finding
cold-user data beyond the boundaries of a recommendation system, and we realized that
interactions and relationships among users conceal much information to work with.
Fig. 1. A graph built from a user check-in where there are 4 more Foursquare users
Social networks allow users to interact with each other, establishing virtual relationships
with people and companies. These networks hold such an amount of user information
that new challenges arise in taking advantage of it. Besides, intelligent and social
environments (such as hotels, restaurants, etc.) provide the necessary infrastructure to
collect social data from users. This way, our proposal tackles the opportunity of exploiting
social data to alleviate the cold-user problem in a recommendation environment.
The remainder of this paper is structured as follows: first, in Section 2, we analyse
the current state of the art in recommendation systems and the best known techniques
to avoid the cold-user problem. Next, we present our methodology for collecting valu-
able data from social networks taking into account users’ relationships (Section 3). In
Section 4 we analyse the results obtained from our proposal. Finally, we summarize our
experiences and discuss some conclusions and future work (Section 5).
2 Related Work
Since the mid-1990s recommender systems have become an important research area
attracting the attention of e-commerce companies. Amazon [8], Netflix and Yahoo!
Music [3] are widespread examples of making recommendations to their users based
on their tastes and previous purchases. Despite the evolution of these systems, the
main problem is still out there: estimating the rating of an item which has not been seen
by any user before. This estimation is usually based on the rest of the items rated by the
user, or on the ratings given by others whose rating pattern is similar to the user's.
Although there are different kinds of recommendation systems (content-based,
collaborative filtering and hybrid techniques) [1], they all suffer from the same main
1
http://movielens.umn.edu
308 E. Castillejo, A. Almeida, and D. López-de-Ipiña
limitations: sparsity and scalability [12] and cold-start problems [10]. Moreover, some
authors have improved their algorithms by combining users' social data with collaborative
recommendation systems [6,5]. Although they do not tackle the cold-user problem, the
idea of using users' available data as an input represents a new starting point for these
systems (e.g. Foursquare adds information about our geolocation). There is also
ongoing research at Carnegie Mellon University's School of Computer Science2 on
how people really inhabit their cities based on Foursquare data: grouping check-ins
by physical proximity, they measure "social proximity" by how often different people
check in at similar places, and the resulting areas are labeled accordingly.
But social networks are more than users, relationships and data. Social network anal-
ysis (SNA) refers to methods and techniques used to analyse social networks, which are
social structures made up of individuals (called “nodes”) connected by different repre-
sentations of relationships (e.g. friendship, kinship, financial exchange, etc.). Once we
have empirical data on a social network, new questions arise: Which nodes are the most
central in the network? Which people are influenced by others? Which connections
are most crucial? These questions and their answers represent the basic domain of SNA
[9]. There are many metrics which measure different aspects in a social network taking
into account the nodes and their edges. In [9] Newman details some of the best known
metrics in SNA. As depicted in Section 3 we have chosen the eigenvector centrality
metric to face up to the cold-user problem. A variant of eigenvector centrality is used
by the Google search engine to rank Web pages [7], but it is based on the premise that the
system already has data from the user to work with.
3 Proposed Solution
This section details the system developed to generate generic recommendations for
the user based on the rest of the users who checked in at the same venue using
Foursquare. To get those users (the network nodes) we have used the Foursquare API3.
Once we have the nodes, we calculate which of them are the most important at the
current venue, and then we obtain the recommendations that best fit the user's tastes
using probabilistic estimates.
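The construction of the network from venue check-ins might look like the sketch below; the paper does not specify how edge weights are derived, so weighting an edge by the number of venues two users have both checked in at is an assumption, and the check-in data is invented.

```python
# Hypothetical check-in history per user at the current venue's network;
# the edge weight between two users counts their co-checked-in venues
# (an assumption: the paper does not state how edge weights are derived).
checkins = {
    "user0": {"venueA", "venueB", "venueC"},   # the current user
    "user1": {"venueA", "venueB"},
    "user2": {"venueA"},
}

users = sorted(checkins)                 # user0, user1, user2
n = len(users)
A = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i + 1, n):
        w = float(len(checkins[users[i]] & checkins[users[j]]))
        A[i][j] = A[j][i] = w            # symmetric weighted adjacency matrix

print(A[0][1], A[0][2])  # 2.0 1.0
```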
While degree centrality gives a simple count of the number of ties a node has,
eigenvector centrality acknowledges that not all connections are equal: because some
edges represent stronger connections than others, the edges can be weighted. In short,
connections to nodes which are themselves influential lend a node more influence
than connections to less influential nodes. Denoting the centrality of node i by x_i,
we can make x_i proportional to the average of the centralities of i's network
neighbours:
x_i = (1/λ) Σ_{j=1}^{n} A_ij x_j ,    (2)

where A is the adjacency matrix representing the ties between nodes i and j, and λ is
a constant. Writing x = (x_1, x_2, ...), this can be rewritten in matrix form as

λx = A · x,    (3)
where x is an eigenvector of the adjacency matrix with eigenvalue λ.
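The eigenvector centrality computation above can be sketched with power iteration, which converges to the eigenvector of the dominant eigenvalue; the small adjacency matrix below is invented for illustration (the paper itself uses the JAMA library on the device).

```python
def eigenvector_centrality(A, iterations=100):
    """Power iteration: repeatedly apply A and normalize, converging on the
    eigenvector associated with the dominant eigenvalue."""
    n = len(A)
    x = [1.0] * n
    for _ in range(iterations):
        x = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = sum(v * v for v in x) ** 0.5
        x = [v / norm for v in x]
    return x

# Small symmetric adjacency matrix, invented for illustration.
A = [[0, 1, 1, 0],
     [1, 0, 1, 1],
     [1, 1, 0, 0],
     [0, 1, 0, 0]]
x = eigenvector_centrality(A)
# Node 1 has the most (and best-connected) ties, so its centrality is highest.
print(max(range(len(x)), key=lambda i: x[i]))  # 1
```

A dense eigendecomposition (as JAMA provides) gives the same dominant eigenvector; power iteration merely avoids a linear-algebra dependency in this sketch.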
All these calculations are computed on the user's device. The following method uses
the JAMA6 library to obtain the eigenvectors of the given adjacency matrix. First the
corresponding eigenvalues are estimated; then we extract the values of the eigenvector
associated with the highest of the obtained eigenvalues, which corresponds to the most
important node of the grid. Applying this eigenvector calculation to the A matrix from
Section 3.1 we obtain that the highest eigenvalue is λ1 = 12.502, and the corresponding
eigenvector is
e1 = (0.569, 0.491, 0.401, 0.392, 0.349)^T ,    (4)
where the first value corresponds to the current user, or user 0 (it is also the highest
value), so we ignore it. The next highest value is the one we take into account as the
most important node of the grid; in this case, user (node) 1.
Once we have obtained the most important node of the grid for a given venue, we
are ready to start making recommendations to the current user. We have encapsulated
the Foursquare user object in a new "CompactUser" which also has a set of
recommendations assigned to it, each one composed of a series of items. Taking the
default categories of Amazon.com as an example, we have tested our solution using a
few controlled users who are friends on Foursquare and some randomly generated users,
in order to have a controlled scenario. Results are detailed in Section 4. Once the
recommendations of the most important users are obtained, we upload them to a web
server on Google AppEngine7 using a simple Python service. For each user we store all
possible recommendations (we manage nine main categories) and we update the estimate
and the probability of matching his tastes. To evaluate this, the developed application
6
http://math.nist.gov/javanumerics/jama/
7
https://appengine.google.com/
first asks about the user's tastes among the cited categories. This information is stored
in a SQLite database (this is just to evaluate the solution).
The service responds with a JSON object containing the recommendations and their
likelihood probabilities for the user. This JSON object is parsed on the device side in
order to generate the corresponding recommendations for the user.
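Parsing such a response on the device side might look like the sketch below; the JSON schema (field names, category labels) is an assumption, since the paper does not give the exact payload format.

```python
import json

# Assumed response shape: one probability per category, as described in the text.
response = '''{"recommendations": [
    {"category": "Books", "probability": 0.71},
    {"category": "Electronics & computers", "probability": 0.45},
    {"category": "Toys, kids & baby", "probability": 0.12}]}'''

recs = json.loads(response)["recommendations"]
recs.sort(key=lambda r: r["probability"], reverse=True)  # most likely first
print(recs[0]["category"])  # Books
```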
4 Results
The experiment has been carried out among 5 users checking in at controlled venues in
which we had already created mock users with controlled recommendations to avoid
sparsity. Our solution has been evaluated by presenting to our users the default
categories Amazon.com uses and a list with our category recommendations. Once our
users had compared both lists, they filled in a questionnaire to capture their satisfaction
level with the presented results. By default, and without any previous interaction,
Amazon.com recommends Kindle-related products, clothing trends, products being
viewed by other customers, the best prices on watches and laptops, and top-selling
books. These recommendations are not drawn from its default categories, which
demonstrates the existence of the cold-user problem.
Table 2. Results obtained from our system for one user performing 3 and 5 check-ins
Table 3. Comparison of probabilities obtained from the questionnaire and the probabilities cal-
culated with the proposed solution using 3 and 5 check-in results
values correspond to the same user performing 5 check-ins; this way the new values are
more refined and up to date.
Users also have to fill in a questionnaire rating the presented categories. This rating
includes mandatory and controlled answers, with values from 1 to 4. This way we can
compare the results with the obtained probability. Table 3 compares both probabilities
and calculates the approximation of each estimation. We have also denoted a deviation
error; the closer it is to 0.0, the more accurate our solution becomes. Some of its values
show that more check-ins are needed to refine the obtained probabilities. On the one
hand, for the 3 check-in results the worst values are for "Automotive & industrial",
"Grocery, health & beauty", "Toys, kids & baby" and "Sports & outdoors". This means
that if the user is interested in "Sports & outdoors" the system could not recommend
any item from this category, or even worse, it may recommend items from "Grocery,
health & beauty". On the other hand, the 5 check-in error column is more accurate and
its values are closer to 0.0. This test shows how more check-ins result in more refined
recommendations.
This paper explores the possibility of using relevant data from users' social networks
to alleviate the cold-user problem in a recommender system domain. The proposed
solution extracts the most valuable node from the graph generated by checking in at a
venue with an Android application using the Foursquare API. By obtaining the
recommendations of this node we estimate the probability of certain categories being
similar to the user's tastes.
In the near future it will be interesting to store the obtained matrices for each venue
and update them with every check-in. For now, matrices are not stored, so they are not
dynamic: a new matrix is built every time the user checks in at a venue, overwriting
any previous matrix.
Finally, it will be necessary to test the solution with a higher number of users,
increasing the range of their tastes and the offered items. We are limited by our
environment because of the small number of users and check-ins available (the sparsity
problem).
References
1. Adomavicius, G., Tuzhilin, A.: Toward the next generation of recommender systems: A
survey of the state-of-the-art and possible extensions. IEEE Transactions on Knowledge and
Data Engineering 17(6), 734–749 (2005)
2. Baluja, S., Seth, R., Sivakumar, D., Jing, Y., Yagnik, J., Kumar, S., Ravichandran, D., Aly,
M.: Video suggestion and discovery for youtube: taking random walks through the view
graph. In: Proc. of the 17th Intl. Conf. on World Wide Web, pp. 895–904. ACM (2008)
3. Chen, P., Tsai, C., Chen, Y., Chou, K., Li, C., Tsai, C., Wu, K., Chou, Y., Li, C., Lin, W.,
et al.: A linear ensemble of individual and blended models for music rating prediction. In:
KDDCup 2011 Workshop (2011)
4. Davidson, J., Liebald, B., Liu, J., Nandy, P., Van Vleet, T., Gargi, U., Gupta, S., He, Y.,
Lambert, M., Livingston, B., et al.: The youtube video recommendation system. In: Proc. of
the Fourth ACM Conf. on Recommender Systems, pp. 293–296. ACM (2010)
5. Kautz, H., Selman, B., Shah, M.: Referral web: combining social networks and collaborative
filtering. Communications of the ACM 40(3), 63–65 (1997)
6. Konstas, I., Stathopoulos, V., Jose, J.: On social networks and collaborative recommendation.
In: Proc. of the 32nd Intl. ACM SIGIR Conf. on Research and Development in Information
Retrieval, pp. 195–202. ACM (2009)
7. Langville, A., Meyer, C., Fernández, P.: Google's PageRank and beyond: The science of
search engine rankings. The Mathematical Intelligencer 30(1), 68–69 (2008)
8. Linden, G., Smith, B., York, J.: Amazon.com recommendations: Item-to-item collaborative
filtering. IEEE Internet Computing 7(1), 76–80 (2003)
9. Newman, M.: The mathematics of networks. The New Palgrave Encyclopedia of
Economics 2 (2008)
10. Nguyen, A., Denos, N., Berrut, C.: Improving new user recommendations with rule-based
induction on cold user data. In: Proc. of the 2007 ACM Conf. on Recommender Systems, pp.
121–128 (2007)
11. Resnick, P., Varian, H.R.: Recommender systems. Communications of the ACM 40(3), 56–
58 (1997)
12. Sarwar, B., Karypis, G., Konstan, J., Reidl, J.: Item-based collaborative filtering recommen-
dation algorithms. In: Proc. of the 10th Intl. Conf. on World Wide Web, pp. 285–295 (2001)
The Voice User Help, a Smart Vehicle Assistant
for the Elderly
1 Introduction
The rapid advancement of vehicular technologies in recent years has led to an
exponential increase of electronics in automobiles, bringing new vehicle control
functions, cutting-edge sensors, rigorous fuel efficiency, and optimized perfor-
mance in braking actions, lateral control, and navigation and routing systems, among
others. Along with the increased functionality, vehicles have become a place for in-
formation access, media consumption and personal entertainment [1]. In-Vehicle
Infotainment Systems now offer all kinds of information, and drivers and passen-
gers are also bringing their personal consumer electronics into the cockpit, turning cars
into highly interactive spaces. All these advances, however, come with a downside:
vehicles are becoming incredibly complex machines. With the average population age
increasing worldwide, predictions indicate that within 50 years one third of the popu-
lation in regions such as Japan, Europe, China and North America will be over 60 years
old [2]. It is therefore safe to assume that a great number of drivers will be elders in
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 314–321, 2012.
© Springer-Verlag Berlin Heidelberg 2012
the future. Many current in-vehicle technologies are specifically designed to aid
elder drivers. However, interactions with in-vehicle environments have become
increasingly complex, and drivers are susceptible to cognitive and perceptual informa-
tion overload [3]. The cause of such distractions relates to the fact that humans have a
limited amount of available cognitive resources, according to the Dual-Task Paradigm
[4]. Under driving conditions, mental resources are allocated to the primary driving task,
leaving little capacity for secondary actions. Furthermore, aging has been found to
have negative effects on dual-task performance [5], and elder drivers present declines
in information processing and driving performance [6].
Safety concerns have put In-Vehicle Infotainment Systems (IVIS) in the spotlight
[7], and studies have estimated that the use of IVISs contributed 25% to 30% of
crash risk [8]. However, the use of infotainment and gadgets in the vehicle does not
cease. The multiple resource principle states that multitasking of actions can take
place if they are allocated to different perception modalities [9], which has supported
the design of hands-free, eyes-free vehicular interfaces. Different studies have addressed
the effects of modality on dual-task performance in driving scenarios. At present,
auditory interfaces are the preferred media for in-vehicle warnings [10] and navigation
[11], and voice interfaces have been shown to produce the lowest levels of driver
distraction.
This paper presents the Voice User Help (VUH), a smart voice-operated system that
utilizes natural language understanding and emotionally adaptive interfaces to assist
drivers when problems occur on the road, with minimal effect on their driving
performance. Additionally, the system presents an opportunity for elder drivers to
reduce the learning curve of new in-vehicle technologies, and serves as a platform for
the next generation of user-centered intelligent vehicle assistants. The rest of the paper
describes the VUH architecture and presents results on user acceptance, usability and
driver performance, preliminary results on emotionally adaptive UIs, and future work.
3 Experimental Results
Evaluating Driver Acceptance
While some studies have shown that humans tend to communicate with computers
using short commands [13, 14], spoken queries tend to be longer than written ones.
The increased semantic content of natural spoken utterances improves the perfor-
mance of voice recognition [15] and decreases the cognitive load necessary to
formulate the question. With that theory in mind, a technology acceptance study was
conducted during the early stages of the development of the Voice User Help. The
focus of this study was to determine how people would react to this innovation and
whether their biases towards the use of the VUH would be positive, following the
Technology Acceptance Model (TAM) for evaluating attitudes towards new
information technologies [16]. A description of the VUH was presented in a 5-minute
video, and an on-line survey was distributed through university mailing lists.
The sample size of the study was 101 participants, 55% women, distributed across
an age range of 18 to 64 years (mean = 29). The majority (66%) revealed that they had
previously used a user manual. The technology acceptance results revealed that
the general attitude towards the Voice User Help technology in the car is very posi-
tive: 76% of the participants in the study were favorably disposed towards the use of a
speech-driven manual. For 72.37% of the participants the VUH became their pre-
ferred consultation medium in the vehicle, and more than half of them would like it to
be a standard feature of their automobiles. When asked explicitly about the risks of
using the technology, 74% of the sample did not sense any danger in using the VUH.
The threats perceived by the other participants could be grouped into driver distrac-
tion risks, and frustration due to voice recognition problems or long waiting times in
information retrieval.
Driver Distraction
In order to evaluate the benefits of the Voice User Help, another study was designed
in which usability and driver distraction of the VUH were measured and compared to
other manual formats, a printed manual and a multimedia in-vehicle manual. A within
subjects, repeated measures experiment took place in a BMW X3 vehicle, trans-
formed into a driving simulator running the Lane Change Task simulation and analy-
sis software (LCT) [17]. Each track displayed a 3 km flat three-lane roadway. Drivers
were asked to respond to 18 road signs that prompted them to change lanes while
maintaining a constant speed of 60 km/h. While driving, participants were explained
realistic scenarios in which they had to search the manual to find information to solve
the problem. After completion of 3 tasks, participants were asked to complete a
survey using the System Usability Score [18] and to evaluate their mental workload
using the NASA–TLX questionnaire [19].
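The System Usability Scale mentioned above aggregates ten five-point items into a single 0-100 score. As a minimal sketch, the standard SUS scoring rule can be written as follows; the example responses are hypothetical, not data from the study:

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    responses on a 1-5 Likert scale.

    Odd-numbered items (positive statements) contribute (response - 1);
    even-numbered items (negative statements) contribute (5 - response).
    The summed contributions are scaled by 2.5 onto the 0-100 range.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("responses must be on a 1-5 scale")
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical participant: fully positive on odd items,
# fully disagreeing with the negative even items -> maximum score.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # -> 100.0
```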
Fig. 2. Average Subjective Cognitive Load reported on the NASA-TLX questionnaire for the BA, IBA and VUH conditions
A within-subject baseline was computed in the training phase, and each secondary
task was compared against that individual baseline to obtain driver performance metrics.
All performance metrics were extracted by running the LCT data files through a data
mining script in NI DIAdem [20]. Metrics collected following this procedure included
mean lateral deviation, SD of the lateral deviation, reaction time, and wrong lane
changes. The mean lateral deviation measures the driver's deviation from a theoretical
perfect driving path, in which s/he maintains the middle of the lane until prompted to
deviate for the lane-changing maneuvers. The SD of the lateral deviation measures the
"steadiness" of the driving path. Results displayed in Figure 3 demonstrated that lateral
deviation using the VUH was not significantly higher than in the baseline task, as
opposed to consulting the paper manual and manipulating the in-vehicle integrated
multimedia manual, both of which showed highly significant increases in mean lateral
deviation (p<0.01). The in-vehicle multimedia manual was also the condition under
which the user had the least control of the vehicle, and reported a significant increment
(p=0.04) compared with the single-drive baseline.
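The two lateral-deviation metrics can be computed directly from the logged driving path. The sketch below illustrates their definitions; it is not the actual NI DIAdem mining script, and the sample trace is invented:

```python
import math

def lateral_deviation_metrics(driven, reference):
    """Compute the mean lateral deviation and its standard deviation
    (the 'steadiness' of the driving path) from per-sample lateral
    positions, in metres, of the driven path and of the theoretical
    perfect path."""
    deviations = [abs(d - r) for d, r in zip(driven, reference)]
    mean_dev = sum(deviations) / len(deviations)
    variance = sum((x - mean_dev) ** 2 for x in deviations) / len(deviations)
    return mean_dev, math.sqrt(variance)

# Hypothetical 5-sample trace: the driver drifts away from the ideal path.
print(lateral_deviation_metrics([0.0, 0.2, 0.5, 0.3, 0.0],
                                [0.0, 0.0, 0.0, 0.0, 0.0]))
```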
Fig. 3. Mean lateral deviation and SD of the lateral deviation on the Lane Change Task across modalities
Fig. 4. Mean number of Wrong Lane Changes across conditions in the LCT
Emotion Recognition
The results from the driver simulation study suggest that the VUH is an optimized
system for use under driving conditions. But in many cases, users might be predis-
posed to emotional distress when consulting the Voice User Help, since they are trying
to find information to solve an issue with the vehicle. To mitigate the effects of these
situations, the VUH includes an Emotion Recognition Engine that analyses voice utter-
ances to detect the current emotional state based on prosodic cues. The purpose of the
emotional taxonomy for the VUH is to identify the user's mental state while interacting
with the application. Thus, only emotions that provided information for an adaptive
interface that optimized the interaction for driver distractions were included in the
taxonomy. The actual distribution in a valence-arousal spatial representation is dis-
played in Figure 5. Due to the subjectivity of different emotion theories and the uncer-
tainty of the emotion recognizer, crosses indicate regions where the emotions are lo-
cated, rather than exact positions. This two-dimensional vector can be used as a dialogue
control parameter to adapt the dialogue flow to the emotional states detected.
A first emotion recognizer for the Voice User Help was built using Hidden Markov
Models (HMMs) for classification of emotions based on Mel-Frequency Cepstral
Coefficients (MFCCs). The models were trained on the recorded utterances of five
female users. However, the poor results, around 10%, made us shift our attention to
Support Vector Machines (SVMs), which reported 56.20% correctly classified
utterances. Prosodic variations between users indicated that the emotion recognition
would perform better when trained on a single user. A preliminary study was set up in
which one speaker recorded several queries to the VUH in each of the emotional states.
The prosodic features were extended to include values of pitch, intensity and energy
across the utterance. The Weka data mining software [21] was used to evaluate the
most effective classification algorithm for the defined emotional speech vector, using
labeled corpora of different sizes in a 10-fold cross-validation set-up. The results,
displayed in Table 1, revealed that the Logistic Model Trees algorithm (LMT)
performed best for the defined emotion vector. LMT achieved 73% successful
classifications for small corpora, where the user only had to record one repetition of
each emotional state in 10 different sentences. The algorithm scaled up to 89%
successful classifications when the number of samples in the training corpus grew.
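The Weka evaluation described above is, in essence, k-fold cross-validation of a classifier over labelled prosodic feature vectors. The sketch below illustrates only the cross-validation protocol, with a simple nearest-centroid rule standing in for the classifier; the feature values are invented for illustration, and the actual study used the LMT algorithm in Weka:

```python
def k_fold_accuracy(samples, labels, k=10):
    """Estimate classification accuracy by k-fold cross-validation.

    samples: feature vectors (e.g. pitch, intensity and energy values);
    labels: the emotional state of each utterance.
    A nearest-centroid rule stands in for the real classifier."""
    n = len(samples)
    correct = 0
    for fold in range(k):
        test_idx = set(range(fold, n, k))  # every k-th sample held out
        train = [(s, l) for idx, (s, l) in enumerate(zip(samples, labels))
                 if idx not in test_idx]
        # Compute one centroid per class on the training split.
        by_class = {}
        for s, l in train:
            by_class.setdefault(l, []).append(s)
        centroids = {l: [sum(col) / len(vecs) for col in zip(*vecs)]
                     for l, vecs in by_class.items()}
        # Classify each held-out sample by its nearest class centroid.
        for idx in test_idx:
            pred = min(centroids, key=lambda l: sum(
                (a - b) ** 2 for a, b in zip(samples[idx], centroids[l])))
            correct += pred == labels[idx]
    return correct / n

# Illustrative two-class data; real features would come from utterances.
demo = [(0.0, 0.0), (0.0, 1.0), (10.0, 10.0), (10.0, 11.0)]
print(k_fold_accuracy(demo, ["calm", "calm", "angry", "angry"], k=2))  # -> 1.0
```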
320 I. Alvarez, M.K. Lopez-de-Ipiña, and J.E. Gilbert
These preliminary results lead us to believe that personalized training of the emo-
tion recognizer for the VUH is a feasible option to achieve recognition rates around
90% with minimal user collaboration. The training of the system could be accomplished
during the set-up phase, by asking the user to act out the different emotions, and would
be improved through constant usage of the application. A human operator would review
the automatically labeled utterances in the back-end whenever the system indicated a
failed interaction, in order to improve the performance of the adaptive interface.
References
1. Schmidt, A.: Automotive User Interfaces: Human Computer Interaction in the Car. In: CHI
2010, Vancouver, Canada (2010)
2. Lutz, W., Sanderson, W., Scherbov, S.: The coming acceleration of global population
ageing. Nature 451(2), 716–719 (2008)
3. Scherer, K.R.: Vocal markers of emotion: Comparing induction and acting elicitation.
Computer Speech & Language (2011)
4. Goselin, P.A., Gagné, J.: Use of a Dual-Task Paradigm for Measure Listening Effort.
Canadian Journal of Speech-Language Pathology and Audiology 3(2), 159–177 (2010)
5. Holtzer, R., Burright, R., Donovick, P.: The sensitivity of dual-task performance to cogni-
tive satus in aging. Journal of the International Neuropsychological Society 10, 230–238
(2004)
6. Schieber, F., Benedetto, J.: Age differences in the functional field-of-view while driving:
A preliminary simulator-based study. In: Human Factors and Ergonomics Society, pp.
176–180. HFES (1998)
7. Pitterman, J., Pitterman, A., Minker, W.: Handling Emotions in Human-Computer Dialo-
gues. Springer, Ulm (2010)
8. Ziefle, M.: Future technology in the car. Visual and auditory interfaces on in-vehicle tech-
nologies for older adults, pp. 62–69. Springer (2008)
9. Eyben, F.: Emotion on the Road; Necessity, Acceptance, and Feasibility of Affective
Computing in the Car. In: Advances in Human-Computer Interaction (2010)
10. Scherer, K.R.: Psychological models of emotion. In: Broad, J.C. (ed.) The Neuropsycholo-
gy of Emotion, pp. 137–162. Oxford University Press, New York (2000)
11. Jeon, M., et al.: Enhanced auditory menu cues improve dual task performance and are pre-
ferred with in-vehicle technologies. In: Proceedings of the First International Conference
on Automotive User Interfaces and Interactive Vehicular Applications, pp. 91–98. ACM,
Essen (2009)
12. Alvarez, I.: Voice Interfaced User Help. In: Proceedings of the Second International Con-
ference on Automotive User Interfaces and Interactive Vehicular Applications, Pittsburgh,
PA, USA (2010)
13. Chang, J.: Usability evaluation of a Volkswagen Group in-vehicle speech system. In:
Proceedings of the First International Conference on Automotive User Interfaces and
Interactive Vehicular Applications. ACM, Essen (2009)
14. Turunen, M., et al.: An architecture and applications for speech-based accessibility
systems. IBM Syst. J. 44, 485–504 (2005)
15. Crestani, F., Du, H.: Written versus spoken queries: a qualitative and quantitative
comparative analysis. Journal of the American Society for Information Science and
Technology 57(7), 881–890 (2006)
16. Venkatesh, V., et al.: User acceptance of information technology: toward a unified view.
MIS Quarterly 27(3), 425–478 (2003)
17. Bruyas, M.P., et al.: Consistency and sensitivity of lane change test according to driving
simulator characteristics. IET Intelligent Transport Systems 2(4), 306–314 (2008)
18. Lewis, J.R., Sauro, J.: The Factor Structure of the System Usability Scale. In: Kurosu, M.
(ed.) HCD 2009. LNCS, vol. 5619, pp. 94–103. Springer, Heidelberg (2009)
19. Hart, S.G., Stavenland, L.E.: Development of NASA-TLX (Task Load Index): Results of
empirical and theoretical research. Human Mental Workload, 139–183 (1988)
20. National Instruments: NI DIAdem (2012)
21. Hall, M.: The WEKA Data Mining Software. SIGKDD Explorations 11(1) (2009)
A Knowledge-Driven Approach to Composite Activity
Recognition in Smart Environments
1 Introduction
Activity recognition is the process of inferring activities from a series of observations
collected from a user's situated environment [1]. It plays a key role in the provision of
activity assistance in smart environments such as smart homes. The majority of existing
work on knowledge-driven activity recognition [2-4] has focused on activity recognition
for single-user, simple-activity scenarios. Recognizing composite activities still remains
a challenge. We have characterized activities as actions, simple activities and composite
activities in a previous study [1]. To help understand this piece of work, we briefly intro-
duce the concepts here. An action refers to an atomic (indivisible) activity,
e.g. flushing a toilet. A simple activity is defined as an ordered sequence of actions, e.g.
the actions executed while having a bath. A composite activity refers to a collection of
two or more simple activities occurring within a given time interval, e.g. drinking juice
while watching television. Further, composite activities can be grouped into sequential
or multi-tasked activities. A sequential composite activity occurs when two or more
activities occur in consecutive time intervals. Conversely, a multi-tasked composite
activity occurs when the user performs two or more activities simultaneously, i.e. inter-
leaved or concurrent activities. This paper addresses the problem of recognizing
composite activities in a single-user environment.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 322–329, 2012.
© Springer-Verlag Berlin Heidelberg 2012
Currently there are three main approaches to composite activity recognition,
namely data-driven, knowledge-driven, and hybrid activity recognition. Data-driven
activity recognition creates user activity models from existing datasets through ma-
chine-learning techniques, and then uses the learnt models to infer activities [5, 6].
Alternatively, knowledge-driven activity recognition uses knowledge engineering
techniques to specify activity models by encoding commonsense and domain know-
ledge [7]; artificial-intelligence-based reasoning is then used to infer activities. Hybrid
activity recognition approaches combine knowledge engineering and machine
learning to formulate activity models [8, 9].
The use of ontologies in activity recognition [2] has attracted increasing attention,
but existing research has mainly focused on simple activity modelling and recogni-
tion. We have made efforts to develop a knowledge-driven approach to composite
activity recognition. Nevertheless, our earlier work concentrated on composite activi-
ty modelling, namely combining ontological and temporal knowledge modelling for-
malisms to create composite activity models [1]. This paper presents our follow-on
work on knowledge-driven composite activity recognition, which is built upon our
previous research results on composite activity modelling. This paper makes two main
contributions. Firstly, we propose a knowledge-driven approach to composite activity
recognition and introduce a process and related methods for activity recognition.
Secondly, we develop a prototype system and associated recognition algorithms that
support the application and deployment of the proposed approach. Experiments have
been carried out, and initial experimental results have shown that the average
recognition accuracy for simple and composite activities is 100% and 88.26%,
respectively.
The remainder of the paper is organized as follows. Section 2 discusses related
work. Section 3 describes the composite activity recognition approach. We present the
system prototype and the experimental results in Section 4. Finally, Section 5 con-
cludes the paper.
2 Related Work
Much of the research related to composite activity recognition has been based on
data-driven [5, 6, 10] and hybrid [8, 9] approaches to activity recognition. Patterson et
al. [10] investigated the use of hidden Markov models (HMM) to recognize inter-
leaved and concurrent activities from object use. Modayil et al. [5] explored the use of
interleaved HMMs for recognition of multi-tasked activities in mobile platforms. Both
works highlighted the use of intra-and inter-activity dynamics, e.g. temporal relation-
ships among activities, in activity modelling. Gu et al. [6] investigated the use of an
emerging-patterns-based approach, a data mining technique, for interleaved and con-
current activity recognition in a sensor-based platform. Helaoui et al. [8] investigated
the use of Markov logic networks (MLN), a statistical relational approach able to
encode domain knowledge, to recognize interleaved and concurrent activities. Also,
Steinhauer et al. [9] investigated the use of HMMs enhanced with qualitative temporal
relationships based on Allen's logic [11]. Data-driven and hybrid approaches are well
supported since they are based on well-developed learning and probabilistic tech-
niques. Nevertheless, large amounts of initial training data are needed to learn the
324 G. Okeyo et al.
activity models, leading to the "cold start" problem. In addition, since users perform
activities in a variety of ways, activity models for one user may not be applicable to
another, resulting in model reusability and applicability problems.
The knowledge-driven approach specifies activity models based on domain heuris-
tics and prior knowledge, thus solving the “cold start” problem. Nevertheless, there
is little research in composite activity modelling and recognition. Work presented by
Saguna et al. [7] combines ontological and spatio-temporal modelling and reasoning
to recognize interleaved and concurrent activities. Our work adopts a systematic
method for encoding and reasoning with temporal knowledge based on 4D-fluents [12],
and therefore provides a clear mechanism for seamlessly integrating and exploiting
qualitative temporal knowledge in activity recognition.
actions are grouped into one or more activity descriptions corresponding to the simple
activities that are defined in the activity of daily living (ADL) ontology [2, 14]. An ac-
tivity description refers to a collection of primitive actions that together, partially or
fully, describe a simple activity. As more sensor data is obtained, new activity descrip-
tions are created or the existing ones are updated. The modified approach then compares
each activity description with the activity models in the ADL ontology using semantic
reasoning, and reports the activity model that is closest to the activity description as the
ongoing simple activity. To perform composite activity recognition, the results of sim-
ple activity recognition are aggregated using inference rules, following the mechanism
described in the next section. By separating activity recognition into interdependent
tasks, it is possible to use different techniques for each task. In this work, instance
retrieval, subsumption and equivalence reasoning are used for action and simple
activity recognition. Instance retrieval determines which objects are instances of a given
ontology concept. Subsumption reasoning finds all concepts that are sub-concepts of a
given concept. Equivalence reasoning returns all concepts that are semantically equiva-
lent to a given concept. For composite activity recognition, rule-based inference tech-
niques are exploited.
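As a rough illustration of the matching step, comparing an activity description against the ADL activity models can be sketched with a set-overlap measure standing in for the ontological subsumption and equivalence reasoning; the action and model names below are hypothetical:

```python
def closest_activity_model(description, models):
    """Return the name of the activity model whose action set best
    matches the observed activity description (a set of actions).
    Jaccard overlap stands in for semantic reasoning over the
    ADL ontology."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    return max(models, key=lambda name: jaccard(description, models[name]))

# Hypothetical ADL models and a partial observation of ongoing actions.
models = {
    "MakeTea": {"boil_water", "get_cup", "add_teabag", "pour_water"},
    "WashHands": {"turn_on_tap", "use_soap", "rinse_hands"},
}
print(closest_activity_model({"boil_water", "get_cup"}, models))  # -> MakeTea
```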
the ontology. To perform this analysis, the approach uses temporal inference rules.
The rules can infer qualitative temporal relationships, derive corresponding composite
activities from the dynamic activity models, and then check for corresponding com-
posite activities in the static activity models. Due to space limitations, the interested
reader is referred to [1] for details of rule specification and usage.
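The full rule specifications are given in [1]; as a minimal illustration, the distinction between sequential and multi-tasked composite activities reduces to a temporal test over the intervals of the recognized simple activities. The interval values below are hypothetical:

```python
def classify_composite(interval_a, interval_b):
    """Classify two simple-activity time intervals (start, end) as a
    'sequential' composite (consecutive, non-overlapping intervals) or
    a 'multi-tasked' composite (overlapping intervals, i.e. interleaved
    or concurrent), following the definitions in the introduction."""
    (s1, e1), (s2, e2) = sorted([interval_a, interval_b])
    if e1 <= s2:  # the earlier activity ends before the later one starts
        return "sequential"
    return "multi-tasked"

print(classify_composite((0, 10), (12, 20)))  # -> sequential
print(classify_composite((0, 10), (5, 20)))   # -> multi-tasked
```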
Table 1. Summary of composite activities in the synthetic data set

Concurrent and interleaved          Instances   Sequential                          Instances
MakePasta and MakeTea (a)                3      MakePasta then HaveBath (g)              6
MakePasta and WatchTelevision (b)        5      MakeTea then WashHands (h)               4
MakePasta and HaveBath (c)               8      WashHands then MakeTea (i)               6
WatchTelevision and MakeTea (d)          5      MakeTea then WatchTelevision (j)         3
MakePasta and MakeChocolate (e)          1      WatchTelevision then MakeTea (k)         5
MakePasta and MakeCoffee (f)             1      HaveBath then MakePasta (l)              1
Total                                   23      Total                                   25
The overall accuracy obtained for simple activities is 100%, since all 104 simple
activities were successfully recognized. This level of accuracy is attributed to the
creation and use of activity descriptions as described in Section 3. Figure 2 shows the
precision, recall, and accuracy values for composite activities. An overall accuracy
value of 88.26% was obtained.
Fig. 2. Summary of precision, recall, and accuracy results for composite activities (a-l)
composite activities and simple activities, respectively. To the best of our knowledge,
this is the first purely knowledge-driven approach that can infer both simple activities
and composite activities. Future work involves conducting further experiments to
assess the performance requirements of the algorithms.
References
1. Okeyo, G., Chen, L., Wang, H., Sterritt, R.: A hybrid ontological and temporal approach
for composite activity modelling. In: 11th IEEE Int. Conf. on Ubiquitous Computing and
Communications, pp. 1763–1770. IEEE Press, New York (2012)
2. Chen, L., Nugent, C., Wang, H.: A Knowledge-Driven Approach to Activity Recognition
in Smart Homes. IEEE Trans. Kno. & Data Eng. 24(6), 961–974 (2011)
3. Storf, H., Becker, M., Riedl, M.: Rule-based activity recognition framework: Challenges,
technique and learning. In: 3rd Int. Conf. on Pervasive Computing Technologies for
Healthcare, pp. 1–7. IEEE Press, New York (2009)
4. Chen, L., Nugent, C., Mulvenna, M., Finlay, D., Hong, X., Poland, M.: Using Event
Calculus for Behaviour Reasoning and Assistance in a Smart Home. In: Helal, S., Mitra,
S., Wong, J., Chang, C., Mokhtari, M. (eds.) ICOST 2008. LNCS, vol. 5120, pp. 81–89.
Springer, Heidelberg (2008)
5. Modayil, J., Bai, T., Kautz, H.: Improving the recognition of interleaved activities. In: 10th
Int. Conf. on Ubiquitous Computing, pp. 40–43. ACM, New York (2008)
6. Gu, T., Wang, L., Wu, Z., Tao, X., Lu, J.: A Pattern Mining Approach to Sensor-Based
Human Activity Recognition. IEEE Trans. Kno. & Data Eng. 23, 1359–1372 (2011)
7. Saguna, Zaslavsky, A.B., Chakraborty, D.: Recognizing concurrent and interleaved activi-
ties in social interactions. In: DASC, pp. 230–237. IEEE Press, New York (2011)
8. Helaoui, R., Niepert, M., Stuckenschmidt, H.: Recognizing interleaved and concurrent
activities using qualitative and quantitative temporal relationships. J. Perv. & Mob.
Comp. 7(6), 660–670 (2011)
9. Steinhauer, H.J., Chua, S., Guesgen, H.W., Marsland, S.: Utilising temporal information in
behaviour recognition. In: 2010 AAAI Spring Symp., pp. 54–59 (2010)
10. Patterson, D.J., Fox, D., Kautz, H., Philipose, M.: Fine-grained activity recognition by
aggregating abstract object usage. In: 9th IEEE Int. Symp. on Wearable Computers,
pp. 44–51. IEEE Press, New York (2005)
11. Allen, J.F.: Maintaining knowledge about temporal intervals. Commun. ACM 26(11),
832–843 (1983)
12. Welty, C., Fikes, R.: A reusable ontology for fluents in OWL. In: 4th Int. Conf. on Formal
Ontology in Information Systems, pp. 226–236. IOS Press (2006)
13. Horrocks, I., Patel-Schneider, P., Bechhofer, S., Tsarkov, D.: OWL rules: A proposal and
prototype implementation. J. Web Semantics 3(1), 23–40 (2005)
14. Chen, L., Nugent, C.: Ontology-based activity recognition in intelligent pervasive
environments. Int. Journal of Web Information Systems 5, 410–430 (2009)
15. Grau, B.C., Horrocks, I., Motik, B., Parsia, B., Patel-Schneider, P., Sattler, U.: OWL 2:
The next step for OWL. J. Web Semantics 6, 309–322 (2008)
Ontology Based Resource Allocation (OBRA)
for Adaptive Intelligent Grid Environment
Japhynth Jacob¹, Elijah Blessing Rajsingh², and Isaac Balasingh Jesudasan³

¹ St. Mother Theresa Engineering College, TamilNadu, India
  jafijacob@yahoo.co.in
² Karunya University, TamilNadu, India
  elijahblessing@karunya.edu
³ Dr. G.U. Pope College of Engineering, TamilNadu, India
  isaacbalasinghj@yahoo.co.in
1 Introduction
The resources can be distributed in different geographical regions. Allocating jobs to
distributed resources is one of the key issues in the distributed environment. In
traditional resource allocation algorithms, allocation is based on static attributes, the
dynamicity of the resource pool is not considered. This leads to incomplete jobs, job
failures, user dissatisfaction, infinite waiting of jobs, and load imbalance in grid sites.
Hence selecting appropriate resource for a job plays a vital role in resource allocation.
Therefore a novel Ontology Based Resource Allocation model with intelligent
parameters is proposed. A resource-federation is used for decentralized resource
management. Resource-Federation is a large scale resource sharing system that
consists of a coordinated federation of distributed datas. This gives access to a large
pool of resources to all users, which provides users with the most appropriate resource
for a job. The detailed service and application ontology of the proposed model are
presented.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 330–333, 2012.
© Springer-Verlag Berlin Heidelberg 2012
2 Intelligent Parameters
The job and resource descriptions are specified semantically in domain-specific
ontologies. The user provides the expected values of the intelligent parameters
(exogenous parameters). The values of the intelligent parameters of each resource of
the service provider are determined by the knowledge unit of the Ontology Based
Resource Allocation (OBRA) model. The intelligent parameters, with their notations,
are given in Table 1. Intelligent parameters are defined as parameters whose values
depend on the resource's previous behavior. The definitions of the intelligent
parameters, viz. success rate, hit rate, failure rate and recovery time, are as follows.
Intelligent parameters: success rate, hit rate, failure rate, and recovery time.
3 OBRA Model
The OBRA model matches the ontology of a job with the ontologies of resources by
calculating a relative match value. Let j represent the job ontology with intelligent
parameters p_k, where k varies from 1 to m and m is the number of parameters, and
let i represent a resource ontology, where i varies from 1 to n and n is the number of
resources. Let there be m parameters in the job ontology and n resources, with
different parameter values in the resource ontology, available for matchmaking. If the
relative match value of assigning parameter k to resource i is x_ik, the assignment
method is shown in Fig. 1.

The relative match value is identified for every k-th parameter of the i-th resource in
the ontology, and the match matrix is formed. The total match value (TMV) of
assigning static parameters to resources is calculated using the equation

    TMV = Σ_{i=1..n} Σ_{k=1..m} x_ik · a_ik                    (4)

where i = 1, 2, …, n and k = 1, 2, …, m, n is the total number of resources, m is the
total number of parameters in the ontology, and a_ik = 1 if parameter k is assigned to
resource i, and a_ik = 0 otherwise.
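The total-match-value computation can be sketched as a double sum over a match matrix and a binary assignment matrix; the matrix names and the example values below are assumptions for illustration:

```python
def total_match_value(match, assign):
    """Total match value of an assignment: the sum over resources i and
    parameters k of match[i][k] * assign[i][k], where match[i][k] is the
    relative match value of parameter k on resource i and assign[i][k]
    is 1 if that parameter is assigned to that resource, else 0."""
    return sum(x * a
               for row_x, row_a in zip(match, assign)
               for x, a in zip(row_x, row_a))

match = [[0.9, 0.2],   # hypothetical relative match values
         [0.4, 0.8]]
assign = [[1, 0],      # parameter 1 assigned to resource 1
          [0, 1]]      # parameter 2 assigned to resource 2
print(round(total_match_value(match, assign), 2))  # -> 1.7
```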
4 Conclusion
In this paper an ontology-based resource allocation model is proposed to select the
optimal resource for a job. The inclusion of intelligent parameters in OBRA predicts
the behavior of the resources and allocates jobs according to user requests. This
improves the precision of matching. By calculating the correlation between the
resource and job ontologies, the best resource can be allocated to each job. The
intelligent parameters also in turn reduce the search time for a job, irrespective of the
number of resources.
References
1. Imamagic, E., Radic, B., Dobrenic, D.: An Approach to Grid Scheduling By Using
Condor-G Matchmaking Mechanism. Journal of Computing and Information Technology -
CIT 14(4), 329–336 (2006), doi:10.2498/cit.2006.04.09
2. Bai, X., Yu, H., Ji, Y., Marinescu, D.C.: Resource Matching and a Matchmaking Service
for an Intelligent Grid. World Academy of Science, Engineering and Technology 1 (2005)
3. Liu, Y., He, H.: Grid Resource Discovery Approach Based on Matchmaking Engine
Overlay. In: Third International Conference on Semantics, Knowledge and Grid
4. Vijay, Naik, C.L.K., Liu, C., Wagner, J.: On-line ResourceMatching for Heterogeneous
Grid Environments. In: IEEE International Symposium on Cluster Computing and the Grid
(2005)
5. Li, F., Qi, D.: Research on Grid Resource Allocation Algorithm Based on Fuzzy
Clustering. In: Second International Conference on Future Generation Communication and
Networking (2008)
6. Clematis, A., Corana, A., D’Agostino, D., Galizia, A., Quarati, A.: Job_resource
matchmaking on Grid through two-level benchmarking. Future Generation Computer
Systems (June 2010)
7. Shu, G., Rana, O.F., Avis, N.J., Chen, D.: Ontology-based semantic matchmaking
approach. Advances in Engineering Software 38, 59–67 (2007)
8. Han, W., Shi, X., Chen, R.: Process-context aware matchmaking for web service
composition. Journal of Network and Computer Applications 31, 559–576 (2008)
9. Wang, C.-M., Chen, H.-M., Hsu, C.C., Lee, J.: Dynamic resource selection heuristics for a
non-reserved bidding- based Grid Environment. Future Generation Computer Systems 26,
183–197 (2010)
10. Grigori, D., Corrales, J.C., Bouzeghoub, M.: Behavioral matchmaking for service retrieval:
Application to conversation protocols. Information Systems 33, 681–698 (2008)
Easily Deployable Streetlight Intelligent Control System
Based on Wireless Communication
Pilar Elejoste1, Asier Perallos1, Aitor Chertudi1, Ignacio Angulo1, Asier Moreno1,
Leire Azpilicueta2, José Javier Astráin3, Francisco Falcone2, and Jesús Villadangos3
1 Deusto Institute of Technology (DeustoTech), University of Deusto, 48007 Bilbao, Spain
{pilar.elejoste,perallos,achertudi,ignacio.angulo,
asier.moreno}@deusto.es
2 Electrical and Electronic Engineering Department, Universidad Pública de Navarra,
31006 Pamplona, Spain
{leyre.azpilicueta,francisco.falcone}@unavarra.es
3 Mathematics and Computer Engineering Department, Universidad Pública de Navarra,
31006 Pamplona, Spain
{josej.astrain,jesusv}@unavarra.es
1 Introduction
Street lighting in Spain accounts for 10% of the country’s total energy consumption
in lighting (116 kWh per year per inhabitant), one of the highest such consumption
profiles in Europe (3,630 GWh/year for the whole country). With more than
4,800,000 luminaires, a third of them based on outdated and inefficient
technologies, lighting has the greatest impact on the energy consumption of a
municipality (around 54% of the total energy consumption of municipal facilities
and 61% of their electricity, according to some sector studies).
Although there are several projects focused on improving energy efficiency in
street lighting, these systems often require the installation of new lights [1],
hindering their deployment in existing facilities. The overcrowding of multiple
light sections in the same electric cabinet, coupled with significant voltage drops
on the power lines, causes serious interference to Power Line Communications
(PLC) based systems [2] [3] [4], sometimes preventing their proper functioning
over poorly sized installations. Moreover, the evaluated systems [5] [6] [7] based on wireless
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 334–337, 2012.
© Springer-Verlag Berlin Heidelberg 2012
communications provide general solutions that do not take into account the
particularities of the specific scenario where the system must be deployed.
The first hierarchical level consists of end nodes installed in each lamp, in charge of
its control and regulation [8] [9]; they include 802.15.4 transceivers to communicate
with the other lamps of the section, creating a mesh-type network. The second
level is made up of remote concentrators (embedded micro servers) located in the
electrical panels that power the light section, which control all the lamps in the
facilities. The next hierarchical level allows some of the end nodes, equipped with
a second transceiver, to communicate directly with the cabinet via wireless
communication (868 MHz), acting as a bridge and increasing the coverage of the
mesh network when needed. Each remote concentrator connects with the top level
of the system, the central server, through the Internet using a 3G/GPRS link.
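The four-level hierarchy just described can be sketched as message forwarding up the chain; all class and method names below are illustrative assumptions, not the system's actual implementation:

```python
# Minimal sketch of the streetlight network hierarchy: lamp end nodes
# (802.15.4 mesh) -> optional 868 MHz bridge nodes -> remote
# concentrators in the electrical cabinets -> central server (3G/GPRS).
# All names are illustrative assumptions.

class CentralServer:
    def __init__(self):
        self.reports = []

    def receive(self, report):            # arrives over 3G/GPRS
        self.reports.append(report)

class Concentrator:
    """Embedded micro server in an electrical cabinet."""
    def __init__(self, server):
        self.server = server

    def receive(self, report):
        self.server.receive(report)       # forward upstream

class LampNode:
    """End node controlling one luminaire; mesh transceiver, plus an
    optional second 868 MHz transceiver acting as a bridge."""
    def __init__(self, lamp_id, concentrator, is_bridge=False):
        self.lamp_id = lamp_id
        self.concentrator = concentrator
        self.is_bridge = is_bridge

    def report_status(self, dim_level):
        report = {"lamp": self.lamp_id, "dim": dim_level}
        # A bridge node reaches the cabinet directly at 868 MHz;
        # ordinary nodes would hop through the mesh first (omitted).
        self.concentrator.receive(report)

server = CentralServer()
cabinet = Concentrator(server)
node = LampNode("L1", cabinet, is_bridge=True)
node.report_status(dim_level=60)
```

The point of the sketch is the strict layering: a lamp never talks to the central server directly, only through its cabinet concentrator.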
In addition, we propose an innovative fuzzy system in charge of information fusion
to support decision making, which cooperates with an ontology loaded in the
sensors. The open architecture used for the design creates a robust, secure and
flexible networking environment able to meet future service demands.
The selected street section, with an estimated length of 1 km, lies on Areta Street, a
stretch of two-way road connecting the district of Areta with the centre of
Llodio/Laudio (Spain). The facility has 9 streetlights evenly distributed
(manufacturer SIMON Lighting, model ALYA LED). The distance from the electrical
panel to the nearest luminaire is about 300 m, and its location (at the rear of a
residential area) entails adverse conditions for establishing the communication link
and preserving its integrity (private access points, distance, and interference). It is
an ideal test scenario: the zone carries little traffic at night, the town experiences
extreme weather conditions, and the benefits of the intelligent system can be
quickly verified.
The specific characteristics of the electromagnetic propagation environment must be
taken into account in order to overcome physical barriers and other issues affecting
radio communication. Assessing the electromagnetic spectrum is important to
model the overall performance of the system under analysis in terms of coverage
and capacity, and leads to an optimal configuration of sensors supporting a
competitive, flexible and scalable solution. The schematic scenario is shown in
Figure 2. Simulations have been carried out with the aid of a 3D ray-launching
algorithm [10] implemented within our research team in the Matlab programming
environment. Several transmitters can be placed within a scenario, in which power
is modeled as a finite number of rays launched within a solid angle. Figure 3 shows
simulation results obtained for the received power using a transmitter antenna from
Libelium (802.15.4 PRO 5 dBi, Digimesh protocol) placed at a height of 3.6 m. To
illustrate the relevance of multipath in this propagation channel, the power delay
profile at the second streetlight for the same transmitter antenna height is depicted
in Figure 3.
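The paper's simulations use a full 3D ray-launching model; as a much cruder, hedged stand-in, the direct line-of-sight ray alone can be estimated with the Friis free-space equation. All parameter values below (transmit power, spacing) are illustrative, not the measured ones:

```python
# Friis free-space estimate for one direct ray. This deliberately
# ignores reflections and diffraction, which the 3D ray-launching
# model does account for; it only bounds the best-case link.
import math

def friis_received_power_dbm(pt_dbm, gt_dbi, gr_dbi, freq_hz, dist_m):
    """Received power (dBm) over a free-space line-of-sight path."""
    wavelength = 3e8 / freq_hz
    fspl_db = 20 * math.log10(4 * math.pi * dist_m / wavelength)
    return pt_dbm + gt_dbi + gr_dbi - fspl_db

# Illustrative values: 802.15.4 at 2.4 GHz, 0 dBm transmit power,
# 5 dBi antennas, streetlights spaced roughly 100 m apart.
p_rx = friis_received_power_dbm(pt_dbm=0, gt_dbi=5, gr_dbi=5,
                                freq_hz=2.4e9, dist_m=100)
# p_rx is about -70 dBm, comfortably above typical 802.15.4
# receiver sensitivity (around -95 dBm) in the absence of obstacles.
```

In the real deployment, buildings and vegetation make the ray-launching simulation necessary; this estimate only shows why 100 m hops are plausible at all.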
The main challenge has been to design a system smart enough to work autonomously
according to the environmental conditions, designed to ease deployment in existing
facilities while minimizing the investment cost. This has been achieved by using
wireless technologies and through an in-depth analysis of the deployment scenario.
The combination of advanced wireless communications, sensing and metering
capabilities in a single infrastructure will contribute to the deployment of future
smart cities. Long-term future work could focus on taking advantage of this
infrastructure for the development of intelligent services.
Acknowledgments. This work has been partially funded by the Basque Government
under GAITEK funding program (Grant IG-2011/00264). Special thanks to
URBABIL S.L. for their support.
References
1. Black Sea Regional Energy Centre, Procurement evaluation report of project Intelligent
Road and Street Lighting in Europe (E–Street), Grant Agreement: EIE/05/157/SI2.419662
2. Atıcı, C., Özçelebi, T., Lukkien, J.J.: Exploring User-Centered Intelligent Road Lighting
Design: A Road Map and Future Research Directions. IEEE Transactions on Consumer
Electronics 57(2) (May 2011)
3. Liu, J., Feng, C., Suo, X., Yun, A.: Street Lamp Control System Based on Power Carrier
Wave. In: International Symposium on Intelligent Information Technology Application
Workshops (IITAW), Shanghai, China, December 21-22, pp. 184–188 (2008)
4. Liu, Y., Chen, X.: Design of Traffic Lights Controlling System Based on PLC and Config-
uration Technology. In: International Conference on Multimedia Information Networking
and Security, Hubei, China, November 18-20, pp. 561–563 (2009)
5. Nam, K.Y., Jeong, S.H., Choi, S.B., Ryoo, H.S., Kim, D.K.: Development of Zigbee based
Street Light Control System. In: Power Systems Conference and Exposition, PSCE 2006.
IEEE PES, October 29-November 1, 2006, Atlanta, GA, pp. 2236 – 2240 (2006)
6. Jing, C., Shu, D.: Design of Streetlight Monitoring and Control System Based on Wireless
Sensor Networks. In: Second IEEE Conference on Industrial Electronics and Applications,
Harbin, China, May 23-25, pp. 57–62 (2007)
7. Daza, D., Carvajal, R., Mišic, J., Guerrero, A.: Street Lighting Network formation mecha-
nism based on IEEE 802.15.4. In: 8th International Conference on Mobile Ad-Hoc and
Sensor Systems, Valencia, Spain, October 17-22, pp. 164–166 (2011)
8. Spanish Ministry of Industry, Tourism and Trade. The Royal Decree 1890/2008 and its
Complementary Technical Instructions EA-01 a EA-07. BOE núm. 279 (November 14,
2008)
9. Spanish Ministry of Industry, Tourism and Trade. Saving and energy efficiency strategy in
Spain 2004-2012 (E4). Action Plan 2008-2012 (July 2007)
10. Saez de Adana, F., et al.: Propagation model based on ray tracing for the design of
personal communication systems in indoor environments. IEEE Transactions on Vehicular
Technology 49, 2105–2112 (2000)
Supporting Collaboration for Smarter City Planning
1 Introduction
The global population is growing fast and mega-cities are appearing all over the
world. These cities face, day by day, the complex task of offering various services
without interruption to millions of people [1]. However, in order to manage the often
limited resources they count on to accomplish their duties, it is necessary for them to
interact with each other and exchange information [2]. This is especially true when
they have to manage resources such as properties, installations, vehicles and any
other resources whose locations may vary while on duty. In such cases they face the
problem of managing geo-located resources, which has to be solved by various
actors.
In the past, some systems supporting decision-making processes have been
developed for wind farm siting [3], water resource management [4] and urban design
[5]. However, they were developed to support a specific entity using data generated
exclusively inside one organization, without the possibility of using important
information generated by other entities, nor of sharing their own data with others. In
order to design a proper platform to support the complex process of decision making
in a “mega-cities” scenario, it is necessary to consider that the entities interacting
with each other when offering their services come from various areas, each one
having its particular perspective of the problem, and hence they might consider
different solutions to the problem [6].
Recently, a law was issued in Chile regulating the installation of antennas for
mobile phone communication. This law restricts the location and height of an
antenna according to the context of the surroundings where it will be installed. It
also requires the company to carry out some urban development works, which
should be approved by the people living in the area.
The process of installing an antenna tower involves several government agencies
and the company wishing to install it: the company applies for the required permits
to the local public works agency, the telecommunications regulatory authority and
the Ministry of Housing and Urban Development, and the company’s representative
also notifies the affected neighbors. From the moment the citizens and their
organizations are notified, they have 30 days to analyze the project, which includes
a presentation of the characteristics of the antenna tower and the proposed urban
improvement works. The discussion process is carried out by the neighborhood
residents’ committee, the people living in the area affected by the improvement
works, and the local public works agency. The result of the discussion may be a
complete acceptance of the proposed project, a proposal modifying the work plan
for urban improvements, or a rejection of the project due to non-compliance with
the technical requirements. Besides the technical requirements, there are others
regarding the surrounding environment within a certain distance from the tower:
a) neighbors living within a circle of radius equal to two times the height of the
tower must be notified; b) there must be no nurseries, kindergartens, schools,
hospitals or any other health-care facilities within a circle of radius four times the
antenna’s height; c) there must be only one antenna within a circle of radius
100 meters; d) the affected neighbors may require urban improvement works within
an area defined by a circle of radius 250 meters. As we see, the entities checking
the project have to answer two questions: Q1) Does the installation comply with the
distance requirements? Q2) Which urban development works are going to be
required? Since these two questions must be answered within 30 days, a tool
supporting the analysis of the situation, as well as a collaborative and rational
decision process, becomes highly necessary.
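The four distance rules a) to d) can be checked mechanically once positions are known. The sketch below assumes flat 2D coordinates in meters; the function names are illustrative, not part of the described tool:

```python
# Sketch of the law's distance rules for an antenna tower. Coordinates
# are (x, y) in meters on a flat plane; names are illustrative.
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def check_antenna_site(tower_pos, tower_height, neighbors,
                       sensitive_facilities, other_antennas):
    """Apply rules a)-d): who must be notified (a), sensitive
    facilities within 4x height (b), other antennas within 100 m (c),
    and the 250 m urban-improvement zone radius (d)."""
    notify = [n for n in neighbors
              if dist(n, tower_pos) <= 2 * tower_height]      # rule a)
    sensitive = [f for f in sensitive_facilities
                 if dist(f, tower_pos) <= 4 * tower_height]   # rule b)
    too_close = [a for a in other_antennas
                 if dist(a, tower_pos) <= 100]                # rule c)
    return {"notify": notify,
            "sensitive_violations": sensitive,
            "antenna_violations": too_close,
            "improvement_radius_m": 250}                      # rule d)

# Illustrative 30 m tower: a school 100 m away violates rule b)
# (100 <= 4 * 30), and a neighbor 50 m away must be notified.
result = check_antenna_site((0, 0), 30,
                            neighbors=[(50, 0), (200, 0)],
                            sensitive_facilities=[(100, 0)],
                            other_antennas=[(500, 0)])
```

The workspace view described in the next section essentially renders these same circles on a map instead of evaluating them numerically.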
2 Platform Description
The collaborative process can be divided into activities, referred to below as A1 to
A6, each directly related to answering one of the proposed questions.
In order to manage these activities efficiently we used a software platform for
developing geo-collaborative applications supporting decision making, over which
the tool presented in this work was implemented [7]. This platform allows the
management of multiple projects, categorizing them into the typical discussion
steps: brainstorming, planning and execution. Moreover, it allows the management
of workspaces associated with geographical areas, including space-time metrics and
ichnographic information; tools for collecting information from various sources;
and autonomous agents that perform simulations and analyses over the physical
area represented by a map. This platform has been extended to include an
autonomous agent specialized in representing transmission antennas. When one of
these agents is placed at a certain location on a map (by drag and drop), the tool
creates two workspaces, which will be used by the public works agency and the
affected neighbors to work collaboratively.
The generated workspaces are accessible via a URL; each one helps users to answer
one of the questions presented in the previous section. The first one is related to the
question of whether the requirements about distances have been fulfilled (Q1). In
order to help answer this question, the workspace displays a view with the map of
the area being analyzed (where the antenna was located by drag and drop), showing
in a simple way all the circles corresponding to the restrictions and urban issues
imposed by the law, considering the position and height of the tower (see Figure 1).
Fig. 1. Red circles mark sensitive zones such as schools and health-care facilities; the green
circle is the urban improvement zone; the black circle is the zone of affected neighbors; the
blue circle is the zone where only one antenna can be placed
As can be seen in Figure 1, this first view supports the performance of activities A1
and A2 by providing a clear picture of the situation. It also displays a link to the
second view, which supports the revision of the proposals for urban improvement
works, and provides a link to the project proposal (A3). The second workspace is
oriented to answering the second question (Q2), which refers to the revision of the
urban improvement works. This view supports a participatory discussion of urban
improvements (A4, A5 and A6); any user can add a title, description, image and an
estimated cost. These proposals can be voted on and commented using social
network integration (Facebook and Twitter).
Fig. 2. Second question view. Left side: urban improvement voting system; right side: adding
an urban improvement proposal by clicking on its location.
3 Conclusion
This work aims to contribute to the development of smarter cities by allowing their
inhabitants to participate actively in the decisions about their development. The
need for this tool was triggered by a new law issued in Santiago de Chile which
allows citizens to participate in the decision-making process about granting
permission to erect an antenna tower in their neighborhood. Since the time given by
this law for discussing the proposal presented by the company is rather short, it is
necessary to have a tool which facilitates this process. We developed such a tool on
top of an existing platform supporting the construction of systems for geo-referenced
decision making. The tool eases and speeds up this process by presenting the
relevant information in such a way that efficient and effective decisions are easier to
make.
References
[1] Juan, Y.K., Wang, L., Wang, J., Leckie, J., Li, K.M.: A decision-support system for smarter
city planning and management. IBM Journal of Research and Development 55(1.2), 3 (2011)
[2] Huestis, E.M., Snowdon, J.L.: Complexity of legacy city resource management and value mod-
eling of interagency response. IBM Journal of Research and Development 55(1.2), 1:1–1:12
[3] Simão, A., Densham, P., Haklay, M.: Web-based GIS for Collaborative Planning and
Public Participation: An Application to the Strategic Planning of Wind Farm Sites. Journal
of Environmental Management 90(6), 2027–2040 (2009)
[4] Nyerges, T., Jankowski, P., Tuthill, D., Ramsey, K.: Collaborative Water Resource
Decision Support: Results of a Field Experiment. Annals of the Association of American
Geographers 96(4), 699–725 (2006)
[5] Ligtenberg, A., de Vries, B., Vreenegoor, R.C.P., Bulens, J.D.: SimLandScape, a sketching
tool for collaborative spatial planning. Urban Design International 16, 7–18 (2011)
[6] Armstrong, M.P., Densham, P.: Cartographic support for collaborative spatial decision-
making. AUTOCARTO-CONFERENCE, pp. 49–58 (1995)
[7] Frez, J., Baloian, N., Zurita, G.: Software Platform to build geo-collaborative Systems support-
ing design and planning. In: Proceedings of the CSCWD Conference, Wuhan, China (2012)
Ubiquitous Data Management in Public Transport
1 Introduction
Fig. 1. General view of the system, showing the data distribution structure
In the transport context, illustrative scenarios where the proposed massive data
management model could be used include, for example, guidance systems for
travellers of public transport, or operations control systems executed in each vehicle
of a public transport fleet. Both examples are characterized by requiring a huge
quantity of data, because they need data that represent the transport network, for
instance, all bus stops and bus stations including their geographical location data.
Besides, they require data that represent the different routes covered by the vehicles
and the planning of operations and timetables at different points throughout the
transport network. The aforementioned examples require catalogues and different
data schemas and, due to the mobility of the systems involved (travellers’ mobile
devices for guidance systems and on-board systems in the vehicles for the
operations control systems), spontaneous connections and disconnections may
occur, making data inconsistency possible.
3 System Description
This section describes the main aspects of the design and implementation of the
system from the point of view of data management. The architecture of the system is
based on three paradigms of computation: ubiquitous computing, agents oriented and
services oriented. Bearing in mind the importance of the mobile communications
in ubiquitous systems, we will focus on how the applications and the infrastructure
handle the data and how the communications affect this management.
3.1 Agents
In this system, each computing device can execute one or several pervasive applica-
tions, conceptually each pair formed by a device and the ubiquitous application is
named agent. Each agent interacts autonomously and spontaneously with the
environment. The environments are represented by a shared data space that can be
used by the agent to obtain or to produce data. This shared data space is structured in
different data subsets named logical contexts; each logical context is associated to a
functional area of the ubiquitous system: traveller information, operation control,
production, etc.
The information services are provided by agents named information provider
agents; these agents, mobile and non-mobile agents executing on the public
transport infrastructure, produce the data required by the consumer agents, these
data belonging to one or several transport areas or contexts. The data are stored in
distributed repositories located in different elements of the public transport
infrastructure (see Figure 1). Specifically, there is a central data repository that
contains all the data required by the different information services offered in the
transport network; a local repository in each vehicle containing all the data required
by the service offered by the vehicle; and, if a service is accessible from users’
mobile devices, local repositories in the mobile terminals for each information
service accessible from the terminal. All data required by the information services
are based on a common conceptual model, and the declaration of information
service availability is made
by provider agents using a registration service. Due to the heterogeneous nature of
the ubiquitous environment, this registration service is also used to declare the basic
specifications needed to access each service. Basically, the message used by the
registration service includes the network address of the device where the provider is
executing (a), the service identifier (s), the identifier of the data map (m), and the
expiry time of the registration message (t):
R = (a, s, m, t) (1)
For example, in the case of a traveller information service, the providers of this
service are mobile agents running on each of the buses and non-mobile agents
running at relevant points of the transport network, for example, stations. In the
vehicles, the provider agents inform about the route of the vehicle, based on
dynamic data stored in a local repository placed in the on-board system; access to
this type of data is achieved using the field m of the message R. The non-mobile
agents of this service provide static information about the transport network, and
new data versions, to the consumer agents; the data required for this static
information are stored in local and remote repositories. A general view of the
system is shown in Fig. 1.
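A minimal sketch of a registration service built around the message R = (a, s, m, t) of Equation (1), including expiry-based pruning and consumer-side lookup; the class and field names are assumptions for illustration, not the system's actual interfaces:

```python
# Sketch of the registration service: providers register R = (a, s, m, t)
# tuples (address, service id, data-map id, expiry time); consumers
# discover live providers of a service. Names are illustrative.
import time

class RegistrationService:
    def __init__(self):
        self._registry = {}

    def register(self, address, service_id, data_map_id, ttl_seconds,
                 now=None):
        """Store/refresh a register message; it expires after ttl_seconds."""
        now = time.time() if now is None else now
        self._registry[(address, service_id)] = (
            address, service_id, data_map_id, now + ttl_seconds)

    def find_providers(self, service_id, now=None):
        """Consumer-side discovery: all non-expired providers of a service."""
        now = time.time() if now is None else now
        return [r for r in self._registry.values()
                if r[1] == service_id and r[3] > now]

reg = RegistrationService()
reg.register("10.0.0.5", "traveller-info", "route-12-map",
             ttl_seconds=60, now=1000.0)
reg.register("10.0.0.9", "traveller-info", "station-map",
             ttl_seconds=60, now=1000.0)
providers = reg.find_providers("traveller-info", now=1030.0)  # both live
expired = reg.find_providers("traveller-info", now=2000.0)    # none left
```

The expiry time t is what lets the registry tolerate spontaneous disconnections: a bus that drives out of range simply stops refreshing its entry.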
The service consumer agents are programs running on the users’ mobile devices.
The information consumed by these agents is based on data of the different contexts
involved, produced on demand by the providers. In the case of a traveller guidance
system, a consumer agent of the traveller information service running on the
traveller’s mobile phone must discover a proper provider agent of this service. This
task is achieved by analysing the register messages of provider agents, specifically
using the fields s and m, together with the travel specification introduced by the
traveller. The provider then supplies data concerning the route used by the traveller:
route number and description, current stop, destination and route stops. Then, if the
traveller decides to take the bus, most of the required data are already cached on his
device.
Regarding the use of different catalogues and data schemas, heterogeneity arises
because several transport companies can operate in the system, each of them using
different catalogues and data schemas. In the field of public passenger transport
there are standard specifications of conceptual data models for the transport
network, for example, the Transmodel specification [2]. Using these specifications,
the system defines a common conceptual data model that includes an ontology and a
set of entities and relationships.
4 Results
The data management model explained in this paper has been applied in a real case,
specifically the interurban public transport of Gran Canaria (Canary Islands), with a
fleet of 300 buses that transports 30,000,000 passengers per year. To illustrate its
performance, some practical results related to the characteristic aspects of
ubiquitous data management are presented, namely: autonomy, distribution, and
erroneous data transactions due to the lack of guaranteed reconnection. These
results have been obtained applying the data management model to the production
activity of the buses. All the data related to this activity produce over one thousand
data transactions daily between the mobile data repositories of the vehicles and the
common data repository.
Production data transactions are generated autonomously between the data
repository installed on the buses and the central repository. The system is capable
of carrying out autonomously one hundred thousand data transactions without
restricting the movement of the vehicles.
The erroneous data transaction rate is defined as the percentage of erroneous data
transactions over the total number of data transactions; errors are due to the
autonomy of movement of the agents, which causes random connections and
disconnections. This result shows how the autonomous and unpredictable
movements of the mobile systems affect the integrity of the data. The rate of
erroneous data transactions is less than 8 percent.
Finally, regarding the update time, the mobile systems are not permanently
connected to the system hosting the global data repository. The connection is
spontaneous and is made in an autonomous and automatic way when the agents in
the buses detect a synchronisation point, using IEEE 802.11 infrastructure. The
results obtained show that, when a modification is made in the global data
repository, the time required to update the entire mobile data repository located in
the vehicles is less than one day for seventy-eight percent of the vehicle fleet, one
day for eighteen percent of the fleet, and two or more days for four percent of the
fleet.
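The version-based update of a vehicle's mobile repository at a synchronisation point can be sketched as follows; the class names and version scheme are illustrative assumptions, not the deployed system's design:

```python
# Sketch: a bus's local repository updates from the global repository
# only when an IEEE 802.11 synchronisation point is detected. The
# single monotonically increasing version number is an assumption.

class GlobalRepository:
    def __init__(self):
        self.version = 0
        self.data = {}

    def update(self, key, value):
        self.data[key] = value
        self.version += 1

class VehicleRepository:
    def __init__(self, global_repo):
        self.global_repo = global_repo
        self.version = 0
        self.data = {}

    def on_sync_point_detected(self):
        """Spontaneous, autonomous sync over 802.11: pull only when
        the global repository has moved past our version."""
        if self.version < self.global_repo.version:
            self.data = dict(self.global_repo.data)
            self.version = self.global_repo.version

central = GlobalRepository()
bus = VehicleRepository(central)
central.update("route-12-timetable", "v2 schedule")
bus.on_sync_point_detected()   # bus pulls the new timetable
```

A bus that passes no synchronisation point for a day simply stays one version behind, which matches the observed update-time distribution across the fleet.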
5 Related Work
The problem of data management in mobile environments has been widely studied.
Dunham [3] describes databases in a mobile computing environment as
6 Conclusions
References
1. Perich, F., et al.: On Data Management in Pervasive Computing Environments. IEEE
Transaction on Knowledge and Data Engineering 16(5), 621–663 (2004)
2. CEN TC278: Reference Data Model For Public Transport, Comité Européen de Normali-
sation (2005)
3. Dunham, M., Helal, A.: Mobile Computing and Databases: Anything New?
SIGMOD Record 5(4), 5–9 (1995)
4. Joshi, A.: On Proxy Agents. Mobility and Web Access. Mobile Networks and Applica-
tions 5(4), 233–241 (2000)
5. Bobineau, C., et al.: Scaling Down Database Techniques for the Smart-card. In: Proc. 26th
International Conference on Very Large Database (2000)
6. Imielinski, T., et al.: Data on Air: Organization and Access. IEEE Transaction on
Knowledge and Data Engineering 9(3), 352–372 (1997)
7. Acharya, S., et al.: Broadcast Disks: Data Management for Asymmetric Communication
Environments. In: Mobile Computing. The International Series in Engineering and
Computer Science, vol. 353, pp. 331–361. Springer, US (2007)
8. Lee, D.L., et al.: Data Management in Location-Dependent Information Services. IEEE
Pervasive Computing 1(3), 65–72 (2002)
9. Cohen, N.H., et al.: Composing Pervasive Data Using iQL. In: Proc. Fourth IEEE Work-
shop on Mobile Computing Systems and Applications (2002)
10. Cohen, N.H., et al.: Challenges in Flexible Aggregation of Pervasive Data, IBM Research
Report. RC 21942 (98646), IBM Research Division, pp. 1–13 (2001)
11. Lamsfus, C., Martin, D., Alzua, A., López-De-Ipiña, D.: Towards Context-Aware
Push/Filter Information Dissemination. In: Proc. of the 5th International Symposium of
Ubiquitous Computing and Ambient Intelligence (UCAMI 2011), Riviera Maya, Mexico
(2012)
Context Model for Ubiquitous Information Services
of Public Transport
University of Las Palmas de Gran Canaria, Institute for Cybernetic Science and Technology,
Campus Universitario de Tafira, 35017 Las Palmas, Spain
{rgarcia,gpadron,aquesada,falayon,rperez}@dis.ulpgc.es
1 Introduction
All the agents involved in public transport (authorities, users and operators) agree
on the convenience of providing quality services in this activity. This quality of
service implies improving safety, accessibility, and environmental and economic
efficiency, for example, through more efficient use of existing transport
infrastructures, enhancing efficiency and safety as well as reducing energy
consumption and accidents. Information and communication technologies can play
an important role in this challenge by introducing new attractive services for the
users. In modern public transport information systems, interoperability is a key
aspect that can be achieved by applying the ubiquitous paradigm to the design of
these systems; specifically, context awareness and conceptual context modelling are
suitable tools for this goal. It is in this context that this contribution is presented; it
consists of the description of a context model for developing ubiquitous information
services. The main characteristics of this model are: it is complete, because it covers
all the areas of the public transport activity, and it is interoperable, because it can be
used by different companies, in different transport modes (air, maritime and land)
and with different technologies.
The description of the context-model made in this paper is structured as follows.
The justification of this work is presented in section 2. The third section is dedicated
3 Related Work
The UK Department for Transport [1] explains the benefits and costs of Intelligent
Transport Systems (ITS). This body describes how ITS can be used to solve
traditional problems associated with people’s mobility, such as congestion,
pollution, safety and demand management. For example, advances in ITS offer
transport authorities possibilities for monitoring the current status of the transport
network,
predicting future mobility demand, and applying electronic toll collection to reduce
delays at toll booths. In the particular context of public transport, Tyrinopoulos [2]
studied the relevance of the interoperability of public transport information systems,
proposing a complete conceptual model for them; this conceptual model, however,
is conceived for traditional information systems. On the side of frameworks and
architectures for developing interoperable ITS solutions, the iTransIT framework [3]
provides an object-based spatial programming environment for capturing, managing
and storing transport information in a scalable and dynamic manner, but its context
model lacks a methodology to represent the semantics of the context information.
Currently, in frameworks and architectures for the development of ubiquitous
systems, context awareness is decoupled from the sensor infrastructures. The
GAIGA project [4] is an example of a framework that follows this line,
incorporating an ontology for reasoning about context information in environments
with a limited number of users, applications and sensors. For large-scale pervasive
systems, the Nexus platform [5] is an example that integrates local context models
from different providers into an object-based federated model. In the specific field
of context modelling for information systems where spatial information plays a
main role, Becker [6] proposes a combined strategy based on spatial context
modelling and the use of ontologies. Lee [7] adopts this combined strategy for
pervasive transportation services, describing a primary context model and an
associated ontology, and incorporating the management and communication benefits
of traditional context modelling.
4 Model Description
Our context model is designed to provide interoperable ubiquitous information services
for public transport. The information services are therefore available regardless of the
transport mode (land, air, and maritime), the location in the transport network (stations,
buses, aircraft, trains, etc.), the user devices (mobile phones, smartphones, tablet PCs,
laptops, etc.), and the technological infrastructure (mobile and non-mobile computing
and communication elements). In order to provide interoperable services at any time
and in any place, the technological elements of a ubiquitous information system are
deployed in different places of the transport network. Basically, these elements are
computing devices (mobile and non-mobile computers), devices for manual and
automatic payment (ticketing consoles, magnetic card readers, contactless card readers,
etc.), sensors (traveller counters, open-door sensors, video cameras, etc.), location
systems (GPS, tachometric systems, etc.), and communication infrastructure (mobile
and non-mobile communication technologies). Figure 1 shows a general view of the
system.
Fig. 1. Overview of an intermodal public transport system providing ubiquitous information services
• The actor domain. All concepts related to the actors of the intermodal public
transport network belong to this domain. The main actors are travellers, transport
operators, and transport authorities, and the main actor roles are infrastructure or
service provider and infrastructure or service consumer. Specifications about the
travellers (for example, the kind of traveller) and the transport operators (for
instance, the type of operator, urban or inter-city) belong to this domain.
• The infrastructure domain. Concepts related to the basic resources for the execution
of the ubiquitous applications (internal operator computing processes, information
service providers, and client applications) belong to this domain. These resources
are computing, communication, location, and sensors. This is the most complex
domain of the architecture, because the concepts and functionalities provided in
this domain must guarantee the interoperability of all the ubiquitous information
services. Physically, this domain is deployed in the transport network elements;
the transport authority is the actor responsible for this domain and the transport
operators are its beneficiaries.
• The service provider domain. Concepts related to the different ubiquitous
information services (for example, payment systems, traveller assistants, operations
control, etc.) belong to this domain. The actors responsible for this domain are the
transport authorities and transport operators, and the beneficiary actors can be
operator staff, transport authority staff, and travellers. Specifications about
technological and functional aspects belong to this domain.
• The information consumer domain. All concepts related to the accessibility,
usability, and reliability of the ubiquitous client applications belong to this
domain. Therefore, this is a high-level abstraction domain. The actors responsible
for this domain are the transport authorities and transport operators, and the
beneficiary actors can be operator staff, transport authority staff, and travellers.
This model follows the line proposed by Hervás [8], who describes a context model
composed of four related ontologies: users, devices, environment, and services. This
ontology is global for the public information system; therefore, it covers the four
conceptual domains above. There is a hierarchical relationship between these
subdomains based on the concept of "belonging": the actor domain is at the first
hierarchy level, the infrastructure and service domains are at the second and third
levels respectively, and the information consumer domain is at the last level. The
specific concepts defined in each subdomain are also hierarchically organized. In the
actor domain, the root concept is the "authority actor"; the "operator actors" are
defined at the next level, and the "staff actors", "traveller actors", and "public actors"
at the level below. In the case of the infrastructure domain, the main infrastructure
elements of the public transport network (stations, stops, maintenance service centres,
vehicles, etc.) belong to the first hierarchical level; the conceptual criterion for
belonging to this level is "to be a container of the technological infrastructure required
to provide ubiquitous information services". For the service provider domain, the first
level is formed by the main public transport activities defined by Transmodel [9], a
European standard data model specification for public transport. These activities are:
tactical planning, information management, personnel disposition, multimodal
operation, fare collection, operations control, and passenger information. Each
ubiquitous information service is provided by one of these main areas depending, in
the first instance, on the technological infrastructure and, in the second instance, on
the transport operator. Finally, in the information consumer domain, the first level is
formed by the different client applications, the main technological infrastructure
required to run the client applications is defined at the second level, the transport
information requirements configure the next level and, finally, the functionalities
required from the point of view of accessibility, usability, and reliability of the client
applications are located at the last hierarchical level of this subdomain.
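The four-level "belonging" hierarchy described above can be sketched as a nested
structure. The following Python snippet is only an illustration: the domain and concept
names are taken from the text, but the data structure itself is our assumption, not part
of the authors' implementation.

```python
# Illustrative sketch of the four conceptual domains and their hierarchy
# levels; the nested-dict representation is an assumption for illustration.
ONTOLOGY = {
    "actor": {                      # level 1: root concept "authority actor"
        "authority actor": {
            "operator actor": ["staff actor", "traveller actor", "public actor"],
        },
    },
    "infrastructure": {             # level 2: containers of technological infrastructure
        "elements": ["station", "stop", "maintenance service centre", "vehicle"],
    },
    "service provider": {           # level 3: Transmodel main activities
        "activities": [
            "tactical planning", "information management", "personnel disposition",
            "multimodal operation", "fare collection", "operations control",
            "passenger information",
        ],
    },
    "information consumer": {},     # level 4: client applications and requirements
}

def belongs(domain: str) -> int:
    """Return the hierarchy level (1-4) of a conceptual domain."""
    order = ["actor", "infrastructure", "service provider", "information consumer"]
    return order.index(domain) + 1
```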
The concepts that belong to this ontology are based on the Transmodel standard. A
general view of the ontology is shown in Figure 2. With the objective of guaranteeing
interoperability between transport modes and transport operators, this specification
defines all the entities and relationships that model a public transport network,
conceptualizing, for example, the infrastructure of the transport network, the different
activities of the transport operators, the planning of operations, the incidents, etc.
transport operators) and, finally, the third level is the Data Interoperability Level,
formed by components whose purpose is to reconcile the representations of the data
entities provided by transport operators and authorities. Figure 3 shows an overview
of this data organisation.
The data model that represents the system ontology plays an important role in the
system's interoperability. Each operator represents the different conceptual domains
described in Section 4.1 based on the Transmodel standard. The interoperability
database is used to reconcile the representations defined by the different operators.
This database is structured in three levels. At the first level, and based on Transmodel,
the entities that participate in each interoperable information service are defined; this
definition is composed of an entity identifier and a description. The entity fields are
described at the second level. This representation uses three components: the attribute
identifier, its name, and the data format that represents the attribute. Finally, at the
third level the entities are represented through the attributes described at the previous
level. This representation is organized in two subsets of attributes: the first subset is
formed by the attributes that all actors (transport operators or authorities) must use in
the representation of the entity, and the second subset is formed by optional attributes
that an actor may use. The structure of a representation register of an entity contains
an attribute identifier, its position in the register, and its value.
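The three-level organisation can be sketched as follows. This Python snippet is an
illustrative sketch only: the entity and attribute names are hypothetical examples, not
taken from the actual Transmodel specification or the authors' database schema.

```python
# Sketch of the three-level interoperability database described in the text.
# Entity "E1" and attributes "A1".."A3" are invented for illustration.

# Level 1: entity definitions (identifier + description)
ENTITIES = {"E1": "Stop point of the transport network"}

# Level 2: attribute definitions (identifier -> (name, data format))
ATTRIBUTES = {
    "A1": ("stop_id", "int"),
    "A2": ("stop_name", "str"),
    "A3": ("wheelchair_access", "bool"),
}

# Mandatory vs. optional attribute subsets for each entity
MANDATORY = {"E1": ["A1", "A2"]}
OPTIONAL = {"E1": ["A3"]}

def validate(entity: str, register: list) -> bool:
    """Check a level-3 representation register. Each item is a tuple
    (attribute_id, position, value); all mandatory attributes must appear
    and no attribute outside the entity's two subsets is allowed."""
    used = {attr for attr, _pos, _val in register}
    allowed = set(MANDATORY[entity]) | set(OPTIONAL[entity])
    return set(MANDATORY[entity]) <= used and used <= allowed

register = [("A1", 0, 42), ("A2", 1, "Central Station")]
```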
5 Practical Results
The context model described in this paper is currently being applied by the Public
Transport Authority of Gran Canaria (Canary Islands, Spain). This authority is
responsible for managing the land public transport of the island. The land public
transport network of Gran Canaria integrates several bus transport companies and, in
the near future, a train transport company. Some of these transport companies work
in an urban transport context and two of them in a metropolitan transport context.
The sizes of these transport operators differ: some are small companies with a fleet
of over 15 vehicles transporting about 100,000 passengers per year, and two are
medium-sized companies with fleets of more than 150 vehicles transporting more
than 20,000,000 passengers per year. This diversity is also reflected in the
technological infrastructure; for example, in some of these companies the onboard
ticketing activity is carried out manually using only cash as a payment method, while
in others it is automated, supporting different payment methods (cash, magnetic
cards, and contactless smart cards).
The described context model is a component of a framework for developing
transport information services. Using this context model, the transport authority has
so far deployed three ubiquitous information services. The first is a payment system
based on contactless smart cards: the traveller can pay with a personal contactless
smart card in the public transport vehicles; this card works as an electronic cash card
that is charged automatically. The required information is obtained directly from the
vehicle infrastructure, and the amount charged is based on the distance travelled by
the user. The second system is a transport information service for in-transit
passengers. This service is deployed across the public transport network of Gran
Canaria (stations, stops, vehicles); through it, travellers can obtain real-time
information about the transport service (timetables, delays, incidents, etc.). Finally,
the third service developed with this context model is a web information service
about general aspects of public transport in Gran Canaria (timetables, a travel-
planning assistant, fares, notices, etc.); users can access these services over the
Internet using mobile and non-mobile personal devices.
6 Conclusions
The context model described in this work is a basic component of a framework for
the development of ubiquitous public transport information services. The context
model makes it possible to build interoperable ubiquitous information services for all
the public transport network actors (travellers, transport operators, and transport
authorities), providing all the necessary components (ontology, data, and data
schemes for the context representation) to develop integrated and interoperable
information services. An integrated service is one that is available regardless of the
transport mode and transport operator, in any place of the network and at any time.
An interoperable service is one that is accessible and usable regardless of the
technology used by the public transport actors. Currently, this context model is being
used by the Public Transport Authority of Gran Canaria (Canary Islands, Spain).
Using this context model,
three integrated services have been developed: a payment system based on contactless
smart cards, an information system for in-transit passengers, and a web portal
providing Internet-based information services on general aspects of public transport
in Gran Canaria (fares, timetables, user opinions and complaints, etc.).
References
1. Traffic Advisory Leaflet, UK Department for Transport: Understanding the Benefits and
Costs of Intelligent Transport Systems – A Toolkit Approach (2005)
2. Tyrinopoulos, Y.: A Complete Conceptual Model for the Integrated Management of the
Transportation Work. Journal of Public Transportation 7(4), 101–121 (2004)
3. Meier, R., Harrington, A., Cahill, V.: A Framework for Integrating Existing and Novel Intel-
ligent Transportation Systems. In: Proceedings of the Eighth International IEEE Conference
on Intelligent Transportation Systems (IEEE ITSC 2005), Vienna, Austria, pp. 650–655
(2005)
4. Ranganathan, A., Campbell, R.H.: A Middleware for Context-Aware Agents in Ubiquitous
Computing Environments. In: Proceedings of the ACM/IFIP/USENIX International
Middleware Conference (Middleware 2003), Rio de Janeiro, Brazil, pp. 143–161 (2003)
5. Lehmann, O., Bauer, M., Becker, C., Nicklas, D.: From Home to World – Supporting
Context-Aware Applications through World Models. In: Proceedings of the Second IEEE
International Conference on Pervasive Computing and Communications (PerCom 2004),
Orlando, USA, pp. 297–307 (2004)
6. Becker, C., Nicklas, D.: Where do spatial context-models end and where do ontologies
start? A proposal of a combined approach. In: Proceedings of the First International Work-
shop on Advanced Context Modeling, Reasoning and Management in Conjunction with
UbiComp 2004, Nottingham, England, pp. 48–53 (2004)
7. Lee, D., Meier, R.: Primary-Context Model and Ontology: A Combined Approach for
Pervasive Transportation Services. In: Proceedings of the First International Workshop on
Pervasive Transportation Systems (PerTrans 2007), New York, USA, pp. 419–424 (2007)
8. Hervás, R., Bravo, J., Fontecha, J.: Context Model based on Ontological Languages: a
Proposal for Information Visualization. Journal of Universal Computer Science 16(12),
1539–1555 (2010)
9. CEN TC278: Reference Data Model For Public Transport, Comité Européen de
Normalisation (2005)
Driver Drowsiness Monitoring Application
with Graphical User Interface
1 Introduction
Drowsiness and fatigue are the cause of many traffic accidents in developed
countries. For instance, fatigue was the fourth most important factor in fatal traffic
accidents in Spain in 2010 [1]. Moreover, the number of deaths caused by fatigue
increased by 7.4% with respect to the previous year. These figures show how
important driver alertness and drowsiness monitoring applications can be in reducing
the number of traffic accidents.
Fatigue measurement is a complex problem because there are few direct measures,
and most available measures merely capture the outcomes of fatigue. There are
different types of fatigue measures: behavioral, physiological, subjective self-report,
and performance measures [2]. The only direct measure of fatigue involves self-
reports of internal states. However, self-reports are problematic due to factors such
as motivational influences and demand effects [3]. Behavioral measures of the
vehicle, such as lateral position and steering wheel movements, are subject to several
limitations, e.g. the vehicle type, driver experience, and the geometric characteristics
and state of the road [4]. Physiological measures such as brain waves and heart rate
are accurate but intrusive, since electrodes must be attached to the driver.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 359–366, 2012.
© Springer-Verlag Berlin Heidelberg 2012
360 D. González-Ortega et al.
A project following this approach was the ASV (Advanced Safety Vehicle) project
carried out by Toyota and Nissan [5]. Fatigued people show visual behavior that is
easily observable in changes of the head and face, mainly the eyes. Thus, computer
vision can be a non-intrusive and inexpensive technique for monitoring driver fatigue.
An important physiological measure that has been studied to detect drowsiness is eye
motion. Several eye motions have been used to measure fatigue, such as blink rate,
blink duration, long closure rate, blink amplitude, and saccade rate. The PERCLOS
measure is the percentage of eyelid closure over time; it reflects slow eyelid closures
rather than blinks [6]. A PERCLOS drowsiness metric was established in a driving
simulator study as the proportion of time in a minute that the eyes are at least 80
percent closed [7]. PERCLOS showed the clearest relation to performance compared
with a number of other potential drowsiness detection devices, including a head
tracker, two electroencephalographic (EEG) algorithms, and two wearable eye blink
monitors [8]. PERCLOS is used by many commercial and experimental sensors [9], [10].
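As a rough illustration, PERCLOS over a one-minute window can be computed from
per-frame eyelid-closure estimates. The following Python sketch assumes a fixed
frame rate and a per-frame closure fraction as input; the 80% closure criterion follows
[7], while everything else (function name, input format) is an illustrative assumption,
not the paper's implementation.

```python
def perclos(closure, fps=30, threshold=0.8):
    """PERCLOS over the last minute: fraction of frames in which the
    eyelids are at least `threshold` (here 80%) closed.

    `closure` is a sequence of per-frame eyelid-closure fractions in [0, 1];
    the fixed frame rate and input format are illustrative assumptions."""
    window = closure[-fps * 60:]          # last minute of frames
    closed = sum(1 for c in window if c >= threshold)
    return closed / len(window)

# Example: a minute of mostly-open eyes with a few long closures
frames = [0.1] * 1700 + [0.9] * 100      # 1800 frames = 60 s at 30 fps
```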
In this paper, we present a real-time application with a Graphical User Interface,
developed with the FLTK (Fast Light Toolkit) library [11], to detect driver
drowsiness based on the PERCLOS measure using a consumer-grade computer, an
inexpensive Universal Serial Bus camera, and passive illumination. The application
obtains the PERCLOS measure from the eye state (open or closed) detected in the
frames of a video sequence.
The rest of the paper is organized as follows. Section 2 presents the computer
vision-based approach used to monitor the driver's drowsiness level. Next,
Section 3 details the Graphical User Interface and its functionalities. Finally,
Section 4 draws conclusions about the proposed application.
IPFv(x) = Σ_{y=y1}^{y2} I(x, y)    (2)
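Eq. (2), the vertical integral projection of image intensities over a row range, can be
sketched as follows. This is an illustrative Python implementation over a nested-list
grayscale image; the row-major image representation is our assumption.

```python
def ipf_v(image, x, y1, y2):
    """Vertical integral projection function of Eq. (2): the sum of
    intensities I(x, y) for y in [y1, y2] at a fixed column x.
    `image[y][x]` holds the grayscale intensity (row-major, an assumption)."""
    return sum(image[y][x] for y in range(y1, y2 + 1))

# Tiny 3x3 example image
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
```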
Once the information of a captured eye image is encoded in a feature set, a large
number of similarity measures between the eye region and three templates (open eye,
nearly closed eye, and closed eye) are extracted: histogram-based and template-based
similarity measures. The PERCLOS computation requires the use of all three
templates.
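The two families of similarity measures mentioned in the text can be illustrated with
two common instances: histogram intersection and a score derived from the sum of
squared differences against a template. This Python sketch shows only the kind of
measure involved; the paper does not specify its exact formulas, so these are
assumptions for illustration.

```python
def hist_intersection(h1, h2):
    """Histogram-based similarity: normalized histogram intersection.
    `h1` and `h2` are equal-length intensity histograms (lists of counts)."""
    return sum(min(a, b) for a, b in zip(h1, h2)) / max(sum(h1), 1)

def ssd_similarity(region, template):
    """Template-based similarity derived from the sum of squared
    differences between an eye region and a template (flattened lists of
    pixel intensities); lower SSD maps to higher similarity."""
    ssd = sum((r - t) ** 2 for r, t in zip(region, template))
    return 1.0 / (1.0 + ssd)
```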
Finally, the classifier is an MLP, a type of artificial neural network (ANN) that has
been very successful in a variety of applications, producing results that are at least
competitive with, and often exceed, those of other existing approaches [13]. A
dimensionality reduction was carried out to select the best inputs to the MLP and
maximize its performance. The output of the MLP is the eye state detected for each
processed image: open, nearly closed, or closed.
After training the MLP with a large number of eye images covering different people
and illumination conditions, we tested the classifier with videos from publicly
available databases, videos taken in the laboratory, and videos recorded inside a car
while a user was driving in real conditions. The average overall accuracy was 95%.
The frame rate of the classifier is limited by the 30 fps delivered by the camera used
in the experiments, a Logitech QuickCam Zoom. Fig. 2 shows frames of a video
sequence from the publicly available ZJU Eyeblink database [14] together with the
output of the monitoring application. Face detection and eye region location are
drawn in the frames along with the output of our eye state monitoring application.
The user can interact with the monitoring application easily and intuitively through
the developed GUI. The user can log in to the application, select the proper values of
the configuration parameters, observe the result of the drowsiness level monitoring,
and save files with detailed information about each monitoring session.
The GUI was implemented with the FLTK library, which is developed in C++.
This library has a series of advantages that led us to adopt it. It is multiplatform,
does not use advanced C++ features (which flattens its learning curve), gives rise to
modular applications, and has low resource consumption. Although an application
built with FLTK links the library statically, its size is rather small. Moreover, FLTK
is free software under the LGPL (GNU Lesser General Public License) with a static
linking exception, so it can be used in applications and its source code can be
modified without licensing problems.
FLTK can be divided into three main components: widget definitions, event
handlers, and callback functions. The widgets are the core of FLTK. Through them,
buttons, labels, windows, and any other component of the user interface, interactive
or not, can be created. These widgets have attributes that have to be adjusted, such
as their size, appearance, and possible states. A widget can contain other widgets. To
interact with a particular widget, a callback function has to be specified. These
functions are run when the associated widget is activated in the suitable state. One of
the most common callback functions is the one triggered by pressing a button to
close a program. In some cases, it is necessary to handle interaction that does not go
through an interactive widget, for instance, a click with the right or left mouse
button on a window. In these cases, event handlers are useful. FLTK provides
handlers for a large number of events, for instance, drag-and-drop events or upward
and downward mouse movements.
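The widget-and-callback mechanism described above can be illustrated with a
minimal, language-neutral sketch. The following Python code is not FLTK (FLTK is
a C++ library, and this is not its API); it only mimics the pattern of registering a
callback on a widget and running it when the widget is activated.

```python
# Generic sketch of the widget/callback pattern; class and method names
# are invented for illustration and do not correspond to the FLTK API.

class Widget:
    def __init__(self, label):
        self.label = label
        self._callback = None

    def callback(self, fn):
        """Register the function to run when the widget is activated."""
        self._callback = fn

    def activate(self):
        """Simulate user interaction (e.g. a button press)."""
        if self._callback:
            self._callback(self)

events = []
button = Widget("Close")
button.callback(lambda w: events.append(f"{w.label} pressed"))
button.activate()
```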
When the application is run, the user is first asked for his or her name. The user can
select one of the previously registered users or register as a new user. Fig. 3 shows
the home window of the application.
Once a registered user logs in from the home window, the window with the main
menu is shown, as in Fig. 4. In this menu, the user can choose to start the monitoring,
select the open-eye, nearly-closed-eye, or closed-eye templates, or inspect the files in
which previous monitoring sessions were saved.
Each user must select three eye templates (closed, nearly closed, and open eyes)
before starting the drowsiness monitoring. The computer vision application compares
the eye region of each frame of a video sequence with those templates to extract
features that will later be the input to the MLP classifier, which finally determines
the eye state. The templates are stored in both JPG and XML formats. The user can
easily modify the templates, selecting one of those already created or creating new
ones. To create new templates, a photograph, a video sequence, or a webcam can be
selected as the source. From one of these sources, the application performs face
detection and then eye detection within the face region. The detected eye region can
be selected as a template for the monitoring. Fig. 5 shows the window that allows the
user to view the selected templates and create new ones.
for the three aforementioned measures (b), and the PERCLOS graph (c). During the
first minute of the video monitoring, this graph is shown in white because there is no
real PERCLOS measure yet, as it has to be computed from the data of an entire
minute. From the first minute on, the graph is shown in green if the PERCLOS
threshold is not exceeded; otherwise, it is shown in red, as observed in Fig. 6.
The application records the data of every monitoring session, each one associated
with the time at which it was made and the monitored user. These files can be
opened at any time to extract the PERCLOS graph and the times at which the alarms
went off, and, in short, to examine the user's drowsiness level monitoring in detail.
4 Conclusions
In this paper, an application for driver drowsiness monitoring has been presented.
Monitoring is achieved through computer vision-based eye state detection in the
frames of a video sequence. Eye state detection is based on an MLP classifier that
uses as pattern features different similarity measures between the eye region and
each of the three templates: open, nearly closed, and closed eyes. The selection of
features of different nature reflects the complexity of characterizing such a
deformable and highly variable object as the eye. The application achieved an
overall accuracy of 95% with videos taken under different conditions.
The application includes a GUI that allows the driver to configure and start the
monitoring and to consult the results of each session. The GUI was developed with
an open-source library (FLTK), which is written in C++ like the monitoring
application itself. Moreover, it is multiplatform and requires few resources. These
features make it suitable to port the application to a mobile device that can be
carried in a vehicle. Through the GUI, three thresholds can be adapted. If one is
exceeded, the application enters the alarm state. Thus, it is possible to be more or
less restrictive in the attention level required for driving (high, medium, or low). The
thresholds could be adjusted at the head office of a transportation company as a
function of the driving hours of the employees and the features of the routes. Storing
information about all the driving stretches makes it possible to study in detail the
evolution of each driver's drowsiness level over time.
References
1. RACE. Informe 2011: La fatiga en la conducción, http://www.race.es
2. Sherry, P.: Fatigue countermeasures in the railroad industry: past and current develop-
ments. Association of American Railroads, Washington, D.C. (2000)
3. Williamson, A., Chamberlain, T.: Review of on-road driver fatigue monitoring devices.
NSW Injury Risk Management Research Centre, University of New South Wales, New
South Wales, Australia (2005)
4. Ueno, H., Kaneda, M., Tsukino, M.: Development of drowsiness detection system. In:
Proceedings of the Vehicle Navigation and Information Systems Conference, pp. 15–20
(1994)
5. Kircher, A., Uddman, M., Sandin, J.: Vehicle control and drowsiness. Swedish National
Road and Transport Research Institute, Linköping, Sweden (2002)
6. Dinges, D.F., Grace, R.: PERCLOS: a valid psychophysiological measure of alertness as
assessed by psychomotor vigilance. US Department of Transportation, Federal Highway
Administration, Office of Motor Carrier Research and Standards, Washington, D.C., USA
(1998)
7. Wierwille, W.W., Ellsworth, L.A., Wreggit, S.S., Fairbanks, R.J., Kirn, C.L.: Research on
vehicle-based driver status/performance monitoring: development, validation, and refine-
ment of algorithms for detection of driver drowsiness. National Highway Traffic Safety
Administration, Washington, D.C., USA (1994)
8. Wang, Q., Yang, J., Ren, M., Zheng, Y.: Driver fatigue detection: a survey. In: Proceed-
ings of the World Congress on Intelligent Control, pp. 8587–8591 (2006)
9. Seeing Machines Limited, http://www.seeingmachines.com
10. Ji, Q., Yang, X.: Real time eye gaze and face pose tracking for monitoring driver vigilance.
Real-Time Imaging 8(5), 357–377 (2002)
11. Fast Light Toolkit, http://www.fltk.org
12. Open Computer Vision Library,
http://sourceforge.net/projects/opencvlibrary
13. Haykin, S.: Neural networks and learning machines, 3rd edn. Prentice-Hall, London
(2008)
14. Pan, G., Sun, L., Wu, Z., Lao, S.: Eyeblink-based anti-spoofing in face recognition from a
generic webcamera. In: Proceedings of the IEEE International Conference on Computer
Vision, pp. 1–8 (2007)
Communication Platform to Enable
Collaborative Tourism Applications
1 Introduction
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 367–370, 2012.
© Springer-Verlag Berlin Heidelberg 2012
368 G. Urzaiz et al.
2 Methodology
In a typical current solution (Figure 1a), the end user receives information either
from the local application or from a server. The end user is limited to the
information available in the local application running on the end user's device and
the information that can be obtained from a remote server.
Our proposal (Figure 1b) is based on the idea of implementing an overlay
network between all the nodes, including not only the local end-user device and the
server, but also all other participants that may be relevant to the application. These
relevant devices may include the mobile phones of other tourists in the same
museum or in other cities, additional remote servers, etc.
The benefit of this proposal is the possibility of providing richer and better
information to the end user.
The overlay network is implemented as a distributed application by means of an
object-oriented middleware for distributed systems (ZeroC ICE), and it is designed
to support a heterogeneous environment where a wide variety of devices (such as
mobile phones, tablet computers, laptops, servers, etc.) and technologies can be
connected transparently.
The application code is either automatically deployed to several devices or
individually downloaded by each one of them, but it must be installed on every
device that will be part of the overlay network. This piece of code includes not only
the application-related semantics but also the other elements needed to communicate
within the network.
The application developer can include any needed functionality, limited only by
the memory and processing capacity of the nodes. Bigger nodes can perform far
more storage and processing functions than smaller ones.
Once the overlay network is established, the end-user local application does not
need to know the specific addresses of the destination nodes; it just invokes an
object instead. All the semantically related nodes provide relevant information that
can be used to build a richer and better answer to the specific request.
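The idea of invoking one logical object whose answer is aggregated from all relevant
nodes can be sketched as follows. This Python snippet is a simulation of the concept
only: the real platform uses the ZeroC ICE middleware, whose API is not shown
here, and all class and provider names are invented for illustration.

```python
# Generic sketch: a client invokes a single logical object without knowing
# node addresses, and every registered overlay node contributes its answer.

class OverlayObject:
    def __init__(self):
        self.nodes = []          # callables: node-local information providers

    def register(self, provider):
        self.nodes.append(provider)

    def invoke(self, query):
        """Every registered node that has something relevant for `query`
        contributes to the combined answer."""
        answers = [provider(query) for provider in self.nodes]
        return [a for a in answers if a is not None]

museum_info = OverlayObject()
museum_info.register(lambda q: "room 3 temporarily closed" if q == "status" else None)
museum_info.register(lambda q: "low visitor count in room 5" if q == "status" else None)
```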
Consider, for example, a tourist who is visiting a museum and would like to get not
only the typical information about a specific piece of art, which is usually available
in any tourist guide, either on paper or electronically on the Internet, but also other
useful information that is provided in real time by the museum staff, or even by the
visitors who are in the museum at that moment, and that is not published in the
tourist guides: for instance, information about areas that are temporarily closed, or
tips about the number of visitors in a certain area and the convenience of going
there at a certain moment.
The tourist may also be interested in knowing something about another facility
that he or she is planning to visit later. The user can get real-time information about
any other location that is connected to the overlay network.
It is important to note that most of this information may be produced
automatically, based on the geolocation information of the users. It can also be
complemented by information entered manually by other participants.
3 Evaluation
best time to visit it. Information was provided automatically by the mobile
applications of all the participants attending the show at the moment the end user's
request was sent.
The proof of concept was useful to implement a heterogeneous multi-node
scenario, in which an end user received information from all the other nodes that
were relevant to the specific request.
An innovative tourism approach has been proposed which provides the user with
information based not only on the local application or the server, but also on other
network participants.
The solution is based on a communication platform built by means of an object-
oriented middleware for distributed applications, and it provides the end user with
better and richer information than existing solutions.
A proof of concept was implemented to demonstrate feasibility. Future work
includes the development of a full-featured mobile phone application, probably
using Android as the operating system and Java with ICE for Android [8] to
develop the distributed application.
References
1. Brown, B., Chalmers, M.: Tourism and mobile technology. In: Proceedings of the
Eighth European Conference on Computer Supported Cooperative Work, pp. 335–354
(2003)
2. Garcia, A., Arbelaitz, O., Linaza, M.T., Vansteenwegen, P., Souffriau, W.: Person-
alized Tourist Route Generation. In: Daniel, F., Facca, F.M. (eds.) ICWE 2010.
LNCS, vol. 6385, pp. 486–497. Springer, Heidelberg (2010)
3. Schneider, S., Ricci, F., Venturini, A., Not, E.: Usability Guidelines for WAP-based
Travel Planning Tools. In: Information and Communication Technologies in Tourism
2010, pp. 125–136 (2010)
4. Huang, Y.P., Chang, Y.T., Sandnes, F.E.: Experiences with RFID-based interactive
learning in museums. Int. J. Auton. Adapt. Commun. Syst. 3, 59–74 (2010)
5. Blockner, M., Danti, S., Forrai, J., Broll, G., De Luca, A.: Please touch the exhibits!:
using NFC-based interaction for exploring a museum. In: Proceedings of the 11th
International Conference on Human-Computer Interaction with Mobile Devices and
Services, MobileHCI 2009, pp. 71:1–71:2 (2009)
6. Bravo, J., Lopez-de-Ipina, D., Hervas, R.: RFID breadcrumbs for enhanced care
data management and dissemination. In: Personal and Ubiquitous Computing, pp.
1–10 (2012)
7. Ruf, B., Kokiopoulou, E., Detyniecki, M.: Mobile Museum Guide Based on Fast
SIFT Recognition. In: Detyniecki, M., Leiner, U., Nürnberger, A. (eds.) AMR 2008.
LNCS, vol. 5811, pp. 170–183. Springer, Heidelberg (2010)
8. ZeroC ICE for Android page, http://www.zeroc.com/android.html (accessed on
June 15, 2012)
LinkedQR: Improving Tourism Experience
through Linked Data and QR Codes
1 Introduction
Since Tim Berners-Lee outlined the best practices for publishing semantic
data on the Web [1], Linked Data principles have been introduced into a
wide variety of application domains, e.g., Market Information Systems (MIS) [3],
crowdsourcing [13], text processing [12], and document management systems
[5]. Berners-Lee summarized these practices as: 1) use URIs as names for things;
2) use HTTP URIs so that people can look up those names; 3) when someone
looks up a URI, provide useful information, using the standards (RDF*,
SPARQL); 4) include links to other URIs so that people can discover more things.
QR codes, for their part, have demonstrated their usefulness in fields such as
logistics and indoor-localization systems [6]. These successful experiences
suggest that QR codes are a good partner for Linked Data technologies.
Meanwhile, the term “tourism experience” admits many interpretations, spanning
the social, environmental, and activity components of the overall experience
within a tourism activity; more on the concept can be found in [15].
In this paper, we present LinkedQR, a tool that improves the collaboration
between QR codes and Linked Data through mobile and Web technologies.
LinkedQR allows the data curators of an art gallery, or of any other kind of
tourism installation, to retrieve enriched data from a sparse information
fragment. This retrieval process helps them provide more detailed information
about an artwork or about an object in the environment. This information is
1. This research is funded by project IE10-290 of the Etortek 2010 program,
Basque Government.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 371–378, 2012.
c Springer-Verlag Berlin Heidelberg 2012
stored in an RDF store following the Linked Data principles and provided
through a SPARQL endpoint. These data are consumed by a mobile application
that uses QR codes to identify each artwork inside the gallery through its
unique HTTP URI. In addition, the paper is illustrated with a case study set
in an art gallery, namely the “Sala Kubo” gallery in San Sebastián.
The remainder of the paper is organized as follows. Section 2 discusses related
work. Section 3 presents our solution to improve the tourism experience inside an
art gallery. Section 4 describes the architecture used in this project.
Section 5 reports the performance obtained from experimentation with the
developed system. Finally, Section 6 concludes and outlines future work.
2 Related Work
As far as we know, the work related to the collaboration between Linked Data
and QR codes is limited. One example is More! [10], a mobile application that
enables exploring information about a speaker at an academic event.
More! asks the speaker to fill in a form with his/her relevant data in order
to show them to the attendees. These data are stored on a server and can be
retrieved through an HTTP URI previously assigned to the speaker's data.
This URI is encoded in a QR code and can be read with the More! mobile
application.
Although More! uses unique HTTP URIs, it neither offers the data through
standard languages such as RDF or SPARQL nor links them to other semantic
datasets, so the Linked Data offered by More! fulfils only two of the four
principles stated by Berners-Lee. Furthermore, the speaker has to type
his/her data manually into More!, which makes the process very tiresome.
has to navigate through different menus to find the specific art piece he/she
is viewing; LinkedQR allows this information to be retrieved from a QR code in
an easier way. Directly related to LinkedQR is MoMu (Mobile Museum) [8].
The aim of MoMu was to offer additional information about the pieces of art we
observe in a more innovative way than traditional audio-guide systems, which
the visitor usually has to rent and operate by typing a number to retrieve
more information about an artwork. MoMu turns the visitor's own mobile device
into a guiding device: it uses TRIP codes [7] to identify the artwork and
retrieves enriched information from a central server. LinkedQR improves on
MoMu in that the employees of the gallery obtain this enriched information
semi-automatically, with the help of Linked Data techniques.
3 Proposed Solution
This section explains the proposed solution to ease information management
inside an art gallery, describing the system architecture and showing how it
works. In general terms, our solution improves on traditional guide systems in
three ways: 1) the data curator of the museum obtains invaluable help in the
data retrieval and merging stage; 2) the visitor does not have to rent
audio-guide hardware; and 3) the enriched data can be published and re-used by
other museums or art galleries.
Information Consuming Scenario: Once the employee has completed the information
enrichment process, the QR codes are ready to be attached to the art pieces. In
this scenario, the visitor of the museum downloads the application from the
Google Play Store or via a QR code attached at the entrance of the gallery that
points to the URL of the application. Once the application is installed on
her/his smartphone, the visitor only has to choose the “QR” option from the
main menu, as can be seen in Figure 2. The QR code reader is then launched to
retrieve the URI of the piece of art. This URI is resolved against the RDF
server and the description of the resource is returned. It is important to
emphasize that this description is retrieved in the default language of the
mobile device, enabling multilingual Linked Data. The semantic data are
processed by Sesame, an architecture for storing and processing RDF data [4];
Sesame transforms the data stream returned by the HTTP request into semantic
triples. Finally, all the information about the piece of art, its author, and
the audio guide is presented on the user's smartphone, as shown in Figure 2.
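The stream-to-triples step can be sketched with a toy parser for the N-Triples syntax. Sesame of course handles the full RDF machinery; this fragment only illustrates the shape of the transformation, and the example triple is invented.

```python
import re

# Toy N-Triples parser: each line is "<subject> <predicate> object ."
TRIPLE = re.compile(r'<([^>]+)>\s+<([^>]+)>\s+(.+?)\s*\.\s*$')

def parse_ntriples(stream: str):
    triples = []
    for line in stream.splitlines():
        m = TRIPLE.match(line.strip())
        if m:
            triples.append(m.groups())  # (subject, predicate, object)
    return triples

data = ('<http://example.org/artwork/42> '
        '<http://purl.org/dc/elements/1.1/title> "Untitled"@en .')
triples = parse_ntriples(data)
```

Note how the language tag (`@en`) travels with the literal, which is what makes the multilingual retrieval described above possible: the client can keep only the literals tagged with the device's default language.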
7. http://dbpedia.org/ontology/
8. http://dbpedia.org/sparql
Fig. 2. Application main screen (left) and information presentation screen (right)
4 Experimentation
In this section, the results of testing the system are presented. Concretely,
the experimentation focused on the performance of the system in the following
aspects: 1) retrieval of information from the local repository by the mobile
application and its presentation; 2) information found about the authors of
the exhibited art pieces. To measure this performance, the delay for graphs
with different numbers of triples is presented. We took ten different samples
and calculated the arithmetic mean to avoid any distortion of the results.
Figure 3 shows the performance of the mobile application9. As can be seen in
the graph, the time the smartphone needs to visualize the information grows
with the number of triples to be processed, as we expected; but the time
needed to retrieve the information from the server stays near 1,500 ms despite
the increase in triples. These results show that neither server nor network
delays are affected by the number of triples retrieved, whereas the smartphone
does suffer the consequences of that increase. This means that we must invest
significant effort in optimizing the visualization of the information on the
smartphone. Despite these delays, the total delay (≈2,000 ms) is acceptable
for the user, following the criteria explained in [9], which place the maximum
acceptable delay for a user interaction near 10 seconds.
On the server side of the system, in the Linked Data retrieval stage, we
obtained the following results: of the 87 authors in the exhibition, the
system recovered information about 57 (67.51%), generating 1756 new triples.
9. The code of the mobile application can be found at
https://github.com/memaldi/LinkedQR-android
This data retrieval stage was performed with 2 depth levels: at the first
level, 885 triples (50.39%) were retrieved, and at the second level, 871
triples (49.61%). The true/false positive/negative rates show that the system
mistook 6 persons who are not authors of the exhibition for authors, and
rejected one author who really is part of the exhibition. We cannot reason in
terms of retrieved triples, because we cannot actually know the total number
of triples related to an artist inside DBpedia. In terms of found authors, the
precision (Eq. 1), recall (Eq. 2) and true negative rate (Eq. 3) of the system
can be calculated as follows:
be calculated as follows:
\[ \mathrm{Precision} = \frac{T_{\mathrm{positives}}}{T_{\mathrm{positives}} + F_{\mathrm{positives}}} = \frac{51}{51 + 6} = 89.47\% \quad (1) \]

\[ \mathrm{Recall} = \frac{T_{\mathrm{positives}}}{T_{\mathrm{positives}} + F_{\mathrm{negatives}}} = \frac{51}{51 + 1} = 98.07\% \quad (2) \]

\[ \mathrm{True\ negative\ rate} = \frac{T_{\mathrm{negatives}}}{T_{\mathrm{negatives}} + F_{\mathrm{positives}}} = \frac{29}{29 + 1} = 96.66\% \quad (3) \]
With these ratings, the obtained accuracy is given in Eq. 4. To measure the
overall performance of this test, we have also calculated the F1 score (Eq. 5).
\[ \mathrm{Accuracy} = \frac{T_{\mathrm{positives}} + T_{\mathrm{negatives}}}{T_{\mathrm{positives}} + T_{\mathrm{negatives}} + F_{\mathrm{positives}} + F_{\mathrm{negatives}}} = \frac{51 + 29}{51 + 29 + 6 + 1} = 91.95\% \quad (4) \]
             Authors                           Triples
Found        Not found    Total      Depth 1      Depth 2      Total
57 (67.51%)  30 (32.59%)  87 (100%)  885 (50.39%) 871 (49.61%) 1756 (100%)

             Authors                               Triples
True pos.  False pos.  True neg.  False neg.   True pos.  False pos.  True neg.  False neg.
51         6           29         1            1612       144         -          -
References
1. Berners-Lee, T.: Linked data - design issues (2006),
http://www.w3.org/DesignIssues/LinkedData.html
2. Bizer, C., Cyganiak, R.: D2R Server: Publishing Relational Databases on the
Semantic Web. In: 5th International Semantic Web Conference, p. 26 (2006)
3. de Boer, V., Gyan, N.B., Bon, A., de Leenheer, P., van Aart, C., Akkermans, H.:
Voice-based Access to Linked Market Data in the Sahel. In: DownScale 2012, p. 16
(2012)
4. Broekstra, J., Kampman, A., van Harmelen, F.: Sesame: A Generic Architecture
for Storing and Querying RDF and RDF Schema. In: Horrocks, I., Hendler, J.
(eds.) ISWC 2002. LNCS, vol. 2342, pp. 54–68. Springer, Heidelberg (2002)
5. Emaldi, M., Buján, D., López-de-Ipiña, D.: Towards the integration of a research
group website into the web of data. In: CAEPIA 2011 (2011)
6. López-de-Ipiña, D., Klein, B., Guggenmos, C., Pérez, J., Gil, G.: User-Aware se-
mantic location models for service provision. In: UCAMI 2011 (2011)
7. López-de-Ipiña, D., Mendonça, P.R., Hopper, A., Hopper, A.: TRIP: a low-cost
vision-based location system for ubiquitous computing. Personal and Ubiquitous
Computing 6(3), 206–219 (2002)
8. López-de-Ipiña, D., Vázquez, I., Sainz, D.: Interacting with our environment
through sentient mobile phones, pp. 19–28 (2005)
9. Nielsen, J.: Usability engineering. Morgan Kaufmann (1994)
10. Parra, G., Klerkx, J., Duval, E.: More!: Mobile interaction with linked data, vol. 6,
p. 37 (2011)
11. Raimond, Y., Abdallah, S., Sandler, M., Giasson, F.: The music ontology. In: Pro-
ceedings of ISMIR 2007, pp. 417–422. Citeseer (2007)
12. Rusu, D., Fortuna, B., Mladenic, D.: Automatically annotating text with linked
open data (2011)
13. Simperl, E., Norton, B., Vrandecic, D.: Crowdsourcing tasks in linked data man-
agement. In: COLD 2011 (2011)
14. Suchanek, F., Kasneci, G., Weikum, G.: Yago: a core of semantic knowledge. In:
Proceedings of WWW 2007, pp. 697–706. ACM (2007)
15. Tussyadiah, I.P., Fesenmaier, D.R.: Interpreting tourist experiences from first-
person stories: A foundation for mobile guides. In: Proceedings of the 15th Eu-
ropean Conference on Information Systems Switzerland (2007)
16. Weibel, S., Kunze, J., Lagoze, C., Wolf, M.: Dublin core metadata for resource
discovery. Internet Engineering Task Force RFC 2413, 222 (1998)
New Approaches in Context Modelling
for Tourism Applications
Abstract. The notion of context has been widely studied and several authors
have proposed different definitions of it. However, context has not been widely
studied in the framework of human mobility, and the notion has been imported
directly from other computing fields without specifically addressing the
requirements of the tourism domain. In order to store and manage context
information, a context data model is needed. Ontologies have been widely used
in context modelling, but many of them are designed for general ubiquitous
computing environments and do not contain specific concepts related to the
tourism domain, while other approaches do not contain enough concepts to
represent context information related to the visitor or tourist on the move,
as we need in the TourExp project. That is why we propose a new approach that
provides a better solution for modelling context data in tourism environments,
adding further value to our solution by reusing Open Data about tourist
resources from the Open Data Euskadi initiative and publishing it as Linked Data.
1 Introduction
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 379–386, 2012.
© Springer-Verlag Berlin Heidelberg 2012
The notion of context has been widely studied and there are several authors that
have proposed definitions of context. Some of these definitions consider context as
the surroundings of the interaction between the user and the application [1] [2] [3].
Other authors consider the activity or the task of the user as the main context informa-
tion for the system [4] [5]. A third group of authors consider that context is the
required information to characterize the situation of an entity [6] [7].
In mobile environments, location is the main context parameter to be considered in
order to personalize the behaviour of context-aware tourism systems [8]. Most
of the existing commercial mobile guides make use of the visitor's location
and user profile as the main context parameters [9]. However, context has not
been widely studied in
the framework of human mobility and the notion of context has been directly
imported from other computing fields without specifically addressing the tourism
domain requirements.
Most of the existing context models are designed to be applied in general ubiqui-
tous computing environments or do not contain specific concepts related to the
tourism domain. There are some approaches for context modelling in tourism domain
but they do not contain enough concepts to represent context information related
to a visitor or tourist on the move as we need in the TourExp project. That is why
we propose a new approach to provide a better solution to model context data in
tourism environments. This solution adds further value by reusing Open Data
about tourist resources from the Open Data Euskadi initiative and publishing
it as Linked Data.
The rest of this paper is structured as follows. In Section 2, we introduce different
data structures and techniques that can be used in order to represent context informa-
tion. Section 3 describes our TourExp context data model, remarking aspects to con-
sider for context management and some particular details of our solution, including
how the information represented by our data model is published. Finally, we describe
our work in progress in Section 4, referring to different applications that are being
developed in this project and will access TourExp context information.
2 Background
In order to store and manage context information a context data model is needed.
There are several data models that can be used in order to represent context informa-
tion [10]. These are classified by the data structures that are used to store contextual
information.
• Key-value pairs: this is the simplest data structure for modelling contextual
information. This data model is easy to manage, but it lacks capabilities for
enabling efficient context retrieval algorithms and data inference mechanisms.
• Markup scheme models: this model is based on a hierarchical data structure
consisting of tags with attributes and content. An example of this approach is the
extensible markup language (XML).
• Object oriented models: this approach is based on the benefits of encapsulation
and reusability. An example of these kinds of modelling techniques is the Java
programming language, which is based on classes and objects to represent data.
• Ontology Based Models: ontologies can specify concepts with properties and
interrelations between those concepts. Also, they offer a very expressive
language in order to define axioms and restrictions over those concepts.
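The limitation of the first approach can be shown with a minimal illustration: a key-value context model is trivial to manage, but retrieval reduces to string matching over flat keys, with no relations or semantics to reason over. The key names below are invented.

```python
# Flat key-value context store: easy to manage, hard to reason over.
context = {
    "user.language": "eu",
    "user.location": "43.263,-2.935",
    "weather.condition": "rain",
}

def lookup(prefix: str) -> dict:
    # Retrieval is limited to matching on key strings; no inference possible,
    # e.g. no way to derive "indoor activities preferred" from "rain".
    return {k: v for k, v in context.items() if k.startswith(prefix)}

user_ctx = lookup("user.")
```

Ontology-based models remove exactly this limitation: because concepts and their interrelations are explicit, a reasoner can derive new context facts instead of merely matching keys.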
Ontologies have been widely used in context modelling, so they can be considered as
a valid approach in order to represent context data. One of the first examples of
context ontologies is the Context Broker Architecture (CoBrA) ontology [11]. It is
expressed in OWL and it represents a collection of terms describing places, software
agents, events, and their associated properties. The Standard Ontology for Ubiquitous
and Pervasive Computing (SOUPA) ontology is an evolution of the previous ontology
[12]. It represents a shared ontology that includes modular vocabularies to represent
software agents with associated beliefs, desires and intentions, time, space, events,
user profiles, actions and policies for security and privacy. It is divided
into two ontologies, namely SOUPA-Core and SOUPA-Extensions. SOUPA-Core defines
a set of classes and entities common to almost all scenarios within pervasive comput-
ing, while SOUPA-Extension ontologies extend from the Core and define additional
vocabularies to support specific domains. The GAS ontology [13] was developed in
order to semantically describe the basic concepts within a ubiquitous computing envi-
ronment. Its main objective was to provide a common vocabulary for heterogeneous
devices that constitute a pervasive computing environment. The ONCOR ontology
[14] is intended to provide a flexible and practical way to describe locations,
devices and sensors within ubiquitous computing systems that deliver
personalized information in a building environment.
On the one hand, all the above ontologies are designed to be applied in general
ubiquitous computing environments, but do not contain specific concepts related to
the tourism domain. On the other hand, there are several ontologies in order to repre-
sent the tourism domain [15] [16] [17], but these approaches do not contain concepts
to represent context information related to the visitor. This way, there is a need
to extend and merge the existing modelling approaches in order to provide a better
solution to model context data in tourism environments.
Human Factors
Information about the user. User ID (e-mail), password, gender, country of origin,
birth date, physical limitations (blindness, reduced mobility, deafness, etc.),
food intolerances, religious tendencies and preferences.
Social environment of the user. Information related to the user’s social networks
(Facebook, Twitter, FourSquare, etc) and travel type (family/ friends/ business/
couple).
User tasks. Track of the user’s interactivity with the system to register information
about bookings, ratings, selected favorites, check-ins, etc. In addition to this,
it is relevant to save information about the travels made by the user and keep
them saved on the travel profile (type and motivation of the travel, cost, duration,
area of activity, etc). This way, it is possible to make suggestions to the user and
create user profiles from the recommendation module that is being developed in
this project.
Environmental Factors
User location history. Travelers can be tracked by GPS or by detecting the
location of the network they are using at the moment. Apart from the current
location, the TourExp context model can represent historical data about the
traveler's whereabouts.
Weather. The weather forecast is retrieved from web services such as Yahoo
Weather, so that the recommendation module can suggest suitable activities in
each case.
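The human and environmental factors above can be sketched as a small data model; the class and field names here are our own shorthand for illustration, not TourExp's actual schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UserProfile:                 # human factors
    email: str                     # user ID
    country: str
    physical_limitations: list = field(default_factory=list)

@dataclass
class ContextSnapshot:             # environmental factors
    user: UserProfile
    location: tuple                # (lat, lon); stored snapshots form the history
    weather: str                   # e.g. fetched from a weather web service
    taken_at: datetime = field(default_factory=datetime.now)

snap = ContextSnapshot(UserProfile("visitor@example.org", "ES"),
                       (43.26, -2.93), "rain")
```

Keeping each snapshot timestamped is what lets the model represent the location *history* described above, rather than only the current position.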
The general data model that represents all the information related to TourExp system
has been designed after the analysis made taking into account different input sources:
tourist experiences defined at Euskadi Turismo website [18], open datasets at Open
Data Euskadi initiative website [19] and requirements extracted from TourExp's
applications. Figure 1 shows only the part of the large data model diagram
related to the context modelling entities; the complete data model developed
in this project is documented in deliverable E2.2 on the TourExp website [20].
We invite the reader to look for more details of the whole data model in the
TourExp project’s website [20], where several aspects taken into account about the
relation among the new entities for context modelling and those already present in this
project’s general model are described. Besides, some other aspects of user
context modelling that affect the development of certain modules of the
project are represented in our model: TourExp does not keep track of usage by
users who are not logged in; it locates the traveler periodically to keep a
tracking record of him/her; it relates the current context of a user to a
booking; and it stores the ratings (of experiences, activities and tourist
resources) made by users. Another remarkable aspect of the TourExp
developments is that retrieving personal data from a user's social network
profile to fill in the registration form makes registration more
user-friendly. On a side note, the data mining process being developed in this
project aims to create traveler profiles that describe the behaviour of the
users and enable the search for “twin souls” among them, to be used in the
recommendation module.
Other approaches do not contain enough concepts to represent context
information related to the visitor or tourist on the move, as we need in the
TourExp project, so we have proposed a new approach to model context data in
tourism environments.
The TourExp system publishes all its information in RDF as Linked Open Data by
means of a D2R server [23], following the data model described in Section 3.
This tool enables RDF and HTML browsers to navigate the content of the
database and allows querying it using the SPARQL query language. The tool
generates a mapping file that relates entities and columns of the database to
entities and properties of ontologies, and the D2R server uses this mapping to
generate RDF dynamically. We ruled out static RDF storage due to the dynamic
nature of our database. Figure 2 shows the visual interface for any standard
web browser provided by our TourExp D2R demo [22], together with the URLs of
both the SPARQL endpoint [22] and the SPARQL visual browser [22] for querying
the RDF information.
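The idea behind the dynamic mapping can be sketched as follows: a declarative table-to-ontology mapping is applied to each database row to emit RDF triples on demand, with nothing materialized ahead of time. The URI pattern and property URIs below are illustrative inventions, not the project's real D2RQ mapping.

```python
# Illustrative row-to-RDF mapping in the spirit of D2R: triples are
# generated from relational rows only when a client requests them.
MAPPING = {
    "uri_pattern": "http://example.org/resource/tourist_resource/{id}",
    "columns": {
        "name": "http://purl.org/dc/elements/1.1/title",
        "town": "http://example.org/ontology/locatedIn",  # made-up property
    },
}

def row_to_triples(row: dict, mapping: dict = MAPPING):
    subject = mapping["uri_pattern"].format(id=row["id"])
    return [(subject, prop, row[col])
            for col, prop in mapping["columns"].items()]

triples = row_to_triples({"id": 7, "name": "Sala Kubo", "town": "San Sebastián"})
```

Because the triples are derived from the live rows at request time, updates to the database are reflected immediately, which is exactly why static RDF storage was ruled out for a dynamic database.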
This research work presents a new approach to parameterize, model and share context
data related to a visitor or tourist on the move. Human and environmental factors are
the main domains that have been taken into account in order to represent the context
of a certain visitor at a specific location and time. Also, the bookings of the visitor are
being tracked in order to create a better recommendation system based on context
data. All the entities that represent context data have been modeled in a
relational database that has been mapped to the created ontologies. This
context data is used by
other modules of the TourExp system and also by third party services. We invite the
reader to look for more details about the TourExp general architecture at the project’s
website [20], since every module and application of the system is described in the
corresponding deliverables, as well as how they use context data.
References
1. Brown, P.: The stick-e document: a framework for creating context-aware applications. In:
Proc. Electronic Publishing, vol. 8, pp. 259–272 (September 1995)
2. Hull, R., Neaves, P., Bedford-Roberts, J.: Towards Situated Computing. In: 1st
IEEE International Symposium on Wearable Computers, ISWC 1997 (1997)
386 D. Buján et al.
3. Chen, G., Kotz, D.: A Survey of Context-Aware Mobile Computing Research.
Technical Report TR2000-381, Dept. of Computer Science, Dartmouth College (2000)
4. Henricksen, K.: A Framework for Context-Aware Pervasive Computing Applications,
University of Queensland (2003)
5. Bucur, O., Beaune, P., Boissier, O.: Representing Context in an Agent Architecture for
Context-Based Decision Making. In: Proc. Workshop on Context Representation and Rea-
soning, CRR 2005 (2005)
6. Schmidt, A., Aidoo, K.A., Takaluoma, A., Tuomela, U., Van Laerhoven, K., Van de
Velde, W.: Advanced Interaction in Context. In: Gellersen, H.-W. (ed.) HUC 1999. LNCS,
vol. 1707, p. 89. Springer, Heidelberg (1999)
7. Dey, A., Abowd, G., Salber, D.: A Conceptual Framework and a Toolkit for Supporting
the Rapid Prototyping of Context-Aware Applications. In: Human-Computer Interaction,
vol. 16, pp. 97–166 (2001)
8. Steiniger, S., Neun, M., Edwardes, A.: Foundations of Location Based Services. Lecture
Notes on LBS, Department of Geography, University of Zürich (2006)
9. Grün, C., Werthner, H., Pröll, B., Retschitzegger, W., Schwinger, W.: Assisting Tourists
on the Move - An Evaluation of Mobile Tourist Guides. In: 7th International Conference
on Mobile Business, pp. 171–180. IEEE (2008), doi:10.1109/ICMB.2008.28
10. Strang, T., Linnhoff-Popien, C.: A Context Modeling Survey. In: Workshop on Advanced
Context Modelling, Reasoning and Management, UbiComp - The Sixth International Con-
ference on Ubiquitous Computing, Nottingham (2004)
11. Chen, H., Finin, T., Joshi, A.: An ontology for context-aware pervasive computing envi-
ronments. The Knowledge Engineering Review 18(3), 197–207 (2003)
12. Chen, H., Finin, T., Joshi, A.: The SOUPA Ontology for Pervasive Computing. Whitestein
Series in Software Agent Technologies. Springer (July 2005)
13. Christopoulou, E., Goumopoulos, C., Kameas, A.: An ontology-based context manage-
ment and reasoning process for ubicomp applications. In: Joint Conference on Smart Ob-
jects and Ambient Intelligence: Innovative Context-aware Services, vol. 121, pp. 265–270
(2005)
14. Kay, J., Niu, W.T., Carmichael, D.J.: ONCOR: Ontology- and evidence-based
context reasoner. In: Proceedings of the IUI, pp. 290–293. ACM Press (January 2007)
15. Mondeca. Semantic Web Methodologies and Tools for Intra-European Sustainable
Tourism, White Paper (September 2004)
16. CEN Workshop Agreement. CEN Workshop on Harmonization of data interchange in
tourism - WS/eTOUR, Draft for public review (February 08, 2009)
17. DERI. OnTour Ontology, http://etourism.deri.at/ont/index.html (last
visited: March 15, 2009)
18. Turismo Euskadi website, http://turismo.euskadi.net (last visited: June 25,
2012)
19. Open Data Euskadi initiative website, http://opendata.euskadi.net (last
visited: June 25, 2012)
20. TourExp project website, http://www.tourexp.es (last visited: June 25, 2012)
21. Lamsfus, C., Martin, D., Salvador, Z., Usandizaga, A., Alzua-Sorzabal, A.: Human-
Centric Ontology-Based Context Modelling in Tourism. In: MCIS 2009 Proceedings.
Paper 64 (2009), http://aisel.aisnet.org/mcis2009/64
22. TourExp development website, http://tourexp.morelab.deusto.es (last
visited: June 25, 2012)
23. D2RQ platform, http://d2rq.org/d2r-server (last visited: June 25, 2012)
An Evaluation of Multiobjective Urban Tourist Route
Planning with Mobile Devices
1 Introduction
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 387–394, 2012.
© Springer-Verlag Berlin Heidelberg 2012
and results are then transferred to the device. In addition, little or no
support is generally provided for considering different criteria at the same
time. Limited route rescheduling is sometimes available [6], but interactive
changes are generally not allowed.
This work addresses the development of a context-aware, interactive,
multiobjective urban tourist route planner for mobile devices. Part of this
work was presented in [9]; this paper reviews the main ideas and contributes a
more detailed evaluation of the approach, in particular of: (a) the
significance of the approach in the proposed domain, (b) the range of choices
available to the tourist, and (c) the feasibility of multiobjective analysis
with currently available resources.
This paper is organized as follows. The following section reviews related work.
Then, a brief background on multicriteria decision making is provided. The architec-
ture of the system is described, with special attention to multi-objective route plan-
ning and user interaction. An evaluation of the approach is presented on a realistic
sample scenario. Finally some conclusions and future work are outlined.
2 Related Work
Most current mobile devices incorporate a web-based route planner such as Google
Maps. This application can calculate a route from origin to destination either driving,
walking, or via public transport. Routes are calculated and provided by an external
web service and transferred to the device through an Internet connection. Another
popular application is the mobile TomTom navigator. These applications can be
used both before and during the trip, because most mobile devices are equipped with
a GPS. This allows tourists to find their way from their current location to their
destination on the fly. However, none of these applications currently allows multiple
objectives to be considered simultaneously when selecting a route.
Apart from these common and well-known applications, there are also several
mobile applications and services specifically devised for tourism. We focus here only
on those that support tourists in context-based decision making and route generation.
A common feature of the systems considered below is that they run on a mobile
device (totally or partially) and provide context-aware services to the tourists based
on their current situation. They differ mainly in the context information considered in
route generation, or the way they use the current context to customize the route. The
CONCERT framework [2] exploits context information to match relevant tourism
objects. Contextual information is gathered from the Internet and from the
device's embedded sensors (e.g., GPS). The matchmaking framework VMTO, which
runs on top of CONCERT, ranks the tourism objects selected by CONCERT
according to personal tourist preferences. Telemaco [3] makes use of web
technologies to provide a mobile travel guide application that supports place
recommendation adapted to user profiles. However, these approaches do not
support route generation.
Other approaches on mobile devices provide personalized place and route recom-
mendations. They mainly differ in the algorithms and criteria used for route genera-
tion. P-Tour [4] allows users to specify multiple destinations and time constraints on
arrival and duration at each site, and returns the best schedule calculated by a genetic
An Evaluation of Multiobjective Urban Tourist Route Planning with Mobile Devices 389
algorithm. The DTG system [8] plans individual tours allowing users to select
attractions. The tour is calculated using an approximation algorithm that combines
depth-first search and a set of heuristics. The work presented in [5] proposes tourist
routes according to the user's interest and available resources; route generation
integrates GIS spatial analysis functions and a Tabu Search method. The system in [6]
generates vehicle routes optimizing the visibility of scenery. The work in [7] uses
complex networks to derive optimal routes in conjunction with user preferences (represented by a
social tourist network). However, none of these systems is interactive. A change in
criteria involves the generation of a new route, and the device needs to be connected
to the Internet to access the web-based route generation service.
The system described in this paper presents three distinctive features when compared
to the systems described above: a multicriteria approach to route planning, an
interactive interface, and local computation on the user's device.
maker on his/her choice among this limited set of alternatives. The goal is to reach a
Pareto-optimal solution representing an acceptable tradeoff in a limited number of
interactions.
Route planning problems are generally modeled as search in a graph. Current route
planners (e.g. car navigation systems) frequently rely on Dijkstra's algorithm or
A* [11]. A number of extensions of these algorithms with different multicriteria
preferences were analyzed in [12]. However, recent evaluations reveal that the appli-
cation of multiobjective search algorithms to route planning in road networks can be a
computationally demanding task [13][14].
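As an illustration of the graph-search formulation discussed above, a minimal Dijkstra implementation over a toy street graph could look as follows (node names and distances are invented for illustration; they are not taken from the paper):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest-path costs over a weighted digraph.

    graph: dict mapping node -> list of (neighbour, edge_cost) pairs.
    Returns a dict of minimal costs from source to every reachable node.
    """
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry, already improved
        for neighbour, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(neighbour, float("inf")):
                dist[neighbour] = nd
                heapq.heappush(queue, (nd, neighbour))
    return dist

# Toy street graph: nodes are intersections, weights are street lengths (m).
streets = {
    "A": [("B", 100.0), ("C", 250.0)],
    "B": [("C", 100.0), ("D", 300.0)],
    "C": [("D", 120.0)],
    "D": [],
}
print(dijkstra(streets, "A"))  # best A->D path is A-B-C-D, 320 m
```

A* follows the same skeleton, with the priority key augmented by an admissible heuristic (e.g. straight-line distance to the destination).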
4 Contextual Information
As already stated, this paper deals with the development of a context-sensitive,
multicriteria route planner to be used by the general public on mobile devices. More
specifically, the planner deals with pedestrian tourist routes in relatively small urban
areas. The system allows tourists to choose a walking route between two locations
that is short, but at the same time traverses interesting tourist areas. These may
depend on several aspects of the query’s context.
The application runs on an Android mobile phone. The system gathers information
from four different sources in order to plan adequate routes. An interactive
multicriteria route planner uses this information to support the user in the selection of
a walking route that satisfies his/her needs. The sources of information used by the
system are:
• Geographical information about areas of tourist interest. These areas can be
context-sensitive for each town and depend on date (e.g. winter, summer fair,
festivals, etc.), time (e.g. day time or night time), and/or user interests (e.g. shopping,
or sightseeing). Each area can be tagged with a set of keywords, and chosen dy-
namically for a particular context. For example, a particular area can be attractive
both for the cultural and shopping services offered. Once the context is estab-
lished, the route planner can generate a path considering the relevant areas. This
information is to be provided and maintained by a third party entity such as the
local council of a tourist city (in our test case the city is Málaga, in Spain). In the
current version of our prototype, the Web service is static (i.e. a single context
has been established for experimentation purposes), and information is provided
via an XML document. The application in the mobile device downloads the in-
formation provided by the server at startup and processes it using a DOM parser.
• The information on the urban area is described by means of a graph, where nodes
represent intersections, and arcs represent streets joining these intersections. This
information is presented in the user interface using the Google APIs.
• The application uses the current position of the user provided by the Android Loca-
tion API. This API uses different location sources in order to determine the current
position of the device. The current position is used as origin point for the route.
• Finally, the tourist (user) provides the system with a destination, and information
about his/her preferences. These allow the system to reach an adequate tradeoff
between the overall length and interest of the proposed route.
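One simple way to combine the two criteria described above (overall length vs. length walked outside recommended tourist areas) is a weighted-sum scalarization. The sketch below uses invented streets and weights and is only one of several possible multicriteria treatments, not the paper's exact algorithm:

```python
# Each street carries a bicriteria cost: (total length, length outside
# recommended tourist areas), both in metres. All values are illustrative.
streets = {
    ("A", "B"): (100.0, 100.0),   # dull street: entirely outside tourist areas
    ("A", "C"): (150.0, 0.0),     # scenic street: entirely inside
    ("B", "D"): (100.0, 100.0),
    ("C", "D"): (130.0, 0.0),
}

def route_cost(route, w_interest):
    """Weighted-sum scalarization of (length, outside-length) for one route.

    w_interest in [0, 1]: 0 means the user cares only about distance;
    larger values penalize walking outside the recommended areas.
    """
    length = sum(streets[edge][0] for edge in route)
    outside = sum(streets[edge][1] for edge in route)
    return length + w_interest * outside

short = [("A", "B"), ("B", "D")]    # 200 m, all outside tourist areas
scenic = [("A", "C"), ("C", "D")]   # 280 m, all inside
print(route_cost(short, 0.0), route_cost(scenic, 0.0))   # 200.0 280.0
print(route_cost(short, 1.0), route_cost(scenic, 1.0))   # 400.0 280.0
```

Note that the preferred route flips as the weight changes; a weighted sum of this kind can only recover supported Pareto-optimal routes, and interactively adjusting the weight lets the user move along these tradeoffs.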
6 Evaluation
The evaluation of the proposed system is performed over a street map of the city of
Málaga (Spain), in particular the area around the historic downtown.
392 I. Ayala et al.
Fig. 1. Recommended tourist areas provided by the City Council (left) and screenshot of the
initial solutions for problem #1 (right)
A set of 20 route planning problems was generated choosing random origin and
destination points in the recommended tourist areas (which include many tourist
objects like monuments, museums, restaurants and hotels, and the harbour where
many cruise passengers start their short visits to the city). In the first place, we ana-
lyze the significance of the proposed multiobjective approach in this domain. Fig. 2
displays the length of the two extreme routes calculated for the problem set, i.e. the
one that minimizes distance (a), and the one that minimizes walking distance outside
recommended areas (b). Problems are ordered in the horizontal axis by increasing
value of alternative (a), which is depicted in the vertical axis. Each alternative also
displays the portion of its length inside and outside recommended tourist areas. In
some cases, like problems 3 and 2, there is not much difference between the extreme
alternatives, since either there is no interesting route joining origin and destination,
or the route lies completely along interesting streets, respectively. However, in most other cases the
tourist experience can change a lot depending on the criteria used to generate the path.
In some remarkable cases, such as problems 5 and 18, a small increase in overall
distance can radically change the tourist experience, while in others there is an inter-
esting tradeoff between overall length and tourist interest (e.g. #1 in Fig. 1 right).
These results broadly justify the use of multiobjective analysis in this domain.
Secondly, we analyze the quality of the interactive approach. Fig. 3
displays the different tradeoffs offered by the system in a sample problem instance
(#10). In general, the number of alternatives averaged 4.5 with a maximum of 7, and
any can be reached with at most 2 or 3 interactions. Results given in Fig. 2 and Fig. 3
are independent of the algorithm (A* or Dijkstra) used to calculate the route.
1 http://www.malagaturismo.com/jsp/malagapractica/folletos.jsp?id_idioma=1
Fig. 2. Length (in meters) of extreme routes (a and b) for 20 different problems displaying
portion inside (light gray) and outside tourist areas (dark gray)
Fig. 3. Supported solutions for problem 10 illustrating the tradeoff between overall length and
portion of the route inside recommended tourist areas (light gray)
Table 1. Times for computing Dijkstra and A* (in milliseconds) using different devices

                    Dijkstra                A*
                Max   Mean   Min       Max   Mean   Min
HTC Desire      911    281    31       886    175    24
Galaxy Nexus    579    205    36       511    133    12
7 Conclusions
This paper presents a multiobjective route planner for urban areas that runs in mobile
devices and incorporates tourist contextual information. The system allows the user to
select pedestrian routes according to two criteria: minimizing overall distance and
maximizing tourist interest, obtained from the current query’s context. The user can
accept one of the proposed routes or search for new intermediate solutions through a
simple interactive interface. Our evaluation of the approach in a sample scenario re-
veals that: (a) significant differences appear in the consideration of different planning
objectives, (b) a significant range of possibilities appears for each problem, all of
which can be easily generated after at most two or three interactions, and (c) the routes can
be feasibly computed locally on current mobile devices. Future work includes the
evaluation of the approach on different contexts.
References
[1] Ricci, F.: Mobile recommender systems. J. of IT & Tourism 12(3), 205–231 (2011)
[2] Lamsfus, C., Grün, C., Alzua-Sorzabal, A., Werthner, H.: Context-based Matchmaking to
Enhance Tourists’ Experiences. Upgrade XI(2), 14–21 (2010)
[3] Martin-Serrano, D., Hervas, R., Bravo, J.: Telemaco: Context-aware System for Tourism
Guiding based on Web 3.0. In: Workshop on Contextual Computing and AmI in Tourism,
UCAMI 2011 (2011)
[4] Maruyama, A., Shibata, N., Murata, Y., Yasumoto, K., Ito, M.: P-tour: A personal naviga-
tion system for tourism. In: Proc. of 11th World Congress on ITS, pp. 18–21 (2004)
[5] Sun, Y., Lee, L.: Agent-Based Personalized Tourist Route Advice System (2004)
[6] Zhang, J., Kawasaki, H., Kawai, Y.: A tourist route search system based on web infor-
mation and the visibility of scenic sights. In: ISUC 2008, pp. 154–161 (2008)
[7] Schering, A.-C., Dueffer, M., Finger, A., Bruder, I.: A mobile tourist assistance and rec-
ommendation system based on complex networks. In: Proceedings of CNIKM, pp. 81–
84 (2009)
[8] ten Hagen, K., Modsching, M., Kramer, R.: A location aware mobile tourist guide select-
ing and interpreting sights and services by context matching. In: MobiQuitous, pp. 293–
304 (2005)
[9] Ayala, I., Mandow, L., Amor, M., Fuentes, L.: Multiobjective Tourist Urban Route Plan-
ning with Mobile Devices. In: Workshop on Contextual Computing and AmI in Tourism,
UCAMI 2011 (2011)
[10] Chankong, V., Haimes, Y.: Multiobjective Decision Making: Theory and Methodology.
North-Holland (1983)
[11] Delling, D., Sanders, P., Schultes, D., Wagner, D.: Engineering Route Planning Algo-
rithms. In: Lerner, J., Wagner, D., Zweig, K.A. (eds.) Algorithmics. LNCS, vol. 5515, pp.
117–139. Springer, Heidelberg (2009)
[12] Mandow, L., Pérez de la Cruz, J.L.: Multicriteria heuristic search. European Journal of
Operational Research 150, 253–280 (2003)
[13] Delling, D., Wagner, D.: Pareto Paths with SHARC. In: Vahrenhold, J. (ed.) SEA 2009.
LNCS, vol. 5526, pp. 125–136. Springer, Heidelberg (2009)
[14] Machuca, E., Mandow, L.: Multiobjective heuristic search in road maps. Expert Systems
with Applications 39(7), 6435–6445 (2012)
Cardiac Monitoring of Marathon Runners
Using Disruption-Tolerant Wireless Sensors
1 Introduction
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 395–402, 2012.
c Springer-Verlag Berlin Heidelberg 2012
396 D. Benferhat, F. Guidec, and P. Quinton
to the ECG sensor) that can serve as a relay between the sensor and base sta-
tions. With this configuration the ECG data stream produced by an ECG sensor
is transmitted directly and continuously to the associated smartphone through
a Bluetooth RFCOMM link. The smartphone processes this data stream, and
forwards the resulting data bundles whenever possible to roadside base stations
using Wi-Fi wireless links.
Current biomedical applications usually either perform data recording for
deferred analysis, or they depend on continuous connectivity for real-time trans-
mission and processing. For example, the MobiHealth and HYGEIAnet projects
assume continuous network connectivity, as they aim at developing new
healthcare mobile services using 2.5/3G technology for physiological data collection [2].
Data recording for deferred analysis is notably used in [4,5], which rely on wear-
able recording sensors for health monitoring during sports activities. To the best
of our knowledge the utilization of disruption-tolerant techniques to monitor
athletes in outdoor conditions has not been investigated much so far, although
disruption-tolerant solutions for non-biomedical sensor-based applications have
already been proposed in the literature [3,6].
The remainder of this paper is organized as follows. In Section 2 we estimate
the number of base stations that would be required to cover the full length
of a marathon route, and show that a sparse deployment of base stations can be
sufficient to cover this route satisfactorily. The SHIMMER sensor we use in this
project for ECG data acquisition is described in Section 3. Section 4 presents
the main features of the transmission chain we designed in order to support the
disruption-tolerant transmission of ECG data between runners and a monitoring
center. In Section 5 we report on a field trial we conducted on our university
campus in order to validate this work. Section 6 concludes this paper.
The acquisition of ECG data on each SHIMMER sensor is performed on two 12-
bit channels (RA-LL and LA-LL leads), with a sampling frequency that can be
adjusted as needed. The data stream hence produced is transmitted on-the-fly
to the smartphone through a Bluetooth RFCOMM link.
Each SHIMMER sensor must be paired with a specific smartphone, and two
paired devices must of course be carried by the same marathon runner. The
Java application we designed for Android smartphones allows a user to locate
nearby SHIMMER sensors, and to pair the smartphone with one or several
of these sensors, using secured pairing if desired. The possibility for a single
smartphone to collect data from several sensors is a provision for future work:
several sensors may thus be attached to a single athlete, so different kinds of
data can be collected simultaneously.
Once a smartphone is paired with a sensor, an RFCOMM link is established
between them. Through this link the smartphone can control the sensor, and
send simple commands in order to adjust the sampling frequency or resolution,
to start or stop the data acquisition, etc. When data acquisition is enabled
on a sensor, a continuous data stream is sent to the smartphone through the
RFCOMM link.
The code we designed for both SHIMMER sensors and Android smartphones
can tolerate transient disruptions in RFCOMM links. For example, if paired sen-
sor and smartphone get disconnected for a while, they strive to re-establish the
connection, and data transmission (if enabled) resumes as soon as the connection
is re-established.
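The reconnect-and-resume behaviour described above can be sketched as a retry loop with bounded backoff. This is an illustrative Python sketch only; the function name and the backoff policy are our own assumptions, not taken from the paper's sensor/smartphone code:

```python
import time

def maintain_link(connect, initial_backoff=1.0, max_backoff=30.0):
    """Yield a (re)connected channel, retrying with backoff after drops.

    connect: callable returning a connected channel, or raising OSError
    while the peer is unreachable. Each next() on the generator blocks
    until a link is (re)established; the real implementation would also
    resume the pending data transfer once reconnected.
    """
    backoff = initial_backoff
    while True:
        try:
            channel = connect()
        except OSError:
            time.sleep(backoff)                   # wait before retrying
            backoff = min(backoff * 2, max_backoff)
            continue
        backoff = initial_backoff                 # reset after a success
        yield channel                             # caller streams until drop
```

For example, a consumer would loop over `maintain_link(open_rfcomm)` and stream data over each yielded channel until it fails, at which point the loop transparently re-establishes the link.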
The data stream received by the smartphone is packetized in small bundles,
which are then stored on the smartphone's SD card, awaiting transmission
to the monitoring center. Each bundle consists of a header and a payload. The
header includes an identifier of the source sensor and a timestamp. The payload
is simply a byte array that contains a sequence of data bytes received
from the sensor. The size of this byte array depends on the data acquisition
frequency and resolution on the sensor, as well as on the period set for data
bundling. For example, data acquisition on two 12-bit channels with 200 Hz
sampling produces a continuous data stream at 4.8 kbps. Assuming a bundle is
produced every 20 seconds on the smartphone, each bundle contains a payload
of 12 kB.
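The sizing arithmetic above can be reproduced directly (parameters as stated in the text):

```python
def bundle_payload_bytes(channels=2, bits_per_sample=12, fs_hz=200, period_s=20):
    """Payload size of one data bundle, given the acquisition parameters."""
    bits_per_second = channels * bits_per_sample * fs_hz   # 4800 bps = 4.8 kbps
    return bits_per_second * period_s // 8                 # bits -> bytes

print(bundle_payload_bytes())  # 12000 bytes, i.e. 12 kB per 20 s bundle
```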
card and that have not been uploaded to the monitoring center yet. A graphical
application running in the monitoring center can thus display the latest cardiac
activity of a runner, while allowing an operator to rewind the ECG stream in
order to display past events if necessary.
5 Experimental Results
Before trying to capture the ECG stream of runners during a real marathon,
we decided to run field trials at a much smaller scale in order to validate our
approach. An experiment was thus conducted on our university campus. Three
volunteers were equipped with ECG-enabled SHIMMER sensors and HTC Wild-
fire S smartphones, and two base stations (BS1 and BS2) were placed about 1 km
apart along the running route. These base stations were standard Wi-Fi access
points. They were both placed on a window-ledge, and connected to the campus
LAN. The runners had to run around the campus, passing twice close to each
base station.
During the race our prime motivation was to observe if the data bundles
produced continuously on each runner could actually be transmitted when the
runner’s smartphone established radio contact with one or another base station.
Figure 2 shows the timeline of transmissions between the smartphones carried
by the three runners (S1 to S3) and the two base stations. The rectangles in
dotted lines depict a radio contact between a smartphone and a base station.
The grey background shows the actual running time of a runner during the trial.
Let us examine the transmission timeline for S1. This smartphone was in-
stalled together with a SHIMMER sensor on a runner around 10:58. Both de-
vices were activated immediately, so S1 started collecting bundles of data from
that time on. Once the three runners were ready to go, they walked together to
the start line. Since BS1 was located near that line, S1 established a
connection with BS1, and started uploading to the remote server all the bundles
it had recorded since its activation. At 11:04, the three runners started running.
The connection between S1 and BS1 was therefore interrupted, after a 105-second
contact window during which 21 data bundles had been uploaded to the server.
Around 11:10 S1 established a connection with the second base station. This
new contact window lasted 40 seconds, and this time 19 bundles were uploaded
by S1 (17 of these bundles had been produced since S1 lost contact with BS1,
and 2 new bundles were produced while S1 was in contact with BS2). As the
runner carrying S1 continued running around the campus, S1 later established a
connection again with BS1 (around 11:19), and then with BS2 (around 11:28),
which was installed close to the finish-line.
Fig. 2. Timeline of data transmissions during the race for the three runners
During this trial no data bundle was lost or failed to reach the remote server.
These results therefore confirm that the protocol we implemented on SHIMMER
sensors and Android smartphones can indeed tolerate episodic connectivity to
roadside base stations, with no loss of data. Further experiments involving a
larger number of runners covering a longer distance should of course be con-
ducted, but considering the high bandwidth available with Wi-Fi transmissions
it can be expected that dozens of runners can be monitored simultaneously using
this approach.
6 Conclusion
In this paper we investigated the possibility of monitoring the cardiac activity of
runners during a marathon race. Unlike most current monitoring applications,
which imply either indoor real-time data streaming or ambulatory data record-
ing, the solution we propose makes it possible to get updates about each runner’s
health regularly during the race, using episodic transmissions between sensors
carried by runners and base stations deployed sparsely along the marathon route.
Disruption-tolerant transmission techniques are used to cope with the partial
coverage of the route.
A preliminary field trial has been conducted with volunteers running around
our university campus, using SHIMMER sensors for data acquisition and An-
droid smartphones for Wi-Fi transmission to roadside base stations. This trial
confirms that capturing and transmitting ECG data during a running race is
indeed feasible with such devices and technologies. In future work we plan to
increase both the number of monitored runners and the duration of the field
trials in order to check the scalability of this approach. Ultimately we would of
course like to demonstrate that dozens (or possibly hundreds) of runners can be
monitored during a real marathon race.
References
1. Benferhat, D., Guidec, F., Quinton, P.: Disruption-Tolerant Wireless Biomedical
Monitoring for Marathon Runners: a Feasibility Study. In: 1st International Work-
shop on Opportunistic and Delay/Disruption-Tolerant Networking (WODTN 2011),
pp. 1–5. IEEE Xplore, Brest (2011)
2. Konstantas, D., Herzog, R.: Continuous Monitoring of Vital Constants for Mobile
Users: the MobiHealth Approach. In: 25th Annual International Conference of the
IEEE EMBS, pp. 3728–3731 (2003)
3. Nayebi, A., Sarbazi-Azad, H., Karlsson, G.: Routing, Data Gathering, and Neighbor
Discovery in Delay-Tolerant Wireless Sensor Networks. In: 23rd IEEE International
Symposium on Parallel and Distributed Processing, IPDPS 2009, Rome, Italy, May
23-29, pp. 1–6. IEEE CS (2009)
4. Kugler, P., Schuldhaus, D., Jensen, U., Eskofier, B.: Mobile Recording System for
Sport Applications. In: Jiang, Y., Zhang, H. (eds.) Proceedings of the 8th Inter-
national Symposium on Computer Science in Sport (IACSS 2011), Liverpool, pp.
67–70 (2011)
5. Chelius, G., Braillon, C., Pasquier, M., Horvais, N., Pissard-Gibollet, R., Espiau,
B., Azevedo Coste, C.: A Wearable Sensor Network for Gait Analysis: A 6-Day
Experiment of Running Through the Desert. IEEE/ASME Transactions on Mecha-
tronics 16(5), 878–883 (2011)
6. Pasztor, B., Musolesi, M., Mascolo, C.: Opportunistic Mobile Sensor Data Collection
with SCAR. In: Proc. IEEE Int’l Conf. on Mobile Adhoc and Sensor Systems (MASS
2007), pp. 1–22. IEEE Press (2007)
7. Fall, K.: Messaging in Difficult Environments. Technical report, Intel Research
Berkeley (2004)
8. Burns, A., Greene, B., McGrath, M., O’Shea, T., Kuris, B., Ayer, S., Stroiescu,
F., Cionca, V.: SHIMMER: A Wireless Sensor Platform for Noninvasive Biomedical
Research. IEEE Sensors Journal (9), 1527–1534 (2010)
Application of Kernel Density Estimators
for Analysis of EEG Signals
1 Introduction
Analysis of EEG signals is currently a major aspect of biomedical engineering
and science in general. Interest arises from a multitude of applications both in
civilian and military markets. Frequently discussed applications include the steering
of vehicles for the physically disabled and controllers for video games, i.e.
brain-computer interfaces in general [1]. Currently EEG sets, such as the Emotiv EPOC (see
Fig. 1a), are commercially available at affordable prices allowing easy access
to research data. In order to construct any sort of reliable algorithm that uses
EEG signals, it is important to focus on the problems of processing bio-signals
such as the EEG [2,3,4]. In this research the main focus is on experimental data
obtained in a series of experiments on the classification of signals recorded
while test subjects thought about moving their hands. The experiments
concentrated on the analysis of the motor centres of the brain, so the signals from the F3
and F4 electrodes (placed according to the 10-20 system) were analysed (see Fig.
1b). During the experiments, subjects were asked via visual stimuli either to think
about moving their right hand or simply to relax. This paper presents only
404 J. Baranowski et al.
the analysis of results for experiments that relied on imagining movement of the
right hand only.
2 Signal Analysis
Signals obtained from the Emotiv EPOC are sampled at 128 Hz (a limitation of
the device specification). For spectrum analysis, the Welch method of PSD
estimation was used, with a Kaiser window of width 256 samples, an overlap of
half the window length, and parameter α = 20. The Welch method was chosen for
its good behaviour with noisy signals and its reduced spectral variance. As
presented in Fig. 2, the spectra of multiple signals from different subjects are
very similar. The spectrum is mostly flat, with one peak from the DC part of the
signal and a drop at higher frequencies caused by the operation of the hardware
anti-aliasing filter.
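For illustration, the Welch estimate as described can be re-implemented minimally in numpy (256-sample Kaiser window, half-window overlap; we assume the paper's α = 20 is the Kaiser shape parameter, which numpy calls beta):

```python
import numpy as np

def welch_psd(x, fs, nperseg=256, beta=20):
    """Minimal Welch PSD estimate with a Kaiser window and 50 % overlap.

    A sketch of the method described in the text, not the authors' code.
    """
    win = np.kaiser(nperseg, beta)
    step = nperseg // 2                       # half-window overlap
    scale = fs * np.sum(win ** 2)             # density normalization
    segments = []
    for start in range(0, len(x) - nperseg + 1, step):
        seg = x[start:start + nperseg] * win
        segments.append(np.abs(np.fft.rfft(seg)) ** 2 / scale)
    psd = np.mean(segments, axis=0)           # average the periodograms
    psd[1:-1] *= 2                            # fold negative frequencies
    f = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return f, psd

fs = 128                                      # Emotiv EPOC sampling rate (Hz)
t = np.arange(4 * fs) / fs
x = np.sin(2 * np.pi * 10 * t)                # 10 Hz test tone standing in for EEG
f, psd = welch_psd(x, fs)
print(f[np.argmax(psd)])                      # the peak lands on the 10 Hz bin
```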
Linear filtering of these signals proved difficult, resulting in substantial
transitional behaviour. This was the reason for the authors to pursue a different,
novel analysis method based on non-parametric statistics. The concept of sliding
Fig. 3. EEG signal, raw and filtered using traditional statistical methods, obtained
from the F3 electrode (a); densitogram of this signal (b)
\[
\hat{f}_h(x) \;=\; \frac{1}{n}\sum_{i=1}^{n} K_h(x - x_i)
\;=\; \frac{1}{nh}\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right),
\qquad
K(x) \;=\; \frac{1}{\sqrt{2\pi}}\, e^{-x^{2}/2}
\tag{1}
\]
Fig. 4. EEG signal, raw and filtered using traditional statistical methods, obtained
from the F4 electrode (a); densitogram of this signal (b)
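Eq. (1) can be implemented directly; the following is a minimal numpy sketch with an arbitrarily chosen bandwidth h (the sample values are invented for the sanity check):

```python
import numpy as np

def gaussian_kde(samples, h):
    """Kernel density estimator of Eq. (1) with the Gaussian kernel K."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)

    def f_hat(x):
        u = (x - samples) / h                          # (x - x_i) / h
        k = np.exp(-u ** 2 / 2) / np.sqrt(2 * np.pi)   # K((x - x_i) / h)
        return float(k.sum() / (n * h))                # (1/nh) * sum of K

    return f_hat

# Sanity check: the estimate is a proper density, so it integrates to ~1.
f_hat = gaussian_kde([0.0, 1.0, 2.0], h=0.5)
xs = np.linspace(-5.0, 7.0, 2001)
mass = sum(f_hat(x) for x in xs) * (xs[1] - xs[0])
print(round(mass, 3))  # 1.0
```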
3 Conclusions
In this paper a new approach for the filtering of EEG signals with statistical
methods was presented. The presented approach constitutes an alternative to
popular frequency-domain filtering methods; one of its fundamental merits is the
elimination of the initial transitional response that is common for such filters.
The initial results of the conducted research are promising, and further research
will be carried out in order to compare the efficiency of the proposed method
with popular approaches. The authors are currently focusing on comparing
previously obtained results on estimation [7,8] and signal processing [9] with
the results presented briefly in this paper. The main aim is to develop a tool
which allows better and, as a result, more efficient extraction of important
patterns in EEG signals.
References
1. Kalcher, J., Flotzinger, D., Gölly, S., Neuper, C., Pfurtscheller, G.: Graz brain-
computer interface (BCI) II. In: Zagler, W., Busby, G., Wagner, R. (eds.) Computers
for Handicapped Persons. LNCS. Springer, Heidelberg (1994)
2. Mustafa, M., Taib, M., Murat, Z., Sulaiman, N., Aris, S.: The analysis of EEG
spectrogram image for brainwave balancing application using ANN. In: Proc. 13th
UKSiM (April 2011)
3. Deepa, V.B.: A study on classification of EEG data using the filters. IJACSA (4)
(2011)
4. Šťastný, J., Sovka, P.: High-resolution movement EEG classification. Computational
Intelligence and Neuroscience, 54925 (2007)
5. Kulczycki, P.: Estymatory jądrowe w analizie systemowej. WNT - Publishing House
(2005)
6. Botev, Z.I., Grotowski, J.F., Kroese, D.P.: Kernel density estimation via diffusion.
Annals of Statistics (5) (2010)
7. Baranowski, J., Piątek, P.: Observer-based feedback for the magnetic levitation
system. Trans. of the Institute of Measurement and Control (2011)
8. Baranowski, J.: Tuning of strongly damped angular velocity observers. Przegląd
Elektrotechniczny (Electrical Review) (6) (2012)
9. Baranowski, J., Bauer, W., Oleszczyk, M.: Metody częstotliwościowe w analizie
zachowań pacjentów poz. In: Proc. XIV Symp. PPEEiM, Wisła (2011)
10. Pelc, M., Anthony, R., Kawala-Janik, A.: Context-aware autonomic systems in
real-time applications: Performance evaluation. In: 5th ICPCA 2010 (December
2010)
11. Kawala-Janik, A., Pelc, M., Anthony, R., Hawthorne, J., Ma, J.: Human-computer
interface based on novel filtering algorithm and the implementation of the Emotiv
EPOC headset. In: Proc. XXXV IC-SPETO (2012)
A System for Epileptic Seizure Focus Detection
Based on EEG Analysis
Maria Jose Santofimia Romero1 , Xavier del Toro1 , Jesús Barba1, Julio Dondo1 ,
Francisca Romero2 , Patricia Navas2 , Ana Rubio2 , and Juan Carlos López1
1
Computer Architecture and Network Group, School of Computing Science,
University of Castilla-La Mancha, Spain
2
Neuroscience Institute, Hospital Regional Universitario Carlos Haya,
Malaga, Spain
{MariaJose.Santofimia,Xavier.delToro,Jesus.Barba,Julio.Dondo,
JuanCarlos.Lopez}@uclm.es
1 Introduction
408 M.J. Santofimia Romero et al.
Moreover, the detection of an epileptic seizure brings to light a second
challenge: the detection of the focus, or epileptogenic area. This aspect is
essential to determine whether an epileptic patient can undergo surgery to
eradicate the disease [10].
The majority of the works found in the literature are targeted at identifying
epileptic seizures. In order to do so, different signal processing approaches are im-
plemented in combination with classification mechanisms. Having analyzed the
requirements and challenges faced by experts in this field, this work proposes a
two-stage system: a primary artifact-filtering stage is followed by a wave
classification of the EEG activity. To this end, a machine learning algorithm
known as Bag of Words is used to train a Support Vector Machine (SVM) classifier.
The training and testing data, from anonymous patients, have been provided by
the University's Regional Hospital Carlos Haya. The provided dataset consists of
EEGs from epileptic and non-epileptic patients. Those not showing any epileptic
activity, however, present artifacts, which could be erroneously interpreted
as abnormal or epileptic activity.
The rest of the paper is organized as follows. Section 2 presents some relevant
background information to better understand the contributions of this work.
Section 3 describes the overall system, organized in two stages: training and
testing. Section 4 discusses how artifact detection could be improved by combining
video sequences with the recordings of the EEG equipment. Section 5 highlights
some of the most important conclusions drawn from the proposed work.
The main contribution of this work lies in the proposed classifier and automatic
training system. Inspired by the work presented in [11], aimed at recognizing
human actions from video sequences, this paper proposes a similar methodology
applied to EEG analysis and epileptiform wave recognition.
A review of the state of the art in automatic EEG analysis shows that most of
the approaches found in the literature propose the use of different classification
systems. Basically, they all follow the same methodology. First, each channel
of the EEG is split into time windows, which are processed and characterized in
order to be computed. Second, the system is trained to identify seizure and normal
states, discarding those disturbances that have not been caused by brain activity
but, on the contrary, result from movements of the human body (e.g., eye
blinking). Finally, as a result of the training phase, a model for each state is computed
and provided to a classifier. The classifier is then responsible for classifying
each time window into one of the two or three considered states (if artifacts and
normal waves are considered as separate states).
Despite the fact that the majority of the studies in this field are targeted at
the classification stage, few have been devoted to the automation of the training
stage. As mentioned in the previous section, most of the works found in the
literature either overlook the training phase or undertake it manually. This
imposes a major drawback for deploying those systems in real environments.
The work in [12] claims that non-patient-specific training yields poor results
in detecting seizure onsets due to the high variability of the waveforms. Recent
studies address the development of patient-specific classifiers, such as [13].
This work, however, is mainly concerned with how to distinguish artifacts from
epileptic activity, independently of patient-dependent characteristics, in an
autonomous and automatic fashion. Only when epileptic activity has been detected
does the proposed system attempt to detect the epileptogenic area, thereby
helping doctors to diagnose those epilepsy types that are suitable for
surgical treatment.
To this end, this paper proposes an approach that has already been validated in the fields of computer vision and natural language processing. A machine learning algorithm known as Bag of Words (BoW) is in charge of computing the models for the different events to be identified. The output models are then provided to a Support Vector Machine (SVM) classifier, which ultimately performs the identification.
As already mentioned, the BoW algorithm was originally proposed for natural language processing in [14]. Since then, the approach has gained importance in computer vision and, more recently, in video action recognition [11]. This work proposes the use of BoW in EEG analysis for the detection of epileptiform waves and of the epileptogenic focus.
Following the methodology described for BoW in natural language and computer vision, the first step consists in extracting the most relevant features that characterize an EEG signal. These features are then quantified so that they can be handled computationally: the most appropriate means to do so is to compile them into feature vectors and then cluster them to compute the model. The clustering criterion is based on the different events provided to the system. Some works adopt a binary approach in which only normal and seizure events are considered; this work considers three different types of events: normal, artifact, and seizure.
In BoW terminology, the clustering phase is known as the code book generation phase, since the technique was originally targeted at text classification. This study therefore proposes a three-word language, in which each word corresponds to one of the three considered events.
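The code book generation just described can be sketched with a minimal k-means clustering of the feature vectors (a toy NumPy sketch; a real system would use a library implementation, and the synthetic data merely stand in for per-window feature vectors):

```python
import numpy as np

def build_codebook(features, k=3, iters=20, seed=0):
    """Cluster feature vectors into k 'words' (a minimal k-means sketch).

    In the paper's terms the centroids form the code book; k=3 mirrors the
    three events considered (normal, artifact, seizure).
    """
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # Assign each feature vector to its nearest centroid
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return centroids, labels

# Three well-separated synthetic clusters stand in for event feature vectors
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(m, 0.1, size=(30, 4)) for m in (0.0, 5.0, 10.0)])
codebook, words = build_codebook(data, k=3)
print(codebook.shape)  # (3, 4)
```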
3 System Description
This work has a threefold aim: first, filtering out artifacts from the EEG dataset; second, analyzing the EEG in search of epileptiform waves; and finally, once epileptic activity has been detected, identifying the epileptogenic area. Figure 1 depicts the different phases involved in the training and testing stages. The filtering and feature extraction phases are common to both stages. The feature extraction phase is required as a means to transform a continuous signal into a discrete set of values fed to the computational system proposed here.
In order to fulfill these three main aims, this paper presents a two-stage process. First, a machine learning algorithm is trained to identify artifacts,
A System for Epileptic Seizure Focus Detection Based on EEG Analysis 411
normal waveforms, and seizures. The training process then outputs a set of models that are provided to an SVM classifier, which performs the classification. Finally, whenever epileptic activity is detected in the EEG analysis, the epileptogenic area is identified.
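The classification stage can be illustrated with a didactic linear SVM trained by sub-gradient descent on the hinge loss (a stand-in sketch: in practice an off-the-shelf SVM implementation would be used, and the toy data replace the BoW models):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Minimal linear SVM via sub-gradient descent on the hinge loss.

    A didactic stand-in for the SVM classifier used in the paper; labels
    y must be +/-1.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            margin = y[i] * (X[i] @ w + b)
            if margin < 1:  # point inside margin: hinge-loss gradient step
                w += lr * (y[i] * X[i] - lam * w)
                b += lr * y[i]
            else:           # only the regularizer contributes
                w -= lr * lam * w
    return w, b

# Toy two-class data standing in for 'seizure' vs 'normal' window models
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))])
y = np.array([-1] * 40 + [1] * 40)
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
print((pred == y).mean())  # expect high accuracy on this separable toy set
```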
The dataset employed in this paper, provided by the Carlos Haya Regional University Hospital, consists of EEG recordings acquired with XLTEK Neuroworks equipment.
various EEG channels (y1, y2,..., yi). In this work the following signal features
are calculated: average value, variance, maximum value, minimum value, RMS
value, peak frequency, and time derivative and integral.
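The listed features can be computed per window as below; since the paper gives no exact formulas, the derivative and integral are interpreted here as the mean absolute first difference and a rectangle-rule integral, which is only one plausible reading:

```python
import numpy as np

def window_features(w, fs):
    """Compute the signal features listed above for one time window w."""
    spectrum = np.abs(np.fft.rfft(w))
    freqs = np.fft.rfftfreq(len(w), d=1.0 / fs)
    return {
        "mean": w.mean(),
        "variance": w.var(),
        "max": w.max(),
        "min": w.min(),
        "rms": np.sqrt(np.mean(w ** 2)),
        "peak_freq": freqs[spectrum.argmax()],
        "derivative": np.abs(np.diff(w)).mean(),   # mean |first difference|
        "integral": w.sum() / fs,                  # rectangle-rule integral
    }

# A pure 10 Hz sine sampled at 256 Hz for one second
fs = 256
t = np.arange(fs) / fs
feats = window_features(np.sin(2 * np.pi * 10 * t), fs)
print(round(feats["peak_freq"]))  # 10
```

Stacking one such vector per window and per channel yields the matrix of features F described next.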
A matrix of features F is obtained by computing each feature for each channel in a given dataset. The resulting matrix of features is then provided to the BoW algorithm or to the SVM classifier, depending on whether the training or the testing phase is being run. It should be noted that the computational cost of this procedure depends on the computational complexity of the selected features, the number of channels considered, and the size of the time windows. Moreover, given the repetitive nature of the computations to be performed, solutions based on parallel architectures can considerably reduce the latency of the procedure.
1 http://tfinley.net/software/svmpython1/
Identifying artifacts is not always a simple task. There are certain situations in which not even experts can reliably discern between artifacts and seizures if only EEG data are considered. However, if the EEG data analysis is enhanced with annotations of whenever patients blink, activate their facial muscles, or move, artifacts can be identified more easily.
In fact, EEG equipment normally incorporates a video recording device to support doctors in their analysis of the EEG data. The same process could be implemented automatically, so that whenever the classifier identifies a potential artifact, the video images recorded at that exact instant can be analyzed to support the recognition system's decision.
However, video analysis is a rather sensitive task, prone to significant variations in its results due to changes, for example, in the lighting of the room where the capture takes place or in the position of the camera (the calibration problem). These are open issues in the video analysis area, which at present can only be mitigated in a controlled environment such as clinical premises.
The OpenCV2 (Open Source Computer Vision) library can be used to develop the set of artifact recognition algorithms (ARA). OpenCV is free, multiplatform, and contains hundreds of functions that span several areas of artificial vision, such as face recognition, object recognition, and camera calibration.
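The core idea, checking the video frames around a suspect instant for patient movement, can be sketched even without the library as simple frame differencing (OpenCV offers far more robust primitives; the frames here are synthetic):

```python
import numpy as np

def motion_score(prev_frame, frame, thresh=25):
    """Fraction of pixels whose grey level changed by more than thresh.

    A high score around the instant of a suspected artifact supports the
    hypothesis that the patient moved (blink, muscle activity, ...).
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float((diff > thresh).mean())

rng = np.random.default_rng(3)
still = rng.integers(0, 255, size=(120, 160), dtype=np.uint8)
moved = still.copy()
moved[40:80, 50:110] = 255                # simulate a moving bright region
print(motion_score(still, still))         # 0.0
print(motion_score(still, moved) > 0.05)  # True
```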
5 Conclusions
References
1. Shoeb, A., Edwards, H., Connolly, J., Bourgeois, B., Treves, S.T., Guttag, J.:
Patient-specific seizure onset detection. Epilepsy and Behavior 5(4), 483–498 (2004)
2. Gotman, J.: Automatic recognition of epileptic seizures in the EEG. Electroen-
cephalography and Clinical Neurophysiology 54(5), 530–540 (1982)
3. Carney, P.R., Myers, S., Geyer, J.D.: Seizure prediction: Methods. Epilepsy and
Behavior 22(4), 94–101 (2011)
4. Winterhalder, M., Maiwald, T., Voss, H.U., Aschenbrenner-Scheibe, R.,
Schulze-Bonhage, A., Timmer, J.: Quantitative neuroscience, pp. 103–116. Kluwer
Academic Publishers, Norwell (2004)
5. Yuan, Q., Zhou, W., Liu, Y., Wang, J.: Epileptic seizure detection with linear and
nonlinear features. Epilepsy and Behavior 24(4), 415–421 (2012)
6. Runarsson, T.P., Sigurdsson, S.: On-line detection of patient specific neonatal
seizures using support vector machines and half-wave attribute histograms. In: In-
ternational Conference on Computational Intelligence for Modelling, Control and
Automation, vol. 2, pp. 673–677 (2005)
7. Thomas, E.M., Temko, A., Lightbody, G., Marnane, W.P., Boylan, G.B.: A Gaus-
sian mixture model based statistical classification system for neonatal seizure de-
tection. In: IEEE Workshop on Machine Learning for Signal Processing (2009)
8. Shoeb, A., Kharbouch, A., Soegaard, J., Schachter, S., Guttag, J.: A machine-
learning algorithm for detecting seizure termination in scalp EEG. Epilepsy and
Behavior 22(4), 36–43 (2011)
9. Harikumar, R., Balasubramani, M.: FPGA synthesis of soft decision tree (SDT) for
classification of epilepsy risk levels from fuzzy based classifier using EEG signals.
International Journal of Soft Computing and Engineering 1(4), 206–211 (2011)
10. Luders, H., Bongaman, W., Najm, I.M.: Textbook Of Epilepsy Surgery. Informa
Healthcare (2000)
11. Nebel, J.-C., Lewandowski, M., Thévenon, J., Martínez, F., Velastin, S.: Are Cur-
rent Monocular Computer Vision Systems for Human Action Recognition Suitable
for Visual Surveillance Applications? In: Bebis, G., Boyle, R., Parvin, B., Koracin,
D., Wang, S., Kyungnam, K., Benes, B., Moreland, K., Borst, C., DiVerdi, S.,
Yi-Jen, C., Ming, J. (eds.) ISVC 2011, Part II. LNCS, vol. 6939, pp. 290–299.
Springer, Heidelberg (2011)
12. Wilson, S.B., Scheuer, M.L., Emerson, R.G., Gabor, A.J.: Seizure detection: evalu-
ation of the Reveal algorithm. Clinical Neurophysiology 115(10), 2280–2291 (2004)
13. Shoeb, A.H., Guttag, J.V.: Application of machine learning to epileptic seizure
detection. In: ICML 2010, pp. 975–982 (2010)
14. Joachims, T.: Text Categorization with Support Vector Machines: Learning with
Many Relevant Features. In: Nédellec, C., Rouveirol, C. (eds.) ECML 1998. LNCS,
vol. 1398, pp. 137–142. Springer, Heidelberg (1998)
Innovative Health Services
Using Cloud Computing and Internet of Things
Diego Gachet Páez1, Fernando Aparicio1, Juan R. Ascanio2, and Alberto Beaterio3
1
Universidad Europea de Madrid, 28670 Villaviciosa de Odón, Spain
{gachet,fernando.aparicio}@uem.es
2
Encore Solutions. C /Albalá 5, 28037 Madrid, Spain
juan.ascanio@encore.es
3
Eygema. C /San Sebastián 3, 21004 Huelva, Spain
abeaterio@gmail.com
Abstract. Demographic and social changes are causing a gradual increase in the dependent population. The main concern of elderly people is their health, which determines dependence and is also the primary cause of suffering and self-rated ill health. Since the elderly have different health problems than the rest of the population, a deep change in national health policy is needed to adapt to population ageing. This paper describes the preliminary advances of "Virtual Cloud Carer" (VCC), a Spanish national R&D project whose primary purpose is the creation of new health services for dependent and chronically ill people, using technologies associated with the Internet of Things and cloud computing.
1 Introduction
Today, developed countries face great difficulties in providing effective health services and quality care in a context marked by population ageing. This general world trend, shown in Fig. 1, has dramatic effects on both public and private health systems, as well as on emergency medical services, mainly due to an increase in costs and a higher demand for more and better benefits for users, as well as for increased personal mobility.
This demographic change will lead to significant and interrelated modifications in the health care sector and in technologies promoting independence for the elderly. As representative data, approximately 60% of the European population (58% in Northern America) is made up of people aged 20 to 64 years, while the 65-and-over group accounts for 19% (16% in Northern America). Thus, there are 3-4 working employees for every pensioner. It is estimated that the 20-to-64 group will decrease to 55% and the over-65 group will increase to 28% by the year 2050, making the proportion 1 to 2 instead of 1 to 3-4. Spending on pensions, health and long-term care is expected to increase by 4-8% of GDP in the coming decades, with total expenditures tripling by 2050.
Current estimates claim there are 1,300,000 dependent persons in Spain, and public spending in 2010 was 5,500 million Euros for the care of 650,000 dependents.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 415–421, 2012.
© Springer-Verlag Berlin Heidelberg 2012
Fig. 1. Demographic change according to United Nations projections, http://esa.un.org/unpp (accessed 09/12/2010)
Although the budget has increased year by year since 2007, the current global economic crisis requires a rationalization of social and economic resources, all the more so given that the public health system has not yet reached the maximum number of dependents entitled to care.
On the other hand, recent decades have seen an undeniable increase in chronic diseases. Recent data from the European Union reveal the main chronic pathologies. For diabetes, according to the International Diabetes Federation (IDF), the global cost in Europe was approximately €68,300 million in 2007 and will grow to €80,900 million by 2025. Depending on the country, its prevalence, and the level of available treatments, the cost of diabetes ranges from 2.5% to 15% of total health expenditure. Cardiovascular diseases, including all diseases of the circulatory system, cost Europe €109,000 million in 2006 (10% of total health costs; 7% in Spain). The indirect costs include €41,000 million in lost productivity and €42,000 million for informal care, making a total of €192,000 million in 2006.
The main respiratory diseases (COPD, asthma, lung cancer, pneumonia and tuberculosis) are responsible for 20% of all deaths and generate a cost of €84,000 million in Europe. COPD affects 44 million people in Europe, with a prevalence of 5-10% among people over 40. According to the WHO (World Health Organization), by 2030 it will be the third leading cause of death and the first cause of health costs in Europe, given the profile of health expenses over time and by age group, and its considerable associated morbidity.
The "Virtual Cloud Carer" (VCC) R&D project attempts, from the domain of information and communications technologies, to approach innovation in the integral care of elderly people with chronic diseases. By integral care we understand here the provision of the medical assistance and social support needed to care properly for elderly people according to their health state.
The Virtual Cloud Carer project, through the use of information and communication technologies, is intended to cover a range of social and health objectives aimed at improving the quality of life of elderly people with chronic diseases.
The initial services of the platform face technical difficulties related to the application area. For example, an accessible Web browser must be multimodal and interoperable in order to take into account the needs of all members of the target group, which greatly complicates the solution given the diversity of users.
On the other hand, the design of a mobile device for collecting biosensor information must take into account the diversity of technologies and communication protocols involved (USB, IEEE, I2C, etc.). It is therefore necessary to develop a proprietary API that deals with this diversity and allows the data to be sent from the mobile device to the Internet.
Another important technical difficulty is developing a 3D recognition system to evaluate rehabilitation exercises without the intervention of medical personnel; in this case it is necessary to include information about how well the exercises are performed. In addition, the use of common computer applications through voice commands will allow the elderly to connect to the Internet for entertainment and to search for information about their health status. Speech output will be provided by a TTS (Text-to-Speech) module, which will be able to interpret instructions to utter a phrase, change the computer's volume, etc. This module will be capable of reading aloud the name of the icon the user is over, or of alerting the user to events or errors via voice messages.
For this subsystem the idea is to implement a graphical user interface (GUI) with behaviour similar to the Windows desktop, but with some innovative characteristics that meet the specific requirements of users aged 65+ (size of icons, colours, etc.), likewise including key aspects of advanced user interfaces. An important part of this subsystem will also be an adaptable, voice-commanded Web browser. Fig. 2 depicts a high-level architecture of the Virtual Cloud Carer components.
A subsystem for gesture recognition and movement detection will be created with the aim of assisting and improving the physical recovery of patients with movement disabilities caused by neurological, orthopaedic or rheumatoid problems. This subsystem will be based on interactive applications and games that monitor the patient's movement and engage them in performing the exercises recommended for recovery.
Many researchers have recently tried to achieve this using different devices and sensors to detect patients' movements, such as the Wiimote [8]; in our case we want to use the Microsoft Kinect [9], which tracks human motion without requiring any device attached to the body.
Regarding the mobile device, our approach is to build this part around a microcontroller suitable for collecting information from sensors, GPS, accelerometers, etc., as shown in Fig. 3. The device must be capable of transmitting data to the Internet through a 3G connection via an integrated modem, and of sending/receiving data via Bluetooth to/from the sensors. On the cloud side there will be a TCP/IP socket capable of receiving and transmitting basic information using a special, dedicated protocol.
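The special, dedicated protocol is not specified in the paper; purely as an illustration, a length-prefixed JSON framing (with hypothetical field names) for the sensor readings could look like this:

```python
import json
import struct

# Hypothetical framing: a 4-byte big-endian length prefix followed by a
# JSON payload. The field names are illustrative assumptions, not the
# project's actual wire format.

def encode_reading(sensor_id, value, timestamp):
    payload = json.dumps({"sensor": sensor_id, "value": value,
                          "ts": timestamp}).encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def decode_reading(frame):
    (length,) = struct.unpack(">I", frame[:4])
    return json.loads(frame[4:4 + length].decode("utf-8"))

frame = encode_reading("spo2", 97, 1354521600)
print(decode_reading(frame))
# {'sensor': 'spo2', 'value': 97, 'ts': 1354521600}
```

The length prefix lets the cloud-side socket split a TCP byte stream back into whole messages before parsing.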
On the cloud side it is also necessary to design and develop a back-office system for storing the sensor data, together with the logic for generating alarms based on threshold measures, integrated with a messaging system capable of sending information on a per-user-profile basis and through social networks.
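The threshold-based alarm logic can be sketched as follows; the sensor names and limits are assumptions for illustration, not values from the project:

```python
# Illustrative threshold table for the back-office alarm logic; the
# sensor names and limits below are assumed for the sketch.
THRESHOLDS = {"heart_rate": (50, 120), "spo2": (92, 100)}

def check_alarms(readings):
    """Return one alarm message per reading outside its configured range."""
    alarms = []
    for sensor, value in readings.items():
        low, high = THRESHOLDS.get(sensor, (float("-inf"), float("inf")))
        if not low <= value <= high:
            alarms.append(f"{sensor}={value} outside [{low}, {high}]")
    return alarms

print(check_alarms({"heart_rate": 135, "spo2": 95}))
# ['heart_rate=135 outside [50, 120]']
```

Each alarm would then be routed to caregivers through the messaging subsystem according to the user's profile.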
From a technological point of view there is a great variety of alternatives to decide between, for example among the components involved in the design of user interfaces for data presentation on mobile devices, such as HTML5. On the server and cloud side we will use an SOA architecture, for standardization and because it enables and facilitates the development of new applications and services that integrate seamlessly with existing modules without requiring expert knowledge; for the mobile device our intention is to use Android as the operating system.
In summary, the most important technological challenges of the project lie in the design and development of an intelligent mobile device capable of integrating a variety of biosensors (for example oximetry and glucose sensors, among others), including accelerometers and GPS for localization. On the cloud side, our efforts will be dedicated to designing and developing a platform for collecting information and processing it in order to generate alarms and directions for the caregivers or medical personnel, and to the exercises dedicated to computer-aided physical rehabilitation. The other subsystems mentioned above are also important, but at this stage of the project they are not a priority.
4 Expected Results
The goal of the VCC (Virtual Cloud Carer) project is the design and development of a technological platform allowing the elderly and persons with chronic diseases to increase their quality of life. In this paper we have focused on the preliminary ideas around the development of an intelligent mobile device for managing the information provided by sensors and the infrastructure on the cloud side; we have also presented some details of a health information service for the elderly integrated as part of the technological platform.
As mentioned above, from the point of view of development, the project's expected results are:
• A hardware interface device, adaptable to all seniors and people with disabilities, enabling interaction with a computer or television.
• A mobile device capable of collecting information from sensors and transmitting these data to the Internet.
• The development of services, including an adaptable Web browser, that allow elderly and disabled people to access the Internet.
• A subsystem for gesture recognition and movement detection for physical rehabilitation, in order to assist and augment the physical recovery of elderly people with movement disabilities.
• An analysis of business opportunities and business requirements (identifying their strengths and weaknesses) for the successful commercialization of the project results.
During the course of the Virtual Cloud Carer project, two case studies/scenarios will be implemented to demonstrate the functionality of the framework developed. One deals with rehabilitation at home, based on gesture recognition, Internet access through special input devices, and voice-commanded common computer applications; the other scenario will be developed and evaluated in a care centre with elderly and chronically ill patients using the mobile device and sensors. The scenarios will involve real end users to validate the technological advances.
Acknowledgments. The R+D+i project Virtual Cloud Carer described in this paper is partially funded by the Ministry of Industry, Tourism and Commerce under grant TSI-020100-2011-83.
References
1. Burdick, D.C., Kwon, S.: Gerotechnology: Research and Practice in Technology and Ag-
ing. Springer Publishing Company, New York (2004)
2. Wang, F., Docherty, L.S., Turner, K.J., Kolberg, M., Magill, E.H.: Service and policies for
care at home. In: Proc. 1st Int. Conf. on Pervasive Computing Technologies for Health-
care, pp. 7.1–7.10. IEEE Press (November 2006)
3. Augusto, J.C.: Towards personalization of services and an integrated service model for
smart homes applied to elderly. In: Proc. Int. Conf. on Smart Homes and Health Telemat-
ics, Sherbrooke, Canada, pp. 151–158 (July 2005)
4. Kranz, M., Schmidt, A., Rusu, R., Maldonado, A., Beetz, M., Hornler, B., Rigoll, G.:
Sensing technologies and the player-middleware for context-awareness in kitchen envi-
ronments. In: Proc. Networked Sensing Systems, pp. 179–186 (2007)
5. Michahelles, F.: How the Internet of Things will gain momentum: Empower the users. In:
International Conference of Impact on Ubiquitous RFID/USN Systems to Industry, Sun-
chon (October 2009) (invited paper)
6. WHO global report. Preventing chronic diseases: a vital investment. World Health Organi-
zation (2005)
7. Notaris, M., et al.: Preliminary experience with a new three-dimensional computer-based
model for the study and the analysis of skull base approaches. Child’s Nervous Sys-
tem 28(5), 621–626 (2010)
8. Wii-hab: Veterans Get More Than Fun With Wii Rehab. United States Department of Vet-
erans Affairs. U.S. Department of Veterans Affairs (2010)
9. Webb, J., Ashley, J.: Beginning Kinect Programming with the Microsoft Kinect SDK.
Apress (2012)
10. Jonquet, C., Shah, N., Musen, M.: The Open Biomedical Annotator. In: AMIA Summit on
Translational Bioinformatics, San Francisco, CA, pp. 56–60 (2009)
11. Bollacker, K., Evans, C., Paritosh, P., Sturge, T., Taylor, J.: Freebase: a collaboratively
created graph database for structuring human knowledge. In: Proc. ACM SIGMOD Inter-
national Conference on Management of Data, pp. 1247–1250. ACM, New York (2008)
Using Haptic and Neural Networks for Surface
and Mechanical Properties 3D Reconstruction
1 Introduction
Obtaining 3D models of objects from images is a research area that has developed considerably in recent years, and several high-quality algorithms have been proposed for that purpose [1]. In fact, there has been great progress in 3D object reconstruction (surface approximation) since the marching cubes algorithm [4], as in the works [2,3]. The object reconstruction problem is the focus of many researchers in fields such as computer vision, medical imaging, and CAD, among others. Specifically in the field of medicine, researchers are currently working on the reconstruction of different organs such as bones [5] or soft tissue [6]. In general, the medical community requires the reconstruction of organs for diagnosis, simulation, medical training, and surgical planning.
The use of haptic interfaces in medicine is an area of growing interest, especially in robot-assisted endoscopic surgery and surgical training [7]. Haptic hardware has become a commonly used device in surgical planning [10]. The use of haptic interfaces is intended to reduce surgical errors, ideally eliminating them, especially in knot tying. The development of augmented and virtual reality systems is also of growing interest, especially in surgical training; in such systems, the haptic interface has become a state-of-the-art technique.
Radial Basis Neural Networks (RBFNN) are a special type of neural network using radial basis functions (RBF) as activation functions. Such neural networks
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 422–429, 2012.
© Springer-Verlag Berlin Heidelberg 2012
are widely used in control, time series prediction, and function approximation. The General Regression Neural Network (GRNN) is a kind of normalized RBFNN in which there is a hidden unit centered at every training case. GRNN falls into the category of probabilistic neural networks. It needs only a fraction of the training samples required by a backpropagation neural network to approximate functions. The use of this kind of network is advantageous because of its ability to converge to the underlying function of the data with only a few training samples available. GRNN is therefore a very useful tool to perform predictions and to find the underlying function of a data set. A detailed explanation of the GRNN method can be found in [11].
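The normalized-RBF form of the GRNN described above admits a very short implementation; the sketch below (with an illustrative Gaussian kernel width) approximates sin(x) from a handful of samples:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.2):
    """GRNN prediction: a Gaussian kernel centered at every training case,
    with the output as the kernel-weighted average of the training targets
    (the normalized-RBF form described above).
    """
    # Squared distances between every query and every training point
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return (w @ y_train) / w.sum(axis=1)

# Approximate sin(x) on [0, pi] from only 12 samples
X = np.linspace(0, np.pi, 12)[:, None]
y = np.sin(X[:, 0])
pred = grnn_predict(X, y, np.array([[np.pi / 2]]))
print(float(pred[0]))  # close to 1.0
```

The same estimator is used later with three inputs (x, y, z) to interpolate stiffness over the touched surface.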
This article shows how to reconstruct the mechanical properties of a 3D object using a haptic interface with a GRNN; the approach could also be used to improve the visual reconstruction achieved with any vision system.
2 Haptics in Medicine
As stated in [8], in manual minimally invasive surgery (MIS), surgeons feel the interaction of the instrument with the patient via a long shaft, which eliminates tactile cues and masks force cues. Robot-assisted minimally invasive surgery (RMIS) holds great promise for improving the accuracy and dexterity of a surgeon while minimizing trauma to the patient. Nevertheless, clinical success with RMIS has been marginal, which may be due to the lack of haptic (force and tactile) feedback presented to the surgeon.
The goal of haptic interfaces in RMIS is to give surgeons the sensation that their own hands are in contact with the patient. Force feedback systems for RMIS typically measure or estimate the forces applied to the patient by the surgical instrument, and present the resolved forces to the hand via a force feedback device. There are commercially available force sensors, and some researchers have created specialized grippers that can attach to the jaws of existing instruments [9].
Our goal in this work is to use the haptic interface to approximate the mechanical properties of the organ being touched, as well as to obtain information about its surface. For this purpose, we create a virtual model of the object of interest and sense it by touching it at different points with the haptic device. As a result, we obtain a set of points with certain basic mechanical properties. A neural network (the GRNN) is then used to interpolate, in order to approximate the mechanical properties of the entire surface. A similar procedure is employed to obtain spatial information about the surface. The haptic interface used in this work is an Omni Phantom, from SensAble Technologies.
Fig. 1. Left: Cloud of points of the hand bones from the Visual Human Project. Right:
Smooth surface of the hand bones from the Visual Human Project.
Fig. 2. Left: Smooth surface of the skin of the head from the Visible Man. Right:
Two regions were defined on the surface. The brighter region corresponds to the region
with constant stiffness, damping and mass; the darker region corresponds to the region
where stiffness varies.
The National Library of Medicine runs a project called the Visible Human. It has already produced computed tomography, magnetic resonance imaging, and physical cross-sections of a human male cadaver. Surface connectivity and isosurface extraction techniques are used to create polygonal models of the skin, bone, muscle, and bowels. The goal of the project is the creation of complete, anatomically detailed, three-dimensional representations of normal male and female human bodies. Acquisition of transverse CT, MR and cryosection images of representative male and female cadavers has been completed: the male was sectioned at one-millimeter intervals, the female at one-third-of-a-millimeter intervals. Figure 1 shows an example of the point cloud and the surface corresponding to the bones of a hand. Figure 2 (left) shows an example of the smooth surface representing the skin of the head of the Visible Man (part of this skin surface will be used in the experiments, Fig. 2 (right)).
In order to determine the accuracy of the surface approximation and of the mechanical properties estimated by the neural network, we use as ground truth a 3D model based on the Visible Man. In addition to the point cloud and the mesh of this model, we define certain mechanical properties for its entire surface: basically, stiffness, damping, and mass properties for points on the surface.
The model used for the experimental tests was only a small part of the head, with around 10,000 points and 20,000 triangles in the surface mesh. From these points, we take approximately the upper 10% to simulate a deformation of the skin, by varying the stiffness property along that surface. The stiffness was defined as a ramp-shaped function, while damping and mass were kept constant. The rest of the points were assigned constant stiffness, damping, and mass properties. Figure 2 shows a visual representation of the two regions of the skin model: the region with a lower gray value corresponds to the region with a varying stiffness factor.
Fspring = kx − bv (1)
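Equation (1) maps directly to code; a minimal sketch of the reaction force computed at a contact point (parameter values purely illustrative):

```python
def spring_force(k, x, b, v):
    """Reaction force of Eq. (1): stiffness term minus damping term.

    k: stiffness, x: penetration depth, b: damping, v: penetration velocity.
    All values below are purely illustrative.
    """
    return k * x - b * v

# A stiffer region pushes back harder for the same penetration and velocity,
# which is what the user feels through the haptic device
print(spring_force(k=100.0, x=0.01, b=2.0, v=0.1))   # about 0.8
print(spring_force(k=300.0, x=0.01, b=2.0, v=0.1))   # about 2.8
```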
The second neural network has three inputs and helps us to reconstruct the so-called mechanical properties of the surface as a function of three variables, s = h(x, y, z).
To verify the accuracy of the experiment, we first touched a set of points entirely at random. The experiment was repeated with sets of points of different sizes, and the results are shown in Figures 3 to 5.
4.1 Discussion
As shown in Figure 3, using 50 random points both reconstructions are of poor quality and reproduce neither the body's surface nor its elastic properties. We then ran the same experiment increasing the number of points to 75; as seen in Figure 4, both adjustments improve significantly on the previous case, and both are rough approximations of the two surfaces. Finally, the experiment was done with 100 points and the results, as expected, improved to such a degree that they represent both the surface (at least in the area that was touched) and the stiffness function quite well (see Fig. 5).
Fig. 5. Surface and mechanical properties approximation using 100 points. a) Point
cloud of the original object and the function approximation of the surface given by the
neural network. b) Stiffness estimation given by the neural network.
The same experiment was repeated with more targeted sampling, i.e., exploiting the haptic interface, which allows us to feel whether there is any area on the body surface whose mechanical properties differ from the rest. By targeted sampling we mean the following: a person touches points around the body, which in our example has non-uniform stiffness; when the person feels a change through the haptic interface, he or she focuses on exploring that area in more detail. For this case, we distinguish two types of points: points around the body, which we call rounding points, and central points, where we virtually simulate a stiffness different from the rest of the surface. The results obtained with this set of points are shown in Figs. 6 and 7.
As before, the left image in each figure shows the point cloud of the original body with the surface adjusted by the GRNN, while the right part shows the stiffness function. In Fig. 6 the experiment was done with 60 points, 25 rounding points and 35 central points, and the adjusted function was satisfactory for both surface and stiffness, comparable even with the case of 100 points taken at random. In Fig. 7 the input to the neural network comprised 30 rounding points and 45 central points. As the reader can see, the results do not improve significantly, since they were already good.
428 E. Castillo-Muñiz, J. Rivera-Rovelo, and E. Bayro-Corrochano
The GRNN offers some advantages for 3D reconstruction: although the estimate converges to the regression surface as more and more samples are observed, it forms very reasonable regression surfaces from only a few samples, and the estimated surface is bounded by the minimum and maximum of the samples (which is why only a portion of the surface in Figures 3a to 7a is reconstructed). This leads us to conclude that the 3D reconstruction of objects (for example, biological organs) can be obtained with a haptic interface, a set of sample points, and a regression method (such as the neural network used here).
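As an illustration of this behavior, a minimal GRNN in the sense of Specht [11] can be sketched as a Gaussian-kernel weighted average of the training targets. The function names and the toy surface below are our own assumptions, not the authors' implementation:

```python
import math
import random

def grnn_predict(train_x, train_y, query, sigma=0.15):
    """General Regression Neural Network (Specht [11]): the prediction is a
    Gaussian-kernel weighted average of the training targets, so it always
    stays within [min(train_y), max(train_y)] -- which is why only the
    sampled portion of the surface is reconstructed."""
    weights = [math.exp(-sum((q - t) ** 2 for q, t in zip(query, x))
                        / (2.0 * sigma ** 2))
               for x in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Toy use: estimate a surface height z = f(x, y) from 100 "touched" points.
random.seed(0)
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(100)]
zs = [math.sin(math.pi * x) * math.cos(math.pi * y) for x, y in pts]
z_hat = grnn_predict(pts, zs, (0.5, 0.0))  # true value is 1.0
```

Even with few samples the weighted average yields a smooth surface; the bandwidth sigma plays the role of the GRNN's smoothing parameter.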
5 Conclusion
In this work it was shown that an approach combining the GRNN and a haptic interface can achieve a 3D reconstruction of objects, approximating not only the spatial properties (the surface) but also its basic elasticity properties. Experiments showed that this approach is promising and can be used for medical applications. Despite the large amount of work on 3D reconstruction using visual approaches, the problem is not solved, because the algorithms that can run in real time are not yet accurate enough. As noted, our approach could be used to improve visual reconstruction by merging haptic and tactile cues. It was also shown that reconstructing objects from a mechanical point of view is possible, and that a haptic interface can improve the performance of reconstruction algorithms. However, before our method can be used in a clinical system, further work is needed to include more (mechanical) properties in the 3D model of the surface.
References
1. Seitz, S.M., Curless, B., Diebel, J., Scharstein, D., Szeliski, R.: A Comparison and Evaluation of Multi-View Stereo Reconstruction Algorithms. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 519–528 (2006)
2. Kazhdan, M., Bolitho, M., Hoppe, H.: Poisson Surface Reconstruction. In: Euro-
graphics Symposium on Geometry Processing (2006)
3. Surazhsky, V., Surazhsky, T., Kirsanov, D., Gortler, S., Hoppe, H.: Fast Exact and
Approximate Geodesics on Meshes. ACM Trans. on Graphics (SIGGRAPH) 24(3)
(2005)
4. Lorensen, W.E., Cline, H.E.: Marching Cubes: A High Resolution 3D Surface Con-
struction Algorithm. Computer Graphics 21(3), 163–169 (1987)
5. Rajamani, K.T., Styner, M.A., Talib, H., Zheng, G., Nolte, L.P., González, M.A.:
Statistical deformable bone models for robust 3D surface extrapolation from sparse
data. Medical Image Analysis 11(2), 99–109 (2007)
6. Fenga, J., Horace, H.S.: A multi-resolution statistical deformable model (MISTO)
for soft-tissue organ reconstruction. Pattern Recognition 42, 1543–1558 (2009)
7. Van der Meijden, O.A., Schijven, M.P.: The value of haptic feedback in conventional and robot-assisted minimally invasive surgery and virtual reality training: a current review. Surgical Endoscopy 23(6) (2009)
8. Okamura, A.M.: Haptic Feedback in Robot-Assisted Minimally Invasive Surgery.
Curr. Opin. Urol. 19(1), 102–107 (2009)
9. Dargahi, J., Sedaghati, R., Singh, H., Najarian, S.: Modeling and testing of an
endoscopic piezoelectric-based tactile sensor. Mechatronics 17(8), 462–467 (2007)
10. Pekkan, K., et al.: Patient-specific surgical planning and hemodynamic compu-
tational fluid dynamics optimization through free-form haptic anatomy editing
tool (SURGEM). Medical and Biological Engineering and Computing 46(11),
1139–1152 (2008)
11. Specht, D.F.: A General Regression Neural Network. IEEE Transactions on Neural
Networks 2(6), 568–576 (1991)
An Integrated Environment to Aid Knowledge Exchange
and Collaboration Using Mobile Devices in a Healthcare
Context
1 Introduction
With the advance of wireless technologies, the way we communicate with the world changes every day. Mobile devices such as cell phones, netbooks and, more recently, tablets allow us to be permanently connected with other people. This need for interaction makes people exchange knowledge constantly. Moreover, many human activities have migrated from personal computers to mobile devices [1].
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 430–437, 2012.
© Springer-Verlag Berlin Heidelberg 2012
People chat on their phones wherever they are, share photos of interesting places between smartphones, and send text messages. However, once these interactions are over, much information is lost because it is never stored.
This revolution in wireless technology provides new perspectives on communication and social interaction. Several aspects of this new kind of interaction are worth noting. First, time and space are no longer a constraint: people can work or entertain themselves anywhere and at any time, needing only an Internet connection. Second, human interactions occur more easily than in the past, although they may be shorter: one may search for, find, and interact with someone for some activity, but these interactions are weak and will most likely be lost as soon as they end.
These points relate to a new phase in life called 'cyberculture' [2]. Cyberculture envelops people and objects in an immense, connected environment. Along with the development of mobile technology, this type of culture and its spaces usher in a new era, marked by the 'collective and distributed', the collective being the partial and temporal aggregation of many items. This highly connected scenario is changing the way people interact with spaces, and the world takes on new dimensions [3].
Working with the hypothesis that the misuse of information, or even its absence, is a problem that damages several scenarios, we study alternatives that bypass or improve the situation. More specifically, in healthcare, lack of information may delay people in their search for medical assistance or even lead them to act contrary to the indicated treatment, which increases the risk to the patient's life and delays recovery.
Introducing new technologies together with simple ideas can have a large impact: a quick web search of symptoms gives an array of possible illnesses, and storing patient records in a database allows quick access anywhere using a portable device. These are simple ideas that ease the life and work of patients and medical staff alike.
In this work, we present a proposal on how the proactive dissemination of knowledge and the ability to find someone in a closed environment may be used together to enhance the healthcare environment, as both ideas aim to support the work of the people involved.
Following these concepts, this work is organized as follows: the next chapter gives a brief description of the eHealth scenario. Considering that this integrated approach focuses on the use of mobile devices and aims to assist knowledge exchange and collaboration among those involved in a healthcare scenario, Chapter 3 presents the module responsible for the functionalities of the indoor location system, EDIPS [4]. Chapter 4 describes MEK [5], the module responsible for the knowledge exchange functionalities. The proposed integration of the two ideas and its contribution to the eHealth scenario are shown in Chapter 5. A conclusion on this research is provided in Chapter 6.
432 D. da S. Souza et al.
2 eHealth
The technological progress of areas such as Computing and Electronics has benefited several sectors of society, among them the medical domain. New terms appeared in this field to designate its integration with technology, such as 'eHealth'.
The concept of eHealth admits different meanings [6], such as representing the use of the Internet to access any kind of information related to healthcare, as well as services and applications [7]. In general, it means everything that relates Computing to Medicine [8], encompassing ICTs (Information and Communication Technologies) as used in healthcare for administrative, clinical, and research purposes [9].
Nowadays, with the popularization of mobile devices and wireless technology, new applications have appeared in this field, allowing access to information anywhere and at any time. For example, it is now possible for a patient to make an appointment with a doctor without either being in the presence of the other [10].
With technological support, a great deal of information about different themes is available in different scenarios. In healthcare, in a place like a hospital, information on diseases, medical specialists, patient experiences with a disease, treatments, and other topics is dispersed. In other words, the information exists, but in most cases it is misused because it is restricted to one person or one group.
The correct use of information, and easy access to it, contributes positively to the daily routines of healthcare, allowing knowledge exchange between people and the quick dissemination of information.
Several computer applications in this scenario use these technologies to improve day-to-day healthcare, such as: (1) MEK, Mobile Exchange of Knowledge, an application created for disseminating knowledge virally via Bluetooth; (2) EDIPS, Easy to Deploy Indoor Positioning System, developed to detect, via Wi-Fi, the position of mobile devices in an enclosed place where GPS (Global Positioning System) does not work correctly (as it calculates position using only latitude and longitude, without considering depth); (3) Lifelink, a technology created for doctors to remotely monitor their patients in real time via mobile devices and 3G [11]; (4) MobileMed, which integrates clinical patient data that is distributed and fragmented across different systems, making it available via Personal Digital Assistants (PDAs) [12]; and (5) a system for mobile devices whose goal is to help doctors detect bone fragility in the elderly [13].
Given that the problem we want to solve is the misuse of information and the restrictions on its access in a healthcare scenario, our hypothesis is that propagating information in an opportunistic way will help. We therefore propose an integrated approach involving two applications, MEK and EDIPS, presented in the next sections.
EDIPS, the Easy to Deploy Indoor Positioning System, is an application that maps the positions of several people in a closed environment, functioning as an indoor GPS based on Wi-Fi.
EDIPS has two operating phases: offline and online. During the offline phase, a blueprint of signal strengths is produced and then stored in the device; the signal strength is calculated for each reference point or access point. Once the blueprint has been produced, the space is discretized: a matrix of fixed-size cells is overlaid on the blueprint, and each cell receives expected signal-strength values with respect to the reference points. Normally, other systems model the signal from a large number of samples taken in the operating area; EDIPS skips this step entirely, allowing fast deployment with an acceptable degree of precision.
For EDIPS to locate a device, the device must have the local map and the map of reference points. If it does not have this information, it can still obtain it from another device in the LAN that does.
The online phase keeps the positions of the tracked devices up to date, aided by the HLMP API [14]. During this phase, the received data are processed to estimate the device's position; each cell is then examined to determine which one minimizes the positioning error, and the most likely position is chosen.
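The two phases above can be sketched in a few lines. This is a hypothetical illustration, not the EDIPS code: expected signal strength per cell is modelled here with a log-distance path-loss formula (replacing on-site sampling, as EDIPS does), and the cell that best matches the observed readings is chosen.

```python
import math

def expected_rss(cell_xy, ap_xy, tx_dbm=-40.0, path_loss_exp=2.5):
    """Predicted RSS (dBm) at a cell centre from one access point,
    using a log-distance path-loss model (assumed, not EDIPS's formula)."""
    d = max(math.dist(cell_xy, ap_xy), 0.5)  # avoid log(0) next to the AP
    return tx_dbm - 10.0 * path_loss_exp * math.log10(d)

def locate(observed, aps, cells):
    """Return the cell whose predicted RSS vector is closest (least squares)
    to the observed readings, one reading per access point."""
    def err(cell):
        return sum((expected_rss(cell, ap) - o) ** 2
                   for ap, o in zip(aps, observed))
    return min(cells, key=err)

aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]        # access-point positions (m)
cells = [(x + 0.5, y + 0.5) for x in range(10) for y in range(10)]
truth = (3.5, 6.5)
observed = [expected_rss(truth, ap) for ap in aps]  # noise-free readings
best = locate(observed, aps, cells)                 # -> (3.5, 6.5)
```

The grid of cell centres plays the role of the discretized blueprint; in practice the readings are noisy and the minimization picks the most likely cell rather than an exact match.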
with one's interests and stored knowledge keywords. If one or more matches are found, the selected knowledge is sent to the requesting party.
Users can also actively search for knowledge in nearby devices using a filter based on the title, keywords, or area of interest provided in the search.
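As a hypothetical sketch (the data layout and function names are our assumptions, not MEK's actual code), the passive match and the active filtered search can be written as simple keyword intersections:

```python
# Hypothetical sketch of MEK-style knowledge matching, not the actual MEK code.
# A knowledge item is offered when its keywords intersect the requester's
# interests; an active search applies a user-supplied title/keyword filter.

def match_interests(interests, knowledge_items):
    """Return the items whose keywords share at least one of the interests."""
    wanted = {i.lower() for i in interests}
    return [item for item in knowledge_items
            if wanted & {k.lower() for k in item["keywords"]}]

def active_search(knowledge_items, title=None, keywords=None):
    """Filter nearby devices' items by title substring and/or keywords."""
    out = knowledge_items
    if title:
        out = [i for i in out if title.lower() in i["title"].lower()]
    if keywords:
        ks = {k.lower() for k in keywords}
        out = [i for i in out if ks & {k.lower() for k in i["keywords"]}]
    return out

items = [
    {"title": "Insulin dosage guide", "keywords": ["diabetes", "treatment"]},
    {"title": "Hand hygiene protocol", "keywords": ["infection", "nursing"]},
]
offered = match_interests(["Diabetes"], items)  # only the first item matches
```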
5 Proposal of Integration
This section describes the integration between the two tools presented earlier. The integration aims to increase collaboration between the different people involved in the health scenario and to enhance the eHealth environment, as shown in Fig. 1.
Both applications work with 'loosely coupled mobile work' [16], i.e., on-demand, short-lived connections between mobile devices or between mobile devices and a server. This kind of connection became very important with the spread of wireless networks.
MEK aims at disseminating knowledge amongst its users and can identify their interests through its application (member profile). Besides this type of information, knowledge or skills can also be identified and stored by analyzing the exchanges occurring amongst the users. This information can be very valuable for EDIPS, which locates and maps its clients: EDIPS could use context information provided by MEK to locate people, identify those who share the user's interests, and discretize the area accordingly. This information can be offered visually to EDIPS users. According to the data received from MEK, EDIPS can identify, on its map of the environment, the areas with the greatest concentration of a specific kind of knowledge, marking people who share an interest with the same colour or label.
Moreover, EDIPS also provides information on people's locations to MEK, which does not have this kind of information. Having acquired these data, MEK can perform complex analyses of knowledge exchange, such as discovering areas that concentrate specific kinds of knowledge and the frequency with which those exchanges occur.
Due to these factors, this integration can be applied in many different environments, especially healthcare scenarios.
5.3 Integration
This section describes how the integration between the two applications is performed. The Bluetooth protocol and XML files are used for the transfer.
MEK captures data about the profiles and interests of its users and the exchanges made between them. These data are processed by the tool and stored in XML files. A unique ID for each user is also stored in these files, so that the applications know to which user the data refer. This identification may be given by the unique identifier of each unit. With these data, EDIPS may conduct analyses delimiting the areas that are most likely to propagate a particular knowledge, as shown in Fig. 2: the circular regions show areas where people interested in a particular subject can be found. Areas that may be sources of information can also be delimited by EDIPS.
Conversely, EDIPS collects information about people's positions and provides it to MEK. MEK can then perform several analyses with it, for example relating what was exchanged to the location where the exchange occurred. Again, files are transferred from EDIPS to MEK via the Bluetooth protocol.
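The XML exchange described above might look as follows. This is a hypothetical sketch: the element names (`mekProfile`, `interest`, `exchange`) and structure are our assumptions, not the authors' actual file format.

```python
import xml.etree.ElementTree as ET

def profile_to_xml(user_id, interests, exchanges):
    """Serialize one user's MEK data: the unique per-unit ID, the declared
    interests, and the (topic, location) pairs of past exchanges."""
    root = ET.Element("mekProfile", id=user_id)
    for i in interests:
        ET.SubElement(root, "interest").text = i
    for topic, where in exchanges:
        ET.SubElement(root, "exchange", topic=topic, location=where)
    return ET.tostring(root, encoding="unicode")

def parse_profile(xml_text):
    """Recover the user ID and interests on the receiving side (EDIPS)."""
    root = ET.fromstring(xml_text)
    return root.get("id"), [e.text for e in root.findall("interest")]

doc = profile_to_xml("dev-42", ["cardiology"], [("cardiology", "ward-3")])
uid, interests = parse_profile(doc)  # -> ('dev-42', ['cardiology'])
```

Carrying the unique ID in the root element is what lets both applications attribute each file to the right user, as the text requires.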
Fig. 2. Old and new EDIPS screens, using information promoted by MEK
6 Conclusion
The main objective of this research was to improve the healthcare scenario by integrating two applications. The first disseminates knowledge virally and proactively via the Bluetooth protocol; the other works as a people locator using Wi-Fi signals. Since Bluetooth has a limited range and Wi-Fi signals are not available everywhere, this research proposes unifying the two applications to enable knowledge exchange over Wi-Fi.
This integration was proposed to disseminate knowledge in hospitals, clinics, and healthcare scenarios in general, so that patients, doctors, nurses, and anyone with knowledge available within LAN range can exchange information easily.
7 Special Thanks
We thank CNPq, CAPES and FAPERJ for the scholarships, and Microsoft Research for funding the project "Temporal Social Networks and Knowledge Dissemination for Healthcares" (LACCIR, R1210LAC002), in collaboration with the Federal University of Rio de Janeiro, the University of Chile and CICESE.
References
1. Dutta, S., Mia, I.: The Global Information Technology Report 2008–2009: Mobility in a Networked World. World Economic Forum & INSEAD (2009)
2. Lévy, P.: Cyberculture. University of Minnesota Press (October 2001)
3. Levy, S.: Making the Ultimate Map. Newsweek, pp. 56–58 (June 2004)
4. Vera, R., Ochoa, S.F., Aldunate, R.G.: EDIPS: An Easy to Deploy Indoor Positioning System to support loosely coupled mobile work. Personal and Ubiquitous Computing 15(4), 365–376 (2011)
5. Souza, D.S., Fogaça, G., Silveira, P.C., Oliveira, J., Souza, J.M.: MEK: Uma abordagem oportunística para disseminação colaborativa do conhecimento. In: VIII Simpósio Brasileiro de Sistemas Colaborativos, pp. 145–151 (2011) (in Portuguese)
6. Oh, H., Rizo, C., Enkin, M., Jadad, A., Phil, D.: What Is eHealth: A Systematic Review of
Published Definitions. J. Med. Internet Res. 7, e1 (2005)
7. Wyatt, J.C., Liu, J.L.Y.: Basic concepts in medical informatics. J. Epidemiol Community
Health 56, 808–812 (2002)
8. Eysenbach, G.: What is eHealth? J. Med. Internet Res. 3, e20 (2001)
9. Yunkap Kwankam, S.: What eHealth can offer? Bull. World Health Organ. 82, 800–802
(2004)
10. Tachakra, S., Wang, X.H., Istepanian, R.S.H., Song, Y.H.: Mobile eHealth: The Unwired Evolution of Telemedicine. Telemed. J. E. Health 9, 247–257 (2003)
11. Alis, C., Rosario Jr., C., Buenaobra, B., Blanca, C.M.: Lifelink: 3G-Based Mobile Telemedicine System. Telemed. J. E. Health 15, 241–247 (2009)
12. Choi, J., Yoo, S., Park, H., Chun, J.: MobileMed: A PDA-Based Mobile Clinical Information System. IEEE Trans. Inf. Technol. Biomed. 10, 627–635 (2006)
13. Fontecha, J., Hervás, R., Sánchez, L., Navarro, F.J., Bravo, J.: A proposal for elderly frailty detection by using accelerometer-enabled smartphones. In: 5th International Symposium of Ubiquitous Computing and Ambient Intelligence (2011)
14. Rodríguez-Covili, J.F., Ochoa, S.F., Pino, J.A., Messeguer, R., Medina, E., Royo, D.: HLMP API: A software library to support the development of mobile collaborative applications. In: 14th International Conference on Computer Supported Cooperative Work in Design, pp. 479–484. IEEE Press (2010)
15. CNPq – Conselho Nacional de Desenvolvimento Científico e Tecnológico, http://www.cnpq.br/
16. Herskovic, V., Ochoa, S.F., Pino, J.A., Neyem, A.: General requirements to design mobile shared workspaces. In: 12th International Conference on Computer Supported Cooperative Work in Design, pp. 582–587. IEEE Press (2008)
A Metaprocesses-Oriented Methodology for Software
Assets Reuse in the e-Health Domain
Abstract. Software reuse in the early stages is a key issue for the rapid development of applications. Recently, several methodologies have been proposed for the reuse of components and software assets, but mainly for code generation as artifacts. These methodologies only partially consider domain analysis, business process modeling, and software reuse in the development environment as means to promote productivity and quality in application development. This paper introduces a metaprocess-oriented methodology for the reuse of metaprocesses as software assets, starting from specifications and analysis of the domain. The proposed methodology includes the definition of a conceptual level to adequately represent the domain, a structural level to specify metaprocesses as software assets, and an implementation level that defines the rules for the instantiation and reuse of metaprocesses as software assets. The methodology has been applied successfully in its first phase, i.e., the specification of the conceptual level and the design of applications in the field of e-health, in particular in monitoring systems for patients with cardiovascular risk.
1 Introduction
A metaprocess can be defined as a complete process model that serves as a benchmark to be instantiated for different cases or situations within the same domain. These models contribute to the evolution of processes through their metamodeling, methods, task decomposition, and consistency. Accordingly, Rolland and Prakash [18] endow metaprocesses with general features, for instance customization and gradual refinement of processes and models.
Metaprocesses are environments for the generic specification of activities, tasks, roles, and behaviors, supporting the development of processes with the main objective of obtaining an abstraction of the domain. There are several proposals for metaprocesses, such as metamodeling-based models [5] and those framed within complete methodologies for process-oriented software development [7]. However, none of
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 438–445, 2012.
© Springer-Verlag Berlin Heidelberg 2012
them provides the reuse of the metaprocess specification as software assets for instantiation and customization in the early stages of rapid application development. Metaprocess metamodeling, and its specification as software assets for reuse in the early stages, is a field requiring a great standardization effort for software processes in the context of software industrialization.
Greenfield and Short [8] state that a "[...] metaprocess-oriented methodology can contribute to raise productivity and quality in the software construction process". In particular, a methodology that takes into account the abstraction of the domain specification and process modeling, as well as their specification as software assets for reuse in the early stages, can ensure shorter development times in the release of applications. This can also reduce costs in areas with a strong demand for rapid application development, such as the e-health domain.
The purpose of this paper is to present a methodology based on a domain metamodeling approach, intended to provide a conceptual perspective for the specification of artifacts as software assets, especially for reuse purposes. This methodology includes elements proposed in software development standards such as SPEM [12] and in software and business process modeling (such as UML [3] and BPMN [13]), adding a new notation for business domain metamodeling.
The proposed methodology also incorporates elements of the OMG RAS specification [14], which provides an integrative methodological approach. To illustrate the applicability of our proposal, this paper presents a real application currently running in the e-health domain: a monitoring system for patients with cardiovascular risk.
The remainder of the paper is organized as follows. In Section 2 we discuss related work. Section 3 provides an introduction to the proposed methodology. Section 4 analyzes the implementation of the proposed methodology for the specific application mentioned above. Finally, we present conclusions and future work in Section 5.
2 Related Work
The use of metaprocess-oriented methodologies is not a novelty; in fact, the usefulness of metaprocesses has been widely advocated. Acuña and Ferré [1] define metaprocesses as "[...] generic building environments that support domain-oriented software development, and its mechanisms of specification and validation." Our objective is to consider metaprocesses as generic environments that integrally support model-oriented software development, taking the specific domain into consideration.
On the other hand, Ramsin and Paige [17] incorporate the metaprocess into a methodology oriented towards system development, but do not consider reuse. Their proposal collects what has been accomplished in terms of component reuse, but it is not based on reusing models from the early stages of software development. Ouyang et al. [15] present a methodological proposal oriented to business processes and applications geared towards the use of metaprocesses. However, the problem of metaprocess reuse is not usually considered methodologically in model-oriented software development; a complete methodology to specify metaprocesses for reuse, starting from the domain and oriented towards software development, would be desirable.
Several works have proposed a systematic modeling approach that accompanies the use of metaprocesses in all phases of software development, from the early stages, to understand and analyze the domain problem, through software design and implementation in the domain solution.
440 J.D. Fernandez et al.
Kühne [9] applies the concept of metaprocesses to the evolution of software processes, but does not take into account the reuse of the specification. Asikainen and Männistö [2] consider the need to formalize software development processes semantically through metamodel processes, but their approach does not yet consider reuse and its formalization in the early stages of software development. Levendovszky et al. [10] incorporate process-metamodel patterns as a first step to formalize specifications, without considering reuse either. These approaches use platform-independent models and their implementation from the domain, but none of them addresses the problem of instantiation and customization through metaprocess reuse as proposed in our methodology.
Cechticky et al. [4] propose reusing code components for real-time applications; this proposal is based on code reuse but does not cover models and metaprocesses. Park et al. [16] propose using code components as software assets to facilitate reuse, which is done independently from models. Finally, De Freitas [6] obtains flexibility through the reuse of application code, without addressing model and metaprocess reuse to generate applications. Hence, these works do not include the theoretical and conceptual articulation of metaprocess usage in software development through the fostering of reuse, instantiation and customization with platform-independent architectures, and the use of models and metamodels as a contribution to software industrialization, which is the intent of the work presented in the next sections.
In order to apply the proposal, we consider a case study based on a monitoring system for patients with cardiovascular risk. Due to space restrictions, in this paper we focus only on the metaprocess specification at the conceptual level. The development of an application in the e-health context entails reusing metaprocesses to develop medical guideline models and protocols.
The next subsections describe the methodology phases, the architecture, and other technological issues.
4.2 Architecture
The conceptual level described above initially permits the construction of a prototype. The architecture is composed of three main sensor nodes/subsystems: a temperature sensor node, a movement sensor node and a pulse-oximetry node. Figure 2 shows the deployment diagram for a monitoring system of patients with cardiovascular risk through mobile devices.
Fig. 2. Deployment diagram for a monitoring system of patients with cardiovascular risk
through mobile devices
The main components are organized in three nodes and a mobile device: (1) the temperature sensor node, which contains a microcontroller, a PTA-100 sensor and a CC2420 radio for the temperature manager; (2) the movement sensor node, which contains a microcontroller, an MMA7361 sensor and a CC2420 radio for the movement manager; and (3) the pulse-oximetry node, which includes a microcontroller, a Nonin OEM III sensor and a CC2420 radio for the oximetry and pulse managers. The nodes communicate with one another through the mobile device and the medical server. In particular, the mobile device contains an OMAP processor, a Shimmer Span gateway and a Telit GE865 modem for the vital-signs manager, while the medical server is mainly in charge of managing alerts.
Operationally, the system offers user input interfaces and other operational interfaces for patients with cardiovascular risk, through remote controls, event tracking and reporting modules. The architecture may therefore initially appear to be a set of independent parts, but they all belong to the same application, in which patients with cardiovascular risk are monitored from a central system supporting e-health processes.
The system hardware consists of three Shimmer nodes. Each node runs a program that captures the required data and transmits them to the system. Small adjustments to external hardware were also made. One node is used for the temperature sensor (with a simple circuit built around an LM35 connected to its I/O port). Another node is used for the motion sensor (this Shimmer node has an accelerometer, which was used without additional hardware). A third node is used for plethysmography, heart rate, pulse rate and pulse intensity: a simple circuit was designed around the Nonin OEM III sensor, whose sensing tip is placed on the patient.
The mobile application was built in C++ using Qt. It runs on a BeagleBoard, an open-hardware device running the Ångström operating system, chosen for its ease of use and its variety of ports and interfaces. The device has a communication interface to receive data from the nodes and processes them according to an algorithm built from information provided by doctors [19].
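The algorithm derived from the doctors' information is not specified in the paper; a minimal threshold-based sketch gives the flavor of such an alert check. The ranges below are illustrative assumptions, not the clinical thresholds of [19].

```python
# Minimal sketch of a vital-signs alert check; the structure and the normal
# ranges below are assumptions, not the algorithm built from the doctors'
# information in [19].
NORMAL_RANGES = {
    "temperature_c": (36.0, 37.5),
    "heart_rate_bpm": (60, 100),
    "spo2_pct": (95, 100),
}

def check_vitals(sample):
    """Return the (name, value) pairs outside their assumed normal range,
    i.e., the readings the medical server would raise alerts for."""
    alerts = []
    for name, value in sample.items():
        low, high = NORMAL_RANGES[name]
        if not (low <= value <= high):
            alerts.append((name, value))
    return alerts

reading = {"temperature_c": 38.2, "heart_rate_bpm": 72, "spo2_pct": 97}
alerts = check_vitals(reading)  # -> [('temperature_c', 38.2)]
```

In the described deployment, a check of this kind would run on the mobile device, with the medical server managing the resulting alerts.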
The web application was built in Java and has been deployed. It displays the history of the patient's vital signs and is fully integrated with the hospital charts.
The proposed methodology, which specifies metaprocesses at the conceptual level as software assets for reuse in the early stages of software development, is intended to ease the development of domain-process-oriented applications, in this case for e-health. It facilitated the software development process in one case, in which guided models contributed to developing applications from the domain, independently of the development platforms. In turn, this proposal has been validated and tested.
The software assets repository for e-health domains is being constructed to facili-
tate the reuse tasks of e-health-oriented application development. It constitutes a first
step toward higher productivity and quality, as well as fewer errors and shorter times
to release applications. Consequently, a knowledge base and artifacts will be
available to e-health users.
As future work, we plan to formalize the methodology through the use of ontologies
and formally defined logic languages [11]. We are currently working on the
construction of a repository for software asset reuse by means of Eclipse plugins,
i.e., an integrated development environment to build and reuse software artifacts
in e-health and other domains, also taking into consideration the structural and
implementation levels of the proposal.
References
1. Acuña, S., Ferré, X.: Software Process Modelling. In: Proceedings of the 5th World Multi-
conference on Systemics, Cybernetics and Informatics (SCI 2001), Orlando, Florida, USA,
pp. 1–6 (2001)
2. Asikainen, T., Männistö, T.: Nivel: a metamodelling language with a formal semantics.
Software & Systems Modeling 8(4), 521–549 (2009)
3. Baisley, D., Björkander, M., Bock, C., Cook, S., Desfray, P., Dykman, N., Ek, A., Frankel,
D., Gery, E., Haugen, Ø., Iyengar, S., Kobryn, C., Møller-Pedersen, B., Odell, J., Övergaard,
G., Palmkvist, K., Ramackers, G., Rumbaugh, J., Selic, B., Weigert, T., Williams,
L.: OMG Unified Modeling Language (OMG UML), Superstructure v2.2. Object
Management Group (OMG) (February 2009)
4. Cechticky, V., Egli, M., Pasetti, A., Rohlik, O., Vardanega, T.: A UML2 Profile for Reus-
able and Verifiable Software Components for Real-Time Applications. In: Morisio, M.
(ed.) ICSR 2006. LNCS, vol. 4039, pp. 312–325. Springer, Heidelberg (2006)
5. Conradi, R., Nguyen, M.: Classification of Metaprocesses and their Models. Software
Process, 167–175 (1994)
6. De Freitas, J.: Model business processes for flexibility and re-use: A component-oriented
approach. IBM Developer Works Journal, 1–11 (2009)
7. Finkelstein, A., Gabbay, D., Hunter, A., Kramer, J., Nuseibeh, B.: Software Process
Modeling and Technology. Research Studies Press Ltd., London (1994)
8. Greenfield, J., Short, K.: Software Factories: Assembling Applications with Patterns,
Models, Frameworks and Tools. John Wiley & Sons (2004)
9. Kühne, T.: Editorial to the theme issue on metamodelling. Software & Systems Model-
ing 8(4), 447–449 (2009)
10. Levendovszky, T., László, L., Mészáros, T.: Supporting domain-specific model patterns
with metamodeling. Software & Systems Modeling 8(4), 501–520 (2009)
11. Noguera, M., Hurtado, M., Rodríguez, M., Chung, L., Garrido, J.: Ontology-driven analy-
sis of UML-based collaborative processes using OWL-DL and CPN. Science of Computer
Programming 75, 726–760 (2010)
A Metaprocesses-Oriented Methodology for Software Assets Reuse 445
12. OMG. Software & Systems Process Engineering Meta-Model Specification doc.ormsc/
(April 1, 2008)
13. OMG: Business Process Model and Notation (BPMN) v1.2. Object Management Group
(OMG) (2008)
14. OMG. Reusable Asset Specification. OMG Available Specification Version 2.2 (2005)
15. Ouyang, C., Dumas, M., Van der Aalst, W., Ter Hofstede, A., Mendling, J.: From business
process models to process-oriented software systems. ACM Trans. Software Engineering
Methodologies 19(1), Article 2 (August 2009)
16. Park, S., Park, S., Sugumaran, V.: Extending reusable asset specification to improve soft-
ware reuse. In: Proceedings of the 2007 ACM Symposium on Applied Computing, SAC
2007, pp. 1473–1478 (2007)
17. Ramsin, R., Paige, R.: Process-Centered Review of Object Oriented Software Develop-
ment Methodologies. ACM Computing Surveys 40(1), 1–89 (2008)
18. Rolland, C., Prakash, N.: On the Adequate Modeling of Business Process Families.
Université Paris 1 Panthéon-Sorbonne, France (2000)
19. Uribe, C., Isaza, C., Florez, J.: Qualitative-Fuzzy Decision Support System for Monitoring
Patients with Cardiovascular Risk. In: Proceedings Conference on Fuzzy Systems and
Knowledge Discovery, vol. 3, pp. 1621–1625 (2011)
Cloud Integrated Web Platform for Marine
Monitoring Using GIS and Remote Sensing:
Application to Oil Spill Detection through SAR
Images
University of Coruña,
Campus de Elviña, A Coruña, Spain
{dfustes,dcantorna,dafonte,alfonsoiglesias,manteiga,cibarcay}@udc.es
http://www.tic.udc.es
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 446–453, 2012.
c Springer-Verlag Berlin Heidelberg 2012
type of sensors is called Synthetic Aperture Radar (SAR), which captures the
backscatter of microwaves from a given surface and is optimal for capturing
the roughness of the sea surface regardless of the weather and light conditions.
This backscatter can be used to distinguish the normal sea background, which
shows high backscatter values, from “anomalous” entities such as ships, algae
formations, or pollutant discharges, which have lower backscatter values.
Since 1994, a pioneering Norwegian service called KSAT [4] has been manually
analyzing satellite images in order to detect oil slicks, which consist of potentially
illegal discharges from ships near the Norwegian coast. These oil slicks appear in
the radar images as dark spots and can be identified by visual inspection.
As demonstrated in [2], an automatic approach to radar data analysis is
preferable because of the tremendous volume of data to manage. However,
Brekke and Solberg [3] reported that it is quite difficult to compare the different
approaches to oil spill detection because of the heterogeneity of the images
used for testing and the objectives they aim to achieve.
Apart from commercial and proprietary tools, ESA provides a set of free tools
that are useful to visualize, manage, and analyze SAR data. For example, NEST
is a Java multiplatform application that includes an integrated viewer, orthorec-
tification, and mosaicking of SAR images. In addition, it allows tiling of images
and batch processing to facilitate computation. These powerful software tools
help to analyze SAR data, but they nevertheless remain desktop applications
with limited features. For example, Envisat SAR images in Wide Swath Mode
are very large, with approximately 6000×6000 pixels, so a single computer
may struggle to perform the involved computations. The Cloud can help
researchers to perform their operations quickly and reliably and to share
information across the Internet in real time. Furthermore, GIS can help to locate
objects, to characterize regions of interest in the radar scenes, and to perform
spatial operations on them [11]. The following sections present a tool that
incorporates Cloud computing and GIS in an integrated web platform, which
allows the development and validation of algorithms that work with radar
scenes.
– Image Scheduling: SAR images can be uploaded into the application and
then searched for. Searches can be performed using different criteria such as
dates, titles, etc.
– Preprocessing: SAR images can be calibrated to mitigate the effect of the
antenna incidence angle variation. The image is then noise-filtered with Lee
mean filters [5] so as to take speckle noise into account. Finally, the image is
projected into a map and the coastal zones are dropped using geographical
information (see Fig. 1).
– Processing: Several methods for dark spot detection have been included
in the tool, namely those presented in Section 5. These can be launched
on preprocessed images, and features can be extracted from the detected
spots.
– Algorithm Development and Validation: Users can upload their own
algorithms to the platform. To this end, they upload a JAR file containing
a program that implements a given Java interface. Once the algorithm
is uploaded, it can be validated against a set of SAR images and their corre-
sponding “masks”. A “mask” is a manually edited image that represents
the desired result of the algorithm processing.
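The required Java interface is not specified in the text, so the following is only an illustrative sketch of what such a contract might look like; the names (DarkSpotDetector, GlobalThresholdDetector) are assumptions, not the platform's actual API.

```java
// Hypothetical contract an uploaded algorithm might implement: given a
// preprocessed SAR scene, return a binary mask marking dark-spot pixels.
interface DarkSpotDetector {
    boolean[][] detect(float[][] scene);
}

// Minimal example implementation: a global threshold. Dark spots have low
// backscatter, so pixels below the threshold are flagged.
class GlobalThresholdDetector implements DarkSpotDetector {
    private final float threshold;

    GlobalThresholdDetector(float threshold) { this.threshold = threshold; }

    public boolean[][] detect(float[][] scene) {
        boolean[][] mask = new boolean[scene.length][];
        for (int i = 0; i < scene.length; i++) {
            mask[i] = new boolean[scene[i].length];
            for (int j = 0; j < scene[i].length; j++)
                mask[i][j] = scene[i][j] < threshold; // low backscatter => dark spot
        }
        return mask;
    }
}
```

A validation run would then compare the returned mask against the manually edited “mask” image pixel by pixel.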
Fig. 1. An original SAR image and the same image after preprocessing
Sections 3, 4, and 5 describe the system in more detail, focusing on key aspects
such as the system architecture, the way the system serves maps, and the
algorithms used for detecting dark spots in SAR images.
3 System Architecture
A tool that processes SAR data should allow for high-performance computing
and the management of spatial information, since all operations described in this
work require both capabilities to be fully usable. The system involves three main
components: a web server, a PostGIS database, and a set of computers coordinated
by a scheduler.
The need for scalable systems has emerged as the volume of digital data
has grown exponentially. Solutions such as the Globus Toolkit [7] or OurGrid [6]
are meant to help researchers share computing resources, but nowadays the
prevailing paradigm for building scalable systems is the so-called “MapReduce”
paradigm [10]. “MapReduce” is implemented as open-source Java code by the
Apache Hadoop project [8]. Hadoop has been used intensively by Yahoo! and
others and allows users to process very large amounts of data. The “Sentinazos”
tool runs on an Apache Tomcat web server, using PostGIS as its database and
Hadoop as the framework for computationally intensive tasks.
Image processing algorithms are, in general, highly distributable among pro-
cessors, as they are usually run on GPUs. In this work, however, we will focus
on clustering algorithms, which can also be well parallelized across computers.
An example of this can be found in the Apache Mahout project [9],
which incorporates a wide variety of clustering algorithms implemented on top
of Hadoop.
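The MapReduce idea behind this design can be sketched without any Hadoop dependency. The following plain-Java analogue is illustrative only (the names are assumptions, and the real system runs on Hadoop rather than on Java streams): the map step counts dark pixels per SAR tile, and the reduce step sums the partial counts.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative MapReduce-style processing of SAR tiles in plain Java.
// parallelStream() stands in for Hadoop's distribution of the map step.
class TileMapReduce {
    /** Map: count pixels below a darkness threshold in one tile. */
    static long mapTile(float[] tile, float threshold) {
        long n = 0;
        for (float p : tile) if (p < threshold) n++;
        return n;
    }

    /** Reduce: sum the partial counts produced by the map step. */
    static long darkPixelCount(List<float[]> tiles, float threshold) {
        return tiles.parallelStream()
                    .mapToLong(t -> mapTile(t, threshold))
                    .sum();
    }

    public static void main(String[] args) {
        List<float[]> tiles = Arrays.asList(
                new float[]{0.1f, 0.8f}, new float[]{0.2f, 0.3f});
        System.out.println(darkPixelCount(tiles, 0.5f)); // prints 3
    }
}
```

Because each tile is processed independently, the same structure scales from a single machine to a Hadoop cluster by swapping the execution backend.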
Fig. 2. Screenshot of a SAR image search in the World Map using the “Sentinazos”
app
5.1 FCM-Wavelets
5.2 SKFCM
The Kernelized Fuzzy C-Means (KFCM) algorithm is based on the FCM algo-
rithm, but, in this case, a metric distance based on a kernel is used [15]. This
J_m = Σ_{i=1}^{c} Σ_{k=1}^{N} u_ik^m (1 − K(x_k, v_i)) + (α / N_R) Σ_{i=1}^{c} Σ_{k=1}^{N} u_ik^m Σ_{r∈N_k} (1 − u_ir)^m    (1)
The parameter α controls the effect of the penalty term and must satisfy
0 ≤ α ≤ 1. The kernel is assumed to be Gaussian, for which K(x, x) = 1.
X_k < μ_k − T_c    (2)
where T_c is the threshold associated with class c, and μ_k represents the average
value of the pixels in a given window around X_k (including X_k itself).
Table 1 shows that all the algorithms are very likely to correctly label the
pixels that are not fuel. For the fuel class, the results are not so uniform:
452 D. Fustes et al.
Fig. 3. SAR original image (left), and the result of processing it with the
SKFCM+Local Threshold algorithm (right)
6 Conclusions
– Radar scenes used for ocean monitoring are usually very large, so as to
cover the largest possible area, and they have to be processed in a short
period of time due to temporal requirements. Grid technologies, such as
MapReduce, have been used as a suitable means to scale the algorithms up
and to reduce computational time, thereby allowing a quick response to events.
– A set of segmentation algorithms has been implemented with the aim of
isolating dark spots in SAR images that are possible oil spills. The accuracy
of the algorithms is high, with the combination of local thresholds and
SKFCM retrieving the best results: 86% true positives and 98% true
negatives.
– Accessibility through the Web, communication between users, and georef-
erencing are desirable features in a marine monitoring system. This work
has shown how Web technologies and GIS standards can be integrated in a
system that covers such features and incorporates tools for algorithm devel-
opment. Moreover, once integrated, the algorithms are made scalable and
become feasible for real operation.
References
1. La tragedia del Prestige, evolución de la marea negra (The Prestige tragedy: evolution
of the oil slick) (2009),
http://www.lavozdegalicia.es/albumes/index.jsp?album=20021121133541
2. Brekke, C., Solberg, A.J.S.: Oil spill detection by satellite remote sensing. Remote
Sensing of Environment 95(1), 1–13 (2005)
3. Brekke, C., Solberg, A.J.S.: Algorithms for oil spill detection in Radarsat and EN-
VISAT SAR images. In: 2004 IEEE International Geoscience and Remote Sensing
Symposium (2004)
4. Indregard, M., Solberg, A., Clayton, P.: D2 – Report on benchmarking oil spill recog-
nition approaches and best practice. Tech. Rep. Archive No. 04-10225-A-Doc, Con-
tract No. EVK2-CT-2003-00177, European Commission (2004)
5. Lee, J.: Speckle analysis and smoothing of synthetic aperture radar images. Com-
puter Graphics and Image Processing 17, 24–32 (1981)
6. Bridging the High Performance Computing Gap: the OurGrid Experience. In: Seventh
IEEE International Symposium on Cluster Computing and the Grid. IEEE Computer
Society (2007)
7. Sotomayor, B., Childers, L.: Globus Toolkit 4: Programming Java Services. Morgan
Kaufmann, Elsevier (2006)
8. White, T.: Hadoop: The Definitive Guide, 2nd edn. O'Reilly Media/Yahoo Press
(2010)
9. Owen, S., Anil, R.: Mahout in action (MEAP). Manning Publications Co. (2011)
10. Lin, J., Dyer, C.: Data-Intensive Text Processing with MapReduce, 2nd edn. Mor-
gan and Claypool Publishers (2010)
11. Kresse, W., Fadaie, K.: Iso standards for geographic information. Springer (2004)
12. GeoServer User's Manual: CQL tutorial (2010),
http://docs.geoserver.org/1.7.x/user/tutorials/cql
13. Nuñez, J., Llacer, J.: Astronomical image segmentation by self-organizing neural
networks and wavelets. Neural Networks 16, 411–417 (2003)
14. Crosslin, R.L., Eddins, W.R., Sutherland, D.E.: Digital Image Processing, pp. 201–
208, 443–457. Prentice Hall (1992)
15. Zhang, D.Q., Chen, S.C.: A novel kernelized fuzzy c-means algorithm with appli-
cation in medical image segmentation. Artificial Intelligence in Medicine 32 (2004)
16. Zhang, H., Berg, A., Maire, M., Malik, J.: Svm-knn: Discriminative nearest neigh-
bor classification for visual category recognition. In: 2006 IEEE Computer Society
Conference on Computer Vision and Pattern Recognition (2006)
An Agent-Based Wireless Sensor Network
for Water Quality Data Collection
1 Introduction
The key activity for a WSN is to facilitate the autonomous collection of data,
typically using low cost sensor nodes with limited on-board battery power sup-
ply and limited communication bandwidth. Consequently, sensors in the network
must be carefully tasked and controlled in order that they consume the minimum
amount of resources whilst maintaining an adequate quality of service. Research
on energy conservation schemes for WSNs has been carried out within different
application domains, including environmental [15], agriculture [17], habitat
monitoring [6], and ecosystem management [8]. Most of these applications derive
from the need for more sustainable environments. WSNs facilitate inexpensive
and continuous monitoring, and they are especially useful when remote
or dangerous locations need to be studied. The network gathers data from these
locations using sensors and uses this data to model the environment.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 454–461, 2012.
c Springer-Verlag Berlin Heidelberg 2012
There are three essential tasks within WSNs: sensing, computing, and commu-
nicating; the last of these typically has the greatest impact in terms of energy
consumption, but the three tasks are interdependent. For instance, as more data is
sensed, more data is transmitted and processed; consequently, the network must
choose sampling rates appropriately and perform communication in a judicious
manner. Indeed, in compressed sensing scenarios, sampling could be the largest
power drain, but in the majority of sensor network applications this will not
be the case. One mechanism by which to reduce energy consumption associated
with communication in WSNs is to perform routing and local node data pro-
cessing to maximize the information content while minimizing the size of the
data and its transmission cost to other nodes in the network. In this paper, we
model clustering points as Mobile Agents (MA)s that can move from one node
to another at run-time. When an agent migrates, it transfers its state and a
set of rules, which determine its behaviour, to the destination node, terminates
execution locally (freeing up memory and resources) and then begins operating
at the destination.
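The migration step described above can be sketched as follows. This is an illustrative model only, not AFME's actual API: the agent's state (and, in the real system, its rules) is transferred, execution stops on the source node, and the agent resumes at the destination.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of mobile-agent migration: transfer state, terminate
// locally (freeing memory and resources), then begin operating at the
// destination. Class and method names are assumptions.
class MobileAgent {
    private final Map<String, Object> state = new HashMap<>();
    private boolean running = true;

    void setBelief(String key, Object value) { state.put(key, value); }
    Object getBelief(String key) { return state.get(key); }
    boolean isRunning() { return running; }

    /** Moves this agent's state to a new agent instance on the destination node. */
    MobileAgent migrate() {
        MobileAgent atDestination = new MobileAgent();
        atDestination.state.putAll(this.state); // transfer state (and rules, in AFME)
        this.running = false;                   // terminate execution locally
        return atDestination;                   // begins operating at the destination
    }
}
```

In a real deployment the state map would be serialized and sent over the radio link rather than copied in memory.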
This paper discusses the application of MAs to a system called Smart
Coasts (SCs), which is being developed to monitor and forecast water quality in
real time. The SCs sensor network aims to equip Irish and Welsh communities
to maintain the economic and strategic value of their near-shore waters and to
ensure the protection of public health. From a policy perspective, it is motivated
by the EU Water Framework Directive [1]. The objective is to develop a public
forecasting service for water quality, using quantitative microbial-physical and
statistical prediction modelling. In this paper, we propose modifying the current
centralised framework such that it is implemented as a distributed, fault-tolerant
architecture.
Intelligent agents have been demonstrated as a key enabling technology for Am-
bient Intelligence [12] and mobile services [7,13]. More recently, the possibility
of harnessing agents to realize intelligent Wireless Sensor Networks (iWSNs) has
been demonstrated through the development of sufficiently powerful mote plat-
forms. Due to the constrained nature of embedded devices, embedded agents [3,14]
must be circumspect in their use of the available resources, including energy
[16]. The ability of agents to reason about their behaviour and resource usage
makes them an apt solution in such scenarios [11]. Agents are particularly useful
in the development of complex systems, such as WSNs, where system state
and behaviour are uncertain, and perhaps even chaotic, and thus cannot be fully
specified or explained using traditional methods.
In this paper, we advocate the use of Agent Factory Micro Edition (AFME)
[9,10], a platform that enables rational decision making through the ascription
of beliefs and commitments to autonomous agents. Many intelligent agent plat-
forms, including AFME, draw from Dennett’s folk psychology [2] and employ the
notion of the intentional stance as a tool for modelling complex systems through
456 M.S. Garcia et al.
the attribution of mental attitudes, such as goals, hopes, and desires, to agents
so as to explain and predict behaviour. According to Dennett, there are three
different strategies we use when confronted with an object or system, namely
the physical stance, the design stance, and the intentional stance.
To predict the behaviour of an entity, according to the physical stance, we
use information about its physical constitution along with information about the
laws of physics. Suppose I am holding a golf ball and I let go of it and I predict
that it will fall to the floor. This prediction based on (1) the mass of the ball
and (2) the law of gravity.
With the design stance, we assume that the entity in question has been de-
signed in a particular manner. Our predictions are based on the idea that the
entity will behave as designed. When someone turns on an electric fan, they
predict that it will behave in a certain manner, i.e., that the fan will cool down the
room. They do not need to know anything about the physical constitution of
the fan to make the prediction. Predictions made from the design stance are
based on two assumptions (1) that the entity is designed for the purpose that
the user thinks it to be designed for and (2) that it will perform as designed
without malfunctioning. This does not mean that the design stance is always
used for entities that have been designed. The physical stance could be used to
predict what would happen to the fan if it were knocked onto the floor or if
it malfunctioned, but in most cases there is no need to go to a lower level of
granularity.
We can often improve on the predictions of the design stance by adopting the
intentional stance. When making predictions from this stance, we interpret the
In this section we discuss the water quality monitoring and forecasting sensor net-
work, which, at present, is controlled through an Internet-based user interface. In the
sensor network, the sensors and loggers sense and transmit data at predefined time
intervals. The loggers use GPRS/GSM communication protocols to send the data
to a server, which hosts a telemetry service. Powered by batteries, the loggers are
designed for water and environmental monitoring applications and are currently
operating at 18 key remote locations under harsh environmental conditions.
Physical data (temperature, stream depth, flow, and rainfall) are obtained from
sensors attached to the loggers throughout the catchment area of the Dargle
river in County Wicklow, Ireland. The physical data, together with microbial
data, is used to calibrate and build new statistical models, which describe the
dynamics of the pollutants throughout the catchment and the sea area. In this
paper, we argue for a transition from the current centralized server-based SCs
communication network to a distributed energy-aware WSN for forecasting water
quality. For this to be achieved, an efficient communication routing scheme is
required, together with agent-based data processing software. We propose using
a group of MAs to perform networking tasks, such as routing and distributed
processing, while taking memory and power resources into account. The MAs will
be used together with in-network data aggregation algorithms. With in-network
data aggregation, the nodes that aggregate data effectively operate as clustering
points in the network. Given that, within sensor networks, the network topology is
dynamic due to node failures and variable link qualities, determining static
clustering point locations a priori will be suboptimal in practice. Resilience is another
important advantage that MAs bring to an environment where unstable
or highly uncertain events occur: the MAs have the flexibility to recover more
easily from failures by working collaboratively.
of time has elapsed, it adds its own value, increments the count, and then
transmits the sum and the count to its parent node¹. In a similar way, its parent
aggregates the sums and counts received from its children and then transmits.
In this way, the aggregate values percolate up the tree. The advantage of this
over an alternative approach that sends all node values is that each node must
only transmit the sum and the count rather than the values received from all its
children and its descendants along with its own value.
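The sum-and-count aggregation described above can be sketched as follows; the class and method names are illustrative assumptions, not the system's code. Each node forwards only two numbers, a partial sum and a count, instead of all raw readings, and the base station recovers the network-wide average.

```java
// Sketch of in-network sum-and-count aggregation: each node combines its
// own reading with the partials received from its children and transmits
// only the resulting (sum, count) pair to its parent.
class SumCountAggregation {
    static class Partial {
        final double sum;
        final long count;
        Partial(double sum, long count) { this.sum = sum; this.count = count; }
    }

    /** A node adds its own reading to the partials received from its children. */
    static Partial aggregate(double ownValue, Partial... fromChildren) {
        double s = ownValue;
        long c = 1;
        for (Partial p : fromChildren) { s += p.sum; c += p.count; }
        return new Partial(s, c); // transmitted to the parent node
    }

    public static void main(String[] args) {
        Partial leafA = aggregate(10.0);              // leaf: own value, count 1
        Partial leafB = aggregate(20.0);
        Partial root  = aggregate(30.0, leafA, leafB); // percolates up the tree
        System.out.println(root.sum / root.count);     // prints 20.0
    }
}
```

Whatever the size of a node's subtree, its transmission stays at two values, which is exactly the saving the text describes.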
With in-network data aggregation, the nodes performing the aggregation effectively
act as clustering points. Given that, within sensor networks, the topology is
dynamic due to node failures and variable link qualities, using static clustering
point locations will often perform poorly. In this paper, we model clustering
points as MAs that can move within the network. When the application begins
operating, the base station gathers statistics on the link qualities of nodes
within the network. Subsequently, the network is subdivided into different cliques
of the communication graph and an MA is assigned to manage each clique. The
sampling process then begins. In this approach, nodes broadcast messages but are
not aware of which node is their parent. In this way, when an agent migrates from
one node to another it does not need to inform its children. When an agent receives
a message, it simply checks whether the message is from a node within the set
it is responsible for and, if so, aggregates the sum and count. The nodes the
agent is responsible for form a clique in the communication graph and are thus
all within communication range of each other. As such, the agent can migrate to
any of the nodes within the clique, provided it remains in communication range of
its parent, without loss of connectivity. That is, MAs within the network form a
tree and must know their locations, but nodes in general do not.
¹ If a node has no children, it simply sends its own value and a count of 1 to its parent.
the public. The service enables the general public to make appropriate decisions
given the predicted water quality, such as whether to go for a swim at the beach,
and enables other users, such as local authorities, to make decisions so as to maintain
low levels of pollutants in the area. The web service is currently used by a website,
mobile applications, and electronic billboards at the beach.
5 Conclusion
This paper proposed a new architecture for the SC water quality monitoring and
prediction framework. The current SC network architecture represents a cen-
tralised approach and does not consider energy consumption. The architecture
discussed here proposes to reduce energy consumption through the use of
in-network data aggregation algorithms in which mobile intelligent agents
act as dynamic clustering points in the network.
At present, the agents in the proposed architecture collaborate only to ensure
that connectivity is maintained in terms of the communication tree. Future work
will investigate ways in which energy consumption can be reduced through increased
collaboration among the MAs within the system. The collaborative agent
infrastructure will form part of a middleware to enable the rapid development of
similar functionality in other applications of interest.
References
7. Lowen, T.D., O’Hare, G.M.P., O’Hare, P.T.: Mobile agents point the way: context
sensitive service delivery through mobile lightweight agents. In: Proceedings of
the First International Joint Conference on Autonomous agents and Multiagent
Systems: Part 2, AAMAS 2002, pp. 664–665. ACM, New York (2002)
8. Mo, L., He, Y., Liu, Y., Zhao, J., Tang, S.J., Li, X.Y., Dai, G.: Canopy closure
estimates with greenorbs: sustainable sensing in the forest. In: Proceedings of the
7th ACM Conference on Embedded Networked Sensor Systems, SenSys 2009, pp.
99–112. ACM, New York (2009)
9. Muldoon, C., O’Hare, G.M.P., O’Grady, M.J., Tynan, R.: Agent migration and
communication in WSNs. In: Ninth International Conference on Parallel and Dis-
tributed Computing, Applications and Technologies, PDCAT 2008, pp. 425–430.
IEEE (2008)
10. Muldoon, C., O’Hare, G.M.P., Collier, R.W., O’Grady, M.J.: Towards pervasive
intelligence: Reflections on the evolution of the agent factory framework. In: El
Fallah Seghrouchni, A., Dix, J., Dastani, M., Bordini, R.H. (eds.) Multi-Agent
Programming, pp. 187–212. Springer US (2009)
11. O’Grady, M.J., O’Hare, G.M.P., Chen, J., Phelan, D.: Distributed network intel-
ligence: A prerequisite for adaptive and personalised service delivery. Information
Systems Frontiers 11, 61–73 (2009)
12. O’Hare, G.M.P., O’Grady, M.J., Keegan, S., O’Kane, D., Tynan, R., Marsh, D.:
Intelligent agile agents: Active enablers for ambient intelligence. In: ACM’s Special
Interest Group on Computer-Human Interaction (SIGCHI), Ambient Intelligence
for Scientific Discovery (AISD) Workshop, Vienna, Austria, April 25 (2004)
13. O’Hare, G.M.P., O’Grady, M.J., Muldoon, C., Bradley, J.F.: Embedded agents: a
paradigm for mobile services. International Journal of Web and Grid Services 2(4),
379–405 (2006)
14. O’Hare, G.M.P., O’Grady, M.J., Tynan, R., Muldoon, C., Kolar, H., Ruzzelli, A.,
Diamond, D., Sweeney, E.: Embedding intelligent decision making within complex
dynamic environments. Artificial Intelligence Review 27, 189–201 (2007)
15. Ramanathan, N., Balzano, L., Estrin, D., Hansen, M., Harmon, T., Jay, J., Kaiser,
W., Sukhatme, G.: Designing wireless sensor networks as a shared resource for
sustainable development. In: International Conference on Information and Com-
munication Technologies and Development (ICTD 2006), pp. 256–265 (May 2006)
16. Shen, S., O’Hare, G.M.P., O’Grady, M.J.: Fuzzy-set-based decision making through
energy-aware and utility agents within wireless sensor networks. Artificial Intelli-
gence Review 27, 165–187 (2007)
17. Wark, T., Corke, P., Sikka, P., Klingbeil, L., Guo, Y., Crossman, C., Valencia, P.,
Swain, D., Bishop-Hurley, G.: Transforming agriculture through pervasive wireless
sensor networks. IEEE Pervasive Computing 6(2), 50–57 (2007)
Detection and Extracting of Emergency
Knowledge from Twitter Streams
1 Introduction
Since Twitter is widely adopted today and permanently accessible through mobile
phones, it is very well suited for emergency reporting. A survey [1] con-
ducted in Fairfax, USA, showed that the benefit of the informal communica-
tion through Twitter lies in the early diffusion of emergency information and
in the potential to organize mutual help within neighborhoods. The majority of
Twitter analysis tools today focus on trend spotting, a process in which popular
hashtags/keywords are extracted from tweets based on their retweet statistics.
This approach does not go far enough to reasonably support emergency response
systems. An adequate analysis tool should not only reliably identify emergencies
(e.g. flooding, storms, earthquakes, tsunamis, and epidemics) in a given region,
such as Bilbao in Spain, but also summarize that emergency data from the Twitter
community. Experienced Twitter users keep tweets short, mention only impor-
tant facts, and provide adequate links and hashtags for tweet discovery. Good in-
formation sources in emergency situations are often public media accounts and,
increasingly, rescue organizations. A good example of a tweet emergency re-
port is the following: “A 2.5 magnitude earthquake occurred 3.11 mi E of Brea,
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 462–469, 2012.
c Springer-Verlag Berlin Heidelberg 2012
2 Related Work
Several researchers have worked on similar analysis tools to improve information
for rescue teams by exploiting data from social networks: SensePlace2 [3], the
TEDAS system [4], and the Crime Detection Web1 use an iterative crawler which
monitors the global Twitter stream to identify emergencies within a given re-
gion. Queries are issued as sets of keywords specifying time points (July
2010), locations (Houston), and emergency types (car accidents). Their user in-
terfaces allow rescue organizations to parametrize emergency filters, visualize
emergency information on a map, and summarize the content of emergency
messages through tag clouds. The Twitcident project [5] goes one step further
and enriches structured emergency information with data obtained from Twitter
streams. It uses natural language processing (NLP) techniques, more specifi-
cally part-of-speech (POS) tagging and named entity recognition (NER), to tag
tweets and enrich tweet contents for incident detection and profiling. Gnip2 and
DataSift3 are further examples which interface with different social media, pro-
vide a complex query syntax for more general events, and integrate event-based
information through NLP techniques. Beyond that, it is important to aggregate
tweets which describe the same emergency event. Marcus et al. [6] and Becker et
al. [7] describe ways to cluster tweets based on inferred topic similarity,
measured through the keyword distance obtained from an emergency taxonomy.
Alarms are automatically issued if the number of tweets belonging to an event
exceeds a certain threshold. Pohl et al. [8] extend this clustering concept
with the capability of sub-event detection. When tweet clusters are not strongly
coherent, less frequently used keywords in the tweet cluster are used to identify
sub-clusters which point, for instance, to different hot spots in the emergency
region. The Twitter reporting process can roughly be divided into two main
1 http://canary.cs.illinois.edu/crimedetection/web/
2 http://gnip.com/
3 http://datasift.net/
464 B. Klein et al.
phases (see Figure 1). First, several witnesses may independently report an
observed emergency event, and in a follow-up step their followers may spread
this information further in the Twitter community. The research work described
above focuses on knowledge extraction but falls short of identifying reliable
and informative tweets. The examples given in Section 1, however, make clear
that filtering out misleading tweets is crucial to success. A solution may be found
by learning more about the dissemination process an emergency tweet triggers:
followers not only spread tweets, but may also confirm/deny, enrich or aggregate
the emergency facts reported by their friends. In other words, content
summarization can certainly benefit from analyzing the social network structure
behind a tweet conversation and using this information to filter superior tweets
from the crawler output.
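The aggregation idea surveyed above (cluster by taxonomy keywords, alarm above a threshold) can be sketched as follows. This is a simplified illustration: the taxonomy terms, the similarity notion (a shared keyword signature instead of a keyword distance) and the threshold value are assumptions, not taken from the cited systems.

```python
# Sketch: group tweets by their emergency-taxonomy keywords and raise an
# alarm once a cluster exceeds a threshold. Taxonomy terms, similarity
# notion and threshold are illustrative assumptions.
TAXONOMY = {"earthquake", "fire", "flood", "accident"}
ALARM_THRESHOLD = 3  # minimum cluster size before an alarm is issued

def keywords(tweet):
    """Reduce a tweet to its taxonomy terms; they act as the cluster signature."""
    words = (w.strip("!?.,") for w in tweet.lower().split())
    return frozenset(w for w in words if w in TAXONOMY)

def cluster_and_alarm(tweets):
    clusters = {}
    for t in tweets:
        sig = keywords(t)
        if sig:  # ignore tweets without any taxonomy term
            clusters.setdefault(sig, []).append(t)
    # issue an alarm for every cluster above the threshold
    return [c for c in clusters.values() if len(c) >= ALARM_THRESHOLD]

alarms = cluster_and_alarm([
    "earthquake near Brea", "felt the earthquake too", "big earthquake!",
    "traffic jam downtown",
])
```

With the sample input, the three earthquake tweets form one cluster that crosses the threshold, while the unrelated tweet is discarded.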
3 SABESS Framework
The goal of this framework is to provide models and tools to analyze the Twitter
stream in a more sustainable manner, with a combined social and content-aware
analysis approach. The framework consists of a Twitter crawler, diverse analysis
tools, tweet aggregators and a content summarization component. An interactive
crawler monitors the global Twitter stream and extracts relevant tweets by
querying the Twitter Streaming API with keywords specifying emergency types
and regions. In a follow-up process, this first Twitter corpus is extended by
fetching tweets belonging to ongoing conversations, defined by retweet, reply and
mention relationships. This type of query data is encoded as metadata in the
tweet and is thus easily accessible. In order to detect emergencies close to real
time, it is important that computer systems analyze large volumes of tweets
efficiently. In the SABESS project this is achieved with a pipeline-based
architecture that allows different analysis tools to be plugged in (see Figure 2).
This design approach has two main advantages: first, the capabilities of the tweet
analysis can be enhanced just by adding new types of analysis tools and,
secondly, scalability is managed
and influence in the community. Since each tweet represents a communication
relation between users, we inspect tweet metadata to learn whether it is a single
tweet or a tweet in response to another. Based on this author and relationship
analysis, it is possible to construct social network graphs representing Twitter
conversations. Since these graphs are constructed dynamically, we are able to
identify important witness communities and leading community members close
to real time. The Java Universal Network/Graph (JUNG) framework has been
chosen as the basis for the social network analyzer. JUNG [9] is an open source
graph modeling, network analysis and visualization framework written in Java.
It supports a variety of graph representations and allows the relations between
nodes to be examined. We build a social network graph from all collected tweet
conversations, using the user id as the unique node identifier and the interaction
type (e.g., retweet) to specify the relationship between the user nodes. JUNG
provides a mechanism for annotating nodes and relations with metadata. By
adding Twitter-specific metadata to the social network graph, the analysis can
later benefit from its rich expressiveness. Node information is enhanced with the
user name, the status count and the friends/followers counts whenever tweets are
parsed by the social network analyzer. In this way, the Twitter experience of the
user and the characteristics of the subscription network can be considered as
additional parameters for the network analysis. Similarly, relationships between
the nodes are enhanced with information about the accumulated interaction
frequency observed during the analysis period. Figure 3 (right part) shows an
example of graph metadata annotation for a randomly selected group of Twitter
authors.
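As a rough illustration of this construction step, the following stdlib-only Python sketch mirrors what the JUNG-based analyzer does in Java: user ids become nodes carrying Twitter metadata, and each interaction accumulates a typed, weighted relation. Class and field names here are assumptions for the sketch, not the project's actual API.

```python
# Illustrative analogue of the JUNG graph construction: nodes keyed by
# user id with Twitter metadata, edges keyed by (source, target, kind)
# accumulating the observed interaction frequency.
class ConversationGraph:
    def __init__(self):
        self.nodes = {}   # user_id -> {"name": ..., "statuses": ..., "followers": ...}
        self.edges = {}   # (src_id, dst_id, kind) -> accumulated interaction count

    def add_tweet(self, user_id, meta, target=None, kind=None):
        """Register the author; if the tweet answers another user,
        accumulate the retweet/reply/mention relation."""
        self.nodes.setdefault(user_id, meta)
        if target is not None:
            key = (user_id, target, kind)
            self.edges[key] = self.edges.get(key, 0) + 1

g = ConversationGraph()
g.add_tweet(1, {"name": "alice", "statuses": 120, "followers": 80})
g.add_tweet(2, {"name": "bob", "statuses": 40, "followers": 10},
            target=1, kind="retweet")
g.add_tweet(2, {"name": "bob", "statuses": 40, "followers": 10},
            target=1, kind="retweet")
```

After the two retweets, the edge (2, 1, "retweet") carries an accumulated frequency of 2, which is exactly the kind of relation metadata the centrality computation consumes.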
From the interaction frequencies, JUNG can calculate the network centrality
(used as an indicator of author credibility) for each node and determine socially
coherent sub-communities in the network graph. Various graph metrics are
calculated individually for each relationship type: retweets are considered a weak
relation, whereas replies and mentions are considered strong relationships. The
result of this network analysis is shown in Figure 3 (left part), where all Twitter
authors are displayed in a circular graph (KKLayout) in which leading members
are positioned in the center and ordinary members at the edge. The JUNG
framework provides filtering mechanisms that allow tweets to be filtered
according to these centrality values, given a manually defined centrality
threshold. Executing tests with the tweet data obtained from the crawler
described in Section 3, we observed slight delays when updating the graph
display (40,000 tweets render in approximately 5 minutes). This delay is
explained by the complete recalculation of the graph whenever a new node or
relation is added. Since the graph layout algorithm is very time consuming,
switching off the visualization component leads to much better performance.
Another solution is to move from an incremental to an interval-based
recalculation, as the graph evolves only partially.
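The weak/strong weighting and threshold filtering described above can be sketched with a simple weighted-degree centrality. The weights (0.5 for retweets, 1.0 for replies and mentions) and the threshold are invented for illustration; the paper computes centrality with JUNG's own metrics.

```python
# Sketch of centrality-based author filtering: retweets count as weak
# relations, replies/mentions as strong ones, and authors below a manually
# chosen threshold are filtered out. Weights and threshold are assumptions.
WEIGHTS = {"retweet": 0.5, "reply": 1.0, "mention": 1.0}

def weighted_degree(edges):
    """edges: list of (src, dst, kind). Returns author -> weighted degree."""
    score = {}
    for src, dst, kind in edges:
        w = WEIGHTS[kind]
        score[src] = score.get(src, 0.0) + w
        score[dst] = score.get(dst, 0.0) + w
    return score

def leading_members(edges, threshold):
    """Authors whose weighted degree reaches the centrality threshold."""
    return {u for u, s in weighted_degree(edges).items() if s >= threshold}

edges = [("a", "b", "reply"), ("c", "b", "retweet"), ("d", "b", "mention")]
leaders = leading_members(edges, threshold=2.0)
```

Here author "b" accumulates 1.0 + 0.5 + 1.0 = 2.5 and passes the threshold, while the peripheral authors are filtered out, analogous to keeping only the central nodes of the KKLayout.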
result is a list of connected words and text separators such as spaces and
punctuation. In a subsequent step a grammatical analysis is performed, which
classifies words into nouns, verbs, adjectives, etc. This process is called
part-of-speech tagging and exploits the word context (the relationship with
adjacent words) in the tweet text. Based on this knowledge, a word chunker can
group single words into meaningful units that belong together, e.g., a
combination of given names and surnames. This word clustering process leads to
specific n-gram (sequence of n items) patterns that we use to identify the text
language. In order to extract knowledge from texts it is important to associate
these chunks with semantics, which can be achieved through named entity
recognition algorithms. In the case of an emergency analysis we correlate specific
chunks with temporal information, information about mentioned locations, and
involved persons and objects. Figure 4 shows the word tokens, part-of-speech
tags and named entities discovered for one example tweet. For language
detection we use a library implemented by Shuyo Nakatani. According to his
weblog [10], this library detects tweets in 17 languages with 99.1% accuracy. The
parser is limited to the Latin alphabet and covers languages such as German,
English, Spanish, French, Italian and other European languages. Since it is
specialized in noisy short texts (more than 3 n-grams), it is suitable for Twitter.
A possible disadvantage of the library is that detection quality may suffer from
the short text length of tweets, especially if multiple languages are used in one
tweet. The results can be improved by repeating the language analysis on the
aggregated tweet contents belonging to the same conversation or even the same
emergency event. As word tokenizer and POS tagger we currently use the
University of Washington's Twitter NLP library4. Preliminary tests with
different NLP tools have shown that Washington's word tokenizer and POS
tagger work fast and accurately enough for the analysis of Twitter streams in
real time. For the named entity recognition component we chose Stanford's
CoreNLP library5, as it demonstrated an improved performance for Twitter. In
some cases, however, locations and persons were not properly
4 https://github.com/aritter/twitter_nlp
5 http://nlp.stanford.edu/software/corenlp.shtml
Fig. 4. Example text analysis with derived word tokens, POS and NER tags
detected. The NER tool from the University of Illinois's NLP tools6 produced
the best results in our tests; however, its licensing conditions were not
appropriate for the SABESS project. Altogether, these tests proved that this
combination of text analysis tools works fast and accurately enough to detect
emergency information reliably.
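The n-gram language identification step of the pipeline can be illustrated with a toy character-trigram detector. The two profiles below are tiny hand-written samples, not Nakatani's real model, which trains far larger n-gram statistics to reach its reported 99.1% accuracy over 17 languages.

```python
# Toy character-trigram language detector in the spirit of the pipeline
# described above. Profiles are minimal illustrative samples, not the
# statistics of the actual library.
from collections import Counter

def trigrams(text):
    text = f" {text.lower()} "
    return Counter(text[i:i + 3] for i in range(len(text) - 2))

PROFILES = {
    "en": trigrams("the earthquake occurred near the city and the people"),
    "es": trigrams("el terremoto ocurrio cerca de la ciudad y la gente"),
}

def detect(text):
    """Pick the language whose trigram profile overlaps the text the most."""
    grams = trigrams(text)
    def overlap(lang):
        profile = PROFILES[lang]
        return sum(min(c, profile[g]) for g, c in grams.items())
    return max(PROFILES, key=overlap)
```

On short tweet-like inputs the shared trigrams ("the", "ear", "qua" versus "ter", "rre", "cer") dominate the overlap score, which is the same signal a real profile-based detector exploits at scale.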
4 Conclusion
A future version of our system will extend the graphical tool so that users can
inspect the data and the outcomes of the analysis.
References
1. Licamele, G.: Web metrics report from Fairfax county (2011),
www.fairfaxcounty.gov/emergency/flooding-090811-metrics.pdf (last visited
June 1, 2012)
2. Acar, A., Muraki, Y.: Twitter and natural disasters: Crisis communication lessons
from the Japan tsunami. International Journal of Web Based Communities 7(3),
392–402 (2011)
3. MacEachren, A.M., Jaiswal, A.R., Robinson, A.C., Pezanowski, S., Savelyev, A.,
Mitra, P., Zhang, X., Blanford, J.: SensePlace2: GeoTwitter Analytics for Situa-
tional Awareness. In: IEEE Conference on Visual Analytics Science and Technology
(VAST 2011), Rhode Island, USA (2011)
4. Li, R., Lei, K., Khadiwala, R., Chang, K.: TEDAS: a Twitter Based Event Detec-
tion and Analysis System. In: Proc. of the 28th IEEE International Conference on
Data Engineering (ICDE), Washington, USA (2012)
5. Abel, F., Hauff, C., Houben, G.-J., Stronkman, R., Tao, K.: Semantics + Filtering
+ Search = Twitcident. Exploring Information in Social Web Streams. In: 21st
International ACM Conference on Hypertext and Hypermedia (HT 2010), Toronto,
Canada (2010)
6. Marcus, A., Bernstein, M., Badar, O., Karger, D., Madden, S., Miller, R.: Twitinfo:
aggregating and visualizing microblogs for event exploration. In: Proc. of ACM CHI
Conference on Human Factors in Computing Systems, pp. 227–236 (2011)
7. Becker, H., Naaman, M., Gravano, L.: Beyond Trending Topics: Real-World Event
Identification on Twitter. In: Proc. of the 5th International AAAI Conference on
Weblogs and Social Media, ICWSM (2011)
8. Pohl, D., Bouchachia, A., Hellwagner, H.: Automatic Sub-Event Detection in
Emergency Management Using Social Media. In: Proc. of the 1st International
Workshop on Social Web for Disaster Management (SWDM 2012), pp. 683–686
(2012)
9. O'Madadhain, J., Fisher, D., Smyth, P., White, S., Boey, Y.B.: Analysis and
visualization of network data using JUNG. Journal of Statistical Software 55(2),
1–25 (2005)
10. Shuyo's Weblog, http://shuyo.wordpress.com/2012/02/21/language-detection-for-twitter-with-99-1-accuracy/ (last visited: June 2, 2012)
Protecting Firefighters with Wearable Devices
1 Introduction
Firefighters, rescuers and, in general, emergency units can benefit from new
technologies that improve their performance and safety when working in risk
situations. The main hazard of working in severely hot conditions is called
thermal stress, and can be defined as the heat load received, resulting from the
interaction between the environmental conditions of the workplace, the physical
activity, and the clothes the workers wear. Under thermal stress the body is
altered and the person suffers physiological overload. Certain physiological
mechanisms (such as sweating and peripheral vasodilatation) help the body
release the excess heat; even so, if the body temperature exceeds 38 °C, serious
health problems and even death may occur. This phenomenon is a major cause of
death among firefighters, and it can also affect professionals working with
high-heat sources, such as foundries and the metal or glass industries.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 470–477, 2012.
© Springer-Verlag Berlin Heidelberg 2012
Fig. 2. T-Shirt
The heart rate is acquired through textile electrodes integrated into the T-shirt
cuffs (Figure 2). Good electrode placement is a key issue, since we work with
very low-power signals and the movements of the user can produce bad contacts
and distort the signal. For this reason, the electrodes can be easily integrated
into the elastic band around the trunk that contacts the skin, since it is a good
electrocardiography (ECG) acquisition point. For a good acquisition of the
cardiac signal it is crucial to increase the conductivity between the electrodes
and the skin. Most systems use conductive gels, which eventually dry up and
disable the electrode. Our electrodes instead leverage the user's own sweat to
reduce the resistivity. Therefore, for the system to work the person must be
sweating, a condition implicit in the risk of heat stress.
4 Wireless Communication
4.1 Protocols
Our target application is characterized by infrequent, non-constant transfers of
small amounts of information from the sensor nodes to a central device, a
wrist-watch. Among the possible protocols, Bluetooth [3], Bluetooth Low
Energy (BLE) [4] and ZigBee [6] provide low consumption, but with a low data
rate. Technologies with longer range, such as WiFi and WiMAX, offer very high
data rates but place no restriction on consumption. Because low energy
consumption is a key issue in this project to maximize battery life, and the
required data rate is low, we discarded the WiFi and WiMAX technologies and
focused on Bluetooth and ZigBee.
Both BLE and ZigBee use the 2.4 GHz part of the spectrum. BLE features
a simple link layer designed for quick connections, working in five simple
stages: wake up, connect, send data, disconnect, sleep. The total time for
sending data is less than 3 ms (compared to 100 ms with classic Bluetooth),
BLE uses fewer channels than classic Bluetooth for its connection procedure,
and it has a strikingly small 0.1% duty cycle (compared to Bluetooth's 1% duty
cycle). ZigBee also operates at 915 MHz (in America) and 868 MHz (in Europe)
and introduces mesh networking with three fundamental types of ZigBee nodes:
the 'coordinator' (initiates the network), the 'router' (passes data on
node-to-node) and the 'end device'. The duty cycle of battery-powered nodes
within a ZigBee network is designed to be very low, offering even more energy
efficiency and greater battery life. Once associated with a network, a ZigBee
node can wake up, communicate with other ZigBee devices and return to sleep.
In our case, robustness is a very important criterion: BLE proves significantly
more robust than ZigBee through its use of adaptive frequency hopping. Each
node also maps frequently congested areas of the spectrum so that they can be
avoided. ZigBee, in contrast, does none of these things, relying on
direct-sequence spread spectrum.
Power-wise, BLE uses a synchronous connection, meaning that both master
and slave wake up synchronously, which helps keep power consumption low on
both sides. ZigBee uses an asynchronous scheme: the routers stay awake all the
time, so their power consumption is relatively high, but the end nodes can wake
up at any time and send their data without waiting for a specific time slot.
Regarding network layout, BLE is optimized for a star topology, such as the
master-slave relationship between a mobile phone or wrist-watch and its
peripheral devices. ZigBee, on the other hand, is optimized for a mesh
topology1. BLE star-topology slave nodes can simply 'go to sleep' when they
have nothing to say. Mesh nodes, in contrast, have to keep scanning in order to
detect messages to pass on, which requires a lot of power compared to BLE.
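The duty-cycle figures above translate directly into average current drawn from the battery. A back-of-the-envelope comparison (the active and sleep currents below are assumed round numbers for illustration, not measured values):

```python
# Average current from a duty cycle: I_avg = d * I_active + (1 - d) * I_sleep.
# The 0.1% (BLE) and 1% (classic Bluetooth) duty cycles come from the text;
# the active and sleep currents are illustrative assumptions.
I_ACTIVE_MA = 15.0   # assumed radio-on current, mA
I_SLEEP_MA = 0.001   # assumed sleep current, mA

def avg_current_ma(duty_cycle):
    return duty_cycle * I_ACTIVE_MA + (1 - duty_cycle) * I_SLEEP_MA

ble = avg_current_ma(0.001)      # 0.1% duty cycle
classic = avg_current_ma(0.01)   # 1% duty cycle
```

Under these assumptions the classic-Bluetooth average current is roughly an order of magnitude higher than BLE's, which is why the duty cycle, not the peak current, dominates battery life in this class of application.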
Taking into account all the considerations of this section, we have chosen BLE
as the communication protocol, since we need short-distance, ultra-low-power
communication
1 This is because ZigBee was designed for use as a series of distributed nodes.
474 G. Talavera et al.
with a star topology. The decision to choose BLE is strongly supported by the
fact that new devices with BLE capabilities, such as wrist-watches [2,5] and
smart-phones, are appearing, and this trend will increase in the future (BLE is a
feature of Bluetooth 4.0 that will be commonly integrated in next-generation
smart-phones).
Changing the substrate type from FR4 to PET modifies parameters such as the
dielectric constant, thickness and bandwidth (see Figure 3b); these changes move
the central frequency to 3.48 GHz, out of our range of operation. To correct this
deviation it is necessary to modify the original design so as to obtain an antenna
resonating at 2.45 GHz.
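A first-order estimate of the required redesign follows from the fact that, for a printed antenna, the resonant frequency scales roughly as f ∝ 1 / (L · √ε_eff). With the PET substrate fixed, the radiating length must grow by the ratio of the old to the target frequency. This is only a rough starting point for the redesign, not a full-wave result, and the 20 mm branch length below is a hypothetical value.

```python
# First-order scaling: with the substrate (hence eps_eff) fixed,
# f ∝ 1 / L, so L_new = L_old * f_old / f_target.
def rescaled_length(l_old_mm, f_old_ghz, f_target_ghz):
    return l_old_mm * f_old_ghz / f_target_ghz

# e.g. a hypothetical 20 mm branch resonating at 3.48 GHz on PET:
l_new = rescaled_length(20.0, 3.48, 2.45)  # about 28.4 mm
```

In other words, bringing 3.48 GHz down to 2.45 GHz requires lengthening the radiating branches by roughly 42%, which is consistent with the redesign direction described in the text.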
This material lends itself to printing: using serigraphy techniques it is possible
to print the antenna on a PET layer, with the antenna on top and the ground
plane on the bottom, yielding a cheap and quick solution.
The body effects need to be studied in detail to ensure good performance.
After fabricating two antennas, we analyzed their response in different
environments.
Body Issues and Flexibility: The antenna is very thin and has a small
ground plane; in this situation it is necessary to treat the body as another dielectric
Fig. 4. Body issues: a) comparison of the antenna with more branches (—) and the
antenna with longer branches (___) on an arm; b) the same comparison on the trunk
able to modify the frequency response. The first test consisted of placing the
antenna on one arm (Figure 4a), covered with a jacket to avoid contact with the
skin. The resonance frequency shifted to 2.16 GHz for the antenna with longer
branches and to 2.24 GHz for the antenna with more branches. The second test
was made on the trunk (Figure 4b), again covering the skin with a jacket; the
body produced a similar effect, shifting the response to 2.25 GHz for the antenna
with longer branches and to 2.23 GHz for the antenna with more branches.
Another important point is the study of the flexibility of the antenna
(Figure 5a): this test analyzes the behavior of the antenna when it is flexed and
evaluates the resulting central frequency.
Thermal Stress Test: The thermal stress test provides data about the heat
resistance of the antenna, studying frequency variations as a function of
temperature increase. The procedure consists of two steps: the first uses the
firefighters' thermal protection and the second is performed without protection.
In both cases we heat the antenna in increasing steps from 40 °C to 250 °C,
evaluating the variation in behavior by keeping the antenna connected to a
network analyzer during the tests. The variation of the reflection coefficient as
the temperature increases is shown in the figure. In the case with the shield, the
protection of the antenna is completely effective and the behavior is almost the
same over the whole range. In contrast, when the test is made without the shield
the antenna suffers the effects of the heat: the bonding material between the
connector and the antenna begins to melt at 130 °C, and shortly afterwards the
antenna substrate folds in on itself. The results are shown in Figure 5b.
5 Inductive Charging
Energy is a crucial issue in this project: the batteries have to last at least
24 hours, the maximum time considered realistic between two battery charges.
Since the system is on a T-shirt that must be washed, a completely closed
system must be provided. Inductive charging uses an electromagnetic field to
transfer energy between two objects. Induction chargers typically use an
induction coil to create an alternating electromagnetic field from within a
charging base station, and a second induction coil in the portable device takes
power from the electromagnetic field and converts it back into electrical current
to charge the battery. Because there is a small gap between the two coils,
inductive charging is a kind of short-distance wireless energy transfer. The main
advantages of inductive charging are the elimination of power transmission
cables and of electrical shock hazards, since there are no exposed conductors. It
can also be used in closed systems that must be protected from the environment.
The main disadvantages are its lower efficiency and increased resistive heating
compared to direct contact, and it also increases manufacturing complexity and
cost.
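The coupling between the two coils can be quantified with a short worked example: for a sinusoidal primary current, the open-circuit secondary voltage is |V_s| = ω·M·I_p, with mutual inductance M = k·√(L1·L2). All the component values below are illustrative assumptions, not measurements from the t-shirt charger.

```python
# Sketch of the magnetic link behind inductive charging.
# k: coupling coefficient (reduced by the gap between the coils),
# L1, L2: coil inductances in henries, f: drive frequency, I_p: primary current.
import math

def induced_voltage(k, l1_h, l2_h, f_hz, i_primary_a):
    m = k * math.sqrt(l1_h * l2_h)        # mutual inductance, H
    return 2 * math.pi * f_hz * m * i_primary_a

# loosely coupled coils (k = 0.3), 10 uH each, driven at 100 kHz with 0.5 A:
v_s = induced_voltage(0.3, 10e-6, 10e-6, 100e3, 0.5)
```

The small gap mentioned above lowers k, and the induced voltage (and hence the transferable power) drops proportionally, which is the root of the efficiency disadvantage compared to direct contact.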
6 Conclusions
In this paper we have presented our ongoing work on a project that aims at
creating a T-shirt with integrated technology to prevent thermal stress in
emergency units, especially firefighters. The project is still under development:
the PCB and the T-shirt are completed, and we are now finishing the
development of the mobile phone application, on an Android platform with BLE
capabilities. A web server to store and analyze the history of the data is also
being developed. A careful analysis and optimization of the energy consumption
will be carried out once the complete system is finished.
Acknowledgements. This work has been partly funded by the Spanish project
TSI-020400-2010-100 and the Catalan Government Grant Agency Ref.
2009SGR700.
References
1. Application note,
http://www.campbellsci.com/documents/technical-papers/heatindx.pdf
2. http://world.casio.com/news/2011/watch_prototype
3. http://www.bluetooth.com/, https://www.bluetooth.org/
4. http://www.bluetooth.com/pages/low-energy.aspx
5. http://www.polar.fi/
6. http://www.zigbee.org/
7. Hertleer, C., Rogier, H., Vallozzi, L., Van Langenhove, L.: A textile antenna for
off-body communication integrated into protective clothing for firefighters. IEEE
Transactions on Antennas and Propagation 57 (April 2009)
8. Hertleer, C., Rogier, H., Moeneclaey, M., Vallozzi, L., Van Torre, P., Verhaevert,
J.: Wireless communication for firefighters using dual-polarized textile antennas
integrated in their garment. IEEE Transactions on Antennas and Propagation 58
(April 2010)
9. Pasher, E.: Intelligent Clothing: Empowering the Mobile Worker by Wearable Com-
puting. IOS Press (2009)
10. Sensirion the sensor company, http://www.sensirion.com/
11. Van Torre, P., Vallozzi, L., Rogier, H., Verhaevert, J.: Diversity textile antenna
systems for firefighters. In: Proceedings of the fourth European Conference on
Antennas and Propagation, p. 5. IEEE (2010)
An Experience of Using Virtual Worlds and Tangible Interfaces for Teaching Computer Science
1 Introduction
3D virtual environments such as World of Warcraft, Sims, and Second Life have
achieved great popularity and acceptance among teenagers. There is a good prospect
of incorporating virtual worlds as a learning tool, and in fact several attempts have
already been made in that direction. We review here some of the educational projects
that use virtual worlds at various educational levels and in different educational areas.
Ketelhut [1] describes River City, a virtual city whose citizens are supposed to be
sick; students have to investigate and explore in search of evidence and clues about
the cause of the plague.
The Vertex Project [2] took place at a school in the UK with children aged between
9 and 11. Students worked in groups, combining traditional activities like creating a
collage or writing a story with the creation of their own virtual worlds.
The NIFLAR Project [3] aimed to enhance and innovate the teaching of foreign
languages by means of sessions in which secondary school students in Spain and the
Netherlands interacted with each other, with the aim of improving social and
intercultural learning. Other examples of projects related to language learning are the
AVATAR project [4] and the AVALON project [5].
The ABV4Kids&Teens Project involved a cooperative team of experts and schools
from Germany, the United Kingdom, Norway, Poland and Israel. The main goal was
to help students and teachers to better understand the different European cultures,
languages and values. They created a virtual town called Anti-Bullying-Village [6] in
which events were held on racism, xenophobia, violence, and school bullying.
Virtual worlds may help to deal with inclusive education. For example, Brigadoon
is a virtual island in Second Life that is used to help people with autism or Asperger
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 478–485, 2012.
© Springer-Verlag Berlin Heidelberg 2012
syndrome. Another example along this line is using virtual worlds to facilitate the
initial adaptation of immigrant students to school, as in the Espurna Project [7],
where language and cultural aspects undermine the educational possibilities of
low-income immigrant students.
Virtual worlds may also be used for teaching computer science. For example, the
V-LeaF Project [8] teaches computer programming in an innovative and attractive
way. This project was carried out at secondary schools in Spain.
Virtual worlds may be criticized for making the student "lose contact" with reality.
Just the opposite approach is to locate some computation in the real world, thus
enriching ordinary objects with computational functionality: what has been called
"tangible interfaces". Their use as an educational tool can help students assimilate
abstract concepts through analogies with tangible elements, or can support
collaborative work that allows more active participation by the students. Here we
present a brief review of the use of tangible interfaces in education.
Stanton [9] describes the process of creating tangible interfaces for a collaborative
drawing tool called KidPad. KidPad is used to create narrative stories by means of
collaborative activities in small groups.
Africano [10] proposes a conceptual development based on interactive games to
promote collaborative learning. The system is provided with a set of tangible
interfaces that are used by students to explore geography and foreign cultures.
Horn and Jacob [11] propose a system for teaching programming using a physical
interface. They have created two tangible programming languages for use by students
in the final year of primary school and in middle school.
Tangicons [12] is similar to the previous project, but aimed at kindergarten
children, who learn the first steps of programming by interacting with tangible
programming bricks.
Finally, Garden Alive [13] is a garden in which users interact through tangible
interfaces. The system has cameras that detect hand gestures and sensors that detect
light and the presence of water; these inputs allow the plants to grow virtually.
In conclusion, we can see that there are very few projects that integrate both
virtual worlds and tangible interfaces. We think that this integration may exploit
powerful synergies between the virtual and the "real" world, allowing a more vivid
educational experience. Thus, we propose in this paper a system that uses virtual
worlds and tangible interfaces in order to explore such a "mixed reality" concept.
The system described in this paper is named Cubica, and uses tangible interfaces to
combine the real world with virtual worlds, thus providing a "mixed reality".
The first decision was the choice of the virtual world platform, which is a key issue
for several reasons. On the one hand, the virtual world has to be suitable for secondary
education: it has to be a closed and controlled environment. For example, Second Life
is a great virtual world platform, but access is not allowed for users under 18, and
therefore it cannot be used at secondary schools. On the other hand, we need a virtual
480 J. Mateu and X. Alamán
world provided with open communication protocols, as we want to link it with the
real world.
Considering these two requirements, we finally chose OpenSimulator (OpenSim)
[14] as the development platform. This platform allowed us to implement a closed and
controlled environment suitable for high school students. OpenSim uses the same
communication protocols as Second Life, but is distributed as open source software
under a BSD license, so it is very flexible.
We also had to take into account the limitations found in public schools. In
general, public schools in Spain have computers with limited capacity, low-bandwidth
Internet access, and firewall restrictions. To overcome these problems we installed
the OpenSim server within the high school's own local area network (LAN), instead
of using the global Internet.
We created a virtual world called "Algoritmia Island", where students may visit
different thematic houses in which they learn about sorting algorithms. On this island,
students find animations, notice boards, exercises, links to web pages, QR codes
linking to videos of sorting-algorithm animations, and more.
Once we had the virtual world running, we needed a way of integrating it with the
real world. For that purpose we created a middleware that mediates between the
virtual world and the tangible interfaces, thus creating what can be described as a
"mixed reality". This middleware is based on the LibOpenMetaverse library [15].
We also created a Non-Player Character (NPC), or bot, inside the virtual world that
is in charge of sending and receiving the messages that implement the interaction
between the virtual world and the tangible interface.
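The mediation loop can be sketched as follows. The message fields and handler names are invented for illustration; the real middleware talks to the OpenSim server through LibOpenMetaverse and the in-world bot.

```python
# Illustrative relay between tangible-interface events and the in-world bot.
# Event kinds, message fields and handler names are assumptions for this
# sketch, not the deployed system's API.
class Middleware:
    def __init__(self):
        self.handlers = {}          # event kind -> callback on the other side

    def on(self, kind, handler):
        self.handlers[kind] = handler

    def dispatch(self, kind, payload):
        """Forward an event (e.g. an RFID read) to the registered handler."""
        return self.handlers[kind](payload)

mw = Middleware()
virtual_array = [None] * 5       # the array shown inside the virtual world

def update_array(msg):
    # An RFID read on the wooden model updates the in-world representation.
    virtual_array[msg["slot"]] = msg["value"]

mw.on("rfid_read", update_array)
mw.dispatch("rfid_read", {"slot": 2, "value": 7})
```

The same dispatch mechanism works in the opposite direction, with the bot emitting events (e.g. LCD messages) that a handler forwards to the physical model.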
Although the middleware may be used to interact with any tangible interface, as a
proof of concept we have developed a particular one, using Phidgets technology.
Phidgets [16] are low-cost electronic components (sensors and actuators) that are
controlled by a computer via USB.
One of the problems when teaching computer science at secondary schools is the
high degree of abstraction involved in the subject. The assimilation of abstract
concepts is facilitated by using "tangible" elements that are analogies of the abstract
concepts. In this experience we decided to create a wooden model that represents the
concept of an array.
The model has five holes representing the five array elements, while cubes (dice)
are used to represent the values of the elements. We use RFID readers to read each
element of the array: RFID tags have been inserted inside the cubes. The values of the
different positions of the array are sent to a representation of the array inside the
virtual world, which is synchronously updated with this information. The model also
features an LCD display that shows auxiliary messages, such as the iteration number
during a sorting algorithm. In the next section we explain how this tangible interface
was used to interact with the virtual world, thus creating a "mixed reality"
educational experience.
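The interaction just described can be sketched in a few lines: the cube positions are read as an array, and each rearrangement made by the student is compared with the state that one pass of a given sorting algorithm would produce. This is a simplified illustration of the idea, not the deployed code; bubble sort stands in for whichever algorithm the exercise targets.

```python
# Check whether the student's rearrangement of the cubes matches one
# bubble-sort pass, mirroring how the system validates moves on the
# wooden model. Function names are assumptions for the sketch.
def bubble_pass(values):
    """One full left-to-right pass of bubble sort."""
    v = list(values)
    for i in range(len(v) - 1):
        if v[i] > v[i + 1]:
            v[i], v[i + 1] = v[i + 1], v[i]
    return v

def matches_bubble_pass(before, after):
    return bubble_pass(before) == list(after)

# cube values read via RFID before and after the student's move:
ok = matches_bubble_pass([5, 1, 4, 2, 3], [1, 4, 2, 3, 5])
```

A per-algorithm check of this kind is what lets the LCD report, at the end of an activity, which sorting algorithm (if any) the student's sequence of moves corresponds to.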
tangible interface, and at the end of the activity the system itself, through the LCD
screen, indicated whether what had been done matched any of the algorithms and
whether it was advisable for the student to revisit some "thematic house". To
promote collaborative work, the teacher finally proposed an activity in which two
teams competed to sort a given array as quickly as possible.
In the third session, the students continued performing exercises and practicing
with the training panels available on Algoritmia Island. Each student then performed
two tests using the tangible interface. In each test the teacher proposed a particular
array and a sorting algorithm to be used, and the student had to carry out the
corresponding iterations until the array was sorted. After each test, the system showed
the student's results on the LCD screen; these were also published in the teacher's
Twitter account.
Finally, a written test was delivered to the students. It included a series of
exercises to evaluate their acquired knowledge about sorting algorithms, as well as a
survey to assess whether the use of virtual worlds and tangible interfaces had
motivated them and helped them to understand sorting algorithms.
As explained above, the experience was carried out with several groups of students at
various educational levels. After each experience, the students were administered a
written test and a survey to check the usefulness of the system (both objective and
perceived). In total, 42 students participated in the sessions. The results are
summarized in the following table:
An Experience of Using Virtual Worlds and Tangible Interfaces 483
Questions                                                                         Yes          No
Have you ever played 3D virtual world games (World of Warcraft, Sims, etc.)?      36 (86%)     6 (14%)
Did you find the system easy to use?                                              42 (100%)    0 (0%)
Do you think the system has been helpful to understand the concept of array?      40 (95%)     2 (5%)
Did you understand the sorting algorithms better using virtual worlds than
with blackboard explanations?                                                     35 (83%)     7 (17%)
Has using the system helped you to answer the exercises?                          33 (79%)     9 (21%)
Has using the system helped you to distinguish among different sorting
algorithms?                                                                       37 (88%)     5 (12%)
Has the wooden model been helpful to understand how sorting algorithms work?      41 (97.6%)   1 (2.4%)
Do you think the use of virtual worlds has motivated you to work harder in
class?                                                                            38 (90.5%)   4 (9.5%)
Have you found the sessions in which you interacted with virtual worlds
interesting?                                                                      42 (100%)    0 (0%)
Would you spend more time studying at home if you could use virtual worlds
there?                                                                            35 (83%)     7 (17%)
Would you like to participate in further sessions?                                42 (100%)    0 (0%)
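The percentages in the table follow directly from the yes-counts over the 42 participants. As a quick check (the formatting helper below is ours; the paper mostly rounds to whole percents):

```python
def yes_no_row(yes, total=42):
    """Format a survey row as (yes-cell, no-cell) with one-decimal percentages."""
    no = total - yes
    return f"{yes} ({yes / total:.1%})", f"{no} ({no / total:.1%})"

print(yes_no_row(36))  # prior 3D-game experience question
print(yes_no_row(41))  # wooden-model helpfulness question
```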
Analyzing the survey, the first thing we notice is that 86% of the students had
previous experience with 3D virtual environments such as World of Warcraft or
Sims. This familiarity with 3D virtual worlds made the students show more interest
during the sessions, as they could learn in what they regarded as a fun environment.
In fact, the survey data show that 65% of the students think that the use of virtual
worlds motivated them to work harder than they would have done in traditional
classes, and 83% of the students said they would devote more time to studying at
home if they could use virtual worlds there.
These data are related to the learning curve: having previously interacted with 3D
games allowed them to adapt quickly to the environment, so that 83% of the students
found the interaction with virtual worlds easy or very easy: there was no steep
learning curve.
484 J. Mateu and X. Alamán
83% of the students believe they have achieved a better understanding of sorting
algorithms by using the virtual island and the tangible interface, compared with
traditional methodologies (blackboard, books…). The use of virtual worlds also allowed
the students to get involved in collaborative activities in a more orderly fashion, using
text chatting and avatar interactions in the virtual environment. Collaborative activi-
ties conducted in the virtual world created a pleasant climate of healthy competition
among students. The sessions were about sorting algorithms, but many other skills
were practiced indirectly, such as digital competence and understanding 3D coordinate
axes.
The analogy provided by the system through the tangible interface has been very
useful for understanding the concept of sorting arrays: 95% of the students believe that
it has helped. Also, 88% of the students think the system has helped them to better
distinguish the differences among sorting algorithms.
The sessions have been very positive and 100% of the students say they would like
to participate again in similar sessions. Even students from other schools have ex-
pressed interest in participating in the sessions, as they have heard about them from
friends or relatives.
The motivation on the teachers' side is also noteworthy. Teachers who have
participated have loved the sessions and would like to work with such tools in the
future. However, their lack of knowledge about the implementation and use of virtual
worlds may jeopardize these plans.
If we now analyze the results of the tests, 69.23% of the students in the fourth year
of ESO performed correctly or had just a minor error in the final test. 100% of the
second-year Bachelor students were successful or had just a minor error in the test.
60% of the students in the FP courses performed correctly or had just a single minor
error in the test.
Despite having been a very successful experience, we detected some problems that
have to be addressed in future sessions. Firstly, although 69% of the students found
the time devoted to the sessions adequate, teachers found it to be insufficient: many
activities were developed in a fairly short space of time. Therefore, it would be
desirable to include one or two additional sessions to accomplish everything more
calmly and to properly consolidate all the concepts.
We also plan to conduct an additional experiment with a control group of students
who have not used the Cubica system, to compare their performance with the results
presented in this paper.
Finally, an additional aspect to consider is the distraction that interaction with
virtual worlds may cause. 20% of the students spent 30 or more minutes setting up
their avatars, and 52% of the students visited other parts of the island not directly
related to the topics they had to study. The nature of the environment may cause some
students to waste time in non-relevant activities, and this is a problem that has to be
properly addressed if the system is to be used regularly.
References
1. Ketelhut, D.J.: The impact of student self-efficacy on scientific inquiry skills: An explora-
tory investigation in River City, a multi-user virtual environment. The Journal of Science
Education and Technology 16(1), 99–111 (2007)
2. Bailey, F., Moar, M.: The Vertex Project: Exploring the creative use of shared 3D virtual
worlds in the primary (K-12) classroom. In: SIGGRAPH 2002 (2002)
3. Jauregi, M.K., Canto, S., de Graaff, R., Koenraad, T.: Social interaction through video-
webcommunication and virtual worlds: An added value for education. Short paper in CD
Proceedings Online Educa Berlin, pp. 1–6. ICWE, Berlin (2010)
4. Feliz, T., Santoveña, S.M.: El proyecto Added Value of Teaching in a Virtual World
(AVATAR) (Valor añadido de la enseñanza en un mundo virtual). Programa Comenius,
Lifelong Learning (2009)
5. Deutschmann, M., Outakoski, H., Panichi, L., Schneider, C.: Virtual Learning, Real Herit-
age Benefits and Challenges of Virtual Worlds for the Learning of Indigenous Minority
Languages. In: Conference Proceedings International, Conference ICT for Language
Learning, 3rd Conference Edition (2010)
6. ABV (Anti-Bullying-Village), http://www.abv4kids.org/
7. Espurna, http://www.espurna.cat
8. Rico, M., Martínez-Muñoz, G., Alamán, X., Camacho, D., Pulido, E.: A Programming
Experience of High School Students in a Virtual World Platform. International Journal of
Engineering Education 27(1), 1–9 (2011)
9. Stanton, D., Bayon, V., Neale, H., Ghali, A., Benford, S., Cobb, S., Ingram, R., Wilson, J.,
Pridmore, T., O’Malley, C.: Classroom collaboration in the design of tangible interfaces
for storytelling. In: CHI 2001 Seattle, Washington, United States, pp. 482–489. ACM
Press (2001)
10. Africano, D., Berg, S., Lindbergh, K., Lundholm, P., Nilbrink, F., Persson, A.: Designing
tangible interfaces for children's collaboration. In: CHI 2004 Extended Abstracts on
Human Factors in Computing Systems, pp. 853–868 (2004)
11. Horn, M.S., Jacob, R.J.K.: Designing Tangible Programming Languages for Classroom
Use. In: Proceedings of TEI 2007 First International Conference on Tangible and Embed-
ded Interaction (2007)
12. Scharf, F., Winkler, T., Herczeg, M.: Tangicons: algorithmic reasoning in a collaborative
game for children in kindergarten and first class. Paper Presented at the Proceedings of the
7th International Conference on Interaction Design and Children, pp. 242–249 (2008)
13. Ha, T., Woo, W.: Garden Alive: An Emotionally Intelligent Interactive Garden. The
International Journal of Virtual Reality 5(4), 21–30 (2006)
14. OpenSimulator, http://opensimulator.org/
15. LibOpenMetaverse, http://openmetaverse.org/
16. Phidgets, http://www.phidgets.com/
Learning by Playing in an Ambient Intelligent Playfield
1 Introduction
Today’s research in computer-based education highlights the need for a learner-
centred shift targeted towards harnessing the full potential of learning applications. In
this context, learning activities are claimed to be more effective when involving the
learners’ active engagement in groups and intensive interpersonal interaction in real-
world contexts [21].
This paper reports the design, development and evaluation of a technological
framework for learning applications, named AmI Playfield, aimed at creating
challenging learning conditions through play and entertainment. AmI Playfield is an
educative Ambient Intelligence (AmI) environment which emphasizes the use of
kinesthetic and collaborative technology in a natural playful learning context [18] and
embodies performance measurement techniques.
In order to test and assess AmI Playfield, the “Apple Hunt” application was
developed, which engages (young) learners in arithmetic thinking through kinesthetic
and collaborative play, observed by unobtrusive AmI technology behind the scenes.
“Apple Hunt” has been evaluated according to a combination of methodologies
suitable for young testers, whereas Children Committees are introduced as a
promising approach to evaluation with children. The obtained results demonstrate the
system's high potential to generate thinking and fun, deriving from the learners'
full-body kinesthetic play and team work.
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 486–498, 2012.
© Springer-Verlag Berlin Heidelberg 2012
2 Related Work
AmI Playfield is an interdisciplinary concept relating to educative AmI
environments, interactive floor projects and pervasive gaming.
Educative AmI environments [7, 12, 13] aim at providing education through real-
world context activities and are characterized by being suitable for non-skilled
learners and educators for any type of learning or teaching. While the majority of
them implement Tangible User Interfaces (TUIs) as a vehicle for learning content,
kinesthetic interaction is considerably static, corresponding mainly to hand-
manipulation of objects. Moreover, existing educational AmI environments do not
address performance measurement.
Interactive floors are basically characterized by a high potential of full-body
kinesthetic activity and present diverging objectives closely connected to fun. A
variety of game surfaces [5, 10, 11], socializing media [9] and, literally, dance floors
[8, 15] have been proposed in the literature. AmI Playfield would rather compare to
the first category, although it does not build on top of any specialized electrical
surface, in contrast to most related projects, which impose an immediate contact
between users and the sensing technology.
Pervasive gaming is a remarkable paradigm of enhancing traditional games with
technology. Well-known examples, such as [2, 20], utilize GPS-based location
awareness and PDAs to recreate traditional outdoor game experiences. Head-Up
Games [23] evolved through simple technology to bypass the inaccuracy of GPS.
Their adaptable rules, however, appeal mainly to entertainment applications. Also,
Interactive Slide [22] is an indoor pervasive game framework that embodies infrared
artificial vision for location awareness.
AmI Playfield, presented in this paper, combines location awareness with
multimodality to promote entertainment and learning, enhancing physical play. AmI
Playfield constitutes an interaction space, where the user has minimal immediate
contact with technology. Offering a variety of modalities above the floor level, the
system also supports more flexible forms of interaction. In addition, providing a
customizable infrastructure for learning applications, it is suitable for a wide variety
of educational subjects and concepts, determined by the content design of each
application. Also, AmI Playfield introduces a performance measurement system,
using an extendable metric set (see subsection 3.3).
3 AmI Playfield
AmI Playfield addresses the main objective of providing technological support in an
Ambient Intelligent environment appropriate for accommodating and encouraging
learning processes based on playful learning and learning by participation [21].
488 H. Papagiannakis et al.
For this purpose, AmI Playfield embodies a multi-user tracking vision system
developed by the Computational Vision and Robotics Laboratory of ICS-FORTH
[25, 26] that observes activities unobtrusively and provides the basis for natural
(kinesthetic) and collaborative interaction. AmI Playfield applications provide the
content of learning and determine the logic of interaction. Such user experiences, in
which the learner’s whole sensory system participates, are believed to foster learning
more effectively [3], as opposed to the sedentary play style of typical computer
games.
Fig. 1. (a) AmI Playfield's hardware setup; (b) mobile phone controllers serve as command
input devices
safety and reusability reasons. Depending on the learning scenario, marks are
assigned (both physically and digitally) to each game position, and may vary from
arithmetic or alphabetic to custom. In particular, arithmetic games use numbers,
whereas alphabetic games use characters as identifiers. Extra flexibility is offered by
the custom game type, which allows any combination of symbols, numbers and
characters. In order to mark positions physically, a set of typical stickers can be used
(Fig. 3).
In its current layout the playfield spreads over a number of adjacent positions,
which obviously limits the game design. However, this layout was chosen so as to
accommodate alphabetic and numeric games, where the marks' traceability plays a
key role. Further research is needed for free-form games. The framework is capable
of visually generating any kind of grid style, enabling applications to use an empty,
sequenced or randomly-marked playfield, according to their game type. The
respective floor design has to be applied each time. Within the
playfield, players’ moves are translated into mark choices by the system, which are
further manipulated by application logic. In an alphabetic scenario, for example, a
player’s move to position “D” could denote the player constructing a word containing
the specified letter. Depending on the application design, input from different player
moves can be combined in various ways to yield collaborative results.
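The translation of a tracked floor position into a mark choice can be sketched as follows. The cell size, grid dimensions, and the sequenced numeric marking are illustrative assumptions on our part, not the framework's actual geometry or code:

```python
# Hypothetical mapping from tracked floor coordinates (in metres) to
# the mark of the grid cell a player stands in. Cell size, grid shape,
# and the sequenced "1", "2", ... marking are invented for illustration.
CELL = 0.5          # assumed cell edge length in metres
COLS, ROWS = 5, 4   # assumed playfield grid dimensions

def mark_at(x, y):
    """Return the mark of the cell containing (x, y), or None if the
    player is outside the playfield."""
    col, row = int(x // CELL), int(y // CELL)
    if not (0 <= col < COLS and 0 <= row < ROWS):
        return None
    return str(row * COLS + col + 1)  # sequenced numeric marks

print(mark_at(0.3, 0.1))   # corner cell -> "1"
print(mark_at(1.6, 1.2))   # -> "14"
```

Application logic would then consume the returned mark, e.g. treating it as a letter choice in an alphabetic game or as the player's current number in an arithmetic one.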
Furthermore, the system optionally generates targets and/or obstacles, aimed at
enriching the game strategy and defining levels of complexity. Any small objects
(dummies) can be used for their physical representation, as their presence does not
interfere with the operation of the cameras (Fig. 3). Depending on the game
application scenario, rival-players may also be considered as targets or obstacles.
Projected in parallel on the dual back-projection display, various graphical user
interfaces convey the image of the game’s virtual world. The GUIs are designed to
dynamically depict game action from different aspects, in order to aid in the players’
decisions and activity. The left view is focused on personalized information, whereas
the right view depicts mainly the playfield activity.
Mobile controller interfaces are optionally used for profile creation and command
submission during play. Input, in the form of arithmetic operations, characters and/or
symbols, can be combined in each application's logic with the positioning input to
generate specific results. Various mobile controller UIs were implemented as
ASP.Net web pages (Fig. 1b). Playing with one's own phone, instead of a special
handheld controller, was expected to be both intuitive and attractive, which was
confirmed by the project's evaluation (Fig. 5 – Questions 7, 8).
Speech synthesis is used to enrich acoustic interaction in a natural way, via the MS
Speech API 5.3. Sound effects are offered to cover routine (repetitive) messages,
whereas background music is also available, in order to promote a feeling of joy and
vigilance.
AmI Playfield modalities are orchestrated via the FAMINE middleware
infrastructure [4]. The middleware plays a key role in the framework flexibility and
extensibility, allowing for the platform-independent and distributed implementation of
its components. Thanks to the middleware server-client architecture, learning
applications (as clients) are enabled to easily extend the established user interfaces
(UIs). In particular, applications are enabled to customize the framework and apply
their add-ons (e.g. extra performance metrics) through a structured XML-based
mechanism.
Overall, the framework logic was entirely developed in .Net (mainly C#), in
conjunction with XML for its configuration facilities. .Net provides an efficient 2D
GUI designer and good interoperability with the Microsoft Excel™ SDK and MySQL
databases, used for performance reporting and data storage, respectively.
Fig. 2. (a) An instance of Fun Statistics; (b) an extract of a Performance Report that summarizes
each player's activity in terms of total moves, target hits, wrong actions, command usage,
response times, etc.
The results are presented to the players at the end of a game as “fun statistics”, which
summarize the game activity in the form of awards or improvement prompts. Fig. 2a
describes the default metrics defined by the framework.
In addition, an analysis of the players’ performance is output in Excel reports at the
end of every game (Fig. 2b). The default performance measurement scheme described
above can also be extended by additional application-level statistics.
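A minimal sketch of how such per-player statistics might be aggregated from logged game events follows. The event names, log shape, and player names are hypothetical; the real framework reports via Excel and stores data in MySQL:

```python
from collections import Counter

# Hypothetical event log: (player, event_kind) pairs as a framework
# like this might record them. Event names are invented for illustration.
events = [
    ("Anna", "move"), ("Anna", "target_hit"), ("Anna", "move"),
    ("Anna", "wrong_action"), ("Ben", "move"), ("Ben", "target_hit"),
    ("Ben", "target_hit"), ("Ben", "move"), ("Ben", "move"),
]

def performance_report(log):
    """Summarize each player's activity (moves, hits, wrong actions, ...)."""
    report = {}
    for player, kind in log:
        report.setdefault(player, Counter())[kind] += 1
    return report

rep = performance_report(events)
print(rep["Anna"]["target_hit"], rep["Ben"]["move"])
```

Application-level extensions, as described above, would amount to counting additional event kinds in the same pass.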
4 Apple Hunt
AmI Playfield applications provide the content of learning and set the logic of play
interaction. Influenced by the research on playful learning applications described in
[16, 27], AmI Playfield applications target: i) fun; ii) challenge; iii) engagement; and
iv) learnability.
Apple Hunt is an educational application that was developed in order to assess the
degree to which AmI Playfield facilitates the development of kinesthetic educational
games, and investigate whether educational games integrated within this environment
can have a positive influence on learning. Apple Hunt addresses fundamental
arithmetic operations, taking advantage of the framework’s provisions for playful and
collaborative learning, and is oriented towards elementary school children. A
prototype of Apple Hunt has been implemented and is up and running in a laboratory
space of ICS-FORTH. The game offers:
• A four-player capacity;
• A variety of difficulty levels related to the ratio of traps (obstacles) to apples
(targets);
• The entire range of user interfaces. In addition, the game restricts the arithmetic
controller from allowing decimal numbers or numbers higher than 9;
• Arithmetic-type performance reports, extended by the “Math Genie award”
(gained by each player who manages to use all four mathematical operations
flawlessly).
Fig. 3. The Apple Hunt game: the players compete in quickly gathering the apples on the floor,
while executing simple arithmetic operations with the mobile controller (in order to target their
next positions)
A game continues until all apples are picked, which normally takes from 5 to 15
minutes, depending on its difficulty level.
To move around the playfield, each player must first target a desired position using
the mobile controller, while no turn-restrictions apply. For this purpose, they have to
input an arithmetic operation that begins with the number of their current position and
results in their target one. For example, standing on position “5” and targeting “25”
would require a “multiply by 5” command. Upon submission, they are allowed to
move towards the targeted position.
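The move rule illustrated above (standing on position "5", the command "multiply by 5" targets "25") can be sketched as a command resolver. The operator encoding and function names are ours; the single-digit restriction comes from the game's arithmetic-controller limits described earlier:

```python
# Hypothetical resolver for Apple Hunt controller commands: apply an
# arithmetic operation to the player's current position to obtain the
# targeted position. Operator symbols and names are our assumptions.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
}

def resolve_move(current, op, operand):
    """Apply a controller command; operands above 9 and decimals are
    rejected, mirroring the game's controller restriction."""
    if not (isinstance(operand, int) and 0 <= operand <= 9):
        raise ValueError("controller only accepts single-digit integers")
    return OPS[op](current, operand)

# Standing on position 5, the command "multiply by 5" targets 25:
print(resolve_move(5, "*", 5))  # -> 25
```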
Strategic play not only induces careful choice of target positions, but also leads to
analytic thinking of the optimal arithmetic operation that will minimize the actions
(commands and moves) required to reach the apples. Apart from the presence of apple
and trap dummies on the game floor, strategic play is also accommodated by the
visual display, which provides a dynamic sky-view of the game. In the end, the
winner (or winning team) is judged by the highest score (or score sum), as a result of
the best balance between apple and trap hits. The “fun statistics” (described in
subsection 3.3), presented at the end of each game, are designed to reward intelligent
play.
5 Evaluation
Games, in contrast to productivity applications, are designed to entertain. Pagulayan
et al. [14] stress the importance of considering the differences between games and
productivity applications in evaluation design, indicating that typical usability
evaluation does not apply to games. The reason is that the emphasis in playful
activities does not necessarily lie in efficiency, but rather in pleasure and fun. The
evaluation of Apple Hunt addresses this issue, by entailing a variety of children-
oriented evaluation methodologies and theories (Table 2), while introducing Children
Committees, a sort of evaluators committee constituted at the end of a game series.
Moreover, the evaluation was aimed at measuring the objectives set for AmI Playfield
applications along with the system’s performance according to the seven types of
problems in games [1], as per Table 1.
The evaluation of the Apple Hunt game has so far involved 9 children in total. All
children had considerable experience with computers and technological gadgets, and
most of them had been playing computer games for a long time. Their ages varied
from 7 to 11 years, while 6 of the 9 participants were 6th-grade elementary school
students (according to the Greek educational system).
During the first phase (practice), the children practiced in teams of two, playing in
turns, with only one team participating at a time. This arrangement helped the
participants engage in discussions, which seemed to outweigh the drawbacks of the
Think Aloud method. The main interest in this phase was to identify possible
misconceptions or difficulties caused by the system's features.
In the second phase (free play), the game was played by two teams, but this time
only one member per team acted within the playfield, while the others stayed aside to
provide support, holding a pencil and a notepad, useful for calculating complex
arithmetic operations. Therefore, during the test two rival players were acting in the
playfield simultaneously. This phase was devoted to free play, since by this point the
players had already been trained in the AmI Playfield and the rules of the Apple Hunt
game. This phase was assessed by observation, the most appropriate practice for
understanding children's feelings according to Hanna et al. [6]. This method proved
valuable in figuring out “what was fun” or “what was boring” (impacts on challenge
and engagement), etc. The performance reports (Fig. 2b) produced by the system
were used to extract information about regular problems (mistakes) during subsequent
games. Controller logs were also analyzed to assess the controller's interface
usability.
The third phase, influenced by the Smileyometer [17] and the Problem
Identification Picture Cards [1], involved the composition of a young evaluators'
committee. For this purpose, each child was provided with a set of picture cards (Fig.
4), illustrating a 1–5 Likert-like scale. The children were seated in a semicircle facing
a facilitator, who addressed a series of questions to them (Fig. 5). They were asked to
answer each question by raising the appropriate card simultaneously, while the
facilitator performed laddering [19] whenever judged necessary, in order to extract as
many of the committee's comments as possible.
Fig. 4. The picture cards designed for the children committee in ascending order
Q   Min  Max  Std   Question                                                                         Evaluation parameter
1    3    5   0.60  Did you like the game?                                                           Challenge & Fun
2    5    5   0.00  Did you enjoy moving around, instead of sitting, during play?                    Kinesthetic activity & Fun
3    3    5   0.93  Was it easy for you to calculate the arithmetic operations?                      Challenge problems & Inefficiencies
4    3    5   0.71  Was it easy for you to move around the playfield?                                Challenge problems & Inefficiencies
5    3    5   0.78  Was it easy for you to locate the targets?                                       Challenge problems & Inefficiencies
6    3    1*  0.71  Did you feel bored? (*inverse concept)                                           Challenge problems & Inefficiencies
7    4    5   0.44  Did you like the mobile phone?                                                   Control problems
8    3    5   0.71  Was it easy for you to use the mobile phone, in order to target your positions?  Control problems
9    3    5   0.73  Did you enjoy the visual display?                                                Curiosity problems
10   1    5   1.41  Did you enjoy the music?                                                         Curiosity problems
11   1    5   1.36  Did you enjoy the sound effects?                                                 Curiosity problems
12   3    5   0.93  Was it easy to follow up the visual display?                                     Learnability
13   3    5   0.87  Was it easy to know when you should move to a new position?                      Usability problems at the cognitive level & Learnability
14   3    5   0.73  Was it easy to understand where you should or should not target?                 Usability problems at the cognitive level & Learnability
15   1    5   1.58  Was it easy to decide which arithmetic operation you should use?                 Usability problems at the cognitive level & Learnability
16   3    5   0.71  Was it easy for you to find the right position on the playfield after a command? Usability problems at the physical level & Learnability
17   4    5   0.44  Was it easy for you to stand within the positions' boundaries?                   Usability problems at the physical level & Learnability
Fig. 5. Committee score statistics per question, in parallel to the corresponding evaluation
parameter. The findings here constitute only a part of the final assessment of each evaluation
parameter
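Statistics like those in Fig. 5 can be reproduced directly from the raised-card scores. The card scores below are invented sample data for one question from the nine evaluators, and since the paper does not state whether it uses the population or sample deviation, the population form is assumed here:

```python
from statistics import pstdev

# Invented card scores (1-5 scale) for one question, nine evaluators.
scores = [5, 4, 5, 3, 5, 4, 5, 5, 4]

summary = {
    "min": min(scores),
    "max": max(scores),
    "std": round(pstdev(scores), 2),  # population standard deviation
}
print(summary)  # e.g. {'min': 3, 'max': 5, 'std': 0.68}
```

Swapping `pstdev` for `statistics.stdev` would give the sample deviation instead, which yields slightly larger values for small groups like this one.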
5.1 Findings
The findings reported in this section emerged from a compilation of the results of
all phases.
6 Conclusions
This paper has reported the development and evaluation of a technological framework
for learning applications, named AmI Playfield, which constitutes an educative
Ambient Intelligent (AmI) environment aimed at creating challenging learning
conditions through play and entertainment.
With respect to previous efforts, AmI Playfield emphasizes kinesthetic and
collaborative technology in natural playful learning, while embodying an innovative
performance measurement system. Providing an appropriate framework for
developing learning applications with extended customization facilities, the system
supports a wide breadth of educational subjects and concepts.
Addressing the objective of uncovering the system’s potential to encourage
kinesthetic and collaborative play, the prototype “Apple Hunt” game was developed.
“Apple Hunt” is an educational game designed for elementary school children, which
aims at exercising players in fundamental arithmetic operations. The evaluation of
Apple Hunt intended to assess the effectiveness of the system's technological features
that encourage and contribute to learning. Furthermore, children committees, a fresh
approach to evaluation with children, were introduced. The application of this
methodology is believed to hold interesting potential in the context of Ambient
Intelligence environments and educational games, as demonstrated by the children's
open discussions and high engagement during the conducted process.
According to the evaluation results, Apple Hunt, and consequently AmI Playfield,
satisfy their purpose to a large extent. Their value lies in that they successfully
promote kinesthetic activity, collaborative work and thinking in an entertaining
multimodal environment. On the other hand, potential improvements of their current
implementation are related to the lack of complexity scaling and to interactivity
issues.
In conclusion, the work presented in this paper constitutes an encouraging step
towards the adoption of entertaining educative practices for young learners that not
only foster their cognitive growth, but also benefit their emotional, social and
physical development.
References
1. Workshop Interaction Design for Children (2005)
2. Benford, S., Rowland, D., Flintham, M., Drozd, A., Hull, R., Reid, J., Morrison, J., Facer,
K.: Life on the edge: supporting collaboration in location-based experiences. In: CHI
2005: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems,
pp. 721–730. ACM Press, New York (2005)
3. Gardner, H.: Frames of Mind: The theory of multiple intelligences. Basic Books (1993)
4. Georgalis, Y., Grammenos, D., Stephanidis, C.: Middleware for Ambient Intelligence
Environments: Reviewing Requirements and Communication Technologies. In:
Stephanidis, C. (ed.) UAHCI 2009, Part II. LNCS, vol. 5615, pp. 168–177. Springer,
Heidelberg (2009)
5. Grønbæk, K., Iversen, O.S., Kortbek, K.J., Nielsen, K.R., Aagaard, L.: Interactive Floor
Support for Kinesthetic Interaction in Children Learning Environments. In: Baranauskas,
C., Abascal, J., Barbosa, S.D.J. (eds.) INTERACT 2007. LNCS, vol. 4663, pp. 361–375.
Springer, Heidelberg (2007)
6. Hanna, L., Risden, K., Alexander, K.J.: Guidelines for usability testing with children.
Interactions 4, 9–14 (1997)
7. Karime, A., Hossain, M.A., El Saddik, A., Gueaieb, W.: A Multimedia-driven Ambient
Edutainment System for the Young Children. In: Proc. MM 2008, pp. 57–64. ACM (2008)
8. Keating, N.H.: The Lambent Reactive: an audiovisual environment for kinesthetic
playforms. In: Proc. NIME 2007. ACM (2007)
9. Krogh, P.G., Ludvigsen, M., Lykke-Olesen, A.: Help me pull that cursor - A Collaborative
Interactive Floor Enhancing Community Interaction. In: Proc. OZCHI 2004, pp. 22–24
(2004)
10. Leikas, J., Väätänen, A., Räty, V.-P.: Virtual Space Computer Games with a Floor Sensor
Control - Human Centred Approach in the Design Process. In: Brewster, S., Murray-
Smith, R. (eds.) Haptic HCI 2000. LNCS, vol. 2058, p. 199. Springer, Heidelberg (2001)
11. Lund, H.H., Jessen, C.: Playware intelligent technology for children’s play. Technical
Report TR-2005-1, Maersk Institute For Production Technology (2005)
12. Marti, P., Lund, H.H.: Ambient Intelligence Solutions for Edutainment Environments. In:
Proc. AI*IA. Springer (2003)
13. Ndiaye, A., Gebhard, P., Kipp, M., Klesen, M., Schneider, M., Wahlster, W.: Ambient
Intelligence in Edutainment: Tangible Interaction with Life-Like Exhibit Guides. In:
Maybury, M., Stock, O., Wahlster, W. (eds.) INTETAIN 2005. LNCS (LNAI), vol. 3814,
pp. 104–113. Springer, Heidelberg (2005)
14. Pagulayan, R., Keeker, K., Wixon, D., Romero, R., Fuller, T.: User-centered design in
games. In: Human-Computer Interaction Handbook: Fundamentals, Evolving Techniques
and Emerging Applications, pp. 883–905. Lawrence Erlbaum Associates (2003)
15. Paradiso, J., Abler, C., Hsiao, K., Reynolds, M.: The Magic Carpet: Physical Sensing for
Immersive Environments. In: Proc. CHI 1997, pp. 277–278. ACM Press (1997)
16. Price, S., Rogers, Y., Scaife, M., Stanton, D., Neale, H.: Using tangibles to promote novel
forms of playful learning. Interacting with Computers 15, 169–185 (2003)
17. Read, J.C., MacFarlane, S.J.: Using the Fun Toolkit and Other Survey Methods to Gather
Opinions in Child Computer Interaction. In: Proc. IDC 2006. ACM Press, Tampere (2006)
18. Resnick, M.: Edutainment? No thanks. I prefer playful learning 1(1), 2–4 (2004)
19. Reynolds, T.J., Gutman, J.: Laddering Theory, Method, Analysis, and Interpretation.
Journal of Advertising Research 28(1), 11–31 (1988)
20. Rogers, Y., Price, S., Fitzpatrick, G., Fleck, R., Harris, E., Smith, H., Randell, C., Muller,
H., O’Malley, C., Stanton, D., Thompson, M., Weal, M.: Ambient Wood: designing new
forms of digital augmentation for learning outdoors. In: IDC 2004: Proceeding of the 2004
Conference on Interaction Design and Children, pp. 3–10. ACM Press, NY (2004)
21. Roschelle, J.M., Pea, R.D., Hoadley, C.M., Gordin, D.N., Means, B.M.: Changing how
and what children learn in school with computer-based technologies. The Future of
Children: Children and Computer Technology 10(2), 76–101 (2000)
22. Soler-Adillon, J., Ferrer, J., Pares, N.: A Novel Approach to Interactive Playgrounds: the
Interactive Slide Project. In: Proceedings of the 8th International Conference on Interaction
Design and Children (IDC 2009), pp. 131–139. ACM, New York (2009)
23. Soute, I., Markopoulos, P.: Head Up Games: The Games of the Future Will Look More
Like the Games of the Past. In: Baranauskas, C., Abascal, J., Barbosa, S.D.J. (eds.)
INTERACT 2007. LNCS, vol. 4663, pp. 404–407. Springer, Heidelberg (2007)
24. Whitton, N.: Learning with Digital Games. A Practical Guide to Engaging Students in
Higher Education. Routledge (2010)
25. Zabulis, X., Grammenos, D., Sarmis, T., Tzevanidis, K., Argyros, A.A.: Exploration of
large-scale museum artifacts through non-instrumented, location-based, multi-user
interaction. In: VAST 2010, pp. 21–24. Eurographics Association (2010)
26. Zabulis, X., Sarmis, T., Tzevanidis, K., Koutlemanis, P., Grammenos, D., Argyros, A.A.:
A platform for monitoring aspects of human presence in real-time. In: Proc. ISVC 2010.
ACM (2010)
27. Zaman, B.: Evaluating games with children. In: Proc. Interact, Workshop on Child
computer Interaction: Methodological Research (2005)
Stimulating Cognitive Abilities with Internet of Things
Abstract. The interaction between mobile devices and physical objects in the
real world is becoming increasingly important, as it lets people use available
services in a more natural and intuitive way, for example by moving an object
closer to a device to trigger a service. Realizing these developments requires
exploiting the benefits offered by the latest technologies (RFID, WiFi, Web
Services, etc.). This article describes an interactive system, and its
supporting infrastructure, based on games for stimulating cognitive abilities
using the Internet of Things, designed to facilitate human–computer
interaction.
1 Introduction
Cognitive abilities are mental skills that allow us to reason, solve problems and
communicate with one another [2]. When people have cognitive disorders, they need
help to perform day-to-day activities. For this reason, rehabilitation therapies and
cognitive stimulation are necessary for the treatment and improvement of cognitive
abilities such as memory, language, attention and concentration. There is currently a
growing use of computer games aimed at the rehabilitation of cognitive impairment,
and this approach has established itself as one of the treatment methods in many
rehabilitation centers. These games offer many advantages, although classical
interaction through the mouse, keyboard or virtual reality devices can be an obstacle
for people with limitations. Over the last few years, technology has advanced
steadily; as a result, new scenarios such as the Internet of Things have entered
social activities.
Its main goal is to provide the user with advanced and implicit computing, capable
of carrying out a set of services without the user being aware of it. The scenario is
described as a network of everyday objects, all of them digitized and interconnected.
In order to develop educational games dedicated to the stimulation of the cognitive
abilities of people with cognitive disabilities, it is necessary to break the barriers
between computers and users. In this case, the Internet of Things [1] paradigm
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 499–502, 2012.
© Springer-Verlag Berlin Heidelberg 2012
500 E. de la Guía, M.D. Lozano, and V.M.R. Penichet
simplifies the difficulties that the older styles of interaction give us, offering digitized
objects that provide a simple and intuitive interaction, so the user is encouraged to use
the system and receive its benefits. Developing a system that offers the benefits of
games and new technologies requires an infrastructure that takes into account the
components and services needed in environments based on the Internet of Things
paradigm, where objects are combined with interactive devices to provide the services
desired by the user.
This paper describes an interactive system based on games that, in conjunction with
mobile devices, allows tasks to be performed with simple and intuitive gestures,
thanks to the Internet of Things and the combination of technologies such as RFID,
WiFi and Web Services.
2 TrainInAb System
TrainInAb (Training Intellectual Abilities) is an interactive and collaborative game
designed to stimulate people with cognitive disabilities. It integrates a new form of
human–computer interaction: the user can interact with the system through everyday
objects such as cards, toys or coins. The functionality of the system is as follows. In
the main game, an interface is projected on the wall. Users interact with this main
interface through the physical interfaces, i.e., the objects that integrate RFID tags;
this requires a mobile device incorporating an RFID reader, and users interact by
bringing the objects closer to the mobile device (see Figure 1).
Fig. 1. Digitized objects with RFID tags that communicate with the game's interface through
the mobile device
The games are aimed at improving cognitive abilities such as memory, language,
and concentration. An example of a memorization game is as follows: an image is
displayed on the projector for a limited time, and the user has to concentrate and
memorize it. After ten seconds, the image disappears. The user must remember what
the missing image was, identify it among all the physical objects, and bring it closer
to the mobile device, which is responsible for checking whether it is the correct one;
a success or failure message is then shown accordingly.
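A minimal sketch of one such game round, assuming hypothetical `display` and `rfid_reader` objects and an illustrative tag-to-image catalogue (none of these names come from the actual TrainInAb implementation):

```python
import random
import time

# Hypothetical catalogue mapping RFID tag IDs to the images they represent.
TAG_TO_IMAGE = {"tag-001": "apple", "tag-002": "house", "tag-003": "car"}

def memory_round(display, rfid_reader, seconds_shown=10):
    """One round: show an image, hide it, then check the tagged object
    the user brings close to the mobile device's RFID reader."""
    target = random.choice(list(TAG_TO_IMAGE.values()))
    display.show(target)          # image projected on the wall
    time.sleep(seconds_shown)     # user memorizes it for ten seconds
    display.clear()               # image disappears

    tag_id = rfid_reader.read()   # user brings a tagged object closer
    if TAG_TO_IMAGE.get(tag_id) == target:
        display.show("Success!")
        return True
    display.show("Try again")
    return False
```

Any object providing `show`/`clear` and `read` methods could stand in for the projector interface and the RFID reader in this sketch.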
Perception Layer. This layer is the intermediary between the user and the system. Its
main function is to allow the user to easily interact with the system. It is divided into
two components: objects and devices. Objects are physical user interfaces that
integrate RFID tags. Devices are communication channels between the user and the
system. These can be mobile devices, computers, tablets, or projectors.
Application Layer. This layer provides the services that support the stimulating
games. It is composed of a server that provides services to the other devices,
offering important functionalities such as Web Services and/or a database. The
internal operation is as follows. First, the Web Service receives the information,
specifically the tag ID that has been read by a mobile device. Second, the system
checks the method associated with this tag ID in the database. Finally, the Web
Service executes that specific method. The operation executed depends on the
following parameters: the object identifier, the game being played, and the current
status of the game.
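This three-step flow — receive the tag ID, check its associated method in the database, execute it with the game context — can be sketched roughly as follows; the dictionaries stand in for the database table, and the method names are invented for illustration:

```python
# Hypothetical stand-in for the database table linking tag IDs to methods.
ACTIONS = {
    "tag-001": "select_image",
    "tag-002": "confirm_answer",
}

def select_image(object_id, game, status):
    return f"{game}: selected {object_id} (status={status})"

def confirm_answer(object_id, game, status):
    return f"{game}: checked {object_id} (status={status})"

METHODS = {"select_image": select_image, "confirm_answer": confirm_answer}

def handle_tag(tag_id, game, status):
    """Web-service entry point: look up the method associated with the
    tag ID, then execute it.  The operation depends on the object
    identifier, the game being played and the current game status."""
    method_name = ACTIONS.get(tag_id)                  # database lookup
    if method_name is None:
        return "unknown tag"
    return METHODS[method_name](tag_id, game, status)  # execute the method
```

In the described system this dispatch would live behind the Web Service endpoint, with `ACTIONS` backed by the real database.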
Network Layer. This layer communicates and transmits the information obtained
from the perception layer to the application layer.
4 Conclusions
Acknowledgments. This research has been partially supported by the Spanish CDTI
research project CENIT-2008-1029, the CICYT TIN2011-27767-C02-01 project and
the regional projects with reference PAI06-0093-8836 and PII2C09-0185-1030. We
would especially like to thank Erica Gutierrez Gonzalez and Yolanda Aranda Cotillas
for their collaboration on this project.
References
1. Ashton, K.: That ‘Internet of Things’ Thing. RFID Journal 22 (2009) (retrieved April 8,
2011)
2. Luckasson, R., Borthwick-Duffy, S., Buntix, W.H.E., Coulter, D.L., Craig, E.M., Reeve, A.,
et al.: Mental Retardation. Definition, classification and systems of supports, 10th edn.
American Association on Mental Retardation, Washington, DC (2002)
Context, Patterns and Geo-collaboration to Support
Situated Learning
1 Introduction
Situated learning is a general theory of knowledge acquisition that emphasizes the
importance of the activity, the context and the culture in which learning occurs [13].
Social interaction is another critical component of situated learning; learners become
involved in a "community of practice" which embodies certain beliefs and behaviors to
be acquired. Educational technologists have been applying the notion of situated
learning over the last two decades, in particular promoting learning activities that focus
on problem-solving skills [11, 15, 20]. The notion of “cognitive apprenticeship” [5] is
also closely related to “situated learning”: "Cognitive apprenticeship supports learning
in a domain by enabling students to acquire, develop and use cognitive tools in
authentic domain activity. Learning, both in and outside school, advances through […]".
Now, the integration of one-to-one computer-to-learner models of technology,
enhanced by wireless mobile computing and positioning technologies, provides new
ways to integrate indoor and outdoor learning experiences. The notion of “seamless
learning” [22] has been proposed to define these new learning situations, which are
marked by a continuity of learning experiences across different learning contexts.
Students, individually or in groups, carry out learning activities whenever they want in
a variety of situations, switching from one scenario to another easily and quickly. In these
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 503–511, 2012.
© Springer-Verlag Berlin Heidelberg 2012
504 G. Zurita and N. Baloian
learning situations, learners are able to examine the physical world by capturing sensor
and geo-positional data and conducting scientific inquiries and analyses in new ways
that incorporate many of the important characteristics suggested by situated learning.
In this paper we describe our current research efforts, which include the design of a
learning environment that integrates learning with patterns, mobile applications and
geo-collaboration tools in order to support situated learning. Learning activities in
these settings take place in and outside the classroom and encourage students to collect
data in the field in order to find, relate and document patterns of any nature. An
important element of the collected data is the geographical location where instances of
the pattern being learned are found.
2 Related Work
Some interesting applications supporting learning activities guided by situated learning
and making use of geo-referenced data over maps and mobile devices have been
developed in recent years. Few of them rely upon the geo-localization features that
characterize Geographic Information Systems (GIS); most of the applications are
instead based on the notion of location-based services (LBS). A relevant difference
between LBS and GIS is that a GIS application also geo-references information using
visually represented maps, in addition to offering localization services as LBS do. A
GIS also offers several additional functionalities, such as associating information of
different natures with a geographic location, recording the history of routes, making
notes on real geographic zones, determining routes, comparing different notes made in
different locations, etc. These functionalities and information layers certainly may add
value to situated learning applications supported by geo-localization, as they allow
connections to be made between places, content, learning activities and learners.
Collaborative activities can be introduced in situated learning scenarios by letting
participants collaboratively geo-reference information, as well as solve tasks in
particular locations, taking advantage of the affordances of mobile technologies.
Students may collaboratively work at the same time and in the same place, at the same
time and in different places, at different times in the same place, or at different times in
different places. These types of collaborative activities have not yet been widely
explored in situated learning settings, since most research efforts have focused on only
one or another modality. Moreover, few efforts consider the benefits of other learning
modalities such as personalized and social learning, encompassing physical and digital
worlds, ubiquitous knowledge access, combining the use of multiple device types,
knowledge synthesis, or learning with patterns [22].
Patterns play a significant role in learning. Research findings in the field of learning
psychology provide some indications that human learning can be explained by the fact
that learners discover, register and later apply patterns [7, 10, 17, 18]. This cognitive
process "involves actively creating linkages among concepts, skill elements, people,
and experiences" [7]. For the individual learner, the learning process involves "'making
meaning' by establishing and re-working patterns, relationships, and connections" [7].
Patterns are recurring models, often presented as solutions for recurring problems. The
natural sciences, mathematics and the arts also work with patterns; the exact use of the
term, however, varies from discipline to discipline. The first formalization of pattern
descriptions and their compilation into networks of “pattern languages” was made in
the field of architecture by Alexander et al. [1].
Table 1. Support of the situated-learning requirements C1–C9 [9] by the reviewed applications
morphology” and “the history of the city square through centuries”. The system
challenges the students to identify different types of objects and to conduct some
tasks, including recording still images and video describing how they solved the tasks
they were assigned. In order to solve these problems, students are required to
collaborate using a number of tools, including instant text messaging between
smartphones and computers. MobileMath [21] is designed to investigate how a
modern, social type of game can contribute to students’ engagement in learning
mathematics. It is played on a mobile phone with a GPS receiver. Teams compete on
the playing field, gaining points by covering as much area as possible. They do this by
constructing squares, rectangles or parallelograms, physically walking to and clicking
on each vertex (point). During the game, the locations of all teams and all finished
quadrilaterals are visible in real time on each mobile phone. The treasure hunt game
[2] has been developed as a case study to help analyze a specific domain and design a
generic and flexible platform to support situated collaborative learning. Students go
around the city and learn how to participate in several social/group activities. In
SketchMap [15], children carry a PDA and create a map using a stylus pen, drawing
streets and placing icons such as hospitals or municipal offices. Using a USB camera
attached to the tablet PC, children can capture an image or a video, which is shown as
an icon. The icon can be dragged from the palette to anywhere on the map. The system
supports reflection by allowing the children to replay their map creation processes.
Annotations on the maps allow children to add new information or experiences related
to what they have discovered after their outdoor activities. The children can
collaboratively share
information and knowledge about neighboring areas in the vicinity of their school. In
Micromandarin [6], a database of English–Chinese translations associated with their
context of use was created. This application supports several key functions: studying
language based on where you are, using language you have learned based on where
you are, and browsing all language you have seen through the application.
Based on the information shown in Table 1, we can conclude that, among the
requirements stated by [9], the less frequently considered are: the access to expert
performances and the modeling of processes (C3), the coaching and scaffolding by the
teacher at critical times (C8), and the authentic assessment of learning within the tasks
(C9). Moreover, none of the applications described above has introduced the “learning
with patterns” modality so far.
3 Geo-collaborative Application for “Learning with Patterns”
Based on the results described in the previous section, we can conclude that mobile
geo-collaboration can be successfully used to implement learning activities grounded
on situated learning. We have developed a prototype of a system to support
geo-collaborative learning activities that include collecting data in the field in order to
find evidence of previously known patterns (for example, recognizing the patterns of
neo-classical architecture found in the city) or discovering patterns starting from the
evidence found in the field (e.g., studying the reasons why certain patterns of trees
appear more often in city parks). According to the specific scenario described in the
next paragraphs, the following functionalities for a system supporting them have been
identified:
Creating Patterns: To create a pattern means to define its components, describing its
elements: name, description, context, etc. For each pattern, these components are
annotated over the map by free-hand writing (see Figure 1). Additional multimedia
objects (pictures, videos, etc.) can be associated with the description of the pattern.
instructions annotated over the map with a specific path (left of Figure 2), or to
randomly explore a pre-defined area within the city in order to find evidence of
patterns (right of Figure 1), or to visit specifically marked places (right of Figure 2).
Therefore, the teacher can define a path, an area, or mark points by free-hand sketching
their limits onto the map. Consequently, the task for the students will consist of
exploring a geographic area by following a path, randomly visiting a concrete area, or
specifically visiting marked locations, in order to collect data about the instances of a
pattern. Furthermore, the teacher can associate previously defined patterns with the
task or create new ones inside the task creation process. Figure 2 shows the creation of
various tasks and the associations of these to the corresponding pattern(s).
Fig. 2. Teacher’s view of the system for the task definitions, which are made by following a
path (left), and marking specific locations (right) in which the students need to work
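The pattern and task elements just described can be modelled roughly as follows; the field names are assumptions drawn from the description (name, description, context, geo-referenced sketch, attached media), not the prototype's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]  # (latitude, longitude)

@dataclass
class Pattern:
    name: str
    description: str
    context: str
    sketch: List[Point] = field(default_factory=list)  # free-hand annotation over the map
    media: List[str] = field(default_factory=list)     # attached pictures, videos, etc.

@dataclass
class Task:
    # A task is a path to follow, an area to explore, or marked places to visit.
    kind: str                 # "path" | "area" | "points"
    geometry: List[Point]     # sketched limits / waypoints on the map
    patterns: List[Pattern] = field(default_factory=list)

    def attach(self, pattern: Pattern) -> None:
        """Associate a previously defined pattern with this task."""
        self.patterns.append(pattern)
```

For instance, a teacher could define a neo-classical-facade pattern and attach it to a path-type task through the city centre.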
4 Conclusions
In our current efforts, we are proposing the design of learning activities that
incorporate elements of situated learning, supported by the use of geo-collaboration
tools and mobile applications that incorporate learning with patterns. From our
literature review, we can see on the one hand that learning activities using mobile
technologies and geo-collaboration have been successfully implemented and, on the
other hand, that patterns have been recognized as playing an important role in the
learning process. Since the proposed system presented in the previous section can be
used to handle patterns in any field/discipline, it can be used in a variety of learning
scenarios. In Section 2, we presented the requirements for designing learning
environments that support situated learning. In this section, we analyze how the
proposed system fulfills them. Table 3 illustrates how our suggested solution supports
all requirements for situated learning, some in a better way than others. An important
characteristic of the learning approach proposed in our current
efforts is that it starts in the classroom, continues in the field, proceeds then at home
or in a computer lab, and ends with a learning session inside the classroom again. This
in turn can create another cycle, which is interesting because the same system is able
to support different learning modes and stages, without disruptions of methodology,
interaction paradigm or data compatibility. In fact, the system
is able to run on different platforms. It has been used on PCs inside the classrooms,
where the teacher used an electronic board to create patterns and tasks during the
class. It has also been used on tablet PCs as well as on handheld computers. The
common aspect of all these platforms is the touch screen; the big difference is the
size. However, the way of using sketching and gestures to control the applications
was positively evaluated by the early users. They also positively evaluated the fact
that they use the same interaction paradigm regardless of the platform, so they do
not need to learn how to interact with another application interface.
Although the first trial of the system implemented a rather simple learning activity,
it is easy to see that this approach can be used to learn and discover more complicated
patterns across different fields. Below we provide some examples of different fields in
which we plan to conduct future trials in order to validate our approach: a) Geology:
students must perform collaborative activities like field measurements and
observations that can be monitored and controlled remotely by a teacher. Students
must geo-reference their notes, take pictures and make recordings at concrete points,
constructed jointly and/or with their peers. b) Architecture: students may recognize
construction styles and design patterns in specific areas of an urban space. Students
may also collaboratively survey construction styles or design patterns in a certain zone
using geo-referenced notes to understand the changes in the construction development.
c) Social sciences: students of anthropology, psychology or sociology may conduct
field observations in which collaboratively created data and information notes of
diverse nature (text, images, video and sound), associated with their localization, will
enrich their observations.
References
[1] Alexander, C., Ishikawa, S., Silverstein, M.: A Pattern Language. Towns, Buildings,
Construction. Oxford, New York (1977)
[2] Bahadur, S., Braek, R.: Platform support for situated collaborative learning. In: Interna-
tional Conference on Mobile, Hybrid, and On-line Learning, pp. 53–60 (2009)
[3] Borchers, J.: A pattern approach to interaction design. John Wiley & Sons, Chichester
(2001)
[4] Breuer, H., Zurita, G., Baloian, N., Matsumoto, M.: Mobile Learning with Patterns. In:
Procs. of the 8th IEEE Intl. Conf. on Advanced Learning Technologies Santander, Spain,
July 1-5 (2008)
[5] Brown, J.S., Collins, A., Duguid, P.: Situated Cognition and the Culture of Learning,
pp. 32–42 (1989)
[6] Edge, D., Searle, E., Chiu, K., Zhao, J., Landay, J.A.: MicroMandarin: mobile language
learning in context. ACM (2011)
[7] Ewell, P.: Organizing for learning: A point of entry. Draft prepared for discussion at the
AAHE Summer Academy at Snowbird. National Center for Higher Education Manage-
ment Systems (NCHEMS) (1997),
http://www.intime.uni.edu/model/learn-ing/learn_summary.html
[8] Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns. Elements of Reusable
Object-Oriented Software. Addison-Wesley, Reading (1995)
[9] Herrington, J., Oliver, R.: An instructional design framework for authentic learning envi-
ronments, pp. 23–48 (2000)
[10] Howard, J., Mutter, S., Howard, D.: Serial pattern learning by event observation. Journal
of Experimental Psychology: Learning, Memory, and Cognition 18(5), 1029–1039 (1992)
[11] Kurti, A., Spikol, D., Milrad, M., Svensson, M., Pettersson, O.: Exploring How Pervasive
Computing Can Support Situated Learning. In: Proc. of Pervasive Learning Workshop at
Pervasive, pp. 19–26 (2007)
[12] Lave, J., Wenger, E.: Situated Learning: Legitimate Peripheral Participation. C.U. Press,
Cambridge (1990)
[13] Mattila, P., Fordell, T.: MOOP - Using m-Learning Environment in Primary Schools. In:
Proceedings of mLearn 2005 (2005)
[14] Miura, S., Ravasio, P., Sugimoto, M.: Situated Learning with SketchMap. Interaction
Technology Laboratory, Department of Frontier Informatics. The University of Tokyo,
Tokyo (2010)
[15] Ogata, H., Yin, C., Paredes, J.R.G., Saito, N.A., Yano, Y., Oishi, Y., Ueda, T.: Supporting
Mobile Language Learning outside Classrooms. In: Proceedings of the IEEE International
Conference on Advanced Learning Technologies, ICALT 2006, Kerkrade, Netherlands,
pp. 928–932 (2006)
[16] Posner, M., Keele, S.: On the genesis of abstract ideas. Journal of Experimental Psychol-
ogy 77(3), 353–363 (1968)
[17] Restle, F.: Theory of serial pattern learning: Structural trees. Psychological Review 77(6),
481–495 (1970)
[18] The Pedagogical Pattern Project, http://www.pedagogicalpatterns.org (last
visited on November 5, 2009)
[19] Vanderbilt, C.T.G.: Anchored instruction and situated cognition revisited. Educational
Technology 33, 52–70 (1993)
[20] Wijers, M., Jonker, V., Kerstens, K.: MobileMath: the Phone, the Game and the Math.
Paper presented at the 2nd European Conference on Games Based Learning. Barcelona,
Spain (2008),
http://www.fi.uu.nl/isdde/documents/curriculum_jonker.pdf
[21] Wong, L.H., Looi, C.K.: What Seams Do We Remove in Mobile Assisted Seamless
Learning? A Critical Review of the Literature. Computers & Education 57(4), 2364–2381
(2011)
Automated Energy Saving (AES) Paradigm to Support
Pedagogical Activities over Wireless Sensor Networks
Abstract. This paper presents an AES paradigm that introduces wireless sensor
networks to control remote servers or other devices at remote places through
mobile phones. The main focus of the paper is to consume minimum energy
while achieving these objectives. To realize the paradigm, a mathematical
model is formulated. The proposed paradigm comprises an automatic
energy-saving model that senses the environment in order to activate either the
passive or the active mode of the sensor nodes, thereby saving energy.
Simulations are conducted to validate the proposed paradigm; we use two types
of simulations: a test-bed simulation to check the practical validity of the
proposed approach, and an ns-2 simulation to simulate the behavior of the
wireless sensor network with the supporting mathematical model. The prototype
can further be implemented to handle several objects simultaneously in
universities and other organizations.
1 Introduction
Scientific and technological inventions that change our lives have been observed for
many decades. The rapid miniaturization of microprocessors has enabled significant
progress in AmI [1]. AmI draws on many well-established fields of computing and
engineering.
Many objects are now embedded with computing power, such as home appliances
and portable devices (e.g., microwave ovens, programmable washing machines,
robotic vacuuming machines, mobile phones and PDAs). These devices help and
guide us to and from our homes (e.g., fuel consumption, GPS navigation and car
suspension) [2]. AmI involves compact computing power that is adapted to achieve
specific tasks. This widespread accessibility of resources builds the technological
layer for the realization of AmI [6].
Information and communications technologies (ICT) have been widely accepted as
part of introducing new cost-effective solutions to decrease the cost of pedagogical
activities and healthcare; for example, the ubiquitous intelligent health home,
equipped with AmI to support people in their homes. While this notion was difficult
to realize fully in the past, emerging technologies and incredible progress in
low-power electronics and sensor technologies, supported with faster
J. Bravo, D. López-de-Ipiña, and F. Moya (Eds.): UCAmI 2012, LNCS 7656, pp. 512–519, 2012.
© Springer-Verlag Berlin Heidelberg 2012
wireless networks, have facilitated human life. Robust heterogeneous wireless sensor
network systems can be organized for controlling the mobility of objects and logistics.
These developments have led to the introduction of small-sized sensors that are
capable of monitoring the constraints of humans as well as the living environment
[3], [8]. The objective of AmI with the use of sensors is to provide a better living
standard.
The deployment of mobile devices as sensor nodes provides more flexibility to interact
with objects in any environment [4]. These solutions not only improve the quality of
education and the health of people in their own homes, but also provide the fastest way
of communicating with devices all over the world. The exploitation of mobile devices
in a wireless sensor network provides more flexibility, intelligence and adaptivity to
interact with devices dynamically in any environment [11]. This makes it possible to
deploy the mobile phone not only as a terminal but also as a remote controller for
several devices. With a mobile infrastructure, a larger area can be covered, compared
with an installed infrastructure using the same number of sensors [10]. On the other
hand, wireless networks face many challenging issues, including unreliable
communication, energy consumption, storage resources, inadequate computing power
and harsh environments.
In this paper, we introduce a novel paradigm that involves mobiles and sensors over
a robust WSN to facilitate controlling remotely available servers and other devices.
2 Proposed Architecture
Our proposed architecture consists of two types of devices: mobile phones and a
specific type of sensor, the BT node. The mobile phone is used to interact with
different types of servers and other devices. Initially, the mobile device is deployed to
control remote servers, but the approach can further be implemented to control several
types of devices. The BT node provides a self-directed prototyping platform based on
a microcontroller and a Bluetooth radio. It has been introduced as a demonstration
platform for research on distributed sensor networks, wireless communication and
ad-hoc networks. It is built around an ATmega128 microcontroller and two separate
radios: the first is a low-power Chipcon CC1000 suited for ISM-band broadcast,
which works in the same way as the Berkeley MICA2 mote and allows the BT node to
create multi-hop networks; the second is a Zeevo ZV4002 Bluetooth module.
We deploy BT node sensors in a wireless sensor network to control remote servers
and objects, enabling faster communication with less energy consumption using
mobile phones. The BT node provides multiple interfaces to control many devices at
the same time, but the issue is the large amount of energy consumed during sensing.
We implement the paradigm to enforce that a BT node works either in active or
passive mode in order to save energy.
We use different protocols and standards in our architecture. Most sensors do not
communicate with the Zigbee/IEEE 802.15.4 standard. Wi-Fi and Bluetooth are also
not mutually compatible, because both utilize the unlicensed 2.4 GHz ISM band;
since they share the same band, interference can result between them. In addition,
both transmit data using binary phase shift keying (BPSK) and quadrature phase shift
keying (QPSK). Our selection of BT node sensors provides compatibility with
Zigbee, Bluetooth and Wi-Fi. The BT node supports different types of applications
and has multitasking support. It includes several drivers that are designed with
fixed-length buffers. The buffers in the BT node are adjusted at compile time to
fulfill severe memory requirements.
514 A. Razaque and K. Elleithy
The available drivers include a real-time clock, UART, memory, I2C, an A/D converter and
power modes. We examined several approaches to make mobile phones compatible with
BT node sensors over wireless sensor networks. We install an Asus WL-500GP router that
supports the IEEE 802.11 b/g/n standards and is equipped with a USB port to interconnect
with the sensors. We also use a ZigBee/IEEE 802.15.4 USB adapter to provide
communication between sensors; ZigBee/IEEE 802.15.4 also gives the sensors the
capability to maintain multi-hop communication.
We thus deploy a highly featured hybrid network. This multi-featured network supports
all BT node sensors and the different mobile phone families that implement the
ZigBee/IEEE 802.15.4 and IEEE 802.11 b/g/n standards.
A key feature of the sensor network is its division into regions. Each region has
one boundary node that coordinates with the boundary nodes of the other regions.
Participating sensors switch automatically between active and passive modes to save
energy. This process is governed by a mathematical model that keeps energy consumption
low, because only one node is activated during the communication process. The boundary
node is selected dynamically and acts as the communicator with the other regions. The
network lets mobile phones deliver commands quickly to control remotely available
servers and other devices.
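The paper does not specify how the boundary node of a region is chosen. One plausible rule, sketched below, elects the node with the most residual energy; the function name and the energy-based criterion are our assumptions.

```python
def elect_boundary_node(region_nodes):
    """Pick a region's boundary node.

    Assumed rule (not from the paper): the node with the highest residual
    energy coordinates with the other regions.

    region_nodes: list of (node_id, residual_energy) tuples.
    """
    return max(region_nodes, key=lambda n: n[1])[0]
```

Re-running the election periodically would rotate the coordinator role and spread the energy cost across the region.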
E[UE] = Σ_{k=0}^{∞} P(K = k) UE_k β_k                                    (1)

E[PD] = Σ_{k=0}^{∞} P(K = k) f(UE_k, β_k)                                (2)
Automated Energy Saving (AES) Paradigm to Support Pedagogical Activities 515
Σ_{k=0}^{∞} P(K = k) PD_k β_k ζ = Σ_{k=0}^{∞} P(K = k) UE_k β_k α        (3)

E[UE] = Σ_{k=0}^{∞} P(K = k) UE_k β_k α                                  (4)
If K is a sensor node of N, which is the case of a finite-capacity environment, then
equation (3) represents N equations in UE and γ, which combine with equation (4). Thus
we can solve the N + 1 equations for UE and γ, where k = {0, 1, 2, 3, …, N}. We can
obtain the threshold of UE using the inverse of the relationship between UE and β. This
lets us handle an unknown environment, but we still have to distinguish the indoor and
outdoor environments. We use the following probabilities for the indoor and outdoor
environments.
UE_i = P(D_i = 0 | IOE outdoor), β_i = P(D_i0 = 0 | IOE outdoor), PD_i = P(D_i = 1 | IOE
indoor), γ_i = P(D_i0 = 1 | IOE indoor). Assuming that each sensor detects the
environment independently, the probability UE of the decision obtained from the ith
sensor is given by the following equation:
β_i = UE_i (1 − Pc1) + (1 − UE_i) Pc0                                    (5)
Here Pc1 (Pc0) is the probability of the IOE when the ith sensor transmits bit "1"
("0"). For the IOE, Pc1 = Pc0 = Pc; therefore, (5) can be simplified as follows:
β_i = UE_i + (1 − 2 UE_i) Pc                                             (6)
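The simplification from (5) to (6) can be checked numerically. This is a small sketch; the function names are ours.

```python
def beta_full(ue, pc1, pc0):
    # Equation (5): beta = UE*(1 - Pc1) + (1 - UE)*Pc0
    return ue * (1 - pc1) + (1 - ue) * pc0

def beta_simplified(ue, pc):
    # Equation (6), valid when Pc1 = Pc0 = Pc:
    # beta = UE + (1 - 2*UE)*Pc
    return ue + (1 - 2 * ue) * pc
```

Expanding (5) with Pc1 = Pc0 = Pc gives UE − UE·Pc + Pc − UE·Pc = UE + (1 − 2·UE)·Pc, which is exactly (6).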
The probability of detecting the IOE from the received decision can be obtained from (6)
by replacing UE with PD. For the IOE:

γ_i = PD_i + (1 − 2 PD_i) Pc                                             (7)
This helps to determine the IOE; on the basis of the decision, the BT node automatically
works in either active or passive mode. This automatic detection process is what saves
energy. Replacing γ with IOE gives:

IOE = PD_i + (1 − 2 PD_i) Pc

Both γ and IOE describe the nature of the environment. Substituting
D_i = (1 − 2 PD_i) Pc, we get:
PD_i = IOE − D_i                                                         (8)

If we obtain the value of D_i, we can replace the probability of detection (PD) with its
substitute value:

IOE = Σ_{k=0}^{∞} P(K = k) f(UE_k, β_k) + D_i

Rearranging for D_i:

D_i = IOE − Σ_{k=0}^{∞} P(K = k) f(UE_k, β_k)                            (9)
If D_i = 1, passive mode is initiated and the ith sensor saves energy; passive mode is
powered by solar energy. D_i = 0 signals active mode, in which the BT node consumes the
energy obtained during passive mode. If D_i > 1 or D_i < 0, the environment is unknown
and the ith sensor stops working and goes to sleep mode in order to save energy.
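The resulting mode decision can be summarized in code. This is a sketch of the rule just stated; treating every value other than exactly 0 or 1 as the unknown-environment case is our reading of the paper.

```python
def node_mode(d_i):
    """Map the decision variable D_i of equation (9) to a BT node mode."""
    if d_i == 1:
        return "passive"   # solar-backed energy-saving mode
    if d_i == 0:
        return "active"
    # Any other value (assumed reading): unknown environment, so the
    # node sleeps to save energy.
    return "sleep"
```

A node evaluates D_i after each detection round and applies the returned mode until the next round.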
The energy savings can be obtained as follows:

EN(X) = Σ_{i=1}^{N} E(i) γ a_i                                           (10)

EN(Y) = Σ_{j=1}^{N} E(j) ∂ a_j                                           (11)
where EN(X) and EN(Y) denote the total energy used by the two different networks, and
E(i, j) indicates the energy used by nodes i and j during transmission.
r_ij = 0 if a_ij = 0, 1 otherwise                                        (12)
Assume that a_ij differs from 0 only if node j receives E(i, j) when node i transmits.
This imposes an appropriate minimum level Emin, i.e.

E(i, j) ≥ r_ij Emin                                                      (13)
From equation (13) we infer that the process consumes less energy. The BT sensor node
follows an energy-preserving integration method during the passive process. We now show
the numerical time integrators that preserve energy P(e). We begin by assuming an
x-point quadrature formula with nodes N_i; the required weight a_i is obtained through
the Lagrange basis polynomials, as follows:
l_i(τ) = ∏_{j ≠ i} (τ − N_j) / (N_i − N_j),    a_i = ∫₀¹ l_i(τ) dτ       (14)
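The weights of equation (14) can be computed exactly by building each Lagrange basis polynomial and integrating it over [0, 1]. This is a sketch with exact rational arithmetic; for the nodes {0, 1/2, 1} it reproduces the classical Simpson weights 1/6, 2/3, 1/6.

```python
from fractions import Fraction

def quadrature_weights(nodes):
    """Weights a_i = ∫₀¹ l_i(τ) dτ of equation (14), where l_i is the
    Lagrange basis polynomial over the given nodes."""
    weights = []
    for i, ni in enumerate(nodes):
        # Build l_i(τ) = Π_{j≠i} (τ - N_j)/(N_i - N_j) as a coefficient
        # list, lowest degree first.
        coeffs = [Fraction(1)]
        for j, nj in enumerate(nodes):
            if j == i:
                continue
            denom = Fraction(ni) - Fraction(nj)
            # Multiply coeffs by (τ - N_j)/denom.
            new = [Fraction(0)] * (len(coeffs) + 1)
            for k, c in enumerate(coeffs):
                new[k + 1] += c / denom
                new[k] += c * Fraction(-nj) / denom
            coeffs = new
        # Integrate term by term over [0, 1]: ∫ τ^k dτ = 1/(k+1).
        weights.append(sum(c / (k + 1) for k, c in enumerate(coeffs)))
    return weights
```

Because the basis polynomials sum to 1, the weights of any node set sum to 1, which is a quick sanity check on the implementation.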
Let a_1, a_2, a_3, …, a_x be distinct real numbers (usually 0 ≤ N_i ≤ 1) for which
a_i ≠ 0 for all i. We use the polynomial p(d_0) to satisfy the required degree:
p(d_0) = x_0                                                             (15)

ṗ(d_0) = A(p(d_0)) p(d_0) + S(p(d_0)) p(d_0)                             (16)
The solution of these numerical methods depends on a specific factorization of the
vector field. If A(x) = A is a constant matrix and (1, 1) is a Hamiltonian system, the
scheme becomes the energy-saving integrator of [9]. This proves that the sensor node
consumes a minimum amount of energy and saves energy in passive mode.
Having proved the mathematical model, let us now evaluate the coverage efficiency of the
network. As discussed above, the network area is 160 m × 160 m, divided into
40 m × 40 m regions. We measure the coverage efficiency of the network after deploying
from 1 to 70 sensors. Figures 1 and 2 show the coverage efficiency of the WSN over the
whole simulation time and with an increasing number of sensors.
Fig. 1. Coverage efficiency on the basis of sensors
Fig. 2. Coverage efficiency on the basis of time
Our simulation shows that the proposed WSN achieves almost 100% efficiency, whereas a
general wireless sensor network reaches 69% efficiency with 70 sensors. We establish 15
sessions simultaneously in order to determine the actual behavior of the network in a
highly congested environment. With fewer sensors, it is hard to establish many sessions
at the same time. An interesting result of this research is therefore that 70 sensors
can provide path connectivity for 15 mobile devices interacting with remotely placed
devices at the same time; in addition, one mobile can interact with multiple devices
simultaneously. Why deploy so many sensors in that area? The answer is the availability
of several servers and devices at different places.
More sensors are required to find paths and provide connectivity for a sufficient number
of mobiles working concurrently. Over the 35 minutes of simulation, the network coverage
of our proposed WSN is 99.2%, whereas the performance of the general network degrades.
This is another weakness of the general WSN; our proposed method yields stable
efficiency throughout the simulation.
This shows that increasing or decreasing the duration of the simulation does not affect
the efficiency in our case.
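For intuition, a grid-based coverage estimate like the following could reproduce the qualitative trend of Figures 1 and 2. This is purely illustrative: the function, the cell size and the sensing radius are our assumptions, not the ns-2 simulation setup used in the paper.

```python
import math

def coverage_efficiency(sensor_positions, area=160.0, cell=10.0, radius=40.0):
    """Rough grid estimate of the fraction of a square area (default
    160 m x 160 m, as in the paper) within sensing range of at least one
    sensor. Cell size and radius are assumed values."""
    cells = covered = 0
    steps = int(area / cell)
    for gx in range(steps):
        for gy in range(steps):
            cx, cy = (gx + 0.5) * cell, (gy + 0.5) * cell
            cells += 1
            if any(math.hypot(cx - sx, cy - sy) <= radius
                   for sx, sy in sensor_positions):
                covered += 1
    return covered / cells
```

Sweeping the number of sensors from 1 to 70 with such a function yields a coverage curve that saturates, mirroring the near-100% efficiency the paper reports at 70 sensors.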
The advantage of this model is that only one sensor is activated in a whole region;
when the sensor finishes its job, it automatically goes back to sleep mode. To validate
the proposed paradigm in a WSN, we implemented the proposed mathematical model in
ns-2.35 (RC7). On the basis of our findings, we show that the proposed approach saves
the maximum amount of energy compared with the traditional energy-saving method. In
addition, we have achieved our objective of controlling remotely available servers and
devices while consuming minimal energy resources. In future work, we will introduce a
testbed to control several devices simultaneously using mobile phones.
References
[1] Chin, J., Callaghan, V., Clarke, G.: Soft-appliances: A vision for user created networked
appliances in digital homes. Journal on Ambient Intelligence and Smart Environments
(JAISE) 1(1), 69–75 (2009)
[2] Cook, D.J., Augusto, J.C., Jakkula, V.R.: Ambient intelligence: applications in society
and opportunities for artificial intelligence. Pervasive and Mobile Computing 5(4),
277–298 (2009)
[3] Nakashima, H., Aghajan, H., Augusto, J.C. (eds.): Handbook of Ambient Intelligence
and Smart Environments. Springer (2009)
[4] Reddy, S., Samanta, V., Burke, J., Estrin, D., Hansen, M., Srivastava, M.B.: MobiSense -
Mobile Network Services for Coordinated Participatory Sensing. In: Proc. of the 9th In-
ternational Symposium on Autonomous Decentralized Systems, ISADS (2009)
[5] Kastner, W., Neugschwandtner, G., Soucek, S., Newman, H.M.: Communication Systems
for Building Automation and Control. Proceedings of the IEEE 93(6), 1178–1203 (2005)
[6] Chiu, D.K.W., Choi, S.P.M., Wang, M., Kafeza, E.: Towards Ubiquitous Communication
Support for Distance Education with Alert Management. Educational Technology & So-
ciety 11(2), 92–106 (2008)
[7] Kajita, S., Mase, K.: uClassroom: Expanding Awareness in Classroom to Ubiquitous
Teaching and Learning. In: Proceedings of the 4th IEEE International Workshop on
Wireless, Mobile and Ubiquitous Technology in Education, pp. 161–163. IEEE Computer
Society, Los Alamitos (2006)
[8] Milrad, M., Spikol, D.: Anytime, Anywhere Learning Supported by Smart Phones: Expe-
riences and Results from the MUSIS Project. Educational Technology & Society 10(4),
62–70 (2007)
[9] Gómez-Skarmeta, A.F.: An Integral and Networked Home Automation Solution for In-
door Ambient Intelligence. IEEE Pervasive Computing (2010)
[10] Zhang, Y., Partridge, K., Reich, J.: Localizing Tags Using Mobile Infrastructure. In:
Hightower, J., Schiele, B., Strang, T. (eds.) LoCA 2007. LNCS, vol. 4718, pp. 279–296.
Springer, Heidelberg (2007)
[11] Wu, C., Zhang, Y., Sheng, W., Kanchi, S.: Rigidity guided localization for mobile robotic
sensor networks. International Journal of Ad Hoc and Ubiquitous Computing 6(2),
114–128 (2010)
[12] Dargie, W., Poellabauer, C.: Fundamentals of Wireless Sensor Networks. Wiley Series
on Wireless Communications and Mobile Computing (2010)