
Lecture Notes in Computer Science 7563

Commenced Publication in 1973


Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen

Editorial Board
David Hutchison
Lancaster University, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Alfred Kobsa
University of California, Irvine, CA, USA
Friedemann Mattern
ETH Zurich, Switzerland
John C. Mitchell
Stanford University, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz
University of Bern, Switzerland
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
TU Dortmund University, Germany
Madhu Sudan
Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max Planck Institute for Informatics, Saarbruecken, Germany
Andrew Ravenscroft
Stefanie Lindstaedt
Carlos Delgado Kloos
Davinia Hernández-Leo (Eds.)

21st Century Learning
for 21st Century Skills
7th European Conference
on Technology Enhanced Learning, EC-TEL 2012
Saarbrücken, Germany, September 18-21, 2012
Proceedings

Volume Editors

Andrew Ravenscroft
University of East London (UEL)
CASS School of Education and Communities
Stratford Campus, Water Lane, London E15 4LZ, UK
E-mail: a.ravenscroft@uel.ac.uk

Stefanie Lindstaedt
Graz University of Technology (TUG)
Knowledge Management Institute and Know-Center GmbH
Inffeldgasse 21a, 8010 Graz, Austria
E-mail: lindstaedt@tugraz.at

Carlos Delgado Kloos


Universidad Carlos III de Madrid
Departamento de Ingeniería Telemática
Avenida Universidad 30, 28911 Leganés (Madrid), Spain
E-mail: cdk@it.uc3m.es

Davinia Hernández-Leo
Universitat Pompeu Fabra
Departament de Tecnologies de la Informació i les Comunicacions
Roc Boronat 138, 08018 Barcelona, Spain
E-mail: davinia.hernandez@upf.edu

ISSN 0302-9743 e-ISSN 1611-3349


ISBN 978-3-642-33262-3 e-ISBN 978-3-642-33263-0
DOI 10.1007/978-3-642-33263-0
Springer Heidelberg Dordrecht London New York

Library of Congress Control Number: 2012946459

CR Subject Classification (1998): H.4, H.3, I.2, C.2, H.5, J.1

LNCS Sublibrary: SL 2 – Programming and Software Engineering


© Springer-Verlag Berlin Heidelberg 2012
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is
concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting,
reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication
or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965,
in its current version, and permission for use must always be obtained from Springer. Violations are liable
to prosecution under the German Copyright Law.
The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply,
even in the absence of a specific statement, that such names are exempt from the relevant protective laws
and regulations and therefore free for general use.
Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India
Printed on acid-free paper
Springer is part of Springer Science+Business Media (www.springer.com)
Preface

The European Conferences on Technology Enhanced Learning (EC-TEL) are now established as a main reference point for the state of the art in Technology Enhanced Learning (TEL) research and development, particularly within Europe but also worldwide. The seventh conference took place in Saarbrücken, Germany, and was hosted by CeLTech – Centre for e-Learning Technology (Saarland University/DFKI) – during 18–21 September 2012. It built upon previous conferences held in Palermo, Italy (2011), Barcelona, Spain (2010), Nice, France (2009), Maastricht, The Netherlands (2008), and Crete, Greece (2006 and 2007). EC-TEL 2012 provided a unique opportunity for researchers, practitioners, and policy makers to address current challenges and advances in the field. This year the conference addressed a pressing challenge facing TEL and education more widely, namely how to support and promote 21st century learning for 21st century skills. This theme is a key priority within the European Union and its constituent countries, and worldwide, as research needs to address crucial contemporary questions such as:

– How can schools prepare young people for the technology-rich workplace of
the future?
– How can we use technology to promote informal and independent learning
outside traditional educational settings?
– How can we use next generation social and mobile technologies to promote
informal and responsive learning?
– How does technology transform education?

Our programme tackled the theme and its key questions comprehensively through related activities, namely: 4 world-leading keynote speakers; 38 high-quality long and short scientific papers; 9 pre-conference workshops and 2 tutorials; an industrial track; a doctoral consortium; and interactive demonstrations and posters. ‘Interactivity’ was a key feature of the conference, which encouraged demonstrations linked to scientific articles, video sequences of demonstrations running continually throughout the conference venue, and a competitive ‘TEL Shootout’ in which delegates voted for the best demonstration.
The four keynote speakers provided exciting and complementary perspectives on the conference theme and sub-themes. Mary Lou Maher (Design Lab, University of Maryland) emphasized the role and importance of designing for diversity and creativity in her talk Technology Enhanced Innovation and Learning: Design Principles for Environments that Mediate and Encourage Diversity and Creativity. Richard Noss (Director, London Knowledge Lab and UK Teaching and Learning Research Programme) provided an insightful examination of the precepts and implications of the conference theme in his address 21st Century Learning for 21st Century Skills: What Does It Mean, and How Do We Do It? Wolfgang Wahlster (Director, German Research Centre for Artificial Intelligence) provided an innovative perspective on the essential links between situated learning and industry requirements and practices in his talk Situated Learning and Assistance Technologies for Industry 4.0. A further international perspective was provided by Prof. Ruimin Shen (Shanghai Jiao Tong University), who gave a keynote on Technology Enhanced Learning in China: The Example of the SJTU E-Learning Lab.
In addition, in what has become a tradition at EC-TEL conferences, the delegates were addressed by Marco Marsella, Deputy Head of the Unit for eContent and Safer Internet at the European Commission, so that ongoing and future research and development could be clearly articulated with the priorities for research funding within Europe. A key overarching objective of EC-TEL 2012 was to examine and improve the transitions between research, practice, and industry. This was reflected in the co-sponsors of the conference, which included the European Association of Technology Enhanced Learning (EATEL), TELspain, eMadrid, Springer, and IMC information multimedia communication AG.
This year saw 130 submissions from 35 countries for consideration as full papers at EC-TEL 2012. After intense scrutiny in the form of some 300 reviews, the programme committee selected just 26 full papers, an acceptance rate of 20%. Also selected for inclusion in the proceedings were 12 short papers, 16 demonstration papers, and 11 poster papers.
Specifically, the conference programme was organized around the following themes: learning analytics and retrieval; academic learning and context; personalized and adaptive learning; learning environments; organizational and workplace learning; serious and educational games; collaborative learning and semantic means; and ICT and learning. Collectively, these themes reflect a key conviction of the conference and of EC-TEL research and practice: TEL research and development needs to embrace the increasing interconnectedness of learning technologies and the contextualized formal and informal practices of learning and education.

Finally, in introducing these proceedings we hope that the high-quality, rich, and varied articles included here take TEL research and thinking forward in ways that address the changing and technology-rich landscape in which we think, learn, and work in the 21st century.

September 2012 Andrew Ravenscroft


Stefanie Lindstaedt
Carlos Delgado Kloos
Davinia Hernández-Leo
Conference Organisation

Executive Committee

General Chair
Carlos Delgado Kloos eMadrid/TELSpain/Universidad Carlos III
de Madrid, Spain

Programme Chairs
Andrew Ravenscroft CASS School of Education and Communities,
University of East London (UEL), UK
Stefanie Lindstaedt Graz University of Technology & Know-Center,
Austria

Workshop Chair
Tobias Ley Tallinn University, Estonia

Poster and Demonstration Chair


Davinia Hernández-Leo Universitat Pompeu Fabra, Spain

Dissemination Chair
Sergey Sosnovsky CeLTech – Centre for e-Learning Technology
(Saarland University / DFKI), Germany

Industrial Session Chair


Volker Zimmermann imc, Germany

Local Organization Chair


Christoph Igel CeLTech – Centre for e-Learning Technology
(Saarland University / DFKI), Germany

Doctoral Consortium Chairs


Katherine Maillet Institut Mines-Telecom, France
Mario Allegra Istituto per le Tecnologie Didattiche, Italy

Programme Committee
Heidrun Allert Christian-Albrechts-Universität zu Kiel
Charoula Angeli University of Cyprus
Luis Anido Rifon Universidade de Vigo
Inmaculada Arnedillo-Sanchez Trinity College Dublin
Nicolas Balacheff CNRS
Francesco Bellotti University of Genoa
Adriana Berlanga Open University of the Netherlands
Katrin Borcea-Pfitzmann Technische Universität Dresden
Francis Brouns Open University of the Netherlands
Peter Brusilovsky University of Pittsburgh
Daniel Burgos International University of La Rioja
Lorenzo Cantoni Università della Svizzera italiana
Linda Castañeda University of Murcia
Manuel Castro UNED
Mohamed Amine Chatti RWTH Aachen University
Giuseppe Chiazzese Italian National Research Council
Agnieszka Chrzaszcz AGH-UST
Audrey Cooke Curtin University
Raquel M. Crespo Universidad Carlos III de Madrid
Ulrike Cress Knowledge Media Research Center
Alexandra I. Cristea University of Warwick
Paul de Bra Eindhoven University of Technology
Carlos Delgado Kloos Universidad Carlos III de Madrid
Christian Depover Université de Mons
Michael Derntl RWTH Aachen University
Philippe Dessus LSE, Grenoble
Darina Dicheva Winston-Salem State University
Stefan Dietze L3S Research Center
Yannis Dimitriadis University of Valladolid
Vania Dimitrova University of Leeds
Hendrik Drachsler Open University of the Netherlands
Jon Dron Athabasca University
Benedict Du Boulay University of Sussex
Erik Duval K.U. Leuven
Martin Ebner University of Graz
Alfred Essa Desire2Learn
Dieter Euler University of St. Gallen
Baltasar Fernandez-Manjon Universidad Complutense de Madrid
Carmen Fernández-Panadero Universidad Carlos III de Madrid
Christine Ferraris Université de Savoie
Katharina Freitag imc AG
Muriel Garreta Universitat Oberta de Catalunya
Dragan Gasevic Athabasca University

Denis Gillet Swiss Federal Institute of Technology in Lausanne
Fabrizio Giorgini eXact learning solutions
Christian Glahn Swiss Federal Institute of Technology Zurich
Monique Grandbastien LORIA, UHP Nancy1
David Griffiths University of Bolton
Begona Gros Universitat Oberta de Catalunya
Christian Guetl Graz University of Technology
Joerg Haake FernUniversitaet in Hagen
Andreas Harrer Catholic University Eichstätt-Ingolstadt
Davinia Hernández-Leo Universitat Pompeu Fabra
Knut Hinkelmann University of Applied Sciences Northwestern
Switzerland
Patrick Hoefler Know-Center
Ulrich Hoppe University Duisburg-Essen
Christoph Igel CeLTech – Centre for e-Learning Technology
Sanna Järvelä University of Oulu
Julia Kaltenbeck Know-Center
Petra Kaltenbeck Know-Center
Marco Kalz Open University of the Netherlands
Nuri Kara Middle East Technical University
Nikos Karacapilidis University of Patras
Michael Kickmeier-Rust University of Graz
Barbara Kieslinger Centre for Social Innovation
Ralf Klamma RWTH Aachen University
Joris Klerkx Katholieke Universiteit Leuven
Tomaz Klobucar Jozef Stefan Institute
Milos Kravcik RWTH Aachen University
Mart Laanpere Tallinn University
Lydia Lau University of Leeds
Effie Law University of Leicester
Tobias Ley Tallinn University
Stefanie Lindstaedt Know-Center
Andreas Lingnau University of Strathclyde
Martin Llamas-Nistal University of Vigo
Rose Luckin The London Knowledge Lab
George Magoulas Birkbeck College
Katherine Maillet Institut Mines-Télécom
Mario Allegra Italian National Research Council
Russell Meier Milwaukee School of Engineering
Martin Memmel DFKI GmbH
Doris Meringer Know-Center
Riichiro Mizoguchi Osaka University
Paola Monachesi Utrecht University
Pablo Moreno-Ger Universidad Complutense de Madrid
Pedro J. Muñoz Merino Carlos III University of Madrid

Mario Muñoz-Organero Carlos III University of Madrid


Rob Nadolski Open University of the Netherlands
Ambjorn Naeve Royal Institute of Technology
Roger Nkambou Université du Québec à Montréal
Viktoria Pammer Know-Center
Abelardo Pardo Carlos III University of Madrid
Kai Pata Tallinn University
Patrick Jermann Ecole Polytechnique Fédérale de Lausanne
Jan Pawlowski University of Jyväskylä
Juan Ignacio Asensio-Pérez University of Valladolid
Eric Ras Public Research Centre Henri Tudor
Andrew Ravenscroft University of East London (UEL)
Uwe Riss SAP Research
Miguel Rodriguez Artacho UNED University
Vicente Romero Atos Origin
J.I. Schoonenboom VU University Amsterdam
Britta Seidel CeLTech – Centre for e-Learning Technology
Mike Sharples The Open University
Bernd Simon WU, Vienna
Peter Sloep Open University of the Netherlands
Sergey Sosnovsky CeLTech – Centre for e-Learning Technology
Marcus Specht Open University of the Netherlands
Slavi Stoyanov Open University of the Netherlands
Deborah Tatar Virginia Tech
Pierre Tchounikine University of Grenoble
Stefaan Ternier Open University of the Netherlands
Stefan Trausan-Matu “Politehnica” University of Bucharest
Martin Valcke Ghent University
Christine Vanoirbeek Ecole Polytechnique Fédérale de Lausanne
Katrien Verbert K.U. Leuven
Wim Westera Open University of the Netherlands
Peter Wetz Know-Center
Fridolin Wild The Open University of the UK
Martin Wolpers Fraunhofer Institute of Applied Information
Technology
Volker Zimmermann imc AG

Additional Reviewers

Alario-Hoyos, Carlos
De La Fuente Valentín, Luis
Falakmasir, Mohammad Hassan
Gutiérrez Rojas, Israel
Pérez-Sanagustín, Mar
Rodríguez Triana, María Jesús
Ruiz-Calleja, Adolfo
Seta, Luciano
Simon, Bernd
Taibi, Davide
Voigt, Christian
Table of Contents

Part I: Invited Paper


21st Century Learning for 21st Century Skills: What Does It Mean, and
How Do We Do It? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Richard Noss

Part II: Full Papers


Exploiting Semantic Information for Graph-Based Recommendations
of Learning Resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Mojisola Anjorin, Thomas Rodenhausen,
Renato Domínguez García, and Christoph Rensing

An Initial Evaluation of Metacognitive Scaffolding for Experiential
Training Simulators . . . 23
Marcel Berthold, Adam Moore, Christina M. Steiner,
Conor Gaffney, Declan Dagger, Dietrich Albert, Fionn Kelly,
Gary Donohoe, Gordon Power, and Owen Conlan

Paper Interfaces for Learning Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37


Quentin Bonnard, Himanshu Verma, Frédéric Kaplan, and
Pierre Dillenbourg

The European TEL Projects Community from a Social Network
Analysis Perspective . . . 51
Michael Derntl and Ralf Klamma

TinkerLamp 2.0: Designing and Evaluating Orchestration Technologies
for the Classroom . . . 65
Son Do-Lenh, Patrick Jermann, Amanda Legge,
Guillaume Zufferey, and Pierre Dillenbourg

Understanding Digital Competence in the 21st Century: An Analysis
of Current Frameworks . . . 79
Anusca Ferrari, Yves Punie, and Christine Redecker

How CSCL Moderates the Influence of Self-efficacy on Students’
Transfer of Learning . . . 93
Andreas Gegenfurtner, Koen Veermans, and Marja Vauras

Notebook or Facebook? How Students Actually Use Mobile Devices in
Large Lectures . . . 103
Vera Gehlen-Baum and Armin Weinberger

Enhancing Orchestration of Lab Sessions by Means of Awareness
Mechanisms . . . 113
Israel Gutiérrez Rojas, Raquel M. Crespo García, and
Carlos Delgado Kloos

Discerning Actuality in Backstage: Comprehensible Contextual
Aging . . . 126
Julia Hadersberger, Alexander Pohl, and François Bry

Tweets Reveal More Than You Know: A Learning Style Analysis on
Twitter . . . 140
Claudia Hauff, Marcel Berthold, Geert-Jan Houben,
Christina M. Steiner, and Dietrich Albert

Motivational Social Visualizations for Personalized E-Learning . . . . . . . . . 153


I.-Han Hsiao and Peter Brusilovsky

Generator of Adaptive Learning Scenarios: Design and Evaluation in
the Project CLES . . . 166
Aarij Mahmood Hussaan and Karim Sehaba

Technological and Organizational Arrangements Sparking Effects on
Individual, Community and Organizational Learning . . . 180
Andreas Kaschig, Ronald Maier, Alexander Sandow, Alan Brown,
Tobias Ley, Johannes Magenheim, Athanasios Mazarakis, and
Paul Seitlinger

The Social Requirements Engineering (SRE) Approach to Developing a
Large-Scale Personal Learning Environment Infrastructure . . . 194
Effie Lai-Chong Law, Arunangsu Chatterjee, Dominik Renzel, and
Ralf Klamma

The Six Facets of Serious Game Design: A Methodology Enhanced by
Our Design Pattern Library . . . 208
Bertrand Marne, John Wisdom, Benjamin Huynh-Kim-Bang, and
Jean-Marc Labat

To Err Is Human, to Explain and Correct Is Divine: A Study of
Interactive Erroneous Examples with Middle School Math Students . . . 222
Bruce M. McLaren, Deanne Adams, Kelley Durkin,
George Goguadze, Richard E. Mayer, Bethany Rittle-Johnson,
Sergey Sosnovsky, Seiji Isotani, and Martin van Velsen

An Authoring Tool for Adaptive Digital Educational Games . . . . . . . . . . . 236


Florian Mehm, Johannes Konert, Stefan Göbel, and Ralf Steinmetz

A Dashboard to Regulate Project-Based Learning . . . . . . . . . . . . . . . . . . . . 250


Christine Michel, Elise Lavoué, and Laurent Pietrac

Lost in Translation from Abstract Learning Design to ICT
Implementation: A Study Using Moodle for CSCL . . . 264
Juan Alberto Muñoz-Cristóbal, Luis Pablo Prieto,
Juan Ignacio Asensio-Pérez, Iván M. Jorrín-Abellán, and
Yannis Dimitriadis

The Push and Pull of Reflection in Workplace Learning: Designing
to Support Transitions between Individual, Collaborative and
Organisational Learning . . . 278
Michael Prilla, Viktoria Pammer, and Silke Balzert

eAssessment for 21st Century Learning and Skills . . . . . . . . . . . . . . . . . . . . 292


Christine Redecker, Yves Punie, and Anusca Ferrari

Supporting Educators to Discover and Select ICT Tools with
SEEK-AT-WD . . . 306
Adolfo Ruiz-Calleja, Guillermo Vega-Gorgojo, Areeb Alowisheq,
Juan Ignacio Asensio-Pérez, and Thanassis Tiropanis

Key Action Extraction for Learning Analytics . . . . . . . . . . . . . . . . . . . . . . . 320


Maren Scheffel, Katja Niemann, Derick Leony, Abelardo Pardo,
Hans-Christian Schmitz, Martin Wolpers, and Carlos Delgado Kloos

Using Local and Global Self-evaluations to Predict Students’ Problem
Solving Behaviour . . . 334
Lenka Schnaubert, Eric Andrès, Susanne Narciss, Sergey Sosnovsky,
Anja Eichelmann, and George Goguadze

Taming Digital Traces for Informal Learning: A Semantic-Driven
Approach . . . 348
Dhavalkumar Thakker, Dimoklis Despotakis, Vania Dimitrova,
Lydia Lau, and Paul Brna

Part III: Short Papers


Analysing the Relationship between ICT Experience and Attitude
toward E-Learning: Comparing the Teacher and Student Perspectives
in Turkey . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
Dursun Akaslan and Effie Lai-Chong Law

Integration of External Tools in VLEs with the GLUE! Architecture:
A Case Study . . . 371
Carlos Alario-Hoyos, Miguel Luis Bote-Lorenzo,
Eduardo Gómez-Sánchez, Juan Ignacio Asensio-Pérez,
Guillermo Vega-Gorgojo, and Adolfo Ruiz-Calleja

Mood Tracking in Virtual Meetings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 377


Angela Fessl, Verónica Rivera-Pelayo, Viktoria Pammer, and
Simone Braun

Teachers and Students in Charge: Using Annotated Model Solutions in
a Functional Programming Tutor . . . 383
Alex Gerdes, Bastiaan Heeren, and Johan Jeuring

The Effect of Predicting Expertise in Open Learner Modeling . . . . . . . . . . 389


Martin Hochmeister, Johannes Daxböck, and Judy Kay

Technology-Embraced Informal-in-Formal-Learning . . . . . . . . . . . . . . . . . . 395


Isa Jahnke

Towards Automatic Competence Assignment of Learning Objects . . . . . . 401


Ricardo Kawase, Patrick Siehndel, Bernardo Pereira Nunes,
Marco Fisichella, and Wolfgang Nejdl

Slicepedia: Automating the Production of Educational Resources from
Open Corpus Content . . . 407
Killian Levacher, Seamus Lawless, and Vincent Wade

Fostering Multidisciplinary Learning through Computer-Supported
Collaboration Script: The Role of a Transactive Memory Script . . . 413
Omid Noroozi, Armin Weinberger, Harm J.A. Biemans,
Stephanie D. Teasley, and Martin Mulder

Mobile Gaming Patterns and Their Impact on Learning Outcomes:
A Literature Review . . . 419
Birgit Schmitz, Roland Klemke, and Marcus Specht

Adaptation “in the Wild”: Ontology-Based Personalization of
Open-Corpus Learning Material . . . 425
Sergey Sosnovsky, I.-Han Hsiao, and Peter Brusilovsky

Encouragement of Collaborative Learning Based on Dynamic Groups . . . 432


Ivan Srba and Mária Bieliková

Part IV: Demonstration Papers


An Authoring Tool to Assist the Design of Mixed Reality Learning
Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 441
Charlotte Orliac, Christine Michel, and Sébastien George

An Automatic Evaluation of Construction Geometry Assignments . . . . . . 447


Šárka Gergelitsová and Tomáš Holan

Ask-Elle: A Haskell Tutor: Demonstration . . . . . . . . . . . . . . . . . . . . . . . . . 453


Johan Jeuring, Alex Gerdes, and Bastiaan Heeren

Backstage – Designing a Backchannel for Large Lectures . . . . . . . . . . . . . . 459


Vera Gehlen-Baum, Alexander Pohl, Armin Weinberger, and
François Bry

Demonstration of the Integration of External Tools in VLEs with the
GLUE! Architecture . . . 465
Carlos Alario-Hoyos, Miguel Luis Bote-Lorenzo,
Eduardo Gómez-Sánchez, Juan Ignacio Asensio-Pérez,
Guillermo Vega-Gorgojo, and Adolfo Ruiz-Calleja

Energy Awareness Displays: Prototype for Personalised Energy
Consumption Feedback . . . 471
Dirk Börner, Jeroen Storm, Marco Kalz, and Marcus Specht

I-Collaboration 3.0: A Model to Support the Creation of Virtual
Learning Spaces . . . 477
Eduardo A. Oliveira, Patricia Tedesco, and Thun Pin T.F. Chiu

Learning to Learn Together through Planning, Discussion and
Reflection on Microworld-Based Challenges . . . 483
Manolis Mavrikis, Toby Dragon, Rotem Abdu, Andreas Harrer,
Reuma De Groot, and Bruce M. McLaren

Making Learning Designs Happen in Distributed Learning
Environments with GLUE!-PS . . . 489
Luis Pablo Prieto, Juan Alberto Muñoz-Cristóbal,
Juan Ignacio Asensio-Pérez, and Yannis Dimitriadis

Math-Bridge: Adaptive Platform for Multilingual Mathematics
Courses . . . 495
Sergey Sosnovsky, Michael Dietrich, Eric Andrès,
George Goguadze, and Stefan Winterstein

MEMO – Situated Learning Services for e-Mobility . . . . . . . . . . . . . . . . . . . 501


Holger Diener, Katharina Freitag, Tobias Häfner, Antje Heinitz,
Markus Schäfer, Andreas Schlemminger, Mareike Schmidt, and
Uta Schwertel

PINGO: Peer Instruction for Very Large Groups . . . . . . . . . . . . . . . . . . . . . 507


Wolfgang Reinhardt, Michael Sievers, Johannes Magenheim,
Dennis Kundisch, Philipp Herrmann, Marc Beutner, and
Andrea Zoyke

Proportion: Learning Proportional Reasoning Together . . . . . . . . . . . . . . . 513


Jochen Rick, Alexander Bejan, Christina Roche, and
Armin Weinberger

Supporting Goal Formation, Sharing and Learning of Knowledge
Workers . . . 519
Colin Milligan, Anoush Margaryan, and Allison Littlejohn

U-Seek: Searching Educational Tools in the Web of Data . . . . . . . . . . . . . . 525


Guillermo Vega-Gorgojo, Adolfo Ruiz-Calleja,
Juan Ignacio Asensio-Pérez, and Iván M. Jorrín-Abellán

XESOP: A Content-Adaptive M-Learning Environment . . . . . . . . . . . . . . . 531


Ivan Madjarov and Omar Boucelma

Part V: Poster Papers


A Collaboration Based Community to Track Idea Diffusion Amongst
Novice Programmers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 539
Reilly Butler, Greg Edelston, Jazmin Gonzalez-Rivero,
Derek Redfern, Brendan Ritter, Orion Taylor, and Ursula Wolz

Argument Diagrams in Facebook: Facilitating the Formation of
Scientifically Sound Opinions . . . 540
Dimitra Tsovaltzi, Armin Weinberger, Oliver Scheuer,
Toby Dragon, and Bruce M. McLaren

Authoring of Adaptive Serious Games . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 541


Maurice Hendrix, Evgeny Knutov, Laurent Auneau,
Aristidis Protopsaltis, Sylvester Arnab, Ian Dunwell,
Panagiotis Petridis, and Sara de Freitas

Collaborative Learning and Knowledge Maturing from Two
Perspectives . . . 542
Uwe V. Riss and Wolfgang Reinhardt

Computer Supported Intercultural Collaborative Learning: A Study on
Challenges as Perceived by Students . . . 543
Vitaliy Popov, Omid Noroozi, Harm J.A. Biemans, and
Martin Mulder

Just4me: Functional Requirements to Support Informal Self-directed
Learning in a Personal Ubiquitous Environment . . . 545
Ingrid Noguera, Iolanda Garcia, Begoña Gros, Xavier Mas, and
Teresa Sancho

Observations Models to Track Learners’ Activity during Training on a
Nuclear Power Plant Full-Scope Simulator . . . 546
Olivier Champalle, Karim Sehaba, and Alain Mille

Practical Issues in e-Learning Multi-Agent Systems . . . . . . . . . . . . . . . . . . . 547


Alberto González Palomo

Students’ Usage and Access to Multimedia Learning Resources in an
Online Course with Respect to Individual Learning Styles as Identified
by the VARK Model . . . 548
Tomislava Lauc, Sanja Kišiček, and Petra Bago

Technology-Enhanced Replays of Expert Gaze Promote Students’
Visual Learning in Medical Training . . . 549
Marko Seppänen and Andreas Gegenfurtner

Towards Guidelines for Educational Adventure Games
Creation (EAGC) . . . 550
Gudrun Kellner, Paul Sommeregger, and Marcel Berthold

Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 551


21st Century Learning for 21st Century Skills:
What Does It Mean, and How Do We Do It?

Richard Noss

London Knowledge Lab and Technology Enhanced Learning (TEL) Research Programme,
Institute of Education, University of London, London, WC1N 3QS, United Kingdom
r.noss@ioe.ac.uk

Abstract. I want to argue in this lecture that life – especially educational life –
is never that simple. What exactly are 21st century skills? How, for example, do
they differ from ‘knowledge’? And once we know what they are, does there
follow a strategy – or at least a set of principles – for what learning should look
like, and the roles we ascribe to technology? Most importantly, if 21st century
knowledge is qualitatively different from the 19th and 20th century knowledge
that characterises much of our existing curricula, we will need to consider
carefully just how to make that knowledge learnable and accessible through the
design of digital technologies and their evaluation.

Keywords: pedagogy, technology, teaching, learning.

Abstract

21st Century Learning for 21st Century Skills. What’s not to like? We know, it seems, that the newish century demands new, process-oriented skills like teamwork, flexibility, and problem solving, to take account of the shift from material labour to immaterial, weightless production. We can take for granted, at least in a gathering like this, that 21c. learning is learning with digital technology. And we can surely agree that we are gaining, with impressive speed, an understanding of the technology’s potential to enable a new kind of pedagogy.
I want to argue in this lecture that life – especially educational life – is never that
simple. What exactly are 21st century skills? How, for example, do they differ from
‘knowledge’? And once we know what they are, does there follow a strategy – or at
least a set of principles – for what learning should look like, and the roles we ascribe
to technology? Most importantly, if 21st century knowledge is qualitatively different
from the 19th and 20th century knowledge that characterises much of our existing
curricula, we will need to consider carefully just how to make that knowledge
learnable and accessible through the design of digital technologies and their
evaluation.
The problem is this. The needs of the 21st century are seen as broadly
dichotomised. Much of the discussion about who needs to know what is predicated
on the assumption that technology has created the need for fewer and fewer people
really to understand the way the world works; and for more and more merely to

A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 3–5, 2012.
© Springer-Verlag Berlin Heidelberg 2012

respond to what technology demands of them. There is partial truth here: very few
people need to know how derivatives work (it seems that the bankers don’t either);
and the supermarket checkout operator no longer needs to calculate change. So this
gives rise to the belief that there is stuff that the elite need to know; and stuff that
everyone needs to know – and that these have very little in common. Inevitably, the
latter is reduced to process-oriented skills, denuded of real knowledge that can help
individuals engage as empowered agents in their own lives. And the gap between
these two poles is widening, despite the best intentions of educators and
policymakers. So the danger is real: Knowledge for the top of the pyramid; skills and
processes for the bottom.
Of course, the imperatives of the workplace should not be the only driver of
educational policy or practice. But they cannot be ignored, and if they are going to
inform or even direct it, it would be helpful if we were clear about what we are trying
to do. This is all the more important as we are at something of a crossroads in our
thinking about technology (more fashionably, a ‘tipping point’).
The first 30 years of educational computing were dominated by a commercial
paradigm borrowed from business and industry. When educators and policy makers
thought of technology for schools, colleges and universities, they were guided with
reference to a social niche nicely occupied by Windows, the all-pervasive metaphor of
the office, the desktop, the filing system, and so on. It worked fine in many respects,
except one: it pretty much guaranteed that the existing practices of teaching and
learning institutions remained more or less intact, lubricated by the application of
technology, but not changed fundamentally by it. The technology beautifully
legitimised the commercial/business paradigm of learning – think, for example, how
the interactive whiteboard has been, for the most part, the technological end of a
pedagogy based on eyes-front, teacher-led practice.
I don’t want to future-gaze too much, and certainly do not want to stand accused of
technocentrism, which I’ve been pretty vocal about over these last thirty years.1 But
technology does shape the ambient culture, as well as being shaped by it, and
understanding how that works is an important part of how we should respond. It is
hard not to notice a change in the ways technology is impacting people’s lives; and
again, without attributing magical powers to this or that passing platform, I think that
the sudden ubiquity of the iPad/smartphone paradigm – a paradigm quite different
from the commercial paradigm that preceded it - should give us pause for thought.
Until now, technology has been seen as institutional; but now, we have reached the
point where it has moved from the institution to the home, the pocket and the street –
it has become personal.
There is a lot to say about this, and I’ll save it for the lecture. But one thing is
clear: this change is double-edged. iPads are wonderful machines for viewing photos,
organising playlists, and providing a platform for the exponentially increasing number
of apps, all just a click away. That click is attractive for schools and colleges – no

1
Seymour Papert describes technocentrism as “the fallacy of referring all questions to the
technology”.
(http://www.papert.org/articles/ACritiqueofTechnocentrism.html)

need for training, cumbersome networks, and above all, a convergence between what
learners already do and what they might be encouraged to do. But as we all know,
ease of access comes at the price of invisibility – the same digital natives who know
how to use their phones for mashups and Facebook are digital immigrants when it
comes to engaging with the technology in any deep way: and for educators that’s a
very expensive price to pay for simplicity.
So the challenge I want to take up in this lecture is this: how can we design,
implement and exploit technology, so that it recognises the diversity of what we are
trying to teach, and to whom we are trying to teach it? And, just as important, can
technology help us to achieve what seems, at first sight, to be the impossible: to help
all learners, across the social and economic spectrum, to learn about their agency in a
world where, increasingly, agency is at best invisible and at worst non-existent? For
that they will need knowledge, not ‘knowledge about knowledge’ or ‘learning about
learning’.2
The structure of the lecture will be as follows. First, I’ll take a look at what is
known about the needs of ‘knowledge economies’, and the gap that has opened up
between the knowledge rich and the knowledge poor. Second, I want to review what
we know about technology, share some research findings of the Technology
Enhanced Learning Research Programme, and show how our state of knowledge can
be put to use in bridging the gap. Third, I want to future gaze just a little, in terms of
developments in technology, and how they focus our attention on the question of what
to teach, rather than merely how to teach it. And finally, I want to return to the first
theme, and show that by focusing on the new things we can learn with technology
(things that are essentially unlearnable without it), we can address the problem this
conference has set itself by somewhat adapting the title – to understand 21st Century
Learning for 21st Century Knowledge.

References
1. Young, M.: Bringing Knowledge Back In: From Social Constructivism to Social Realism
in the Sociology of Education. Routledge, Oxford (2008)
2. Papert, S.: A Critique of Technocentrism in Thinking About the School of the Future (2012),
http://www.papert.org/articles/ACritiqueofTechnocentrism.html
(accessed July 16, 2012)

2 Michael Young’s book, Bringing Knowledge Back In: From Social Constructivism to Social Realism in the Sociology of Education, extends these ideas.
Exploiting Semantic Information
for Graph-Based Recommendations
of Learning Resources

Mojisola Anjorin, Thomas Rodenhausen,
Renato Domínguez García, and Christoph Rensing

Multimedia Communications Lab,
Technische Universität Darmstadt, Germany
{mojisola.anjorin,thomas.rodenhausen,renato.dominguez.garcia,
christoph.rensing}@kom.tu-darmstadt.de
http://www.kom.tu-darmstadt.de

Abstract. Recommender systems in e-learning have different goals as
compared to those in other domains. This brings about new requirements
such as the need for techniques that recommend learning resources be-
yond their similarity. It is therefore an ongoing challenge to develop rec-
ommender systems considering the particularities of e-learning scenarios
like CROKODIL. CROKODIL is a platform supporting the collaborative
acquisition and management of learning resources. It supports collabora-
tive semantic tagging thereby forming a folksonomy. Research shows that
additional semantic information in extended folksonomies can be used
to enhance graph-based recommendations. In this paper, CROKODIL’s
folksonomy is analysed, focusing on its hierarchical activity structure.
Activities help learners structure their tasks and learning goals. AScore
and AInheritScore are proposed approaches for recommending learning
resources by exploiting the additional semantic information gained from
activity structures. Results show that this additional semantic informa-
tion is beneficial for recommending learning resources in an application
scenario like CROKODIL.

Keywords: ranking, resource recommendation, folksonomy, tagging.

1 Introduction
Resources found on the Web, ranging from multimedia websites to collaborative
web resources, are becoming increasingly important for today’s learning. Learners ap-
preciate a learning process in which a variety of resources are used [9]. This shows
a shift away from instruction-based learning to resource-based learning [17].
Resource-based learning is mostly self-directed [3] and the learner is often con-
fronted, in addition to the actual learning process, with an overhead of finding
relevant high quality learning resources amidst the huge amount of informa-
tion available on the Web. In learning scenarios, recommender systems support
learners by suggesting relevant learning resources [15]. An effective ranking of

A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 9–22, 2012.

© Springer-Verlag Berlin Heidelberg 2012

learning resources would reduce the overhead when learning with resources found
on the Web.
Social bookmarking applications, in which users collaboratively attach tags
to resources, offer support to the user during the search, annotation and sharing
tasks involved in resource-based learning [3]. Tagging helps to quickly retrieve
a resource later via search or navigation, and to give an overview of the
resource’s content. Through the collaborative tagging of resources, a structure
called a folksonomy is created. Promising results using additional semantic infor-
mation to improve the ranking of resources in extended folksonomies have been
reported [1]. It is therefore of great interest to investigate how semantic information
can benefit the ranking of learning resources in an e-learning scenario such as
CROKODIL [4]. CROKODIL1 is a platform supporting the collaborative acqui-
sition and management of learning resources. It offers support to the learner in
all tasks of resource-based learning [3]. CROKODIL is based on a pedagogical
concept which focuses on activities as the main concept for organizing learning
resources [3]. Activities aim to support the learner during his learning process
by organizing his tasks in a hierarchical activity structure. Relevant knowledge
resources found on the Web are then attached to these activities. The resulting
challenge is now how best to exploit these activity structures in order to recom-
mend relevant learning resources to other users working on related activities.
In this work, we consider the hierarchical activity structures available in the
CROKODIL application scenario [4] as additional semantic information which
can be used for ranking resources. We therefore propose the algorithms AScore
and AInheritScore which exploit the activity structures in CROKODIL to im-
prove the ranking of resources in an extended folksonomy for the purpose of
recommending relevant learning resources.
The extended folksonomy of the CROKODIL application scenario is defined
in Sect. 2. Related work is summarized in Sect. 3. Proposed approaches are
implemented in Sect. 4 and evaluated in Sect. 5. This paper concludes with a
brief summary and an outlook on possible future work.

2 Analysis of Application Scenario: CROKODIL

CROKODIL supports the collaborative semantic tagging [5] of learning resources,
thereby forming a folksonomy structure consisting of users, resources and tags
[3]. Tags can be assigned tag types such as topic, location, person, event or
genre. Activities, as mentioned in Sect. 1, are created describing learning goals or
tasks to be accomplished by a learner or group of learners. Resources needed
to achieve these goals are attached to these activities. In addition, CROKODIL
offers social network functionality to support the learning community [3]. Groups
of learners working on a common activity can be created, as well as friendship
relations between two learners. In the following a folksonomy and CROKODIL’s
extended folksonomy are defined.
1 http://www.crokodil.de/, http://demo.crokodil.de (retrieved 06.07.2012)

A folksonomy is described as a system of classification derived from collab-
oratively creating and managing tags to annotate and categorize content [16].
This is also known as a social tagging system or a collaborative tagging system.
A folksonomy can also be represented as a folksonomy graph GF as defined in
Sect. 4.
Definition 1 (Folksonomy). A folksonomy is defined as a quadruple [11]:
F := (U, T, R, Y ) where:
– U is a finite set of users
– T is a finite set of tags
– R is a finite set of resources
– Y ⊆ U × T × R is a tag assignment relation over these sets
E.g., user thomas ∈ U attaches a tag London ∈ T to the resource olympic.org ∈
R, thus forming a tag assignment (thomas, London, olympic.org) ∈ Y .
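As an illustration, Definition 1 and the running example can be sketched directly in code. The sets below mirror the example in the text; this is an illustrative sketch, not CROKODIL’s implementation:

```python
# Definition 1: a folksonomy as a quadruple F := (U, T, R, Y).
U = {"thomas"}                                  # users
T = {"London"}                                  # tags
R = {"olympic.org"}                             # resources
Y = {("thomas", "London", "olympic.org")}       # tag assignments, Y ⊆ U × T × R

# Every tag assignment must stay inside the three base sets.
assert all(u in U and t in T and r in R for (u, t, r) in Y)
```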
An extended folksonomy is a folksonomy enhanced with additional seman-
tic information [1]. CROKODIL is an extended folksonomy where the semantic
information gained from activities, semantic tag types, learner groups and friendships
extends the folksonomy. This additional semantic information can also be
seen as giving a context to elements in the folksonomy [4], [1]. For example, re-
sources belonging to the same activity can be seen as belonging to the same
context of this activity.

Definition 2 (CROKODIL’s Extended Folksonomy). CROKODIL’s ex-
tended folksonomy is defined as: FC := (U, Ttyped, R, YT, (A, <), YA, YU, G,
friends) where:
– U is a finite set of learners
– Ttyped is a finite set of typed tags consisting of pairs (t, type), where t is
an arbitrary tag and type ∈ {topic, location, event, genre, person, other}
– R is a finite set of learning resources
– YT ⊆ U × Ttyped × R is a tag assignment relation over the set of users,
typed tags and resources
– (A, <) is a finite set of activities with a partial order < indicating sub-
activities
– YA ⊆ U × A × R is an activity assignment relation over the set of users,
activities and resources
– YU ⊆ U × A is an activity membership assignment relation over the set
of users and activities
– G ⊆ P(U ) is the finite set of subsets of learners called groups of learners
– friends ⊆ U × U is a symmetric binary relation which indicates a friend-
ship relation between two learners

E.g., thomas is preparing for a quiz about the olympic games. He therefore
creates an activity prepare quiz about the olympics having a sub-activity col-
lect historical facts. This means A = {prepare quiz about the olympics, collect
historical facts} and collect historical facts < prepare quiz about the olympics.

In addition, (thomas, prepare quiz about the olympics) ∈ YU and (thomas, col-
lect historical facts) ∈ YU . He finds the website olympic.org, to which he at-
taches the tag London with tag type location, (thomas, (London, location),
olympic.org) ∈ YT . He then attaches this resource to the activity prepare quiz
about the olympics, (thomas, prepare quiz about the olympics, olympic.org) ∈ YA .
Thomas creates a group olympic experts ∈ G and invites moji ∈ U and his friend
renato ∈ U to help him gather facts about the olympic games.
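The activity-related parts of this example can be sketched as follows. This is a hedged illustration of Definition 2 (groups and friendship relations are omitted for brevity); all names come from the running example:

```python
# Activities with the partial order asub < asuper from Definition 2.
A = {"prepare quiz about the olympics", "collect historical facts"}
sub_of = {("collect historical facts", "prepare quiz about the olympics")}

# Activity memberships YU, typed tag assignments YT, activity assignments YA.
YU = {("thomas", "prepare quiz about the olympics"),
      ("thomas", "collect historical facts")}
YT = {("thomas", ("London", "location"), "olympic.org")}
YA = {("thomas", "prepare quiz about the olympics", "olympic.org")}

def resources_in(activity):
    # Resources attached to an activity share that activity as common context.
    return {r for (_, a, r) in YA if a == activity}

assert resources_in("prepare quiz about the olympics") == {"olympic.org"}
```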
In this paper, we will be focusing on the additional semantic information
gained from the activities in CROKODIL’s extended folksonomy and investigat-
ing how this can improve the ranking of learning resources.

3 Related Work
Recommender systems have been shown to be very useful in e-learning scenarios [15].
Collaborative filtering approaches use community data such as feedback, tags
or ratings from learners to make recommendations, e.g. [8], whereas content-based
approaches make recommendations based on the similarity between learning re-
sources, e.g. [18]. Recommender systems in e-learning have different information
retrieval goals as compared to other domains thus leading to new requirements
like recommending items beyond their similarity [15]. It is therefore increasingly
important to develop recommender systems that consider the particularities of
the e-learning domain. Graph-based recommendation techniques can be classi-
fied as neighborhood-based collaborative filtering approaches, having the advan-
tage of avoiding the problems of sparsity and limited coverage [7]. Graph-based
recommender systems e.g. [1,6] consider the graphical structure when recom-
mending items in a folksonomy. The data is represented in the form of a graph
where nodes are users, tags or resources and edges the transactions or relations
between them. One of the most popular approaches is FolkRank [12] which is
based on the PageRank computation on a graph created from a folksonomy.
FolkRank can be used to recommend users, tags or resources in social book-
marking systems. The intuition is that a resource tagged with important tags by
important users becomes important itself. The same holds for tags and users.
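The weight-spreading intuition behind FolkRank can be sketched as a PageRank-style iteration with a preference vector on the folksonomy graph. This is a deliberately simplified sketch: the full FolkRank ranking is the difference between such a biased run and an unbiased baseline run, which is omitted here.

```python
import numpy as np

def spread(adjacency, p, d=0.7, iters=50):
    """PageRank-style weight spreading biased by a preference vector p."""
    col_sums = adjacency.sum(axis=0)
    col_sums[col_sums == 0] = 1.0        # avoid division by zero
    M = adjacency / col_sums             # column-normalized spreading matrix
    w = np.ones(len(p)) / len(p)
    for _ in range(iters):
        w = d * (M @ w) + (1 - d) * p    # spread weight, mix in preference
    return w

# Tiny 3-node graph: node 0 (the preferred node) connects to nodes 1 and 2.
adjacency = np.array([[0., 1., 1.],
                      [1., 0., 0.],
                      [1., 0., 0.]])
p = np.array([1.0, 0.0, 0.0])            # preference on node 0
w = spread(adjacency, p)
assert w[0] > w[1] and np.isclose(w[1], w[2])
```

The preferred node and its neighbourhood accumulate the most weight, which matches the intuition quoted above: importance flows between connected users, tags and resources.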
Furthermore, it is of interest for recommender systems in e-learning to take
advantage of additional semantic information such as context awareness which
includes pedagogical aspects like learning goals [15]. Abel [1] shows it is worth
exploiting additional semantic information which are found in extended folk-
sonomies to improve ranking strategies. Approaches, for example GFolkRank
[1], are introduced which extend FolkRank to a context-sensitive ranking algo-
rithm exploiting the additional semantic information gained from the grouping
of resources in GroupMe!2 . Groups in GroupMe! allow resources e.g. belonging
to a common topic to be semantically grouped together. Groups can also con-
tain other groups [2]. GFolkRank, an extension of FolkRank [12] is a ranking
algorithm that leverages groups available in GroupMe! for ranking. Groups are
interpreted as tags i.e. if a user adds a resource r to a group g then GFolkRank
2 http://groupme.org/, retrieved 06/07/2012

translates this as a tag (group) assignment. The folksonomy graph is therefore
extended with additional group nodes and group assignments. In addition, other
approaches are proposed such as GRank [1]. GRank is designed for ranking re-
sources with a tag as input. It computes a ranking for all resources, which are
related to the input tag with respect to the group structure in GroupMe!
The concept of groups in the GroupMe! application is similar to the con-
cept of activities in the CROKODIL application. Therefore, this opportunity to
exploit the semantic information gained from activities in CROKODIL will be
investigated in the following sections.

4 Concept and Implementation


Given a certain user u as input, the resource recommendation task is to find a
resource r which is relevant to this user. This recommendation task is also seen
as a ranking task. A ranking algorithm computes for an input user u a score
vector that contains the score values score(r) for each resource r in the graph.
These scored resources are then ordered forming a ranked list according to their
score values with the highest scored resource at the top of the list. The top
ranked resources are then recommended to the user u. For example, the scores
score(r1 ) = 5 and score(r2 ) = 7 and score(r3 ) = 3 create a ranked list: r2 ,
r1 and r3 . Therefore the top recommendation to user u will be resource r2 .
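The ranking step just described is a plain sort by descending score; using the example values from the text:

```python
# Score values for each resource, as in the example above.
scores = {"r1": 5, "r2": 7, "r3": 3}

# Order resources by descending score to form the ranked list.
ranked = sorted(scores, key=scores.get, reverse=True)
assert ranked == ["r2", "r1", "r3"]

top_recommendation = ranked[0]   # resource r2 is recommended to user u
assert top_recommendation == "r2"
```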
We propose two ranking algorithms, AScore and AInheritScore. Both algo-
rithms compute a folksonomy graph GF, considering not only activities when
ranking resources but also including activity hierarchies and the users assigned
to work on these activities in the graph structure.
In the following, three sets are defined that will be used in Definition 3 to
determine the weights of the edges in the folksonomy graph GF . For a given
user u ∈ U , tag t ∈ T and resource r ∈ R:
– Let Ut,r = { u ∈ U | (u, t, r) ∈ Y } ⊆ U be the set of all users that have
assigned resource r a tag t
– Let Tu,r = { t ∈ T | (u, t, r) ∈ Y } ⊆ T be the set of all tags that user u
assigned to resource r
– Let Ru,t = { r ∈ R | (u, t, r) ∈ Y } ⊆ R be the set of all resources that user
u assigned a tag t
Definition 3 (Folksonomy Graph). Given a folksonomy F , the folksonomy
graph GF [1] is defined as an undirected, weighted graph GF := (VF , EF ) where:
– VF = U ∪ T ∪ R is the set of nodes
– EF = { {u, t} , {t, r} , {u, r} | u ∈ U, t ∈ T, r ∈ R, (u, t, r) ∈ Y } ⊆ VF × VF
is the set of undirected edges
– Each of these edges is given a weight w(e), e ∈ EF according to their fre-
quency within the set of tag assignments:
• w(u, t) = |Ru,t | the number of resources that user u assigned the tag t
• w(t, r) = |Ut,r | the number of users who assigned tag t to resource r
• w(u, r) = |Tu,r | the number of tags that user u assigned to resource r
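The frequency-based edge weights of Definition 3 can be derived by counting co-occurrences in the tag assignment set Y. A small sketch with made-up assignments (the users and tags here are illustrative, not real data):

```python
from collections import Counter

# Three illustrative tag assignments (u, t, r).
Y = {("thomas", "London", "olympic.org"),
     ("moji",   "London", "olympic.org"),
     ("thomas", "sports", "olympic.org")}

w_ut = Counter((u, t) for (u, t, r) in Y)  # |R_{u,t}|: resources u tagged with t
w_tr = Counter((t, r) for (u, t, r) in Y)  # |U_{t,r}|: users who assigned t to r
w_ur = Counter((u, r) for (u, t, r) in Y)  # |T_{u,r}|: tags u assigned to r

assert w_tr[("London", "olympic.org")] == 2  # two users used 'London' on r
assert w_ur[("thomas", "olympic.org")] == 2  # thomas used two tags on r
```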

4.1 AScore
AScore is an algorithm based on GFolkRank [1] as described in Sect. 3. AScore
extends the folksonomy graph GF in a similar way with activity nodes and
activity assignments. However, in addition, AScore extends the folksonomy graph
with activity hierarchy relations between activities (4) as well as with users
belonging to an activity (3). A user u is said to belong to an activity a, when
the user u is working on the activity a. This is represented as an edge in the
graph between u and a. Furthermore, AScore considers the hierarchical activity
structure when determining the weights of the newly introduced edges. The
AScore algorithm is described below:
– Let GC = (VC , EC ) be the folksonomy graph of the extended folksonomy FC
– VC = VF ∪ A
– EC is a combination of edges (1) from the folksonomy graph EF with EA
(2), which are all activity assignments where a user u added a resource r to
an activity a. Additionally, EU (3) is added, which comprises all assignments
of a user u to an activity a. Finally, the activity hierarchies EH (4) are added
as edges between a sub-activity asub and a super-activity asuper .

EC = EF ∪ EA ∪ EU ∪ EH . (1)

EA = {{u, a}, {a, r}, {u, r} | u ∈ U, r ∈ R, a ∈ A, (u, a, r) ∈ YA } . (2)

EU = {{u, a} | u ∈ U, a ∈ A, (u, a) ∈ YU } . (3)

EH = {{asub , asuper } | asub , asuper ∈ A, asub < asuper } . (4)

The newly introduced edges are now given weights. The edges in EA are all
given the same weight activityAssign(u, r, a) (5) because, similar to GFolkRank
[1], a resource can only be added once to an activity. Attaching additional se-
mantic information to a resource (like assigning it to a group in GroupMe! or
to an activity in CROKODIL) is seen as more valuable than simply tagging it
[1], therefore activityAssign(u, r, a) is assigned the maximum number of users
who assigned tag t to resource r (5). Similarly, the edges between a user u and
an activity a are given the weight wMembership (u, a) (6) which is the maximum
number of resources assigned with tag t by user u, who is working on activ-
ity a. The edges between activities of the same hierarchy are given the weight
wHierarchy (asub , asuper ). These edges are seen to be at least as strong as the
connections between an activity and other nodes in the graph, therefore in (7),
the maximum weight is assigned.
w(u, a) = w(a, r) = w(u, r) = activityAssign(u, r, a),
where activityAssign(u, r, a) = max(|Ut,r|) .  (5)

wMembership(u, a) = max(|Ru,t|) .  (6)



wHierarchy(asub, asuper) = max(activityAssign(u, r, asub), wMembership(u, asub)) .  (7)
After the folksonomy graph GC has been created and the weights of the edges
determined, any graph-based ranking algorithm for folksonomies e.g. FolkRank
can now be applied to calculate the scores of each node.
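A hedged sketch of AScore’s graph extension, using toy data: the edge sets follow Eqs. (2)–(4) and the weights follow Eqs. (5)–(7). The maxima over the tag-assignment counts are assumed values here, not taken from real CROKODIL data.

```python
# Activity assignments (u, a, r), memberships (u, a) and one hierarchy edge.
YA = {("thomas", "quiz", "olympic.org")}
YU = {("thomas", "quiz"), ("thomas", "facts")}
sub = {("facts", "quiz")}                    # facts < quiz

# Assumed maxima over the folksonomy's tag-assignment counts.
max_users_per_tag = 2                        # max |U_{t,r}|
max_resources_per_tag = 3                    # max |R_{u,t}|

activity_assign = max_users_per_tag                   # Eq. (5)
w_membership = max_resources_per_tag                  # Eq. (6)
w_hierarchy = max(activity_assign, w_membership)      # Eq. (7)

# Edge sets EA, EU, EH from Eqs. (2)-(4); frozensets model undirected edges.
E_A = {frozenset(e) for (u, a, r) in YA for e in [(u, a), (a, r), (u, r)]}
E_U = {frozenset(e) for e in YU}
E_H = {frozenset(e) for e in sub}

assert w_hierarchy == 3   # hierarchy edges at least as strong as other edges
assert len(E_A) == 3      # each activity assignment contributes three edges
```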

4.2 AInheritScore
AInheritScore is an algorithm based on GRank [1] as described in Sect. 3. AIn-
heritScore computes for an input user u a score vector that contains the score
values score(r) for each resource r. The input user u however needs to be trans-
formed into input tags tq, depending upon how many tags the user u has. Each
of these input tags tq is weighted according to its frequency of usage by user u.
The parameters da , db , dc are defined to emphasize the “inherited” scores gained
by relations in the hierarchy. The values of these parameters are set in Sect. 5
for the evaluations.
1. da for resources having the input tag directly assigned to them
2. db for resources in the activity hierarchy having a resource that is tagged
with the input tag
3. dc for users in the activity hierarchy having assigned the input tag
Additionally, an activity distance activityDist(a1 , a2 ) between two activities is
calculated as the number of hops from activity a1 to activity a2 . However, it
is also possible to calculate a lesser distance for sub-activities, or include the
fan-out in the computation. AInheritScore differs from GRank in the following
points:
1. Activities are not considered to be resources and cannot be assigned a tag.
2. AInheritScore considers activity hierarchies as well as users assigned to ac-
tivities when computing the scores.
3. Activity hierarchies are leveraged by the inheritance of scores. These scores
are emphasized by considering the connections in the activity hierarchy. The
distance between activities in the hierarchy is considered as well.
The AInheritScore algorithm is described in the following steps:
1. For each input tag tq
2. Let score = 0 be the score vector
3. Determine Rq = Ra ∪ Rb ∪ Rc where:
(a) Ra contains all resources with the input tag tq directly assigned to them:
w(tq, r) > 0.
(b) Rb contains all resources belonging to the same activity hierarchy as
another resource r′ that has the input tag tq directly assigned to it:
w(tq, r′) > 0.
(c) Rc contains all resources belonging to the same activity hierarchy as a
user u who has tagged a resource with the input tag tq: w(u, tq) > 0.

4. For all r ∈ Rq belonging to activity a do
   (a) increase the score value of r:

       score(r) += w(tq, r) · da   (8)

   (b) for each r′ ∈ Rq belonging to activity a′, where a and a′ are in the same
       activity hierarchy, increase again the score of r:

       score(r) += (w(tq, r′) / activityDist(a, a′)) · db   (9)

   (c) for each u ∈ Uq working on activity a′, where a and a′ are in the same
       activity hierarchy, increase again the score of r:

       score(r) += (w(u, tq) / activityDist(a, a′)) · dc   (10)

5. Output: score
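The score updates of steps 4a and 4b can be sketched with toy values (step 4c, the user contribution, works analogously). The parameter values follow Sect. 5; the resources, activities, weights and distance are invented for the illustration.

```python
d_a, d_b, d_c = 10, 2, 2            # parameters as set for the evaluation

# r1 carries the input tag tq directly; r2 lies one hop away in the same
# activity hierarchy and inherits a damped share of r1's tag weight.
w_tq = {"r1": 2, "r2": 0}           # w(tq, r)
dist = {("a2", "a1"): 1}            # activityDist between the two activities

score = {"r1": 0.0, "r2": 0.0}
score["r1"] += w_tq["r1"] * d_a                        # step 4a, Eq. (8)
score["r2"] += w_tq["r1"] / dist[("a2", "a1")] * d_b   # step 4b, Eq. (9)

assert score == {"r1": 20.0, "r2": 4.0}
```

The directly tagged resource dominates (factor da), while hierarchy neighbours receive an inherited score damped by the activity distance.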

5 Evaluation
The goal of this paper is to investigate how the implicit semantic information
contained in activity hierarchies can be exploited to improve the ranking of
resources in an extended folksonomy such as CROKODIL. As the CROKODIL
data set has not yet attained a sufficient size for significant evaluation, a data set
with an extended folksonomy containing similar concepts to those of activities
in CROKODIL was sought.

5.1 Corpus
The GroupMe! data set was chosen as the concept of groups in GroupMe! is a
similar concept to the activities and activity hierarchies in CROKODIL as men-
tioned in Sect. 3. There are however differences and a mapping of the concepts
is necessary to be able to use the data set:
– The aim of groups in GroupMe! is to provide a collection of related re-
sources. In CROKODIL however, activities are based on a pedagogical con-
cept to help learners structure their learning goals in a hierarchical structure.
Learning resources needed to achieve these goals are attached to these ac-
tivities. Therefore, the assignment of a resource to a group in GroupMe! is
interpreted as attaching a resource to an activity in CROKODIL.
– Groups in GroupMe! are considered resources and can therefore belong to
other groups. These groups of groups or hierarchies of groups are interpreted
as activity hierarchies in CROKODIL.
– Tags can be assigned to groups in GroupMe!. In contrast however, tags
can not be assigned to activities in CROKODIL. These tags on groups in
GroupMe! are therefore not considered in the data set.
Groups of groups or group hierarchies are unfortunately sparse in the GroupMe!
data set. A p-core extraction [13] would reduce these hierarchies even more,
therefore no p-core extraction is made. The data set has the characteristics
described in Table 1.

Table 1. The extended folksonomy GroupMe! data set

Users  Tags  Resources  Groups  Posts  Tag Assignments
 649   2580    1789      1143   1865       4366

5.2 Evaluation Methodology

The evaluation methodology LeavePostOut [13] is used for the evaluations of
AScore and AInheritScore. In addition, we propose an evaluation methodology
LeaveRTOut, which is inspired by LeavePostOut. A post Pu,r is defined in
[11] as all tag assignments of a specific user u to a specific resource r. Leave-
PostOut, as shown in Fig. 1, removes the post Pu,r, thereby ensuring that no
information remains in the folksonomy that could connect the user u directly to
resource r [13]. LeaveRTOut, as shown in Fig. 2, eliminates the connection in the
folksonomy between a tag t and a resource r instead of eliminating the connec-
tion between a user u and a resource r. LeaveRTOut therefore sets a different
task to solve than LeavePostOut. For the evaluations, the user u of a post is used
as input. LeavePostOut is used to determine adequate parameters for the algo-
rithms. AInheritScore takes the values of GRank’s parameters, which according
to a sensitivity analysis in [1] are set to da = 10 and db = 2. The parameter dc is
set to the same value, i.e. db = dc = 2.
For the evaluations, the metrics Mean Average Precision (MAP) and Preci-
sion at k [14] are used. MAP is used to determine the overall ranking quality
while Precision at k determines the ranking quality in the top k positions. Pre-
cision at k is extended to Mean Normalized Precision (MNP) at k to obtain a

Fig. 1. LeavePostOut evaluation methodology

Fig. 2. LeaveRTOut evaluation methodology



single measure over a number of information needs Q as well as to be more suitable
for the evaluation methodology, i.e. with respect to the maximal achievable
Precisionmax(k). Mean Normalized Precision at k is defined as follows:

MNP(Q, k) = (1/|Q|) · Σ_{j=1..|Q|} Precision(k) / Precisionmax(k)   (11)

For the statistical significance tests, Average Precision [14] is used for a single
information need q, applying the Wilcoxon signed-rank test.3
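Mean Normalized Precision at k (Eq. 11) can be sketched as follows. The normalization is implemented here as min(|relevant|, k)/k, which is one plausible reading of the maximal achievable Precisionmax(k); the ranked lists and relevance sets are toy data.

```python
def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k ranked items that are relevant."""
    return sum(1 for r in ranked[:k] if r in relevant) / k

def mnp(runs, k):
    """Mean Normalized Precision at k over a set of information needs."""
    total = 0.0
    for ranked, relevant in runs:
        p_max = min(len(relevant), k) / k   # best achievable precision at k
        total += precision_at_k(ranked, relevant, k) / p_max
    return total / len(runs)

# Two toy information needs: one perfect given a single relevant item,
# one where only one of two relevant items made the top 2.
runs = [(["r2", "r1", "r3"], {"r1"}),
        (["r4", "r5"], {"r5", "r6"})]
assert mnp(runs, k=2) == 0.75
```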

5.3 Results

LeavePostOut and LeaveRTOut results from AScore and AInheritScore are com-
pared to those of GRank, GFolkRank, FolkRank and Popularity. Popularity is
calculated as the number of tags and users a resource is connected to. The results
are visualized as violin plots [10] in Fig. 3 and Fig. 4. The distribution of the data
values is shown along the y-axis. The width of the violin plot is proportional to
the estimated density at that point. As can be seen, most of the algorithms have
most items ranked in positions < 500, whereas Popularity still has too many
items ranked in further positions.
The MAP results for LeaveRTOut are presented in Table 3. GFolkRank and
AScore perform best with a MAP of 0.20, followed by FolkRank, GRank, AIn-
heritScore and lastly Popularity. The results of the Mean Normalized Precision at
k for k ∈ [1, 10] for both LeavePostOut (left) and LeaveRTOut (right) are shown
in Fig. 5.

Fig. 3. Violin plot of LeavePostOut results

3 http://stat.ethz.ch/R-manual/R-patched/library/stats/html/wilcox.test.html, retrieved 20/03/2012

Fig. 4. Violin plot of LeaveRTOut results

Table 2. Mean Average Precision (MAP) results for LeavePostOut

Popularity  FolkRank  GFolkRank  AScore  GRank  AInheritScore
   0.00       0.19      0.70      0.70    0.38      0.47

Table 3. Mean Average Precision (MAP) results for LeaveRTOut

Popularity  FolkRank  GFolkRank  AScore  GRank  AInheritScore
   0.02       0.18      0.20      0.20    0.14      0.11




Fig. 5. Mean Normalized Precision at k: LeavePostOut (left) and LeaveRTOut (right)



The results of all pairwise comparisons for statistical significance are shown
in Table 4 and Table 5. The LeavePostOut results differ from the LeaveRTOut
results because the two methodologies pose tasks of different difficulty. Hence,
the results from the two methodologies are useful to assess the effectiveness of
the algorithms in different ranking scenarios. For example, results from Leave-
PostOut show that GFolkRank is more effective than AScore, whereas results
from LeaveRTOut show that AScore is more effective than GFolkRank. In
summary, the LeavePostOut results show that the algorithms leveraging
additional semantic information are overall more effective than FolkRank:
the algorithms designed for the extended folksonomy have the advantage of
being able to leverage the additional information gained from activities to
recommend relevant resources. The selection of an algorithm for ranking
learning resources will therefore depend upon the application scenario and
what is important for ranking. For example, AScore would be the choice when
activity hierarchies are particularly important for ranking learning resources,
as in the CROKODIL application scenario, whereas GFolkRank would be
preferred in scenarios where this is not the case.

Limitations. The proposed algorithms AScore and AInheritScore are fundamen-
tally based on the concept of activity hierarchies from the CROKODIL applica-
tion scenario. The results achieved with the GroupMe! data set thus may not be
representative, as the group hierarchies from the GroupMe! data set, modeled as
CROKODIL activity hierarchies, were very sparse. Furthermore, the param-
eters for the algorithms were based on MAP values from LeavePostOut with a
user as input. The algorithms may perform differently with regard to another
metric or evaluation methodology, if parameterized accordingly. Additionally,
the statistical significance is computed based on Average Precision, which is a
measure of

Table 4. Significance matrix of pair-wise comparisons of LeavePostOut results

More effective than → Popularity FolkRank GFolkRank AScore GRank AInheritScore

Popularity
FolkRank
GFolkRank
AScore
GRank
AInheritScore

Table 5. Significance matrix of pair-wise comparisons of LeaveRTOut results

More effective than → Popularity FolkRank GFolkRank AScore GRank AInheritScore

Popularity
FolkRank
GFolkRank
AScore
GRank
AInheritScore

the overall ranking quality. If the statistical significance is to be compared based
on the effectiveness of ranking in the top positions, a different series of significance
tests needs to be conducted.

6 Conclusion

Resource-based learning is mostly self-directed, and the learner is often faced
with the overhead of finding relevant, high-quality learning resources on
the Web. Graph-based recommender systems that recommend resources beyond
their similarity can reduce the effort of finding relevant learning resources. In
this paper we therefore propose two approaches, AScore and AInheritScore, that
exploit the hierarchical activity structures in CROKODIL to improve the ranking
of resources in an extended folksonomy for the purpose of recommending learn-
ing resources. Evaluation results show that this additional semantic information
is beneficial for recommending learning resources in an application scenario such
as CROKODIL. The algorithms leveraging additional semantic information are
overall more effective than FolkRank, as these algorithms, designed for the ex-
tended folksonomy, can leverage the additional information gained from activ-
ities and activity hierarchies to recommend relevant resources.
Future work will be to evaluate these algorithms with a data set from the
CROKODIL application scenario. Additionally, a user study in the CROKODIL
application scenario is planned to determine the true relevance of recommenda-
tions of learning resources based on human judgement in a live evaluation.

Acknowledgments. This work is supported by funds from the German Federal
Ministry of Education and Research under the mark 01 PF 08015 A and from the
European Social Fund of the European Union (ESF). The responsibility for the
contents of this publication lies with the authors.
Special thanks to Fabian Abel for making the most current data set from
GroupMe! available for the evaluations.

References
1. Abel, F.: Contextualization, User Modeling and Personalization in the Social Web.
PhD Thesis, Gottfried Wilhelm Leibniz Universität Hannover (2011)
2. Abel, F., Frank, M., Henze, N., Krause, D., Plappert, D., Siehndel, P.: GroupMe! -
Where Semantic Web Meets Web 2.0. In: Aberer, K., Choi, K.-S., Noy, N.,
Allemang, D., Lee, K.-I., Nixon, L.J.B., Golbeck, J., Mika, P., Maynard, D.,
Mizoguchi, R., Schreiber, G., Cudré-Mauroux, P. (eds.) ASWC 2007 and ISWC
2007. LNCS, vol. 4825, pp. 871–878. Springer, Heidelberg (2007)
3. Anjorin, M., Rensing, C., Bischoff, K., Bogner, C., Lehmann, L., Reger, A., Faltin,
N., Steinacker, A., Lüdemann, A., Domı́nguez Garcı́a, R.: CROKODIL - A Plat-
form for Collaborative Resource-Based Learning. In: Kloos, C.D., Gillet, D., Crespo
Garcı́a, R.M., Wild, F., Wolpers, M. (eds.) EC-TEL 2011. LNCS, vol. 6964, pp.
29–42. Springer, Heidelberg (2011)

4. Anjorin, M., Rensing, C., Steinmetz, R.: Towards ranking in folksonomies for per-
sonalized recommender systems in e-learning. In: Proc. of the 2nd Workshop on
Semantic Personalized Information Management: Retrieval and Recommendation.
CEUR-WS, vol. 781, pp. 22–25 (October 2011)
5. Böhnstedt, D., Scholl, P., Rensing, C., Steinmetz, R.: Collaborative Semantic Tag-
ging of Web Resources on the Basis of Individual Knowledge Networks. In: Houben,
G.-J., McCalla, G., Pianesi, F., Zancanaro, M. (eds.) UMAP 2009. LNCS, vol. 5535,
pp. 379–384. Springer, Heidelberg (2009)
6. Cantador, I., Konstas, I., Jose, J.: Categorising Social Tags to Improve Folksonomy-
Based Recommendations. Web Semantics: Science, Services and Agents on the
World Wide Web 9, 1–15 (2011)
7. Desrosiers, C., Karypis, G.: A Comprehensive Survey of Neighborhood-Based Rec-
ommendation Methods. In: Ricci, F., Rokach, L., Shapira, B., Kantor, P. (eds.)
Recommender Systems Handbook, pp. 107–144. Springer (2011)
8. Drachsler, H., Pecceu, D., Arts, T., Hutten, E., Rutledge, L., van Rosmalen, P.,
Hummel, H.G.K., Koper, R.: ReMashed – Recommendations for Mash-Up Personal
Learning Environments. In: Cress, U., Dimitrova, V., Specht, M. (eds.) EC-TEL
2009. LNCS, vol. 5794, pp. 788–793. Springer, Heidelberg (2009)
9. Hannafin, M., Hill, J.: Resource-Based Learning. In: Handbook of Research on
Educational Communications and Technology, pp. 525–536 (2008)
10. Hintze, J., Nelson, R.: Violin plots: A box plot-density trace synergism. The Amer-
ican Statistician 52(2), 181–184 (1998)
11. Hotho, A., Jäschke, R., Schmitz, C., Stumme, G.: BibSonomy: A Social Bookmark
and Publication Sharing System. In: Proc. of the Conceptual Structures Tool In-
teroperability Workshop (2006)
12. Hotho, A., Jäschke, R., Schmitz, C., Stumme, G.: Information Retrieval in Folk-
sonomies: Search and Ranking. In: Sure, Y., Domingue, J. (eds.) ESWC 2006.
LNCS, vol. 4011, pp. 411–426. Springer, Heidelberg (2006)
13. Jäschke, R., Marinho, L., Hotho, A., Schmidt-Thieme, L., Stumme, G.: Tag Rec-
ommendations in Folksonomies. In: Kok, J.N., Koronacki, J., Lopez de Mantaras,
R., Matwin, S., Mladenič, D., Skowron, A. (eds.) PKDD 2007. LNCS (LNAI),
vol. 4702, pp. 506–514. Springer, Heidelberg (2007)
14. Manning, C., Raghavan, P., Schütze, H.: Introduction to Information Retrieval.
Cambridge University Press (2008)
15. Manouselis, N., Drachsler, H., Vuorikari, R., Hummel, H., Koper, R.: Recommender
Systems in Technology Enhanced Learning. In: Ricci, F., Rokach, L., Shapira, B.,
Kantor, P. (eds.) Recommender Systems Handbook, pp. 387–415. Springer (2011)
16. Peters, I.: Folksonomies: Indexing and Retrieval in Web 2.0. De Gruyter Saur
(2010)
17. Rakes, G.: Using the Internet as a Tool in a Resource-Based Learning Environment.
Educational Technology 36, 52–56 (1996)
18. Romero Zaldivar, V.A., Crespo Garcı́a, R.M., Burgos, D., Kloos, C.D., Pardo,
A.: Automatic Discovery of Complementary Learning Resources. In: Kloos, C.D.,
Gillet, D., Crespo Garcı́a, R.M., Wild, F., Wolpers, M. (eds.) EC-TEL 2011. LNCS,
vol. 6964, pp. 327–340. Springer, Heidelberg (2011)
An Initial Evaluation of Metacognitive Scaffolding
for Experiential Training Simulators

Marcel Berthold¹, Adam Moore², Christina M. Steiner¹, Conor Gaffney³,
Declan Dagger³, Dietrich Albert¹, Fionn Kelly⁴, Gary Donohoe⁴,
Gordon Power³, and Owen Conlan²

¹ Knowledge Management Institute, Graz University of Technology,
Rechbauerstr. 12, 8010 Graz, Austria
{marcel.berthold,christina.steiner,dietrich.albert}@tugraz.at
² Knowledge, Data & Engineering Group, School of Computer Science and Statistics,
Trinity College, Dublin, Ireland
{mooread,oconlan}@scss.tcd.ie
³ EmpowerTheUser, Trinity Technology & Enterprise Campus,
The Tower, Pearse Street, Dublin, Ireland
{conor.gaffney,declan.dagger,gordon.power}@empowertheuser.com
⁴ Department of Psychiatry, School of Medicine, Trinity College, Dublin, Ireland
fkelly@stpatsmail.com, DONOGHUG@tcd.ie

Abstract. This paper elaborates on the evaluation of a Metacognitive
Scaffolding Service (MSS), which has been integrated into an already existing
and mature medical training simulator. The MSS is envisioned to facilitate self-
regulated learning (SRL) through thinking prompts and appropriate learning
hints enhancing the use of metacognitive strategies. The MSS is developed in
the European ImREAL (Immersive Reflective Experience-based Adaptive
Learning) project, which aims to augment simulated learning environments
through services that are decoupled from the simulation itself. Results
comparing a baseline evaluation of the ‘pure’ simulator (N=131) and a first user
trial including the MSS (N=143) are presented. The findings indicate a positive
effect on learning motivation and perceived performance with consistently good
usability. The MSS and simulator are perceived as an entity by the medical
students involved in the study. Further steps of development are discussed and
outlined.

Keywords: self-regulated learning, metacognitive scaffolding, training simulator,
augmentation.

1 Introduction

Self-regulated learning (SRL), and especially metacognition, is currently a prominent
topic in technology-enhanced learning (TEL) research. Many studies provide
evidence of the effectiveness of SRL in combination with metacognitive scaffolding
(cf. [1, 2]). Self-regulated learning refers to learning experiences that are directed by
the learner and describes the ways in which individuals regulate their own cognitive
and metacognitive processes in educational settings (e.g. [3, 4]). An important aspect

A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 23–36, 2012.
© Springer-Verlag Berlin Heidelberg 2012
24 M. Berthold et al.

of self-regulated learning is therefore the learners’ use of different cognitive and
metacognitive strategies in order to control and direct their learning [5]. These
strategies include cognitive learning strategies, self-regulatory strategies to control
cognition (i.e. metacognitive strategies) and resource management strategies. Self-
regulated learning also involves motivational processes and motivational beliefs [4]. It
has been shown that good self-regulated learners perform better and are more
motivated to learn than weak self-regulated learners [6]. TEL environments provide
opportunities to support and facilitate metacognitive skills, but most learners need
additional help and guidance [7] to perform well in such environments.
In the EU project ImREAL¹ (Immersive Reflective Experience-based Adaptive
Learning), intelligent services are being developed to augment and improve
experiential simulated learning environments – including one to scaffold
metacognitive processes. The development of the scaffolding service focuses on the
salient and timely support of learners in their metacognitive processes and self-
regulated learning in the context of a simulation environment. Herein we report a
concrete study examining the medical training simulator provided by
EmpowerTheUser² augmented with the ImREAL Metacognitive Scaffolding Service
(MSS). The service provides prompts and suggestions adapted to a learner’s needs
and metacognitive traits, aiming at enhancing motivation towards the learning
activity in the simulation. While the aspect of supporting metacognition needs to be
integrated in the learning process, the corresponding service is technically decoupled
from the specific learning system itself. Overall, the research presented investigates
the effectiveness and appropriateness of the service and the scaffolding it provides. To
allow a more detailed examination of the issues, we address four sub-questions:
1. Is self-regulated learning supported? For the evaluation and analysis of self-
regulated learning we distinguish between the general learning approach (i.e.
application of cognitive and metacognitive strategies) and the metacognitive and
specific learning processes in the simulation (i.e. cognitive and metacognitive
strategies or actions within the simulator context). Learning in the simulation in
combination with metacognitive scaffolding may optimally influence the general
learning approach of a learner; if this is successful, it may also influence SRL on
a long-term basis.
2. Does the simulator augmentation through the service lead to better learning
performance? The learning performance refers to the (objective or
subjective/perceived) learners’ knowledge/competence acquisition and
performance in the learning situation and to the transfer of acquired knowledge to
other situations.
3. Does the simulator augmentation through the service increase motivation? The
aspect of motivation addresses the motivation to learn, i.e. the structures and
processes explaining learning actions and the effects of learning [8].

¹ http://www.imreal-project.eu
² http://www.empowertheuser.ie
An Initial Evaluation of Metacognitive Scaffolding 25

4. Is the service well integrated in the simulation and learning experience? This
refers to the question whether the scaffolding interventions provided during the
simulation via the MSS are perceived by learners as appropriate and useful – in
terms of their content, context and timing.
In order to answer these evaluation questions, the paper is organized as follows.
Section 2 gives an overview of the MSS and outlines related work. Section 3
presents the simulator and its normal usage. In Section 4 the experimental design
of the study is introduced, and Section 5 presents and discusses the corresponding
results. Section 6 provides a conclusion and an outlook to further research.

2 Metacognitive Scaffolding – Background and Technology

Scaffolding is an important part of the educational process, supporting learners in
their acquisition of knowledge and developing their learning skills. Scaffolding has
been a major topic of research since the pioneering work of Vygotsky (e.g. 1978 [9])
and the key work of Bruner, Wood and colleagues (cf. [10]). Bruner [11] identified
several aspects which should be considered when providing feedback to students,
such as form and timing.
Work on the use of scaffolding with the help of computer-based learning
environments has been extensive (cf. [12]). Originally, the emphasis was on cognitive
scaffolding which has many forms (cf. [13]). In the last ten years there has been a
move towards research in metacognitive scaffolding (e.g. [14–17]) as well as in the
use of metacognitive scaffolding in adaptive learning environments (e.g. [18–21]).
Other forms of scaffolding have also been explored both in educational and
technology enhanced learning contexts – such as affective scaffolding and conative
scaffolding. Van de Pol et al. [14] sought to develop a framework for the analysis of
different forms of scaffolding. In the technology enhanced learning community,
Porayska-Pomsta and Pain [22] explored affective and cognitive scaffolding through a
form of face theory (the affective scaffolding also included an element of
motivational scaffolding). Aist et al. [23] examined the notion of emotional
scaffolding and found different kinds of emotional scaffolding had an effect on
children's persistence using a reading tutoring system.
There are different forms of metacognitive scaffolding. Molenaar et al. [2]
investigated the distinction between structuring and problematizing forms of
metacognitive scaffolding and found that problematizing scaffolding seemed to have
a significant effect on learning the required content. They used Orientation, Planning,
Monitoring, Evaluation and Reflection as subcategories of metacognitive scaffolding.
Sharma and Hannafin [24] reviewed the area of scaffolding in terms of the
implications for technology enhanced learning systems. They point out the need to
balance metacognitive and “procedural” scaffolds since only receiving one kind can
lead to difficulties – with only procedural scaffolding students take a piecemeal
approach, and with only metacognitive scaffolding students tend to fail to complete
their work. They also argue for systems that are sensitive to the needs of individuals.

Boyer et al. [25] examined the balance between motivational and cognitive
scaffolding through tutorial dialogue and found evidence that cognitive scaffolding
supported learning gains while motivational scaffolding supported increase in self-
efficacy.
The aim of the ImREAL project is to bring simulators closer to the ‘real world’. As
part of training for a diagnostic interview in the real world, a mentor sits at the back,
observing and providing occasional input or interventions as necessary. The MSS has
been developed to integrate into the simulator learning experience as an analogue of
such a mentor, sitting alongside the simulator to provide scaffolding. The ETU
simulator supports meta-comprehension and open reflection via note taking.
For this trial, metacognitive scaffolding was provided using calls to a RESTful [26]
service developed as part of the ImREAL project. The service utilises technology
initially developed for the ETTHOS model [27] and presents Items from the
Metacognitive Awareness Inventory (MAI) [28] according to an underlying cognitive
activity model, matched to Factors in the MAI. In this way, scaffolding is developed
in order to match a learner’s cognitive activity to metacognitive support, making the
importance of the tasks being undertaken by the learner clear.
The scaffolding service supplements the pre-existing ETU note-taking tool; both
are illustrated in Figure 1 below. The text of the thinking prompt item is
phrased in order to elicit a yes/no response. If additional context or rephrasing has
been added by the instructional experts, it is displayed before the open text response
area. A link that activates an explanatory text appears underneath the text input area,
as well as a “Like” button which can be selected, and the submit action.


Fig. 1. a) MSS Interface b) ETU Note-taking tool

3 Overview of Simulator and Normal Usage

For this research the ETU Talent Development Platform was used, with training for
medical interview situations. The user plays the role of a clinical therapist and selects
interview questions from a variety of possible options to ask the patient. When a
question is selected a video is presented that shows the verbal interaction of the
therapist with the patient (close up of the patient, voice of the therapist) and the verbal
and non-verbal reaction of the patient (close up of the patient). Starting the
simulation, users can choose between two types of scenarios (Depression and Mania),
which offer the same types of subcategories: Introduction and negotiating the agenda,
eliciting information, outlining a management plan and closing the interview.

After a scenario is chosen, the user may simulate the interview for as long as they
prefer, or until the interview is “naturally” finished. Furthermore, users could have
as many runs of the simulation as they wanted and could choose a different scenario
in subsequent attempts. When going through the simulator, students obtain scores.
The simulator performance scores are a measure of the students’ potential to perform
effectively in a real interview. In this study we focused only on the Depression
interview scenario. A screenshot of a typical interaction within the ETU system is
shown below in Figure 2.

Fig. 2. Screen shot of the EmpowerTheUser Simulator. The scenario of diagnosing a patient
with clinical depression is just beginning.

4 Experimental Design

4.1 Cohort
143 medical students participated in the study and performed the simulation as part of
their second-year (2011/2012) medical training at Trinity College, Dublin (TCD).
They were on average approximately 22 years old (40% male, 60% female; 80%
Irish). In addition, these results are compared, as far as the measures were assessed at
both time points, to a baseline evaluation based on using the simulator without
ImREAL services. In the baseline evaluation, 131 TCD medical students from the
previous year group (2010/2011) participated (cross-sectional design).

4.2 Measurement Instruments

ETU Simulator. Within the simulation, learning performance is assessed by tracking
scores for each of the four subsections as well as dialogue scores; in addition, the
notes written in a note pad for reflections are recorded.

Questionnaire on Self-Regulated Learning. Self-regulated learning skills were
measured by the Questionnaire for Self-Regulated Learning (QSRL; [29]). The QSRL
consists of 54 items, which belong to six main scales (Memorizing / Elaboration /
Organization / Planning / Self-monitoring / Time management) and three subscales
(Achievement Motivation / Internal Attribution / Effort). In the online version of the
questionnaire, respondents indicate their agreement with an item by moving a slider on
an analogue scale between the positions “strongly disagree” and “strongly agree”. The
possible score range is 0-100 in each case, with higher values indicating a better
result.

Survey Questions on Use of ETU, Experience in Performing Clinical Interviews
and Relevance of the ETU Simulator. In order to control for possible influences of
prior experience with the respective simulated learning environment or real-world
medical interviews, these experiences were assessed through three survey questions.
A further survey question assessed the relevance of the simulator, with answer
options ranging from “not relevant at all” to “very relevant”.

Thinking Prompts. Triggers that made calls to the MSS were inserted into the
practice phase of the simulator but were not available during ‘live/scored’ usage. The
triggers were created using the ETU authoring platform and made a call to the MSS
requesting a prompt of a particular Factor (Planning, Information Management,
Comprehension, Debugging or Evaluation). As explained above, each Factor
consisted of a number of Items or Thinking Prompts. An item was not redisplayed
once a reflection had been entered for it.

Motivation. Motivation was assessed with four survey questions referring to learning
more about clinical interviews, improving one’s own interview skills, performing a
good interview during the simulation, and applying what has been learned in a real
interview.

Workload. Measures of workload were assessed by six subscales of the NASA-TLX
[30, 31] with a score range of 0-100. In this case, higher values indicate a higher
workload. An overall workload score was calculated based on the subscales by
computing a mean of all item contributions. In contrast to the original NASA-TLX,
students did not mark their answers on an analogue scale, but entered digits between
0-100 into a text field.
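A minimal sketch of that overall score computation; the subscale names below are the standard NASA-TLX ones, assumed here rather than taken from the paper:

```python
def overall_workload(subscales):
    """Unweighted mean of the six NASA-TLX subscale scores (each 0-100)."""
    assert len(subscales) == 6, "NASA-TLX has exactly six subscales"
    return sum(subscales.values()) / len(subscales)
```

Note this skips the original NASA-TLX pairwise-comparison weighting step entirely, consistent with the paper's description of a plain mean.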

Usability and Service-Specific Integration. The System Usability Scale (SUS, [32])
consists of ten items with answer options on a five-point Likert scale ranging from
“strongly disagree” to “strongly agree”. The raw data were converted to an overall
SUS score, which ranges from 0-100, with higher values indicating higher usability.
In addition to the SUS questions, three service-specific usability questions were
administered regarding the relation of the prompts to the rest of the simulation and
obvious differences. The answer options were the same as for the SUS.
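The standard SUS scoring procedure referenced here can be sketched as follows (Brooke's published formula, not the authors' own code):

```python
def sus_score(responses):
    """Compute the overall SUS score (0-100) from ten Likert ratings (1-5).

    Odd-numbered items are positively worded and contribute (rating - 1);
    even-numbered items are negatively worded and contribute (5 - rating).
    The summed contributions (range 0-40) are scaled by 2.5.
    """
    assert len(responses) == 10, "SUS has exactly ten items"
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5
```

For example, uniformly neutral answers (all 3s) yield the midpoint score of 50.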

Learning Experience with MSS. Learning experience with the MSS was measured
by 10 questions referring to the helpfulness and appropriateness of the MSS thinking
prompts within the simulator, with answer options on a 5-point Likert scale ranging
from “not at all” to “very much”. In addition, a free text comment field was provided.

Procedure. The baseline evaluation, using the pure simulator, was conducted from
mid-February to the beginning of March 2011; the first user trial was carried out in
Dublin from mid-February until the beginning of March 2012. The TCD medical
students used the ETU medical training simulator.
Data collection was carried out during the simulation (e.g. ETU scores, MSS data)
and after learning with the ETU simulator (questionnaire data). First, the students
worked on the simulation for as long as they wanted and could choose between two
scenarios: Mania or Depression. After they finished, they were directed to the
online questionnaires. In this stage they filled in the survey questions on relevance
and on motivation, the NASA-TLX, the SUS, the questions on prompts and learning
experience, and the QSRL.
After working on the simulation in the TCD course, students still had access to the
ETU simulator via the internet for approximately two weeks. It was not mandatory to
use the simulation in the medical course at TCD or to participate in the evaluation.

5 Experimental Results

PASW Statistics, version 18.0 [33] and Microsoft Excel (2010) were used for
statistical analyses and graphical presentations. If not explicitly mentioned otherwise,
the statistical requirements for inferential statistical analyses and procedures were
fulfilled. For all analyses the alpha level was α=.05. Due to the unbalanced number of
participants in the samples, appropriate pre-tests were performed for the comparisons
of the first user trial and the baseline evaluation, and the corresponding values are
presented.
This section focuses mainly on the first user trial evaluation, based on using the
ETU simulator with the integrated ImREAL MSS.

5.1 Log-Data
ETU Simulator – Descriptive Data. All students of the first user trial reported that
they had never used the ETU medical training simulator before. Nonetheless, they
were quite experienced in conducting clinical interviews: 97% reported having
already performed at least one, but only 15% had experience of interviewing a
psychiatric patient.
A comparison of the first user trial and the baseline evaluation showed that
duration time in minutes (Mbase=17.89, SDbase=11.15; M1UT=15.45, SD1UT=6.81) and
scored points (Mbase=31.34, SDbase=6.33; M1UT=27.61, SD1UT=5.91) in the simulation
decreased from the baseline evaluation to the first user trial (duration time:
t(211.49)=2.17, p=.031; score: t(272)=5.10, p<.001). These results show that students
spent on average less time in the simulator and reached lower scores. This is rather
surprising, because students of the baseline cohort and the first user trial cohort were
similarly experienced, whereas the participants of the first user trial worked with the
additional MSS. In this case a longer duration time was expected for the cohort of the
first user trial.

Metacognitive Scaffolding Service (MSS) Comments. 10 comments were collected
via the free text comment field of the MSS learning experience questionnaire. The
participants provided interesting comments, which, however, referred more to the
simulator than to the MSS. This implies that the MSS seems to be perceived as well
integrated in the simulation, because students do not seem to differentiate between the
additional service and the simulation itself. The participants pointed to sometimes
inappropriate prompts in combination with the simulator, especially in situations
when only one answering option was available in the dialogue with the patient and
they were asked to think about their strategy. Nevertheless, one learner recorded that
“I am learning a lot actually, it is amazing how much you can miss just by asking a
question in a slightly different way! I keep going back a step and looked through the
other options to see where the scenario goes. Usually I’ve picked the most suitable
one, but not always. Sometimes I am surprised about how much I would have
missed!!”.

Prompts Analysis. Five different types of prompts were presented, corresponding to
the five MAI Factors described in Section 4.2. In total, 2001 prompts (Planning: 469,
Information Management: 752, Monitoring: 425, Debugging: 301 and Reflection: 54)
were shown to 50 students. The other students ignored the pop-up prompts. Every
student who used the practice facility in the simulator was presented with a pop-up
suggesting they reflect. Clicking on that pop-up would move the simulation to the
MSS screen (Figure 1a). The relative frequency of the prompts was compared to the
expected frequency based on the probability of available prompts for each phase. The
results indicate that the learners were scaffolded more often in the second phase,
“Information Management”, and were scaffolded less in the reflection phase than
could have been expected (χ²(4)=314.55, p<.001, Figure 3). On the one hand, learners
seem to need more assistance in effectively processing information, via hints to use
more organizational, elaborative, summarizing or selective learning strategies. On the
other hand, they are rather confident in the reflection phase and waive the offer of
scaffolds.
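The comparison of observed and expected prompt frequencies is a chi-square goodness-of-fit test; a sketch follows. The observed counts are from the trial, but the expected shares below are placeholders, since the paper does not report the per-phase probabilities of available prompts:

```python
# Observed prompt counts per MAI phase (N = 2001, from the trial).
observed = {"Planning": 469, "Information Management": 752,
            "Monitoring": 425, "Debugging": 301, "Reflection": 54}

# Placeholder expected shares (e.g. proportional to available items per
# phase in the real analysis); uniform shares used here for illustration.
expected_share = {phase: 0.2 for phase in observed}

def chi_square_gof(obs, share):
    """Pearson chi-square goodness-of-fit statistic and degrees of freedom."""
    n = sum(obs.values())
    stat = sum((obs[k] - n * share[k]) ** 2 / (n * share[k]) for k in obs)
    return stat, len(obs) - 1

stat, df = chi_square_gof(observed, expected_share)
```

With four degrees of freedom (five phases minus one), any statistic this far beyond the critical value indicates the observed prompt distribution departs strongly from the expected one, as the paper reports.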

Fig. 3. Comparison of the expected and empirical distribution of metacognitive scaffolds for
the five phases of Schraw’s Metacognitive Awareness Inventory

5.2 Questionnaire Data

QSRL. The quantitative results of the QSRL are a little surprising, because students
estimated their use of cognitive learning strategies, especially elaboration strategies,
as relatively high. In general, all SRL scores lie above the center point of the
score range and indicate positive results for all cognitive and metacognitive strategies.
However, a stronger use of elaboration strategies is reported (t(20)=3.34, p=.003) in the
first user trial. It needs to be explicitly stated that this is not an unfavorable result as
such, since elaboration strategies are strategies of deeper learning [34], which should be
further supported by scaffolding services.
A comparison to the baseline study shows no significant increase in the usage of
any of the reported learning strategies.

Motivation. 38 participants filled in the motivation questions. The results show that
the scores were at a high motivation level, ranging from 3.16 to 3.49 on a 4-point
Likert rating scale. This implies that the students were very motivated to learn about
the clinical interview during the simulation, to improve their interview skills, to
perform a good interview during the simulation, and to apply what they had just learnt
in the simulation in a real-world clinical interview context. Furthermore, a comparison
of the overall motivation scores assessed immediately after the simulation of the first
user trial and immediately after the baseline evaluation reveals significantly higher
motivation scores for the MSS trial (M=3.35, SD=.4.14) compared to the baseline
(M=2.48, SD=.73; t118.47=-8.64, p<.001).
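The fractional degrees of freedom reported here (t118.47) suggest Welch's t-test for unequal variances. A minimal sketch of that computation from summary statistics follows; the group sizes and the MSS-group SD are assumptions (the function and values illustrate the method, not the study's analysis code):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom,
    computed from group means (m), standard deviations (s) and sizes (n)."""
    v1, v2 = s1 ** 2 / n1, s2 ** 2 / n2
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Means and the baseline SD echo the text; the group sizes (80, 50) and
# the MSS-group SD (0.44) are ASSUMED values for illustration.
t, df = welch_t(2.48, 0.73, 80, 3.35, 0.44, 50)
```

Because the two samples are weighted by their own variances, the resulting df is fractional, as in the reported result.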

Workload. A moderate overall workload could be observed. It has to be noted at this
stage that a learning environment should not aim at reducing the workload to a
minimum; rather, the challenge should be kept at an appropriate, medium level – in an
optimal case adapted to the individual learner. Participants reported
the highest, but still moderately pronounced, load for effort. This subscale refers to
the mental and physical resources that had to be mobilized to accomplish the task.
Consequently, the result for effort can be attributed rather to mental than to physical
demand. Yet the simulation is a complex program that supports and requires active
learning processes; a reduction of mental demand is somewhat challenging, but could
possibly be realized by improving the MSS and addressing the challenge of reducing
repetitions and providing only appropriate scaffolds. Furthermore, the second highest
score was observed for performance (see Figure 4), showing that the students felt they
successfully accomplished what they were supposed to do. Performance scores
(referring to subjective/perceived learning outcome) even increased for the first user
trial (M=54.83, SD=17.90) compared to the baseline evaluation (M=43.00, SD=23.68)
significantly (t109=-2.63, p=.01). A t-test for independent groups comparing overall
workload scores for the first user trial (M=44.19, SD=10.86) and the baseline
evaluation (M=44.81, SD=12.011; t75.52=.27, ns.) was not significant.
32 M. Berthold et al.

Fig. 4. Comparison of Baseline and first user trial data for workload

Usability and Service Specific Integration. No differences could be observed
between the rather high overall usability scores for the first user trial (M=62.50,
SD=17.90) and the baseline evaluation (M=62.80, SD=16.08; F57.90=.90, ns.).
With respect to the service-specific integration of the MSS prompts with the ETU
medical training, the majority of the students’ ratings were positive, with 21 out of 33
stating that they felt supported during their learning process by the MSS and that the
service was well integrated in the system (M=3.26, SD=.40).

Learning Experience with MSS. The learning experience with the MSS was
relatively positive. More than 63% of the participants perceived the MSS learning
experience overall as helpful and appropriate. The scores for the individual items
were all above the center point of the scale, which underlines this encouraging
impression.

6 Discussion

In this paper we examined the effectiveness and appropriateness of the MSS. Results
of the first user trial have been reported, involving the ETU medical training
simulator augmented with the MSS. These results have been compared to a baseline
evaluation where the ‘pure’ simulator was administered without any additional
ImREAL services. Addressing the evaluation questions stated in the introduction
section:

Is Self-Regulated Learning Supported? Even though self-regulated learning and
metacognitive scaffolding are closely connected, because the SRL process relies
heavily on applying cognitive and especially metacognitive learning strategies and
techniques, no changes in the SRL profile could be observed when comparing the first
user trial to the baseline data. This is because influencing self-regulated learning
aspects is rather a long-term process [35]. This result might also be explained by
looking at the usage frequency of the simulator. The students were confronted with
the simulation only in the TCD course, and no one had used the simulation before.

Furthermore, the duration of working with the simulator, which was on average less
than half an hour, might be too short to change a rather stable learning approach.
For future studies, a longitudinal evaluation design could be suggested instead of a
cross-sectional evaluation, to better meet the requirements of a longer-term process.
In addition, teachers’ or supervisors’ judgments on SRL performance could be
included to assess their observations on potential changes in learners’ daily learning
behavior. However, the last point might be difficult to realize in a university setting
with more than 140 students in a course.
In general, all SRL self-reports were positive, indicating a higher use of elaboration
strategies compared to memorizing strategies. Elaboration strategies represent
strategies of deeper learning [34]. Nevertheless, fostering memorizing/rehearsal
strategies might be taken up as an idea for improving the MSS. Assuming that the
participants in the evaluation trials constitute a representative sample of ETU
simulator users, ImREAL could start from this result and aim at improving users’
rehearsal strategies through the provision of appropriate scaffolding. Of course, this
strategy type should not be the only one to be supported. Rehearsal strategies help the
learner to select and remember important information, but may not represent very
deep levels of cognitive processing [34]. As a result, ImREAL services should
especially try to further support elaboration as well as organizational strategies. In the
ImREAL pedagogical framework learning [36] is seen as a cyclic process of three
phases: forethought, learning and reflection. These individual phases are already
represented in the ETU system, but not covered comprehensively. As described
above, medical students do not tend to use the ETU simulator very often and if they
do they undertake the interview scenario only for a short period of time. Therefore,
reflection and coverage of the SRL phases should be further extended and supported
by the ImREAL MSS.

Does the Simulator Augmentation through the Service Lead to Better Learning
Performance? Results concerning the learning performance draw a clear picture. The
actual objective data collected by the ETU simulator demonstrates that overall scores
decreased from the baseline evaluation to the first user trial. Accordingly, self-reports
on performance also decreased. A decrease in self-report scores is expected if actual
performance is lower, and may have been influenced by the MSS encouraging
learners to think about their learning process and therefore make an accurate
estimation of their performance. Accurate self-estimation might be seen as a factor in
regulating one’s own learning approach. One reason why overall scores decreased
could be the fact that the students spent less time working with the simulator. This is
due to a change in the curriculum.

Does the Simulator Augmentation through the Service Increase Motivation?
Motivation scores increased from the baseline evaluation to the first user trial.
In addition to the consideration of motivation as a state characteristic, motivational
beliefs (motivational traits, in the sense of being more stable and longer-lasting than
state motivation) can be further influenced by positively worded scaffolds and hints to
optimize learning. If students see the prompts as supporting their learning approach, a
positive attitude toward the whole learning process can be expected; this could explain
the current result, because these motivational beliefs are factors influencing the initial
motivation of the learner [37].

Is the Service Well Integrated in the Simulation and Learning Experience?
Results on the usability of the whole system (simulator + MSS) and on service-specific
integration provide evidence that the MSS is well integrated in the simulation and
leads to real augmentation. This is demonstrated not only by the positive scores on the
service-specific integration questions, but also by user comments, which were overall
quite positive. Such positive results may be attributed to the MSS operating in an
appropriately timely and salient manner, with the pop-up triggers appearing at
apposite times defined by the instructional design experts. The RESTful interface also
allows calls to be made to an ETU simulator-specific interface for the MSS, ensuring
there are no obvious presentational and interactional differences between the hosting
simulator and the MSS.

7 Conclusions and Outlook

The results above demonstrate a clear advantage in providing a MSS to augment an
experiential training simulator, leading to more engaged, motivated learners without
overly burdening them or interrupting the flow of their learning experience. With
respect to the actual learning performance, no positive effect could be identified. This
would be desirable to investigate in more detail in future studies, which should
optimally follow a longitudinal evaluation design and include an assessment of
real-world performance on medical interviews (i.e., learning transfer).
The collection and monitoring of the development of motivation throughout both
evaluation runs is important, because the next version of the ImREAL MSS will
strongly focus on extending it with ‘affective scaffolding’. As a result, the data
from the first user trial evaluation (with metacognitive scaffolding ‘only’) will serve
as a benchmark for comparison with evaluation outcomes for the affective
metacognitive scaffolding, thus allowing us to investigate the additional benefit of the
affective part.
The MSS will be integrated within additional experiential simulators to investigate
the service’s capabilities for generalization and integration within different systems
and usage cases and to further evaluate its effect on learning experience.

Acknowledgement. The research leading to these results has received funding from
the European Community's Seventh Framework Program (FP7/2007-2013) under
grant agreement no 257831 (ImREAL project) and could not be realized without the
close collaboration between all ImREAL partners.

References
1. Veenman, M.V.J.: Learning to Self-Monitor and Self-Regulate. Regulation (2008)
2. Molenaar, I., Van Boxtel, C.A.M., Sleegers, P.J.C.: The effects of scaffolding
metacognitive activities in small groups. Computers in Human Behavior 26, 1727–1738
(2010)
3. Puustinen, M., Pulkkinen, L.: Models of Self-regulated Learning: A review. Scandinavian
Journal of Educational Research 45, 269–286 (2001)
4. Zimmerman, B.: Becoming a Self-Regulated Learner: An Overview. Theory Into
Practice 41, 64–70 (2002)
5. Pintrich, P.R.: The role of motivation in promoting and sustaining self-regulated learning.
International Journal of Educational Research 31, 459–470 (1999)
6. Veenman, M.V.J.: Learning to Self-Monitor and Self-Regulate. In: Mayer, R., Alexander,
P. (eds.) Handbook of Research on Learning and Instruction, pp. 197–218. Routledge,
New York (2011)
7. Bannert, M.: Effects of Reflection Prompts when Learning with Hypermedia. Journal of
Educational Computing Research 35, 359–375 (2006)
8. Krapp, A.: Die Psychologie der Lernmotivation. Perspektiven der Forschung und
Probleme ihrer pädagogischen Rezeption [The psychology of motivation to learn.
Perspectives of research and their pedagogical reception]. Zeitschrift für Pädagogik 39,
187–206 (1993)
9. Vygotskiǐ, L.S., Cole, M.: Mind in society: the development of higher psychological
processes. Harvard University Press, Cambridge (1978)
10. Wood, D., Bruner, J.S., Ross, G.: The role of tutoring in problem solving. Journal of Child
Psychology and Psychiatry 17, 89–100 (1976)
11. Bruner, J.S.: Toward a theory of instruction. Harvard University Press, Cambridge (1966)
12. Lajoie, S.P.: Extending the Scaffolding Metaphor. Instructional Science 33, 541–557
(2005)
13. Clark, A.: Towards a science of the bio-technological mind. International Journal of
Cognition and Technology 1, 21–33 (2002)
14. Pol, J., Volman, M., Beishuizen, J.: Scaffolding in Teacher–Student Interaction: A Decade
of Research. Educational Psychology Review 22, 271–296 (2010)
15. Dinsmore, D.L., Alexander, P.A., Loughlin, S.M.: Focusing the Conceptual Lens on
Metacognition, Self-regulation, and Self-regulated Learning. Educational Psychology
Review 20, 391–409 (2008)
16. Greene, J., Azevedo, R.: The Measurement of Learners’ Self-Regulated Cognitive and
Metacognitive Processes While Using Computer-Based Learning Environments.
Educational Psychologist 45, 203–209 (2010)
17. Azevedo, R.: Does adaptive scaffolding facilitate students’ ability to regulate their learning
with hypermedia? Contemporary Educational Psychology 29, 344–370 (2004)
18. Azevedo, R., Hadwin, A.F.: Scaffolding Self-regulated Learning and Metacognition –
Implications for the Design of Computer-based Scaffolds. Instructional Science 33, 367–
379 (2005)
19. Luckin, R., Hammerton, L.: Getting to Know Me: Helping Learners Understand Their
Own Learning Needs through Metacognitive Scaffolding. In: Cerri, S.A., Gouardéres, G.,
Paraguaçu, F. (eds.) ITS 2002. LNCS, vol. 2363, pp. 759–771. Springer, Heidelberg
(2002)

20. Roll, I., Aleven, V., McLaren, B.M., Koedinger, K.R.: Designing for metacognition—
applying cognitive tutor principles to the tutoring of help seeking. Metacognition and
Learning 2, 125–140 (2007)
21. Roll, I., Aleven, V., McLaren, B.M., Koedinger, K.R.: Improving students’ help-seeking
skills using metacognitive feedback in an intelligent tutoring system. Learning and
Instruction 21, 267–280 (2011)
22. Porayska-Pomsta, K., Pain, H.: Providing Cognitive and Affective Scaffolding Through
Teaching Strategies: Applying Linguistic Politeness to the Educational Context. In: Lester,
J.C., Vicari, R.M., Paraguaçu, F. (eds.) ITS 2004. LNCS, vol. 3220, pp. 77–86. Springer,
Heidelberg (2004)
23. Aist, G., et al.: Experimentally augmenting an intelligent tutoring system with human-
supplied capabilities: adding human-provided emotional scaffolding to an automated
reading tutor that listens. In: Proceedings Fourth IEEE International Conference on
Multimodal Interfaces, pp. 483–490 (2002)
24. Sharma, P., Hannafin, M.J.: Scaffolding in technology-enhanced learning environments.
Interactive Learning Environments 15, 27–46 (2007)
25. Boyer, K.E., Phillips, R., Wallis, M., Vouk, M.A., Lester, J.C.: Balancing Cognitive and
Motivational Scaffolding in Tutorial Dialogue. In: Woolf, B.P., Aïmeur, E., Nkambou, R.,
Lajoie, S. (eds.) ITS 2008. LNCS, vol. 5091, pp. 239–249. Springer, Heidelberg (2008)
26. Fielding, R.T.: Architectural Styles and the Design of Network-based Software
Architectures. PhD thesis, University of California, Irvine (2000)
27. Macarthur, V., Moore, A., Mulwa, C., Conlan, O.: Towards a Cognitive Model to Support
Self-Reflection: Emulating Traits and Tasks in Higher Order Schemata. In: EC-TEL 2011
Workshop on Augmenting the Learning Experience with Collaborative Reflection,
Palermo, Sicily, Italy (2011)
28. Schraw, G., Sperling Dennison, R.: Assessing metacognitive awareness. Contemporary
Educational Psychology 19, 460–475 (1994)
29. Fill Giordano, R., Litzenberger, M., Berthold, M.: On the Assessment of strategies in self-
regulated learning (SRL)–differences in adolescents of different age group and school
type. Poster. 9. Tagung der Österreichischen Gesellschaft für Psychologie, Salzburg (2010)
30. Hart, S.G., Staveland, L.E.: Development of NASA-TLX (Task Load Index): Results of
Empirical and Theoretical Research. In: Hancock, P.A., Meshkati, N. (eds.) Human Mental
Workload, pp. 239–250. North Holland Press, Amsterdam (1988)
31. Hart, S.G.: NASA-Task Load Index (NASA-TLX); 20 Years Later. Proceedings of the
Human Factors and Ergonomics Society Annual Meeting 50, 904–908 (2006)
32. Brooke, J.: SUS: A “quick and dirty” usability scale. In: Jordan, P.W., Thomas, B.,
Weerdmeester, B., McClelland, A.L. (eds.) Usability Evaluation in Industry, pp. 189–194.
Taylor & Francis, London (1996)
33. SPSS Inc.: PASW Statistics 18 Core System User’s Guide. SPSS Inc. (2007)
34. Weinstein, C.E., Mayer, R.E.: The teaching of learning strategies. In: Wittrock, M. (ed.)
Handbook of Research on Teaching, pp. 315–327. Macmillan, New York (1986)
35. Pressley, M.: More about the development of self-regulation: Complex, long-term, and
thoroughly social. Educational Psychologist 30, 207–212 (1995)
36. Hetzner, S., Steiner, C., Dimitrova, V., Brna, P., Conlan, O.: Adult Self-regulated Learning
through Linking Experience in Simulated and Real World: A Holistic Approach. In: Kloos,
C.D., Gillet, D., Garcia, R.M.C., Wild, F., Wolpers, M. (eds.) EC-TEL 2011. LNCS,
vol. 6964, pp. 166–180. Springer, Heidelberg (2011)
37. Vollmeyer, R., Rheinberg, F.: Motivational effects on self-regulated learning with different
tasks. Educational Psychology Review 18, 239–253 (2006)
Paper Interfaces for Learning Geometry

Quentin Bonnard, Himanshu Verma, Frédéric Kaplan, and Pierre Dillenbourg

CRAFT, École Polytechnique Fédérale de Lausanne,


RLC D1 740, Station 20, 1015 Lausanne, Switzerland
{quentin.bonnard,h.verma,frederic.kaplan,pierre.dillenbourg}@epfl.ch

Abstract. Paper interfaces offer tremendous possibilities for geometry
education in primary schools. Existing computer interfaces designed to
learn geometry do not consider the integration of conventional school
tools, which form part of the curriculum. Moreover, most computer
tools are designed specifically for individual learning; some propose
group activities, but most disregard classroom-level learning, thus
impeding their adoption. We present an augmented reality based tabletop
system with interface elements made of paper that addresses these
issues. It integrates conventional geometry tools seamlessly into the
activity and it enables group and classroom-level learning. In order to
evaluate our system, we conducted an exploratory user study based on three
learning activities: classifying quadrilaterals, discovering the protractor
and describing angles. We observed how paper interfaces can be easily
adopted into traditional classroom practices.

Keywords: Paper interfaces, Sheets, Cards, Geometry learning, Tabletop.

1 Introduction
Geometry education in primary schools is a domain ripe for exploiting the possi-
bilities of computers, as they allow for an easy exploration of the problem space.
However, there are some constraints which make it difficult to effectively utilize
computers in a classroom scenario. Particularly, they do not cover the entire
curriculum, which is based on pen and paper. For example, the only way for
children to learn how to draw an arc is by using a physical compass.
Paper interfaces can prove to be an effective solution to this dilemma, as
paper is already situated and integrated in the classroom environment and its
practices. In addition, paper is cheap to produce, yet persistent and malleable
to adapt to the dynamics of the classroom. As a computer interface it can trans-
form into a dynamic display capable of computing and processing data. Besides
these benefits of paper interfaces, paper has different properties and affordances
depending upon its material, shape and size. Also, many interface metaphors,
such as cut-copy-paste, files and folders, and check-boxes, are actually inspired
by practices involving paper. Effective identification of these properties, followed
by proper utilization, might render paper interfaces intuitive for users to
interact with. We hypothesize that geometry education in primary schools can greatly

A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 37–50, 2012.

c Springer-Verlag Berlin Heidelberg 2012
38 Q. Bonnard et al.

benefit from the use of paper interfaces and their characteristics. For example,
folding is a natural embodiment of axial symmetries, and cutting can be the physical
counterpart of recomposing figures in order to compute their areas.
In this article, we present an augmented reality based tabletop system to
facilitate geometry learning for primary school children. Our system incorporates
a camera-projector device which is capable of projecting content over sheets and
cards placed on a table top. We also present three exploratory user studies to
study the influence and feasibility of using paper interfaces in primary schools.
We report on the observations related to these user studies with three different
geometry learning activities concerning shapes and angles.

2 Related Work

The domain of paper interfaces is broad and not very well defined, just as “paper”
is used as an umbrella term for a variety of artefacts and practices. The
archetype of using paper consists of writing with a pen (or a pencil) on a white
rectangular sheet, but paper interfaces have been built for book reading [1],
sticky notes [2], painting [3], presentation notes [4], trading cards [5], postcards
[6] and even cover sheets [7]. However, in this section we focus on the
approaches addressing education, and start with the work related to the use of
computers in geometry.

2.1 Computers in Geometry Education

Many researchers have studied the use of computers in geometry education,
involving software controlled by mouse and screen [8], augmented reality
systems [9], or emulation of pen and paper [10]. Garcia et al. [8] identified that
students appreciate the ability to repeat a geometrical construction (and replay
it step by step) as allowed by a computer. Also, Dynamic Geometry Software
(DGS) such as Cabri Géomètre [11] or GeoGebra1 enables learners to explore the
dynamic behavior of a geometrical construction, i.e. what moves and what remains
fixed under given constraints. Straesser [12] explains how DGS opens new
possibilities in geometry education, by enabling geometric constructions not easily
possible with pen and paper. However, the use of WIMP interfaces in teaching
involves the risk of spending more time learning the software than learning
geometry, as these interfaces are completely different from the typical geometry
tools frequently used in classrooms, such as the compass, ruler, etc. [13].
Augmented reality interfaces aim at making the interaction more natural by
integrating virtual elements in the real world. For example, Kaufmann and his
colleagues [14] addressed some of the shortcomings of learning spatial geometry
on a mouse/screen/keyboard system: with head mounted displays, the manipulation
of 3-dimensional objects is more direct, and they allow for face-to-face
collaboration. Martín-Gutiérrez and his colleagues [15] designed an augmented
1 www.geogebra.org
Paper Interfaces for Learning Geometry 39

book combined with a screen to develop spatial abilities in engineering students.
They measured a positive impact on spatial abilities, and the users found
the system easy to use, attractive, and useful. Underkoffler and Ishii [9] made
the reality-augmenting device even less intrusive than head mounted displays
or screens by using so-called I/O bulbs to simulate optics. I/O bulbs are
camera/projector systems above an interaction surface, allowing students to
manipulate tangible artefacts representing optical elements and see the effects on
the trajectory of light.
Oviatt and her colleagues [10] bring forward this intent of making the interface
as quiet as possible. They compared how students worked on geometrical problems
using pen and paper, and using interfaces approximating pen and paper with less and
less exactitude: a smart pen using a microscopic pattern, a pen tablet, and a
graphical tablet. They showed that the closer the interface is to the familiar work
practice (i.e. pen and paper), the better the performance.
To summarize, computers can add an essential dimension to geometry learn-
ing: dynamic information. However, existing educational interfaces for geometry
are not adapted to classroom education, where paper prevails. Thus, paper in-
terfaces can act as a bridge between computers and learning practices.

2.2 Paper-Based Interfaces in Education


We review the work related to paper interfaces in education based on the two
aspects of paper that can be useful in the educational context. The first one, in-
troduced by Wellner’s seminal paper [16] on linking digital documents with their
paper counterpart, presents paper as the support for working transparently on
a digital document and its physical copy. This aspect is important for education,
because it allows the researcher to study and extend the existing practice,
in order to integrate into the classroom more easily. Practices existing in the
classroom that can be augmented include taking notes [17], reading textbooks [18],
storytelling [19,20], or drawing schema [21].
The second aspect of paper useful for deployment in the classroom is its tan-
gible aspect. It provides a cheap, easy way to attach virtual elements to reality.
For example, Radu and MacIntyre [22] used cards for their tangible program-
ming environment for pupils. Song and her colleagues used a cube covered by
marked paper [23] to combine the advantages of digital and physical media.
Millner and Resnick [24] even used a paper plate to prototype a steering wheel
control, with printed buttons. Several frameworks [25,26,27] have already been
proposed to study the design space of tangible interfaces [28,29]. For example,
Hornecker and Dünser [30] showed that pupils expect the system to match the
physical properties of the tangible interface.
In both aspects, it is important to identify the properties of paper. Regarding
the work practices related to paper, McGee [31] analyzed the established usages
in order to list the properties that natural interfaces should have. In their litera-
ture review [32], Klemmer and Landay classified the other approaches based on
whether they were using a book, a document, a table, or a printer (among other
things).

To sum up, it has been identified that paper has two characteristics: it can be
annotated, and it can be easily manipulated. These two features are associated
with established classroom practices that can be used to design intuitive paper
interfaces. In this paper, we investigate the most common forms allowing
this: sheets and cards.

3 System Used

Our system for geometry education is built on the TinkerLamp [33], which is
a tabletop environment developed at our lab. The TinkerLamp, shown in
Figure 1, incorporates a camera and a projector directed to the tabletop surface
via a mirror, which extends the augmented surface. The augmented surface is
of dimension 70 × 40 cm. The camera and projector are connected to an embedded
computer, so that the interaction with the hardware is minimal for the end user:
switching it ON or OFF. It only requires to be plugged into an electric outlet.
We use fiducial markers similar to ARTags2 to identify and precisely track the
various elements of the interface. Since the interface is projected from the top, it
is possible to use interface elements (paper sheets and cards) as a projection
surface in addition to the tabletop surface.
The different interface elements mainly consist of paper sheets and cards. The
properties and behaviors of these interface elements are identified by the system
using the fiducial markers printed over them. In addition to paper elements, we
also use traditional geometry tools such as ruler and protractor as part of our
system. We refer to this kind of interface as a scattered interface [34].

Fig. 1. Our camera-projector on a table, along with various types of objects which
can be augmented: sheets, cards, tools and wooden blocks

2 http://www.artag.net

4 Exploratory User Study

In order to study the influence and potential of paper interfaces in geometry
education for primary school pupils, we used our system to design three learning
activities. These activities are based on cards and sheets, which are the two
elements of our interface. Each activity was designed while keeping in mind the
three circles of usability in the classroom - individual, group and classroom -
as examined by Dillenbourg et al. [35]. This was done in order to integrate the
system well in the conventional classroom curriculum.
Our analysis is based on observational field notes made during the experiment,
the videos from a panoramic camera placed under the lamp filming the pupils, and
the snapshots of the interaction surface taken every second by the camera of the
lamp. We logged the position of every fiducial marker with a time stamp, which
allowed us to replay the interaction with the system, since the fiducial markers are
the only input of the projected augmentation. This way, we could generate any
additional log from the software. From the information collected, it would be possible
to conduct more detailed analyses, however this will be the topic of future work.
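Since the markers are the only input to the augmentation, a session can be replayed from a simple timestamped log of marker positions, as described above. The sketch below illustrates this idea; the record format and field names are assumptions, not the system's actual log format.

```python
import csv
import io

def log_marker(writer, timestamp, marker_id, x, y):
    """Append one tracked-marker observation to a CSV log."""
    writer.writerow([timestamp, marker_id, x, y])

def replay(rows):
    """Yield (timestamp, marker_id, (x, y)) events in chronological order,
    which is enough to re-drive the projected augmentation."""
    for t, marker_id, x, y in sorted(rows, key=lambda r: float(r[0])):
        yield float(t), marker_id, (float(x), float(y))

# Write two observations out of order, then replay them sorted by time.
buf = io.StringIO()
writer = csv.writer(buf)
log_marker(writer, 1.5, "card_3", 12.0, 8.5)
log_marker(writer, 0.5, "sheet_1", 30.0, 20.0)
events = list(replay(csv.reader(io.StringIO(buf.getvalue()))))
```

Because the log is the complete input, any additional measure (e.g. time spent per sheet) can be derived offline from such a replay.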

4.1 First Activity: Classifying Quadrilaterals


We designed the first activity as a pedagogical script to introduce the clas-
sification of quadrilaterals (squares, rhombuses, trapezoids, etc.) as shown in
Figure 2a. The script consists of sheets, four cards, and a set of quadrilateral
cardboard shapes. Each of these elements has a fiducial marker to identify them
and they were produced with a regular printer. The cardboard shapes were num-
bered, so that they could be referenced from the sheets.

Fig. 2. First Activity: Classifying Quadrilaterals. (a) The components of the activity
about the classification of quadrilaterals: five cardboard quadrilaterals are classified
into two groups on the instruction sheet, a card shows the measure of the angles of a
rectangle, and the feedback card displays the validation text. (b) Configuration of the
tool cards into a test bench, where cardboard shapes (a trapezoid here) are brought to
display all their characteristics.

The sheets, carrying instructions, are shown in the left part of Figure 2a.
They consist of a short instructional text and two areas (marked with different
colors - gray and white) denoting two different classes of quadrilaterals. The
text instructs the learner to use the three cards shown on Figure 2b to find a

common characteristic in a subset of shapes, and separate them into two classes.
The cards have a small text describing their function. When a specific card is
brought close to a shape, the system will display the given characteristic of the
shape (such as side length, angle measures and parallel sides).
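The "brought close" trigger can be approximated by a distance threshold on the tracked marker positions. The following sketch illustrates that rule; the threshold value and coordinate units are assumptions for illustration, not taken from the actual system.

```python
import math

# Hypothetical proximity rule: a card reveals a shape's characteristic
# when the distance between their tracked markers falls below a threshold.
NEAR_THRESHOLD = 5.0  # assumed units (e.g. centimetres on the tabletop)

def is_near(card_pos, shape_pos, threshold=NEAR_THRESHOLD):
    """True if the two tracked positions are within the threshold distance."""
    dx = card_pos[0] - shape_pos[0]
    dy = card_pos[1] - shape_pos[1]
    return math.hypot(dx, dy) < threshold
```

A symmetric rule like this naturally supports both usages observed later: bringing the card to the shape, or the shape to the card.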
The learner is instructed to place a fourth card next to the current page
once the shapes are placed in the classification areas (see the top right part of
Figure 2a). If all shapes have not been placed in the areas, the learner will be
reminded to do so. If the grouping is not the expected one, the learner will be
invited to try again. If the grouping into areas is correct, the formulation of the
answer will appear, e.g. “Good job! Quadrilaterals with a pair of parallel sides are
called trapezoids”. The feedback is intentionally trivial; the cards are not meant
to replace teachers.
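The feedback card's three-way check described above (remind if shapes are unplaced, invite a retry if the grouping is wrong, reveal the answer if it is correct) can be sketched as follows. The shape identifiers, area labels, and message strings are hypothetical; only the three-way logic reflects the description.

```python
def partition(grouping):
    """Turn a shape -> area mapping into a set of shape groups,
    ignoring which of the two areas is which."""
    groups = {}
    for shape, area in grouping.items():
        groups.setdefault(area, set()).add(shape)
    return {frozenset(s) for s in groups.values()}

def feedback(grouping, all_shapes, expected_partition, answer_text):
    """Three-way validation: remind, retry, or reveal the formulated answer."""
    if set(grouping) != set(all_shapes):
        return "Please place all shapes in one of the two areas."
    if partition(grouping) != expected_partition:
        return "That is not the expected grouping -- try again!"
    return answer_text

all_shapes = {1, 2, 3, 4, 5}
expected = {frozenset({1, 3}), frozenset({2, 4, 5})}  # hypothetical classes
answer = ("Good job! Quadrilaterals with a pair of parallel sides "
          "are called trapezoids.")
```

Comparing partitions as sets of groups (rather than area labels) makes the check independent of which colored area a pupil chose for which class.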

Procedure and Discussion

This activity was deployed on two occasions in schools with pupils in the age
group of 7–10 years. On the first occasion, 13 pupils in groups of 2–3 worked on
the first sheet of the activity for 5 minutes. On the second occasion, the study
was performed with 12 pupils (in groups of 3) who worked on the complete
activity for 40 minutes. In both cases a short presentation of the system was
given to the whole class. Hereafter, we present the observed usage of the various
interface elements while identifying their characteristic behaviors.
Usage of the Cards
– Cards are used as scaffolding. It is crucial that pupils learn how to measure
using standard tools (ruler, protractor, etc.). However, once these skills are
mastered, manual measurement can become menial and waste time that
could be devoted to the main topic of the lesson. In this regard, the cards
acted as scaffolds for skills that pupils had already mastered well (measuring
side lengths). They also acted as scaffolds for skills that pupils had not yet
mastered (drawing parallel lines) but that were necessary to introduce another
concept (trapezoids).
– Cards provided easy-to-use functionalities. We observed that pupils had no
difficulty in using the cards, thanks to the printed self-descriptions and their
simple, easy-to-try functionalities. Cards were used in two ways: either they
were brought close to the shape to display properties, or the shape was
brought close to them.
– Cards allowed the composition of new functionalities. One group provided
an interesting example of appropriation of the interface. They created a test
bench by placing the tool cards together, and bringing the cardboard shapes
in the common neighborhood of all the cards so as to show all the related
information at once, as shown in Figure 2b.
Usage of the Sheets
– Sheets structured the activity in space. As opposed to the ephemeral workspaces
that can emerge with cards as seen previously with the test bench, sheets
Paper Interfaces for Learning Geometry 43

predefined a necessary workspace, i.e. the two areas corresponding to the
groups in which the cardboard shapes are to be placed.
– Sheets structured the activity in time. The sequence of exercises is also
predefined by the sheets. We note that this structure is flexible in the sense
that if the teacher wants to skip an exercise in the software, it is as simple as
skipping a page in the sheets.
Usage of the Cardboard Shapes
– Cardboard shapes are more concrete. Cardboard shapes could be replaced by
cards with a textual description or an illustration of the shape represented.
However, the lower level of abstraction provided by the visual match between
a geometrical shape and its cardboard representation reduces the cognitive
effort needed to discover common points between shapes.
– Cardboard shapes are persistent. The cardboard shapes in this activity have
an existence of their own, and not only in the context of our system. Since
they are made of cardboard and are not altered, they can be reused across
experiments.

4.2 Second Activity: Discovering the Protractor


We designed the second activity as an exploratory activity in which pupils learn
to use the protractor, after the teacher has introduced angles in the classroom.
This activity incorporates a deck of cards of two kinds: two angle control cards
and ten angle measure cards (see Figure 3a).

(a) The various elements of the task introducing angles: the two control cards and two of the angle measure cards. One is flipped and shows the measure of the angle constructed with the blue control card (70°).
(b) The drawing representing a protractor used on the pre- and post-test sheets for the task introducing angles is not necessarily associated with a real protractor by the pupils.
Fig. 3. Second Activity: Discovering the Protractor

These cards can be divided into two groups based on the orange or blue icon
printed on them. These two colors indicate the direction of measurement of angles:
orange cards correspond to clockwise measurement, whereas blue cards denote
counter-clockwise measurement. This distinction had been identified during our

collaboration with the teachers as the main difficulty when learning to use a
protractor. When a control card is shown to the system, an angle appears, with
its origin in the centre of the projection area, an extremity on the centre of the
control card, and the other extremity fixed horizontally on the left or right side
of the origin (the X-axis), depending on whether the card is orange or blue (see
Figure 3a).
Each angle measure card has a different angle value (in degrees) printed on
it, along with instructions to construct an angle using the corresponding
control card. To check whether the produced angle matches the value indicated
on the angle measure card, the measure card is flipped and the current angle
value is displayed in a color depending on the degree of error (green for
correct, yellow for close enough, and red otherwise).
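The color-coded feedback can be sketched as a simple mapping from the angle error to a color; the tolerance thresholds below are illustrative assumptions, since the paper does not state the exact values used by the system.

```python
def feedback_color(target_deg, built_deg, correct_tol=1.0, close_tol=5.0):
    """Map the error of the built angle to a feedback color.

    correct_tol and close_tol are assumed thresholds (in degrees);
    the paper does not specify the actual values.
    """
    error = abs(built_deg - target_deg)
    if error <= correct_tol:
        return "green"   # correct
    if error <= close_tol:
        return "yellow"  # close enough
    return "red"         # otherwise
```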

Procedure and Discussion

This activity was conducted with 106 pupils (aged 8–10 years) from 4 classes,
working in groups of 2. Each group was required to go through 10 angle measure
cards in 10 minutes, with individuals taking turns to measure angles. For the
first 2 classes, the experimenters distributed the cards in a designated order,
one after the other. For the other 2 classes, the whole stack of cards was given
to the group and no ordering was enforced. The pupils were also asked to take
a pre-test and a post-test on paper, in which they had to identify and write
down the angle measures next to a printed protractor, as shown in Figure 3b.
Next, we present our observations about how different interface elements were
used.
Usage of Cards

– Cards materialize roles. This activity provides an example of group regulation
via shared resources, as cards simply showed who was manipulating or
checking. Also, time is regulated via the ownership of the control card. The
pupils would try to homogenize the time each of them spent manipulating,
as it is obvious who is doing all the work (i.e. having all the fun). This is
beneficial since a lack of balance has been shown to reduce the benefits of
learning in groups [36]. Similarly, having to share the control will encourage
its negotiation, which has been shown to lead to greater learning gains [37].
– Cards materialize progress. Often, the measure cards were kept next to the
pupil who managed to build the corresponding angle, acting as a trophy.
Apart from the gain in engagement for the pupils, it is also a valuable help
towards orchestration of the classroom, which refers to the teacher’s respon-
sibility to identify and manage the evolving learning opportunities and con-
straints, in real-time [35]. In this case, a teacher can easily get an instant
summary of what each pupil did, and react accordingly.
– Cards materialize the mode. Cards also materialize even more ephemeral
parts of the interaction, such as the current mode (building or checking). In
this activity, this had a great impact on engagement: all the groups
preferred switching the feedback on and off for the sake of suspense rather

than continuously displaying it. When we told them that they could also display
the variations of the measure of the angle being built, one pupil answered:
“I’m hiding it to see if [the pupil manipulating the control card] manages to
build the angle”.
– The order of the cards did not matter. This activity revealed that the order
of the cards is not important, as pupils often selected the measure card they
were most comfortable with. For example, a group skipped all the
cards corresponding to clockwise angles. Out of the eight groups for which
the order of the cards was not enforced, only two followed the designated
sequence of cards. Two groups skipped angles of a given orientation (one
built only clockwise angles while the other only counter-clockwise angles).

Usage of Tools

– Tools cannot be replaced. During the study, we realized the importance of
using a real protractor and not a printed representation. The pre-test and
post-test did not give any statistically significant results due to a ceiling
effect, but yielded an interesting anecdote. During the pre-test, one of the
pupils counted each increment of the graduation within the angle instead of
reading the measure directly. During the activity, she correctly read the
measure directly on the protractor. Again, during the post-test, she counted
the increments. She clearly did not match the printed graduation with the one
on the real protractor.

4.3 Third Activity: Describing Angles

Whereas the second activity was designed to introduce the concept of measur-
ing angles, the third activity regards describing and communicating angles. In
order to communicate an angle to someone, the pupil has to describe the an-
gle measure, direction of measurement (clockwise or anti-clockwise) as well as
the most convenient reference for measuring this angle (which axis to choose).
To this end, the third activity was designed as a game to get rid of space junk:
non-functional satellites that continue to orbit around Earth. We consider that
there are 3 laser guns deployed at 3 locations around Earth, capable of
destroying space junk (see the right side of Figure 4). This activity also
allows for the use of the protractor during the problem-solving task.

Fig. 4. The various elements of the problem using protractors
We divide a group of 4 pupils into 2 collaborating teams (of 2 pupils) and call
them observers and controllers, with a physical separation between them (see
Figure 5). The observers have a sheet (right side of Figure 4) with the view of

Fig. 5. The observer team measures the orientation to give to the laser (left), and
communicates it to the controller team (right)

Earth along with all the satellites printed on it. Already destroyed satellites
are highlighted in green and the next target in red. The position of each of
the three laser guns, along with its baseline (axis), is also printed on the
observer sheet. The observers are supposed to draw a line from one of the 3
laser guns to the target satellite (using a ruler). Next, they use the
protractor to measure the angle of this line with respect to the horizontal
axis of the chosen laser. Finally, the observers have to describe this angle,
the direction of measurement, and which laser gun to use to the controller
team, by writing this information on a small piece of paper. This piece of
paper serves as ammunition for the laser gun.
The controllers are provided with 3 sheets corresponding to the 3 laser guns
(see the left part of Figure 4). The controllers can change the inclination of
the appropriate laser using the control card (similar to the one used in the
second activity). They reproduce the angle received from the observers using a
protractor. Finally, the lasers can be activated by flipping the small paper
received from the observers, which contains a fiducial marker. Before firing, a
yellow rectangle grows for 3 seconds over the ammunition (small paper),
allowing the pupils to cancel the shot by flipping back or hiding the ammunition.
The trajectory of the laser is shown for 3 seconds on the sheets of the
controllers (laser gun) and observers (Earth view), with a fading blue line. If
the satellite is hit, the ammunition turns green; otherwise it turns red,
indicating a missed shot (see the centre top part of Figure 4). Each ammunition
can only be used for a single shot, and the pupils are supplied with a limited
number of them, in order to avoid trial-and-error strategies.
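The hit test behind this mechanic can be approximated with basic trigonometry. The sketch below assumes angles measured counter-clockwise from the horizontal axis of the chosen laser, and the hit tolerance is an invented parameter, not a value from the paper.

```python
import math

def shot_hits(gun_pos, angle_deg, sat_pos, tol_deg=2.0):
    """Return True if a shot fired from gun_pos at angle_deg passes
    close enough (within tol_deg) to the satellite at sat_pos.

    Positions are (x, y) tuples; angles are measured counter-clockwise
    from the horizontal axis. tol_deg is an illustrative assumption.
    """
    dx = sat_pos[0] - gun_pos[0]
    dy = sat_pos[1] - gun_pos[1]
    target = math.degrees(math.atan2(dy, dx))
    # smallest absolute difference between the two angles, in degrees
    error = abs((target - angle_deg + 180) % 360 - 180)
    return error <= tol_deg
```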

Procedure and Discussion


We ran this activity on two occasions: once with 140 pupils from 7 classes and
another time with 41 pupils from 2 classes. Groups of 4 pupils (2 observers and 2
controllers) were asked to complete this activity on a single system. Each group
was given 25 minutes with this activity and was asked to shoot as many
satellites as possible. During the first study, we used 6 systems in a single room,
while 2 systems were used in the second study. Next, we present our observations
about the way sheets were used by groups during this activity.

Usage of Sheets

– The workspace of a sheet was a stable referential. Both observers and con-
trollers placed their protractors on a sheet, which became a referential. All
the groups but one kept the satellite view in the same orientation, even
though it would have been easier to rotate the sheet before drawing the lines
or measuring the angles.
– Progress was written on the sheet. While cards can act as ephemeral trophies,
sheets durably store the progression in ink. The orchestration of a whole
class was made much easier by the fact that the satellite view kept track
of the intended trajectories in the form of lines between the location of the
laser on Earth and the satellite. It helped to diagnose which part of the
group (the observers or the controllers) was wrong in their measurement.
The annotations on the ammunitions kept track of the progress of the group.
The main difficulty in the activity is to establish a convention to describe
and communicate an angle without seeing it. Giving the measurement was
obvious, and the pupils would quickly realize that the origin of the shot
(i.e. which laser to use) has to be communicated too. The trickier part
concerned the orientation of the angle. Since each shot has to be described
on the ammunitions, it was easy to track when the pupils started to realize
which information was needed.
– Sheets do not restrict expressiveness. When we explained the activity to the
pupils, we intentionally remained vague on how to describe the angle, simply
hinting that there were several pieces of information to provide. The angle
measure and the laser gun to use were easily given as numbers. However, the
pupils did not have an established convention for the orientation. This con-
structivist exploration paves the way for the teacher to explain the concept
of clockwise and counter-clockwise measurement, since the need has been
felt directly.

5 Conclusion

The tabletop system presented in this article was designed to facilitate geometry
learning for primary school pupils. As existing classroom curriculum is based on
paper and conventional geometry tools (ruler, compass, etc.), our system incor-
porates paper sheets and cards as the two main interface elements. We designed
and conducted 3 exploratory user studies focusing on the different usages of
sheets and cards, in order to study the impact and potential of paper interfaces
in geometry learning. Our observations show very positive results regarding the
adoption of paper interfaces by the pupils, as the use of sheets and cards was
easily perceived and minimal effort was required to learn how to use the interface.
Our system takes into account the three circles of usability outlined by Dillen-
bourg et al. [35]: individual, group and classroom. On the individual level the
pupils were highly engaged and participated actively in the activities, even in
the classes that were less affected by the novelty effect in our subsequent visits.

This is a success given that the activities revolved around using a (boring) pro-
tractor, or classifying quadrilaterals. On the group level, the system naturally
promoted collaboration, allowing pupils to help each other and learn in teams.
At the classroom level, the paper interface enabled the teacher to monitor the
progress of teams and thus orchestrate the classroom activities accordingly. This
aims at facilitating smooth integration and adoption of computers in the entire
curriculum.
In addition, our observations regarding the characteristics of sheets and cards
provided insights about the affordances of the different paper elements. On one
hand, we observed that sheets are important for their content. Sheets were used
to organize the discourse on two levels. On the first level, the layout on a single
page encodes the order in which to read the various information and proceed
with the activity. On the second level, several sheets can be organized together
in a sequence (by stapling or binding), which enables us to implement several
lessons or exercises similar to a book. As the trace of a pen is persistent over
sheets, they can act as a permanent memory, which can be used as a way to
trace the performance of pupils during a learning activity, or display publicly
the progress within the group.
On the other hand, cards are mostly used as physical bodies. The position of
a card is usually relative, and bringing one close to another element allows
additional properties to be shown. Further, the side of a card is another useful
property; it can be flipped to control a binary value. In general, cards can
materialize the reversible and ephemeral pieces of interaction according to
rules. For example, the presence of a card on the table or next to a pupil
indicated that pupil’s role in the group.
We believe that careful identification of these characteristics of paper interface
elements might provide crucial design guidelines towards the development of
paper interfaces for education in general. The affordances of different paper
elements (depending on the shape, size and material) render the interfaces easy-
to-use and highly intuitive.
In the future, we would like to conduct a formal evaluation of the effects of
paper interfaces on learning. We would also like to investigate the technological
issues related to the deployment of the system and to learning design. The aim
would be to enable teachers to set up pedagogical experiences without assistance
from researchers. This would naturally link the activities to specific mathematics
learning theories.

Acknowledgement. The authors wish to sincerely thank the teachers for their
precious collaboration, as well as Olivier Guédat for building the TinkerLamp;
Michael Chablais, Chia-Jung Chan Fardel, and Carlos Sanchez Witt for their
contribution to the activities during their internship.

References
1. Klemmer, S.R., Graham, J., Wolff, G.J., Landay, J.A.: Books with voices: paper
transcripts as a physical interface to oral histories. In: CHI 2003, pp. 89–96. ACM,
New York (2003)

2. Moran, T.P., Saund, E., Van Melle, W., Gujar, A.U., Fishkin, K.P., Harrison,
B.L.: Design and technology for collaborage: collaborative collages of information
on physical walls. In: UIST 1999, pp. 197–206. ACM, New York (1999)
3. Flagg, M., Rehg, J.: Projector-guided painting. In: UIST 2006, pp. 235–244. ACM
(2006)
4. Nelson, L., Ichimura, S., Pedersen, E.R., Adams, L.: Palette: a paper interface for
giving presentations. In: CHI 1999, pp. 354–361. ACM, New York (1999)
5. Lam, A.H.T., Chow, K.C.H., Yau, E.H.H., Lyu, M.R.: Art: augmented reality table
for interactive trading card game. In: VRCIA 2006, pp. 357–360. ACM, New York
(2006)
6. Cho, H., Jung, J., Cho, K., Seo, Y.H., Yang, H.S.: Ar postcard: the augmented
reality system with a postcard. In: VRCIA 2011, pp. 453–454. ACM, New York
(2011)
7. Hong, J., Price, M.N., Schilit, B.N., Golovchinsky, G.: Printertainment: printing
with interactive cover sheets. In: CHI EA 1999, pp. 240–241. ACM, New York
(1999)
8. Garcı́a, R., Quirós, J., Santos, R., González, S., Fernanz, S.: Interactive multimedia
animation with Macromedia Flash in Descriptive Geometry teaching. Computers
& Education 49(3), 615–639 (2007)
9. Underkoffler, J., Ishii, H.: Illuminating light: a casual optics workbench. In: CHI
EA 1999, pp. 5–6. ACM, New York (1999)
10. Oviatt, S., Arthur, A., Brock, Y., Cohen, J.: Expressive pen-based interfaces for
math education. In: International Society of the Learning Sciences (CSCL), pp.
573–582 (2007)
11. Laborde, C.: The computer as part of the learning environment: the case of geom-
etry. In: Keitel, C., Ruthven, K. (eds.) Learning from Computers: Mathematics
Education and Technology, pp. 48–67. Springer (1993)
12. Straesser, R.: Cabri-geometre: Does dynamic geometry software (dgs) change geom-
etry and its teaching and learning? International Journal of Computers for Math-
ematical Learning 6(3), 319–333 (2002)
13. Kortenkamp, U., Dohrmann, C.: User Interface Design for Dynamic Geometry
Software. Acta Didactica Napocensia 3 (2010)
14. Kaufmann, H., Dünser, A.: Summary of Usability Evaluations of an Educational
Augmented Reality Application. In: Shumaker, R. (ed.) HCII 2007 and ICVR 2007.
LNCS, vol. 4563, pp. 660–669. Springer, Heidelberg (2007)
15. Martín-Gutiérrez, J., Luís Saorín, J., Contero, M., Alcañiz, M., Pérez-López, D.,
Ortega, M.: Design and validation of an augmented book for spatial abilities de-
velopment in engineering students. Computers & Graphics 34(1), 77–91 (2010)
16. Wellner, P.: Interacting with paper on the DigitalDesk. Communications of the
ACM 36(7), 87–96 (1993)
17. Malacria, S., Pietrzak, T., Tabard, A., Lecolinet, É.: U-Note: Capture the Class and
Access It Everywhere. In: Campos, P., Graham, N., Jorge, J., Nunes, N., Palanque,
P., Winckler, M. (eds.) INTERACT 2011, Part I. LNCS, vol. 6946, pp. 643–660.
Springer, Heidelberg (2011)
18. Asai, K., Kobayashi, H., Kondo, T.: Augmented instructions - a fusion of aug-
mented reality and printed learning materials. In: ICALT, pp. 213–215. IEEE
Computer Society (2005)
19. Portocarrero, E., Robert, D., Follmer, S., Chung, M.: The Never Ending Sto-
rytelling Machine a platform for creative collaboration using a sketchbook and
everyday objects. In: Proc. PaperComp 2010 (2010)

20. Koike, H., Sato, Y., Kobayashi, Y., Tobita, H., Kobayashi, M.: Interactive textbook
and interactive venn diagram: natural and intuitive interfaces on augmented desk
system. In: CHI 2000, pp. 121–128. ACM, New York (2000)
21. Lee, W., de Silva, R., Peterson, E., Calfee, R., Stahovich, T.: Newton’s Pen: A pen-
based tutoring system for statics. Computers & Graphics 32(5), 511–524 (2008)
22. Radu, I., MacIntyre, B.: Augmented-reality scratch: a children’s authoring envi-
ronment for augmented-reality experiences. In: IDC 2009, pp. 210–213. ACM, New
York (2009)
23. Song, H., Guimbretière, F., Ambrose, M.A., Lostritto, C.: CubeExplorer: An Eval-
uation of Interaction Techniques in Architectural Education. In: Baranauskas, C.,
Abascal, J., Barbosa, S.D.J. (eds.) INTERACT 2007, Part II. LNCS, vol. 4663,
pp. 43–56. Springer, Heidelberg (2007)
24. Millner, A., Resnick, M.: Tools for creating custom physical computer interfaces.
Demonstration presented at Interaction Design and Children, Boulder, CO (2005)
25. Ullmer, B., Ishii, H.: Emerging frameworks for tangible user interfaces. IBM Sys-
tems Journal 39(3.4), 915–931 (2000)
26. Ullmer, B., Ishii, H., Jacob, R.: Token+ constraint systems for tangible interac-
tion with digital information. ACM Transactions on Computer-Human Interaction
(TOCHI) 12(1), 81–118 (2005)
27. Fishkin, K.: A taxonomy for and analysis of tangible interfaces. Personal and Ubiq-
uitous Computing 8(5), 347–358 (2004)
28. Fitzmaurice, G.: Graspable user interfaces. PhD thesis, Citeseer (1996)
29. Ishii, H., Ullmer, B.: Tangible bits: towards seamless interfaces between people,
bits and atoms. In: CHI 1997, pp. 234–241. ACM, New York (1997)
30. Hornecker, E., Dünser, A.: Of pages and paddles: Children’s expectations and
mistaken interactions with physical-digital tools. Interacting with Computers
21(1-2), 95–107 (2009)
31. McGee, D.R.: Augmenting environments with multimodal interaction. PhD thesis,
Oregon Health & Science University (2003), AAI3100651
32. Klemmer, S.R., Landay, J.A.: Toolkit support for integrating physical and digital
interactions. Human-Computer Interaction 24(3), 315–366 (2009)
33. Zufferey, G., Jermann, P., Dillenbourg, P.: A tabletop learning environment for
logistics assistants: activating teachers. In: Proceedings of the Third IASTED In-
ternational Conference on Human Computer Interaction, HCI 2008, pp. 37–42.
ACTA Press, Anaheim (2008)
34. Cuendet, S., Bonnard, Q., Kaplan, F., Dillenbourg, P.: Paper interface design for
classroom orchestration. In: Proceedings of the 2011 Annual Conference Extended
Abstracts on Human Factors in Computing Systems, CHI EA 2011, pp. 1993–1998.
ACM, New York (2011)
35. Dillenbourg, P., Zufferey, G., Alavi, H.S., Jermann, P., Do, L.H.S., Bonnard, Q.,
Cuendet, S., Kaplan, F.: Classroom orchestration: The third circle of usability. In:
Connecting Computer-Supported Collaborative Learning to Policy and Practice:
CSCL 2011 Conference Proceedings. Volume I - Long Papers. International Society
of the Learning Sciences, pp. 510–517 (2011)
36. Cohen, E.: Restructuring the classroom: Conditions for productive small groups.
Review of Educational Research 64(1), 1 (1994)
37. Do-Lenh, S., Kaplan, F., Dillenbourg, P.: Paper-based concept map: the effects of
tabletop on an expressive collaborative learning task. In: Proceedings of the 23rd
British HCI Group Annual Conference on People and Computers: Celebrating
People and Technology, BCS-HCI 2009, pp. 149–158. British Computer Society,
Swinton (2009)
The European TEL Projects Community
from a Social Network Analysis Perspective

Michael Derntl and Ralf Klamma

RWTH Aachen University


Advanced Community Information Systems (ACIS)
Informatik 5, Ahornstr. 55, 52056 Aachen, Germany
{derntl,klamma}@dbis.rwth-aachen.de

Abstract. In this paper we draw a community landscape of European


Commission-funded TEL projects and organizations in the 6th and 7th
Framework Programmes and eContentplus. The project metadata were
crawled from the web and maintained as part of the TEL Mediabase,
a large collection of data obtained from different web sources including
blogs, bibliographies, and project fact sheets. We apply social network
analysis and impact analysis on the project consortium progression graph
and on the organizational collaboration graph to identify the most cen-
tral TEL projects and organizations. The key findings are that networks
of excellence and integrated projects have the strongest impact on the
project network; that eContentplus was a funding bridge between FP6
and FP7; and that the tightly knit collaboration network may inhibit the
assimilation of new organizations and ideas into the TEL community.

1 Introduction

For many years, technology enhanced learning (TEL) has been a well-funded
thematic area in the work programmes of the European Community’s (EC)
Framework Programmes (FP). Current projects like the STELLAR net-
work of excellence [http://stellarnet.eu] and the TEL-Map support ac-
tion [http://telmap.org] take a supporting role for the EC and other TEL
stakeholders to provide input to future challenges and roadmapping for the Eu-
ropean TEL community. This is evidence that the TEL community has a genuine
interest in its thematic and collaborative structures and dynamics. To facilitate
this kind of self-reflection based on available data, we established the Mediabase
[1] in 2006 as a collection of TEL-related social media artifacts that have
been crawled from the web, fed into relational databases, and analyzed us-
ing web-based tools available to the TEL community for self-observation and
self-modeling [2].
Following that spirit, this paper focuses on the macro level of funding and
organizational collaboration in European TEL by analyzing the current status
and historic evolution of the landscape of TEL projects and organizations using
social network analysis (SNA) techniques. Specifically, we research these aspects:

A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 51–64, 2012.

© Springer-Verlag Berlin Heidelberg 2012
52 M. Derntl and R. Klamma

– Project progression: trace sustained organizational collaboration ties be-
tween consecutive projects to identify cliques of successful collaborators and
derive a measure of project impact; identify the most central projects, and
consortia that unite central organizations.
– Inter-organizational collaboration: identify central participating organiza-
tions in terms of SNA metrics (e.g. betweenness centrality, PageRank, and
clustering in the organizational collaboration network) and EC funding.
– Times series analysis: analyze the development of SNA metrics in the project
progression network and in the organizational collaboration networks since
the start of FP6.
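These analyses rely on standard SNA metrics over graphs built from consortium data. As an illustrative sketch (not the authors' actual pipeline), the metrics named above can be computed with networkx on a toy collaboration graph; all project and organization names below are invented.

```python
from itertools import combinations
import networkx as nx

# Toy data: each project maps to its consortium (set of organizations)
projects = {
    "ProjA": {"Org1", "Org2", "Org3"},
    "ProjB": {"Org2", "Org3", "Org4"},
    "ProjC": {"Org4", "Org5"},
}

# Organizational collaboration graph: organizations are nodes, and an
# edge links two organizations that share at least one consortium
G = nx.Graph()
for consortium in projects.values():
    G.add_edges_from(combinations(sorted(consortium), 2))

betweenness = nx.betweenness_centrality(G)
pagerank = nx.pagerank(G)
clustering = nx.clustering(G)
# Org4 bridges ProjC to the rest, so it scores highest on betweenness
```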

The paper is structured as follows. In the next section we discuss related work.
In Section 3 we outline the TEL projects community data set and formal founda-
tions of projects as social networks. In Section 4 we analyze project progression,
i.e. the sustained collaboration of project partners in follow-up projects. In
Section 5 we analyze the organizational collaboration network in TEL projects. In
Section 6 we highlight the dynamics in these social networks since the start of
FP6. Sections 7 and 8 sum up the key findings and conclude the paper.

2 Related Work

An analysis of consortium involvements of TEL partners in other projects, part-
ners, and events was previously conducted and reported [3] by the STELLAR
network of excellence. This analysis was centered on STELLAR as the central
entity. A more general approach was adopted in [4], where the authors define a
formal model for social network analysis of all projects funded by the EC in FP1
through FP4. They model projects and organizations as graphs and apply SNA
algorithms to identify the overall characteristics of these R&D networks. They
identify these networks as being typical of complex, scale-free networks with
small diameter and high clustering. Similar findings are reported in [5] following
a social network analysis of the first five FPs.
In a recent paper [6], the authors model the affiliation network of FP6 projects
using an agent-event metaphor. In their model, organizations are agents, who
participate in projects, which represent the events. This model was employed to
obtain general insight into FP6 collaborations using SNA techniques, studying
the effects of different network representations—one-mode network (actors and
events separated) vs. two-mode network (actors and events unified)—on the
analyses and results. A similar study that focused on the forming and evolving
of these networks is reported in [7], finding the emergence of dense hierarchical
networks resting on an “oligarchic core” of participants.
Previous work also includes the application of community detection in R&D
project collaborations, e.g. in [8] the authors apply community detection on
the FP6 organizational collaboration graph, with a main interest in the role of
nationality and type of organizations. An analysis of FP5 collaboration with
emphasis on the role of geographical or technological proximity of partners is

reported in [9]. The authors found that both factors impact the collaboration
network, with technological proximity having the greater effect.
In [10], an analysis of the collaboration networks in FP4 is performed, with
particular emphasis on the differences between the telecommunications and
agro-industrial sectors. The paper focuses on specific thematic areas and their
comparative characteristics in terms of consortium size, required funding, and
the role of scientific vs. industry partners.
In this paper, we build on the ideas and formal foundations laid out in these
previous research endeavors. Our locus of interest, however, is different and
unique: we are interested in a particular thematic area, i.e. technology enhanced
learning, and draw insights and findings of relevance to participants and stake-
holders in that particular community. In a related presentation [11] we formu-
lated first ideas about project impact based on social network analysis, but
without proposing a formal measure of impact. To close this gap, the present
paper reports analyses of the R&D networks in TEL from a temporal-dynamic
perspective, going beyond previous work by proposing and applying a measure
of impact of project consortia on the TEL collaboration landscape.

3 The European TEL Projects Community

Data Set. The TEL projects database includes details on TEL projects funded
under FP6, FP7, and eContentplus programmes. In total, the metadata of 77
TEL projects (see Table 1) were collected and used for the analyses presented
in this paper. The data includes detailed information on the projects like start
and end dates, cost, EC funding, and consortium members (in total there are
604 distinct organizations participating in these 77 projects). The database was
fed by a crawler that was deployed to collect and scrape project facts from the
CORDIS website [12] for FP6 and FP7 projects, as well as from the respective
eContentplus pages. As evident in Table 1, only projects funded under TEL
related calls were included in the data set.1
In the CORDIS data we found several typos and spelling variants in the names of
organizations and countries; the list of organizations was therefore manually
post-processed to merge variants and correct spelling errors. Organizational
name changes were not accounted for: for instance, Giunti Labs S.r.l. was
rebranded to eXact Learning Solutions in 2010, and such rebranded organizations
are represented as separate entities in the data set. Likewise, organizational
mergers are not accounted for, e.g. ATOS Origin and Siemens Learning, which
merged in 2011, and Aalto University in Finland, which was established in 2010
as a merger of three universities. Also, the CORDIS fact sheets contain some
omissions; one that was discovered is the fact sheet of the EdReNe eContentplus
project, which lists only the coordinator without any of the consortium members.
Finally, many projects from other TEL-related funding channels are not yet
included.
1
A browseable version of the complete TEL projects data set is available on the
Learning Frontiers portal at http://learningfrontiers.eu/?q=project_space
54 M. Derntl and R. Klamma

Table 1. TEL projects data set


Call | Projects | #
eContentplus Call 2005 | CITER, JEM, MACE, MELT | 4
eContentplus Call 2006 | COSMOS, EdReNe, EUROGENE, eVip, Intergeo, KeyToNature, Organic.Edunet | 7
eContentplus Call 2007 | ASPECT, iCOPER, EduTubePlus | 3
eContentplus Call 2008 | LiLa, Math-Bridge, mEducator, OpenScienceResources, OpenScout | 5
FP6 IST-2002-2.3.1.12 | CONNECT, E-LEGI, ICLASS, KALEIDOSCOPE, LEACTIVEMATH, PROLEARN, TELCERT, UNFOLD | 8
FP6 IST-2004-2.4.10 | APOSDLE, ARGUNAUT, ATGENTIVE, COOPER, ECIRCUS, ELEKTRA, I-MAESTRO, KP-LAB, L2C, LEAD, PALETTE, PROLIX, RE.MATH, TENCOMPETENCE | 14
FP6 IST-2004-2.4.13 | ARISE, CALIBRATE, ELU, EMAPPS.COM, ICAMP, LOGOS, LT4EL, MGBL, UNITE, VEMUS | 10
FP7 ICT-2007.4.1 | 80DAYS, GRAPPLE, IDSPACE, LTFLL, MATURE, SCY | 6
FP7 ICT-2007.4.4 | COSPATIAL, DYNALEARN, INTELLEO, ROLE, STELLAR, TARGET, XDELIA | 7
FP7 ICT-2009.4.2 | ALICE, ARISTOTELE, ECUTE, GALA, IMREAL, ITEC, METAFORA, MIROR, MIRROR, NEXT-TELL, SIREN, TEL-MAP, TERENCE | 13

Projects as Social Networks. A collaborative project can be modeled as a
social network [13,6]. A social network is modeled as a graph G = (V, E), with V
being the set of vertices (or nodes) and E being the set of edges connecting the
vertices with one another [14]. We define P as the set of projects, and O as the
set of organizations involved in these projects. Similar to [6], we define a function
μ representing the membership of any organization o ∈ O in the consortium of
any TEL project p ∈ P as follows, to enable graph-based analyses of the TEL
project and organization networks:

μ : P × O → {true, false},   μ(p, o) = true if o is or was a member of the
consortium of p, and false otherwise.
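The membership function μ above can be sketched in Python; the project and organization names below are hypothetical, and `mu` is an illustrative helper, not the authors' implementation:

```python
# Hypothetical mini data set: project -> set of consortium members.
# (The real data set holds 77 projects and 604 distinct organizations.)
consortia = {
    "P1": {"OrgA", "OrgB", "OrgC"},
    "P2": {"OrgB", "OrgC", "OrgD"},
    "P3": {"OrgE"},
}

def mu(p, o):
    """True iff organization o is or was a member of the consortium of project p."""
    return o in consortia.get(p, set())

print(mu("P1", "OrgB"))  # True
print(mu("P3", "OrgA"))  # False
```

Representing each consortium as a set makes the graph-based analyses below (consortium overlaps, shared-member counts) direct set operations.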

4 Project Consortium Progression

The project consortium progression graph GP = (VP, EP) contains projects as
nodes (VP = P) and their successor relationships as directed edges. A successor
relationship between two projects is established if (1) at least k organizations
have participated in both projects, and (2) the successor project started at least
t time units after the predecessor project. Let s : P → R map projects to their
start points in time, represented for simplicity as real numbers; then we can
define the set of edges as

EP = {(u, v) : u, v ∈ VP ∧ s(v) − s(u) ≥ t ∧ |{o ∈ O : μ(u, o) ∧ μ(v, o)}| ≥ k}.
The least restrictive parameter pair k = 1, t = 0 produces a graph including all
77 TEL projects and a total of 712 edges (note that t = 0 implies that the graph
also includes edges between projects that started at the same time). Looking
for a reasonable value of k to represent consortium progression in the sense of
continued collaboration, we require at least two consortium members present in
Fig. 1. Project progression graph spanning FP6, FP7, and eContentplus TEL projects

a successor project, i.e. k = 2. The time span parameter t ∈ R≥0 should allow
for a time gap between two consecutive project starts to reasonably establish a
predecessor-successor relationship. Of course the time span between call deadline
and project start may vary, but the actual point in time when partners team
up for a project proposal is not represented in our data. So we let months be
the unit of time and chose t = 3 months as the lower threshold for the time
between the starts of two projects. With this threshold the younger project can
reasonably be considered a successor of the older project.
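The successor-edge definition above can be sketched as follows, assuming the hypothetical data and helper name below (the paper itself uses k = 2 and t = 3 months):

```python
# Hypothetical mini data set: consortium members and start months per project.
consortia = {
    "P1": {"OrgA", "OrgB", "OrgC"},
    "P2": {"OrgB", "OrgC", "OrgD"},
    "P3": {"OrgA", "OrgE"},
}
start = {"P1": 0, "P2": 6, "P3": 1}  # start points in time, in months

def progression_edges(consortia, start, k=2, t=3):
    """Directed edges (u, v): v started >= t months after u and shares >= k members."""
    edges = set()
    for u in consortia:
        for v in consortia:
            if u != v \
               and start[v] - start[u] >= t \
               and len(consortia[u] & consortia[v]) >= k:
                edges.add((u, v))
    return edges

print(progression_edges(consortia, start))  # {('P1', 'P2')}
```

With k = 1, t = 0 the helper would also admit the weak edge P1 → P3 (one shared member, one month apart), illustrating why the stricter thresholds are used.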
The resulting graph for k = 2, t = 3 is illustrated in Fig. 1, including 68 nodes
and 198 edges. The size of each node and its label in the figure is proportional to
its weighted degree centrality, i.e. the number of adjacent projects weighted by
consortium overlap. The graph layout was produced by applying the ForceAt-
las layout in Gephi. Evidently, KALEIDOSCOPE—one of the two “inaugural”
TEL Networks of Excellence in FP6—is the most degree-central node, boosted
by its large consortium of 83 partners, which is more than five times the aver-
age consortium size of 14.5 in FP6. The graph also reveals that in addition to
strong ties between FP6 and FP7 projects, several eContentplus projects (e.g.
OpenScout, iCOPER, ASPECT) have central positions and strong connections

with projects in FP6 and FP7. This can probably be explained by the fact that
eContentplus bridged a “funding gap” in 2007 when FP6 funding was stalling
following the last FP6 projects launched in 2006, while FP7 TEL funding was
kicked off only in 2008. In fact, in 2007 only eContentplus projects were launched
with EC funding in our data set (compare also the time series analysis of the
project networks in Section 6, in particular Fig. 9). This kind of gap bridging by
eContentplus, where a large share of organizations funded under FP6 and FP7
engaged in e-content focused R&D projects, could be interpreted as evidence for
an organizational “research follows money” attitude.
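The weighted degree centrality used to size the nodes in Fig. 1 weights adjacency by consortium overlap. One plausible reading of that weighting (assumed here: each incident edge contributes the size of the consortium intersection; data are hypothetical) can be sketched as:

```python
# Hypothetical progression graph: consortia plus directed successor edges.
consortia = {
    "P1": {"OrgA", "OrgB", "OrgC"},
    "P2": {"OrgB", "OrgC"},
    "P3": {"OrgA", "OrgB"},
}
edges = {("P1", "P2"), ("P1", "P3")}

def weighted_degree(p, edges, consortia):
    """Sum of consortium overlaps over all edges incident to project p."""
    return sum(len(consortia[u] & consortia[v])
               for (u, v) in edges if p in (u, v))

print(weighted_degree("P1", edges, consortia))  # 4
```

Under this reading, a project like KALEIDOSCOPE with many adjacent projects and large overlaps accumulates a high weighted degree.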
Project Impact. One interesting aspect of GP is the impact of project con-
sortium members on sustaining and shaping the social TEL project ties after
the project start (in fact, this shaping already commences during the proposal
writing phase). Measuring this impact by applying graph centrality metrics (e.g.
degree, betweenness, closeness, etc.) to GP entails several problems. For one,
these metrics will favor projects with very large consortia. In our data set the
presence of the KALEIDOSCOPE project with its huge consortium is a prime
example of a node that represents an “outlier” in terms of consortium size and
degree centrality. Moreover, in GP the projects’ chances of having predecessor
and successor projects vary depending on the distribution of project start dates.
In terms of the social network, these factors bias a project’s chance of having
stronger incoming and outgoing edges. To control for these biases we conceived
an impact measure of projects p ∈ VP as follows.
Let Sp^{t,k} be the successor projects of p, i.e. the set of projects that started at
least t time units after the start of p and that include at least k consortium
members of p. Let Dp^t be the set of all potential successor projects, i.e. all
projects that started at least t time units after the start of p, and let Cp ⊆ O be
the set of consortium members of project p. It holds that Sp^{t,k} ⊆ Dp^t ⊆ P. We
define the impact δ of project p as

δp = (|Sp^{t,k}| / |Dp^t|) · Σ_{q ∈ Sp^{t,k}} |Cp ∩ Cq| / |Cp|.

In this formula, the term |Sp^{t,k}| / |Dp^t| accounts for the actual number of successor
projects of p relative to opportunity, that is, the fraction of actual vs. potential
successor projects. Essentially, this eliminates the potential (dis)advantages of
a project's position on the timeline. The term |Cp ∩ Cq| / |Cp| represents the weighted
overlap of a project with other project consortia and thus accounts for the varying
sizes of project consortia and the varying number of organizations that overlap
between two projects. Actually, the latter term is a shorthand for
(|Cq ∩ Cp| / |Cq|) · (|Cq| / |Cp|), in which the first factor represents the share of q's
consortium that was “fed” with members of project p, and the second factor
represents the ratio of the sizes of the two consortia. Consequently, summing the
ratios |Cq ∩ Cp| / |Cq| over all q ∈ Sp^{t,k} essentially represents the cumulative share
of successive project consortia in GP filled up exclusively with p's consortium
members.
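On these definitions, δ can be computed directly from consortium sets and start dates. The sketch below uses hypothetical data, and `impact` is an illustrative helper rather than the authors' implementation:

```python
# Hypothetical mini data set.
consortia = {
    "P1": {"OrgA", "OrgB", "OrgC", "OrgD"},
    "P2": {"OrgB", "OrgC", "OrgE"},
    "P3": {"OrgA", "OrgF"},
}
start = {"P1": 0, "P2": 6, "P3": 12}  # start points in time, in months

def impact(p, consortia, start, t=3, k=2):
    """delta_p = (|S| / |D|) * sum over q in S of |C_p & C_q| / |C_p|."""
    C_p = consortia[p]
    # D_p^t: all potential successors, i.e. projects starting >= t after p.
    D = [q for q in consortia if q != p and start[q] - start[p] >= t]
    # S_p^{t,k}: actual successors sharing at least k consortium members with p.
    S = [q for q in D if len(C_p & consortia[q]) >= k]
    if not D:
        return 0.0
    return (len(S) / len(D)) * sum(len(C_p & consortia[q]) / len(C_p) for q in S)

# P1 has one actual successor (P2, overlap 2) out of two potential ones:
print(impact("P1", consortia, start))  # (1/2) * (2/4) = 0.25
```

Note how both normalizations act at once: P1 is penalized for the potential successor P3 it did not feed, and rewarded only for the fraction of its consortium carried into P2.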

Table 2. Top 15 projects by impact on the TEL projects landscape


#  Project p  Runtime  Funding  Programme  Type  |Cp|  |Sp^{3,2}|  |Dp^3|  IFM  δp
1 PROLEARN 2004–07 6.01 FP6 NoE 22 22 69 .19 1.14
2 KALEIDOSCOPE 2004–07 9.35 FP6 NoE 83 34 69 .07 .66
3 GRAPPLE 2008–11 3.85 FP7 STREP 15 8 28 .10 .38
4 iCOPER 2008–11 4.80 ECP BPN 23 7 25 .07 .33
5 STELLAR 2009–12 4.99 FP7 NoE 16 5 18 .05 .23
6 ASPECT 2008–11 3.70 ECP BPN 22 6 25 .06 .21
7 COOPER 2005–07 1.95 FP6 STREP 8 6 49 .10 .20
8 MACE 2006–09 3.15 ECP CEP 13 7 41 .06 .20
9 LTFLL 2008–11 2.85 FP7 STREP 11 5 28 .06 .18
10 PROLIX 2005–09 7.65 FP6 IP 19 7 49 .02 .17
11 TELCERT 2004–06 1.80 FP6 STREP 9 7 69 .09 .16
12 ROLE 2009–13 6.60 FP7 IP 16 3 18 .01 .09
13 TENCOMPETENCE 2005–09 8.80 FP6 IP 15 5 49 .01 .09
14 CONNECT 2004–07 4.69 FP6 STREP 18 6 69 .02 .08
15 ICLASS 2004–08 12.59 FP6 IP 17 6 69 .01 .08
Funding . . . European Commission contribution in million euro.
IFM . . . Impact for Money = δ/Funding.
Type . . . Network of Excellence (NoE), Integrated Project (IP), Content Enrichment Project
(CEP), Specific Targeted Research Project (STREP), Best Practice Network (BPN).

The top 15 projects by impact for t = 3, k = 2, ordered by descending δ, are
displayed in Table 2. The table includes projects from all three programmes, with
FP6 represented strongest, and it prominently features Networks of Excellence
and Integrated Projects, although these are by number among the rarest project
types in the Framework Programmes. The best impact-for-money (IFM) ratio
was achieved by PROLEARN, followed by COOPER and GRAPPLE. More data
and further research is required to identify indicators for high-impact projects.
The project proposals and deliverables would certainly help in this regard, since
these typically contain information on and cross-references to work in other
projects.
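The IFM column of Table 2 is simply δ divided by the EC funding in million euro, e.g. for PROLEARN (values taken from the table):

```python
# IFM (impact-for-money) = delta / funding, using PROLEARN's values from Table 2.
delta, funding = 1.14, 6.01  # funding in million euro
ifm = round(delta / funding, 2)
print(ifm)  # 0.19, matching the IFM column in Table 2
```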

5 Organizational Collaboration
In addition to the project consortium progression network presented in the pre-
vious section, TEL projects can be viewed from another angle: the organizational
collaboration graph GO = (VO, EO) contains organizations and their collaborations
in the project consortia [13]. This graph shows organizations as nodes and
an (undirected) edge between two nodes if there are at least k projects in which
both organizations have participated, i.e. VO = O and

EO = {(u, v) : u, v ∈ VO ∧ u ≠ v ∧ |{p ∈ P : μ(p, u) ∧ μ(p, v)}| ≥ k}.
GO is an undirected graph, which is visualized in Fig. 2. For k = 1 it includes 603
distinct partners and 9315 edges, of which only those representing at least two
shared projects are displayed in Fig. 2. Apart from the core component, there
are no strongly connected sub-networks, which means that in every project there
is at least one partner who is involved in another project.
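The edge set EO can be sketched by counting co-memberships over all project consortia; the data and helper name below are hypothetical:

```python
from collections import Counter
from itertools import combinations

# Hypothetical mini data set: project -> consortium members.
consortia = {
    "P1": {"OrgA", "OrgB", "OrgC"},
    "P2": {"OrgB", "OrgC"},
    "P3": {"OrgA", "OrgD"},
}

def collaboration_edges(consortia, k=1):
    """Undirected edges between organizations sharing at least k projects."""
    pair_counts = Counter()
    for members in consortia.values():
        for u, v in combinations(sorted(members), 2):
            pair_counts[(u, v)] += 1
    return {pair for pair, n in pair_counts.items() if n >= k}

print(collaboration_edges(consortia, k=2))  # {('OrgB', 'OrgC')}
```

Sorting each pair gives a canonical orientation, so an undirected edge is counted once per project regardless of member order.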
We calculated SNA metrics and funding for each participant in VO ; the result-
ing table of the top ten organizations is given in Table 3. The table is ordered


Fig. 2. Partner collaborations spanning FP6, FP7, and eContentplus TEL projects

by PageRank, a metric that not only takes into account the number of edges
of each node, but also the “importance” of the adjacent neighbor nodes [15].
This means, an organization’s importance depends on the number of collabo-
rations with other organizations and on the importance of the organization’s
collaborators.
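As an illustration of this principle, the ranking can be sketched as a plain power iteration in Python. The toy network, damping value, and organization names below are illustrative only, not the study's data:

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Power-iteration PageRank on an undirected collaboration graph.

    graph maps each organization to the set of its collaborators.
    """
    n = len(graph)
    rank = {org: 1.0 / n for org in graph}
    for _ in range(iterations):
        new_rank = {}
        for org in graph:
            # Each collaborator passes on a share of its own importance,
            # so well-connected, important partners raise an org's score.
            incoming = sum(rank[nb] / len(graph[nb]) for nb in graph[org])
            new_rank[org] = (1 - damping) / n + damping * incoming
        rank = new_rank
    return rank

# Toy network: A collaborates with all others, so it ends up on top.
toy = {"A": {"B", "C", "D"}, "B": {"A"}, "C": {"A", "D"}, "D": {"A", "C"}}
ranks = pagerank(toy)
```

In this toy network, A's score exceeds that of B, C, and D because A both has the most ties and receives shares from every other node.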

Table 3. Top 10 organizations by PageRank


 # Organization PR BC LC DC Funding
 1 The Open University, United Kingdom .0125 [1] .1185 [1] .2151 [601] 219 [1] 3.55 [3]
 2 Katholieke Universiteit Leuven, Belgium .0090 [2] .0752 [2] .1716 [604] 148 [3] 2.56 [6]
 3 Open Universiteit Nederland, Netherlands .0086 [3] .0414 [6] .2161 [600] 133 [7] 3.45 [4]
 4 Jyvaskylan Yliopisto, Finland .0080 [4] .0667 [3] .3170 [588] 170 [2] 1.26 [39]
 5 Fraunhofer-Gesellschaft zur Foerderung der Angewandten Forschung E.V., Germany .0068 [5] .0529 [4] .1833 [603] 111 [22] 3.40 [5]
 6 Deutsches Forschungszentrum fuer Kuenstliche Intelligenz Gmbh, Germany .0066 [6] .0390 [7] .1916 [602] 106 [27] 3.68 [1]
 7 Atos Origin Sociedad Anonima Espanola, Spain .0064 [7] .0236 [15] .4316 [565] 142 [5] 1.33 [33]
 8 Universitaet Graz, Austria .0064 [8] .0230 [18] .4016 [573] 148 [3] 2.03 [10]
 9 Universiteit Utrecht, Netherlands .0061 [9] .0203 [23] .4323 [564] 139 [6] 1.62 [19]
10 INESC ID - Instituto de Engenharia de Sistemas e Computadores, Investigacao e Desenvolvimento em Lisboa, Portugal .0061 [10] .0368 [8] .4741 [552] 130 [8] 1.68 [16]
PR = PageRank, BC = betweenness centrality, LC = local clustering coefficient, DC = degree centrality, Funding = EC contribution to the project cost in million Euro. Note that CORDIS states the total funding for each project. The funding per consortium member for each project was computed by dividing the total EC contribution to that project by the number of consortium members. This should give a reasonable estimate.
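The per-member funding estimate described in the table note can be sketched in a few lines; the project records and organization names below are hypothetical:

```python
# Hypothetical CORDIS-style records: (EC contribution in MEUR, consortium).
projects = [
    (3.2, ["Org A", "Org B", "Org C", "Org D"]),
    (1.8, ["Org A", "Org C"]),
]

funding = {}
for contribution, members in projects:
    # CORDIS only states the total EC contribution per project, so the
    # per-member share is estimated as an even split across the consortium.
    share = contribution / len(members)
    for org in members:
        funding[org] = funding.get(org, 0.0) + share
```

An organization's funding column is then the sum of its estimated shares over all projects it participated in.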
The European TEL Projects Community 59

Fig. 3. Strongest organizational ties in FP6, FP7, and eContentplus TEL projects

The top partnership bonds across all TEL projects are displayed in Fig. 3. The
figure shows the 22 collaboration pairs (edges) between organizations (nodes)
that are based on at least four projects (the number of projects is displayed
as a label for each edge). Assuming that partnerships are only continued after successful previous collaborations, we can conjecture that the projects involving the organization pairs displayed in Fig. 3 had a lasting impact, at least in terms of continuity in research collaborations. The
most important of these projects, ordered by frequency of partnerships, are:
1. PROLEARN (FP6): 16 pairs,
2. ICOPER (eContentplus): 10 pairs,
3. OpenScout (eContentplus): 9 pairs,
4. GRAPPLE (FP7): 8 pairs,
5. STELLAR (FP7), ROLE (FP7), PROLIX (FP6): 5 pairs.
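The extraction of such strong ties amounts to counting co-participation pairs and keeping those backed by at least four joint projects. A minimal sketch, with hypothetical project and organization names:

```python
from collections import Counter
from itertools import combinations

# Hypothetical consortia: project name -> participating organizations.
consortia = {
    "P1": {"A", "B", "C"},
    "P2": {"A", "B"},
    "P3": {"A", "B", "D"},
    "P4": {"A", "B"},
    "P5": {"C", "D"},
}

pair_counts = Counter()
for members in consortia.values():
    # Every unordered pair within a consortium is one collaboration tie.
    for pair in combinations(sorted(members), 2):
        pair_counts[pair] += 1

# Keep only ties backed by at least four joint projects (as in Fig. 3).
strong_ties = {pair: n for pair, n in pair_counts.items() if n >= 4}
```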
It is evident that the PROLEARN network of excellence, one of the projects that kicked off FP6, succeeded in creating and sustaining strong partnerships, while the KALEIDOSCOPE network of excellence, which started at the same time as PROLEARN, did not achieve this despite its much larger consortium.

6 Time Series Analysis


The previous figures all took the current status of collaborations and projects as
a basis for calculating social network metrics. To understand the dynamics of the
projects and their consortium collaborations this section presents the develop-
ment of SNA metrics of the collaboration network over time, starting from 2004
when the first FP6 projects were launched, up to the year 2010 (inclusive). The
years 2011 and 2012 were omitted from the analyses since no new TEL projects
were launched in FP7 and eContentplus after 2010 to date.
60 M. Derntl and R. Klamma

Fig. 4 shows that in FP6 the first set of (eight) projects launched in 2004
introduced 4,199 distinct collaboration connections among 157 organizations in
the TEL landscape. This massive entry number is mainly due to the KALEIDOSCOPE network of excellence, which was launched with an extraordinarily large
consortium of 83 partners. In Fig. 5 we see that the diameter of the network—i.e.
the longest shortest path through the network—has reached its peak in 2006,
after only 2 years; in 2010, the diameter shrunk to a value of 4, which means
that one or more projects have introduced direct connections between previously
distant partners. It also shows that the average path length has been stable at a
value of around 2.5 since 2006. This means that each organization is, on average, connected to every other organization via only one or two intermediate organizations.
This indicates that the collaboration network is extremely tightly knit.
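The two measures discussed here can be computed with breadth-first search over the unweighted collaboration graph; the toy chain below is illustrative:

```python
from collections import deque

def shortest_paths_from(graph, source):
    """BFS distances from source in an unweighted, undirected graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nb in graph[node]:
            if nb not in dist:
                dist[nb] = dist[node] + 1
                queue.append(nb)
    return dist

def diameter_and_avg_path(graph):
    lengths = []
    for source in graph:
        dist = shortest_paths_from(graph, source)
        lengths += [d for node, d in dist.items() if node != source]
    # Diameter: the longest shortest path; average over all connected pairs.
    return max(lengths), sum(lengths) / len(lengths)

# Toy chain A-B-C-D: diameter 3, average path length 5/3.
chain = {"A": {"B"}, "B": {"A", "C"}, "C": {"B", "D"}, "D": {"C"}}
diam, avg = diameter_and_avg_path(chain)
```

For the real network a library such as NetworkX would be used instead of this all-pairs loop, but the definitions are the same.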
Until 2010, the number of organizations involved in TEL projects almost
quadrupled (3.9-fold), while the number of project-based collaboration ties be-
tween those organizations slightly more than doubled (2.2-fold) during the same
time window (cf. Fig. 4). This gap can partly be explained by Fig. 6, which
shows that although there has been a steady flow of new projects, these projects
have added fewer and fewer new organizations to the picture, exposing a drop
from 8.1 new organizations per new project in 2006 to a value of 4.8 in 2010.
Fig. 8 demonstrates that the average size of the consortia of newly launched
projects has been relatively stable since 2005, ranging between 10.9 and 14.1.
In contrast, the average share of newly introduced organizations per launched
project has dropped from 66% in 2005 to 40% in 2010. The sharpest drop is
evident for projects that started in the year 2008 (from 62% to 42%); this was
the year when the first six FP7 projects plus three new eContentplus projects
were launched (cf. Fig. 9). At the transition from FP6 to FP7 and eContentplus,
the project consortia apparently resorted to an established core of members.
While Fig. 7 shows that new projects have introduced a relatively stable num-
ber of new collaboration ties to the landscape in recent years, Fig. 10 demon-
strates that the average number of new collaboration ties created by each or-
ganization making its debut in TEL projects has, after an initial fall between
2004 and 2005, increased from 7.9 in 2005 to 17.7 in 2010. Hence, starting to
participate in TEL projects has an increasingly positive effect in terms of new
collaborations with other organizations involved in TEL.
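The "new organizations per new project" series behind these observations can be sketched by tracking which organizations have already been seen in earlier launch years; the launch data below is hypothetical:

```python
# Hypothetical launch data: year -> consortia (sets of orgs) launched that year.
launches = {
    2004: [{"A", "B", "C"}, {"B", "D"}],
    2005: [{"A", "E"}, {"D", "F", "G"}],
}

rates = {}
seen = set()
for year in sorted(launches):
    year_orgs = set().union(*launches[year])
    newcomers = year_orgs - seen          # organizations making their debut
    rates[year] = len(newcomers) / len(launches[year])
    seen |= year_orgs
```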
The project participation data shows that of the 34 TEL projects launched
between 2008 and 2010, 20% were coordinated by organizations which had not
participated in any previous (or at that time running) TEL project. The devel-
opment of this percentage over time is plotted in Fig. 11. The sharp increase
in 2007 is likely due to eContentplus, where the focus shifted to e-content and
metadata, and thus new organizations were introduced. The data shows that
even for total “newbie organizations” in TEL it is absolutely feasible to write a
successful project proposal in the coordinator role.
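The coordinator statistic follows the same pattern: a project counts as novice-led when its coordinator has not appeared in any earlier consortium. A sketch with hypothetical, date-ordered records:

```python
# Hypothetical projects ordered by launch date: (coordinator, consortium).
projects = [
    ("A", {"A", "B"}),
    ("C", {"C", "A"}),
    ("B", {"B", "D"}),
    ("E", {"E", "B"}),
]

seen = set()
novice_led = 0
for coordinator, consortium in projects:
    # A novice coordinator has not participated in any earlier project.
    if coordinator not in seen:
        novice_led += 1
    seen |= consortium

novice_share = novice_led / len(projects)
```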
However, the tendency evident in most of the figures in this section points in
another direction; it appears that there is less and less demand for new organizations in the TEL community. On the one hand, this is understandable: if an

organization launches a new project it is likely to resort to partners it has already


successfully collaborated with, particularly as more competing organizations are
entering the community every year. On the other hand, it shows that project
consortia and collaboration ties between organizations behave like an inertial
mass, which impedes the involvement of new and fresh organizations, and likely
also new ideas and research foci.

Fig. 4. Organizational collaboration network: Nodes and edges
Fig. 5. Organizational collaboration network: Network measures
Fig. 6. New organizations introduced by newly launched projects
Fig. 7. New collaboration ties introduced by newly launched projects
Fig. 8. New organizations in consortia of new projects
Fig. 9. Number of launched projects by year and programme
Fig. 10. New collaboration ties introduced by new organizations
Fig. 11. Novice organizations as project coordinators

7 Key Findings

Funding Bridge. While there are strong ties between FP6 and FP7 in terms
of participating organizations, it was demonstrated that eContentplus acted as a
broker between FP6 and FP7 project consortia. Particularly some Best Practice
Networks like ASPECT or ICOPER, and also Targeted Projects like OpenScout,
have many strong consortium overlaps with both preceding FP6 projects and
succeeding FP7 projects. This pattern may simply be due to the fact that in 2007 there were no new project launches in either FP6 or FP7. On the other
hand it could also be attributed to a plain “research follows money” attitude.
That is, if there had not been funding from eContentplus, organizations would
likely have looked for funding opportunities in TEL related programmes with
different focus between 2006 and 2008. In any case, eContentplus apparently was supportive of, and non-disruptive to, the organizational collaboration network in
European TEL. A question for future research would be whether other funding
schemes that do not explicitly carry the “Technology Enhanced Learning” label
have similar effects on the project and collaboration landscape.
Role of Project Type. Integrated Projects (IP) and Networks of Excellence
(NoE) are prominently placed among those projects with the highest impact on
successor projects, although this cannot be ascribed solely to their larger size
compared to e.g. STREPs. For instance, these projects, along with some large e-
Contentplus consortia, typically also include multiple pairs of organizations that
appear in the network of the most frequent collaborators. This indicates that IPs
and NoEs are very important not only for shaping the research agenda, but also
for creating strong and sustained collaboration ties between TEL organizations.
TEL Family. With every new TEL project, relatively fewer organizations are
penetrating the existing overall collaboration network in TEL projects. Over the
last three years, an average of 40% of the members of new project consortia were not previously involved in any TEL projects. The sharpest drop in this number occurred
for projects that started in the year 2008 (from 62% to 42%), when the first FP7
TEL projects were launched. It appears that at the transition to FP7, the project
consortia—and ultimately the European Commission—resorted to building on
and funding an established core of organizations, thus strengthening existing collaboration bonds. This has led to a tightly knit family-like community of TEL
organizations, an inertial mass that can impede the involvement of new organi-
zations. This is strengthened by the fact that of the 34 launched TEL projects
since 2008, four out of five are being coordinated by organizations that have al-
ready participated in at least one previous TEL project. In [4], the authors also
conclude from their analyses of FPs 1–4 that the European Research Area builds
on a “robust backbone structure” of frequent collaborators. A similar conclusion
can be found in [5], where the authors identified a core of established actors with
increasing integration over time. In [7] the authors go even further and call this
backbone of partners with long-standing and extremely tight collaboration ties
the “oligarchic core” of the FPs. In the light of these related studies, we can
state that the TEL community exposes similar bonding characteristics as the

complete Framework Programme networks. Of course, from the EC’s viewpoint


it seems reasonable to fund projects where a large share of the consortium has
previous experience in EC-funded TEL projects. Still, this appears to be a policy
issue that requires attention.

8 Conclusion

This paper has reported analyses, results and implications of the application
of social network analysis on European Commission funded TEL projects to
provide stakeholders with an overview on historic development and the current
state. The three key findings we distilled are that organizations are resourceful in
finding alternative funding opportunities; that integrated projects and networks
of excellence have a central role in shaping the collaboration landscape; and
that the collaboration ties within and across TEL projects in Europe expose
characteristics of oligarchic structures.
There are several limitations in the current data sources and the analyses,
which will have to be addressed in forthcoming work. Most importantly, the
projects dataset currently exclusively contains TEL related projects from FP6,
FP7 and eContentplus. There are many additional sources and projects that
could be included, e.g. the Lifelong Learning Programme, additional projects
from the EC’s Policy Support Programme, the UK JISC funded projects, and
many more. Additionally, several projects have strong associate partnership pro-
grammes and funded sub-projects (e.g. STELLAR theme teams) that could be
integrated into the analyses. Also, we currently have descriptive project meta-
data only and do not consider project deliverables. These would significantly
augment the potential analysis toolbox with text mining, topic modeling, and
information on involved researchers. Finally, the funded projects analyzed in this
paper likely represent only a small fraction of the actual collaboration network,
since the competition in TEL calls is fierce with very low success rates. It would
therefore be worthwhile to include unsuccessful project proposals in the analysis.
To keep interested stakeholders up-to-date with facts and figures from the
TEL projects community we deployed a widget-based dashboard [16] for visual
interaction with the Mediabase data sources in the Learning Frontiers portal2 .
The next update to the TEL projects data set will arrive later this year, when
several new TEL projects will be funded from bids submitted to FP7 ICT Call
8. It remains to be seen how the results of this call will impact the project and
collaboration networks. Projecting the past onto the future, we can expect
the new projects to be mainly composed of established organizations, with a few
new ones hopefully entering the scene.

Acknowledgments. This work was funded by the European Commission through


the 7th Framework Programme ICT Coordination and Support Action TEL-Map
(FP7 257822).
2 http://learningfrontiers.eu/?q=dashboard

References
1. Klamma, R., Spaniol, M., Cao, Y., Jarke, M.: Pattern-Based Cross Media Social
Network Analysis for Technology Enhanced Learning in Europe. In: Nejdl, W.,
Tochtermann, K. (eds.) EC-TEL 2006. LNCS, vol. 4227, pp. 242–256. Springer,
Heidelberg (2006)
2. Petrushyna, Z., Klamma, R.: No Guru, No Method, No Teacher: Self-classification
and Self-modelling of E-Learning Communities. In: Dillenbourg, P., Specht, M.
(eds.) EC-TEL 2008. LNCS, vol. 5192, pp. 354–365. Springer, Heidelberg (2008)
3. Voigt, C. (ed.): 4th Evaluation Report – Including Social Network Analysis. Deliverable D7.5, STELLAR Network of Excellence (2011)
4. Barber, M., Krueger, A., Krueger, T., Roediger-Schluga, T.: Network of European
Union–funded collaborative research and development projects. Physical Review
E 73 (2006)
5. Roediger-Schluga, T., Barber, M.J.: R&D collaboration networks in the European
Framework Programmes: data processing, network construction and selected re-
sults. International Journal of Foresight and Innovation Policy 4(3/4), 321–347
(2008)
6. Frachisse, D., Billand, P., Massard, N.: The Sixth Framework Program as
an Affiliation Network: Representation and Analysis (2008), http://ssrn.com/
abstract=1117966
7. Breschi, S., Cusmano, L.: Unveiling the texture of a European Research Area:
emergence of oligarchic networks under EU Framework Programmes. International
Journal of Technology Management 27(8), 747–772 (2004)
8. Lozano, S., Duch, J., Arenas, A.: Analysis of large social datasets by community
detection. The European Physical Journal Special Topics 143(1), 257–259 (2007)
9. Scherngell, T., Barber, M.J.: Spatial interaction modelling of cross-region R&D
collaborations: empirical evidence from the 5th EU framework programme. Papers
in Regional Science 88(3), 531–546 (2009)
10. Roediger-Schluga, T., Dachs, B.: Does technology affect network structure? - A
quantitative analysis of collaborative research projects in two specific EU pro-
grammes. UNU-MERIT Working Paper Series 041 (2006)
11. Derntl, M., Klamma, R.: Social Network Analysis of European Project Consortia
to Reveal Impact of Technology-Enhanced Learning Projects. In: 12th IEEE Int.
Conf. on Advanced Learning Technologies, ICALT 2012. IEEE (2012)
12. European Commission: Community Research and Development Information Ser-
vice (CORDIS), http://cordis.europa.eu/home_en.html
13. Derntl, M., Renzel, D., Klamma, R.: Mapping the European TEL Project Land-
scape Using Social Network Analysis and Advanced Query Visualization. In: 1st
Int. Workshop on Enhancing Learning with Ambient Displays and Visualization
Techniques, ADVTEL 2011 (2011)
14. Brandes, U., Erlebach, T. (eds.): Network Analysis. LNCS, vol. 3418. Springer,
Heidelberg (2005)
15. Brin, S., Page, L.: The anatomy of a large-scale hypertextual web search engine.
Computer Networks and ISDN Systems 30, 107–117 (1998)
16. Derntl, M., Erdtmann, S., Klamma, R.: An Embeddable Dashboard for Widget-
Based Visual Analytics on Scientific Communities. In: 12th Int. Conf. on Knowl-
edge Management and Knowledge Technologies, I-KNOW 2012. ACM (2012)
TinkerLamp 2.0: Designing and Evaluating
Orchestration Technologies for the Classroom

Son Do-Lenh, Patrick Jermann, Amanda Legge,


Guillaume Zufferey, and Pierre Dillenbourg

CRAFT, Ecole Polytechnique Fédérale de Lausanne (EPFL)


1015 Lausanne, Switzerland
{son.dolenh,patrick.jermann,pierre.dillenbourg}@epfl.ch,
{legge.amanda@gmail.com,guillaume.zufferey}@simpliquity.com

Abstract. Orchestration refers to teachers' real-time management of multiple classroom activities under multiple constraints. Orchestration emphasizes the classroom constraints, integrative scenarios,
and the role of teachers in managing these technology-enhanced class-
rooms. Supporting orchestration is becoming increasingly important due
to the many factors and activities involved in the classroom. This pa-
per presents the design and evaluation of TinkerLamp 2.0, a tangible
tabletop learning environment that was explicitly designed to support
classroom orchestration. Our study suggested that supporting orchestra-
tion facilitates teachers’ work and leads to improvements in both the
classroom atmosphere and learning outcomes.

1 Introduction
Due to the technological evolution in schools, the learning process now involves
multiple activities, resources, and constraints in the classroom. Teachers not only
have to prepare lesson plans, accommodate curricula, and teach, but also un-
derstand and manage various technologies such as interactive whiteboards and
computers, and improvise the lesson when appropriate. This real-time manage-
ment of multiple activities with multiple constraints conducted by teachers, also
known as classroom orchestration, is crucial for the materialization of learning.
Orchestration emphasizes the classroom constraints and the teachers’ role
in managing these technology-enhanced classrooms. Although occasionally men-
tioned in the literature [26,6], until recently, orchestration has not received much
attention from the CSCL community [7,15,9]. It has been argued that orchestration is important for broader technology adoption in authentic classrooms [8].
Orchestration technologies are tools that assist the teachers in their task of
orchestrating integrated classroom activities. They aim to provide support for
teachers, who will then be able to orchestrate and manage the class on-the-fly,
intervening with students to adapt teaching plans and learning activities. While
a few early examples of technologies designed to support orchestration have
started to emerge [3,1], little work explores the requirements and guidelines for
the design of such technologies in real classroom settings.

A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 65–78, 2012.

c Springer-Verlag Berlin Heidelberg 2012
66 S. Do-Lenh et al.

Motivated by the increasing need for better orchestration support in class-


room settings and the lack of design guidelines, we developed TinkerLamp 2.0,
an interactive tabletop learning environment that explores the design space of
orchestration technologies (Figure 1). TinkerLamp 2.0 draws upon TinkerLamp
1.0 [14,28,10], which was designed to support the training of vocational appren-
tices in logistics. TinkerLamp 2.0 introduces new and redesigned features that
explicitly support classroom orchestration as well as new classroom practices
that lead to improvements in both the classroom atmosphere and learning out-
comes.
This paper presents the design, implementation, and evaluation of the Tin-
kerLamp 2.0 learning environment and its supporting orchestration tools. Our
study of the environment, which involved 6 classes and 93 vocational apprentices,
showed that the system facilitated the teachers’ work, making it easier for them
to manage both the class and the learning resources in real-time. Importantly,
it resulted in more opportunities for reflection, higher learning outcomes, bet-
ter support for class-wide activities, and a more playful atmosphere, compared
to two baseline conditions including an identical system without orchestration
support.


Fig. 1. The components of TinkerLamp 2.0: (a) Tangible model, (b) TinkerKey, (c)
TinkerBoard, (d) TinkerQuiz
TinkerLamp 2.0: Orchestration Technologies for the Classroom 67

2 Related Work
As argued by [8], the success of a learning system in a technology-enhanced
classroom environment increasingly depends on considering the technology in a
broader context, including classroom orchestration, rather than just focusing on
a positive learning outcome in a lab setting. Orchestration, which refers to the
teachers’ real-time classroom management of multiple activities with multiple
constraints, promotes productive learning in a class ecosystem by using integra-
tive scenarios and empowering teachers [7,15,9,22]. More specifically, it concerns
the integration of activities at multiple social planes within the classroom, such
as individual reading, team argumentation, and plenary sessions [8].
Recent studies have shown the benefits of considering orchestration in the
classroom [22,15]. For example, [15] ran a study with a number of eighth-grade
high school classrooms studying Biology. The results of this study demonstrated
that orchestration, in this case alternating plenary, small group, and dyadic
learning phases, led to higher levels of learning competence than having all ac-
tivities at only one level.
How we can design learning environments and technologies that facilitate
orchestration is still an open question. An early example of orchestration tech-
nologies is the One Mouse Per Child project [3]. It provides the teachers with a
visualization display that shows simplified aggregated data about each of the 40
children in the room. This information is displayed permanently for the teachers
to facilitate their awareness of the class progress and individual statuses with-
out posing queries. Mischief [16] is a teaching system designed to enhance social
awareness between collocated students and support classroom-wide interactions.
Mischief enables the simultaneous interaction of up to 18 students in a classroom
using a large shared display.
nQuire [18] is a system developed to guide personal inquiry learning, sharing
the orchestration responsibility between the teachers and the students. It allows
inquiries to be created, scripted, configured and used, all on-the-fly, by either role.
nQuire incorporates different technological devices and promotes the support
for inquiry activities across individual, group, and class levels at different parts
of the inquiry. Some other research provides logistic support for teachers to
monitor students’ activities by connecting multi-touch tabletops in the classroom
to the teacher’s desk [1] in order to cope with different levels of student expertise
[17], provide task-specific context to the teacher [23], and provide distributed
awareness tools to the tutor [2].
Orchestration technologies are still in their infancy. As argued by [9],
most previous research focuses more on the core pedagogical task, at the in-
dividual or group level of collaborative learning. Therefore, research is needed
to explore guidelines specifically designed for the development of learning envi-
ronments that explicitly support teacher orchestration and activities at multiple
levels in the classroom. This paper presents such an attempt, using tabletop
technology to develop our orchestration environment.
Tangible tabletops have been researched and used across many educational
contexts [11,20,27]. Tangible tabletops can be effective in supporting co-located

learning by providing a large shared workspace, increasing group members’


awareness [13]. They also enable more participation and active learning thanks
to their simultaneous interaction capabilities [25,21]. Tangible tabletops support
building learning activities in which users can interact directly with their hands
by touching and manipulating objects. This sensori-motor experience that tan-
gible tabletops offer has been described as beneficial for learning [21], relying on
the idea that they support an enactive mode of reasoning [4,19], and that they
leverage metaphors of object usage and take advantage of the close inter-relation
between cognition and perception of the physical world [12,24].

3 The TinkerLamp 2.0 Environment


TinkerLamp 2.0, an interactive tangible tabletop learning environment, is de-
signed to help logistics apprentices understand theoretical concepts presented at
school by letting them experiment with these concepts on an augmented small-
scale model of a warehouse. In terms of hardware, TinkerLamp 2.0 consists of
a projector and a camera, which are mounted in a metal box suspended above
the tabletop.
The TinkerLamp 2.0 was the result of our evaluation and re-design of the
TinkerLamp 1.0 system which was deployed in several vocational schools for
two years. We conducted field evaluations and controlled experiments of this 1.0
system with nearly 300 students and 8 teachers in several separate studies from
2008 to 2010 [14,28,10,24].
While the TinkerLamp 1.0 focused on supporting group-level activities, Tin-
kerLamp 2.0 introduces a whole new set of functionalities for explicitly sup-
porting classroom-level activities and teacher orchestration, therefore allowing
the continuity of learning throughout the entire classroom. These orchestration
tools include a teacher-exclusive card-based interface, a public display, and a
collection of interactive quizzes. In addition, we redesigned the learning scenario
to include new learning activities not available in the TinkerLamp 1.0 system.
In the following, we briefly present the features inherited from TinkerLamp 1.0 before describing the new orchestration and learning tools in TinkerLamp 2.0.

3.1 Inherited Features: Small-Scale Model and TinkerSheet


Apprentices interact with the TinkerLamp 2.0 through two interaction modal-
ities inherited from TinkerLamp 1.0: a tangible warehouse model and a paper-
based interface, called TinkerSheet (Figure 2).
Users interact with the warehouse model using miniature plastic shelves,
docks, and offices. Each element of this small-scale warehouse is tagged with
a fiducial marker that enables automatic camera object recognition. The model
is augmented with visual feedback and information through a projector in the
lamp’s head. The apprentices can also run simulations on the models, which
compute statistics related to the physical structure of the warehouse such as
the areas used for storing goods, the distance between shelves, etc. The simula-
tions use simple models of customers and suppliers that generate a flow of goods


Fig. 2. (a) Interacting with TinkerSheet using tokens, (b) The small-scale model next
to a TinkerSheet

entering and leaving the warehouse in real-time. This real-time simulated infor-
mation (e.g. animation of how forklifts approach the shelves, statistics about the
warehouse inventory, etc.) is displayed directly on top of the model and on the
TinkerSheet.
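The kind of structural statistic the simulation computes can be illustrated with a small sketch. The footprint, floor area, and shelf coordinates below are assumptions for illustration, not values from the actual system:

```python
import math

# Hypothetical recognized layout: shelf centres in metres on the model floor.
SHELF_FOOTPRINT = 2.0  # square metres per miniature shelf (assumed)
WAREHOUSE_AREA = 80.0  # total floor area in square metres (assumed)

shelves = [(1.0, 1.0), (1.0, 4.0), (5.0, 1.0)]

# Share of the floor surface used for storing goods.
storage_share = len(shelves) * SHELF_FOOTPRINT / WAREHOUSE_AREA

# Smallest shelf-to-shelf distance, a rough proxy for aisle width.
min_gap = min(
    math.dist(a, b)
    for i, a in enumerate(shelves)
    for b in shelves[i + 1:]
)
```

Statistics like these can be recomputed whenever the camera reports a changed shelf arrangement, which is what makes the feedback feel immediate to the apprentices.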
A TinkerSheet is a piece of paper automatically tracked in real-time by fiducial
markers that allow users to control the system (e.g. setting parameters for the
simulation, changing the size of the forklift, etc.). It also serves as a visual
feedback space on which textual or graphical summary information from the
simulation is projected (e.g. the warehouse statistics such as surface areas, degree
of use, etc.). Interaction with a TinkerSheet is primarily performed by using a
small physical token.

3.2 TinkerKey: Orchestration Card for Teacher

We observed that the teacher needs to be empowered in a classroom equipped
with multiple TinkerLamps. With TinkerLamp 1.0, the teacher’s job was limited
to walking around the class and discussing with students. His role was “weak-
ened” by the TinkerLamps, in the sense that the students were too engaged
in the simulation, often ignoring his instructions. This was not ideal for their
learning because, without the presence and guidance of the teacher, the tangi-
ble interface sometimes tempted the apprentices to manipulate too much. This
led to less intensive cognitive effort to understand the solutions, and less useful
reflection and discussion for learning [5,10].
We aimed to support the teacher’s orchestration to alleviate this problem.
When properly supported, the teacher’s presence could notably increase the level
of reflection and, in turn, the learning outcomes of his students with the lamps. For
example, when present, the teacher could pose reflective questions to individual
students and encourage group discussions and comparisons.
We developed TinkerKey, a small paper card used by the teacher to orchestrate
the class (Figure 3). Its purpose is to curb the temptation to over-manipulate by
empowering the teachers. It provides them with special privileges when interacting
with the system, enabling them to adapt the current learning situation to their
ever-changing orchestration plan and to improvise when needed.
70 S. Do-Lenh et al.


Fig. 3. (a) The “Allow Simulation” TinkerKey to allow/block simulation for a group,
(b) The “Pause Class” TinkerKey to pause the whole class

The scenario is envisioned as follows. The teacher keeps a set of TinkerKeys
in his hand while touring the classroom as usual. When he needs to intervene
with a group or the class as a whole, he places a card on the group’s table (or
any group’s table, in case of the class TinkerKey). Each TinkerKey triggers a
different functionality in the TinkerLamp, such as changing a state or performing
an action, which affects either the group for which it is used, or the whole class,
thereby helping the teacher improvise the learning activity.
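The card-to-action mapping just described can be sketched as a small dispatcher. This is a hypothetical illustration, not the TinkerLamp implementation; the card identifiers, flag names, and group representation are all our assumptions.

```python
# Each card id maps to a scope ("group" or "class") and a state flag to toggle.
CARDS = {
    "allow_simulation":  ("group", "simulation"),
    "hide_current_stats": ("group", "current_stats"),
    "hide_saved_stats":  ("group", "saved_stats"),
    "pause_group":       ("group", "pause"),
    "pause_class":       ("class", "pause"),
}

def apply_card(card_id, group, all_groups):
    """Apply a TinkerKey to the group whose table it was placed on.

    Class-level cards propagate from that table's lamp to every lamp
    in the class; group-level cards affect only the local group.
    """
    scope, flag = CARDS[card_id]
    targets = all_groups if scope == "class" else [group]
    for g in targets:
        g[flag] = not g.get(flag, False)  # flipping the card toggles the state
    return targets

groups = [{"name": "A"}, {"name": "B"}]
apply_card("pause_class", groups[0], groups)
print([g["pause"] for g in groups])  # [True, True]
```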
Five TinkerKeys were implemented and tested in TinkerLamp 2.0. The “Allow
Simulation” TinkerKey aims to prevent the students from running too many
simulations without much reflection, or from ignoring teacher instructions. By
flipping the card over, the teacher can block the students’ ability to run
simulations. Groups were then not authorized to run a simulation without the
teacher’s permission, requiring contact with the teacher, who could ask them to
predict, explain, and compare the performance of the current layout with that
of the previous simulation.
The “Pause Class” TinkerKey is used at the class level. It helps the teacher
easily and quickly get full attention from the students in order to give instructions
or change from a group to a class-wide activity. This TinkerKey blanks out all
of the projected feedback from the TinkerLamps on each table. As soon as it is
placed on any group’s table, the ‘pause’ command transfers from that group’s
lamp to the other lamps.
The other three TinkerKeys allow the teacher to intervene with a group and
ask questions more effectively than before. These cards hide or show the statistics
of the warehouse layouts the group has built. This enables the teacher to ask
the students to predict and reflect during the building and simulation session.
The design of the TinkerKeys is lightweight and unobtrusive, making it possible
for the teacher to maintain his usual class behaviors. At the same time,
the TinkerKey cards supplement the teacher’s abilities by giving him simple but
powerful privileges to better orchestrate the class.

3.3 TinkerBoard: Orchestration Awareness Display


There was a problem of class awareness with the TinkerLamp 1.0. Each group
moved at their own pace of exploration when working with their own Tinker-
Lamp. The teacher had difficulty keeping track of each group’s progress, so the
time he spent with each group was not optimized. We observed that the pattern
of the teacher’s movements, and hence the classroom dynamics, was fairly spon-
taneous and subject to frequent changes. For example, on several occasions, two
or more groups made simultaneous requests, and the teacher could not decide
which group to help first.
Moreover, class-wide activities were not adequately supported with Tinker-
Lamp 1.0. The teachers had no means of displaying the built layouts or interme-
diate steps taken by a group to the whole class, so class debriefings were difficult
to perform. Instead, the teachers asked the apprentices to trace their solutions
on the TinkerSheet and reproduce them on the blackboard, which limited the
debriefing to only those layouts which were traced. While one could argue that
this manual transfer of layouts could be useful in terms of reflection, there was
a discontinuity of media and learning during the long process of transferring the
layouts.

Fig. 4. Teacher using TinkerBoard for a spontaneous debriefing with his class

The TinkerBoard of TinkerLamp 2.0 (Fig. 4) tackles this problem of class
awareness and facilitates class-wide debriefings.

Class Awareness Display. First, this tool can be used as an awareness display.
It displays the whole class history on a big projection board in a minimalist
manner, requiring little intervention and interaction from the teacher.
The information provided by this awareness display can facilitate the teacher’s
orchestration, giving him a mechanism to quickly assess the class progress as a
whole and plan his next action.
TinkerBoard includes a) an event bar showing what activity each group is
doing (building models/doing quizzes/running simulations/etc.) and how inten-
sively, and b) a layout history displaying all of the layouts each group has saved
during the activity.
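The event bar described above can be sketched as a simple aggregation of logged events into per-group activity counts. The activity names and log format below are our assumptions for illustration, not the TinkerBoard's actual data model.

```python
from collections import Counter

def event_bar(log):
    """Map each group to a Counter of its activities, i.e. how intensively
    the group has been building, simulating, or doing quizzes so far."""
    bars = {}
    for group, activity in log:
        bars.setdefault(group, Counter())[activity] += 1
    return bars

# Hypothetical event log: (group, activity) pairs in arrival order.
log = [("A", "build"), ("A", "simulate"), ("A", "simulate"),
       ("B", "build"), ("B", "quiz")]
bars = event_bar(log)
print(bars["A"]["simulate"])  # 2
```

A teacher glancing at such counts could, for instance, spot a group running many simulations with little building in between.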

TinkerBoard helps mediate simultaneous help requests: the teacher can
determine who needs help most based on the number of saved layouts (how
advanced each group is in the activity), rather than deciding spontaneously as
with TinkerLamp 1.0. By looking at the display, he can also tell if a group is
doing too many manipulations, and can then intervene to encourage more
thinking and less manipulating.
This information is also designed to support students’ reflection and social
learning. By looking at the event bar, the students can be more aware of the
activity structure of their group and other groups, and hopefully regulate their
actions. By looking at the layout history, they can compare the different layouts
they have built over time or other layouts from other groups.

Continuity of Activities, Class-Wide Debriefing and Inter-Group Activities.
This is the second aspect of orchestration supported by TinkerBoard. TinkerBoard
enables access to any intermediate layouts built by the students, not just the
final ‘best’ ones transferred to the blackboard with TinkerLamp 1.0, thereby
maintaining the continuity of activity and facilitating class-wide
debriefing. There is a dedicated area on the TinkerBoard, called the Compari-
son Zone, which allows the teacher to explicitly compare different layouts and
statistics from different groups during debriefings, explaining their advantages
and disadvantages. He does this by choosing interesting layouts from specific
groups and displaying their statistics in the Comparison Zone for side-by-side
comparisons. Moreover, TinkerBoard provides support for the teacher to con-
duct a class-wide TinkerQuiz (described below). He can send selected layouts
from TinkerBoard to all of the TinkerLamp groups through the network and
engage them in a playful class-wide competition.

3.4 TinkerQuiz
TinkerQuiz was designed to introduce a new way of moving from group- to class-
level activity using the TinkerLamp, encouraging students to be more reflective
but in a fun and engaging way (Figure 5). The TinkerLamp 2.0 supports four
TinkerQuiz cards. Each card has a different question, involving the comparison
of two warehouse layouts according to a specific criterion.

Fig. 5. (a) A group choosing a response with a TinkerQuiz, (b) A group cheering after
winning the class quiz

The TinkerQuiz card


is small, with different colors and icons on it to give it the feel of a game. When
a quiz is placed under the lamp and started, two graphical layouts appear above
the quiz. A countdown timer also appears, showing how much time remains to
finish the quiz. This is intended to give the students a sense of time pressure.
Students interact with the TinkerQuiz as with the TinkerSheet, using a small
token. Depending on where the token is placed, it either submits an answer or
shows the solution.
The layouts used for the TinkerQuiz are chosen at run-time either by the
teacher from the TinkerBoard for a between-groups quiz or randomly by the
system among a “museum” of saved layouts for a within-group quiz. This capa-
bility allows the teacher to seamlessly move from a group activity (i.e. each group
doing quizzes locally) to a class activity (i.e. the whole class doing a class-wide
quiz together) just by issuing a command from the TinkerBoard.
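The run-time layout selection just described can be sketched as follows. This is a hypothetical illustration of the two selection modes; the function name, mode strings, and "museum" representation are our assumptions.

```python
import random

def pick_quiz_layouts(mode, museum, teacher_choice=None):
    """Select the pair of layouts to compare in a TinkerQuiz.

    Between-groups quizzes use a pair the teacher selected on the
    TinkerBoard; within-group quizzes draw two distinct layouts at
    random from the group's "museum" of saved layouts.
    """
    if mode == "between-groups":
        return teacher_choice
    return random.sample(museum, 2)  # two distinct saved layouts

pair = pick_quiz_layouts("within-group", ["L1", "L2", "L3", "L4"])
print(pair)
```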

4 Evaluation of TinkerLamp 2.0


4.1 Participants and Setup
We conducted an ecologically valid comparison between a baseline paper/pencil
condition, the TinkerLamp 1.0 condition, and two alternative variations of
TinkerLamp 2.0: with and without the TinkerBoard component. A total of 2
teachers and 6 classes were involved in the study: 31 students (2 classes) in the
paper/pencil condition, 30 students (2 classes) in the TinkerLamp 1.0 condition,
and 32 students (2 classes) in the two TinkerLamp 2.0 conditions.

4.2 Learning Task


We used an authentic learning task that is typically used in the school to teach
different types of surfaces involved in the warehouse design process, e.g. raw
surface, net storage surface, etc. Each group was asked to collaboratively build
models, and then compare and reflect on what they had built to understand the
different types of surfaces. In the paper/pencil condition, they drew warehouse
models on paper using pens, erasers, and rulers. In the TinkerLamp conditions,
the group built the warehouse layouts using the tangible model.

4.3 Task Structure


In total, each classroom trial lasted approximately three hours. The teachers
began class by introducing definitions on the blackboard. Then, the class was
divided into four groups to perform the learning task. In both conditions, the
teacher toured around the room to respond to help requests. At the end of the
learning session, the teacher organized a debriefing session where the conclusions
of each group were discussed. A post-test, consisting of 12 multiple-choice and
1 open-ended question, was used at the end of the class to evaluate the learning
outcomes, in terms of understanding and problem-solving performance.

5 Results
5.1 Learning Outcomes
Statistical tests showed that the TinkerLamp 2.0 system (namely the With-
TinkerBoard condition) yielded higher scores in both understanding and
problem-solving than the TinkerLamp 1.0 and paper/pencil conditions.
Table 1 summarizes the learning scores for all of the conditions.

Table 1. The average learning outcome scores (and standard deviation)

                 Paper/pen    TinkerLamp 1.0  TinkerLamp 2.0  TinkerLamp 2.0
                                              NoTinkerBoard   WithTinkerBoard
Understanding    7.84 (2.85)  7.43 (2.82)     9.38 (2.03)     10.31 (1.70)
Problem-solving  5.16 (1.70)  5.15 (1.78)     6.44 (1.65)     6.59 (1.53)

Understanding Score. An ANOVA test on a mixed-effect model using group
as random factor (F (3, 21) = 3.98, p < .05) and a pair-wise Tukey test showed
that the scores in the TinkerLamp 2.0 WithTinkerBoard condition were sig-
nificantly higher than both the TinkerLamp 1.0 (z = 3.05, p < .01) and the
paper/pencil (z = 2.60, p < .05) conditions. None of the other pair-wise
comparisons was significant.
Problem-Solving Score. Similar tests found a significant difference between
the four conditions in terms of problem-solving (F (3, 21) = 4.42, p < .01). The
Tukey contrast showed that the WithTinkerBoard condition was significantly
higher than both the TinkerLamp 1.0 (z = 2.72, p < .05) and the paper/pencil
(z = 2.71, p < .05) conditions; the NoTinkerBoard condition was marginally
higher than both the TinkerLamp 1.0 (z = −2.42, p = .07) and the paper/pencil
(z = −2.41, p = .07) conditions. No other significant difference was found.
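The paper's analysis used a mixed-effect model with group as a random factor plus Tukey contrasts. As an accessible illustration of the underlying between-condition comparison only, the sketch below computes a plain one-way ANOVA F statistic on made-up scores; it deliberately ignores the random factor the authors modeled, and the data are invented.

```python
from statistics import mean

def one_way_f(groups):
    """One-way ANOVA F statistic:
    F = (between-group mean square) / (within-group mean square)."""
    grand = mean(x for g in groups for x in g)
    k = len(groups)                     # number of conditions
    n = sum(len(g) for g in groups)     # total observations
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Made-up scores for three hypothetical conditions:
f = one_way_f([[7, 8, 7], [9, 10, 9], [10, 11, 10]])
print(round(f, 2))  # 21.0
```

A large F relative to the F(k-1, n-k) distribution indicates that condition means differ more than within-condition noise would suggest.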

5.2 Class Atmosphere and Satisfaction


We distributed a questionnaire to students, asking them to rate the group and
class atmosphere and their satisfaction with the class in general. The students
felt that the presence of the TinkerBoard (which implied the presence of class-wide
TinkerQuizzes) significantly influenced their perception, compared with classes
without the TinkerBoard, in three aspects (confirmed by Wilcoxon tests), by:
1) encouraging more collaboration within their group (W = 60.5, p < .05),
2) making the class more fun (W = 72, p < .05), and 3) encouraging more
comparison of their group’s layouts with those of other groups (W = 65, p < .05).
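The questionnaire comparisons above report Wilcoxon W statistics. As an illustration only, the sketch below computes the rank-sum (Mann-Whitney) W for two independent samples of ratings; the choice of the rank-sum variant is our assumption (the compared conditions involved different classes), and the ratings are made up.

```python
def rank_sum_w(x, y):
    """W = rank sum of sample x in the pooled ranking, minus its minimum
    possible value. Tied values receive the average of their ranks."""
    pooled = sorted(x + y)

    def rank(v):
        positions = [i + 1 for i, p in enumerate(pooled) if p == v]
        return sum(positions) / len(positions)

    r1 = sum(rank(v) for v in x)
    return r1 - len(x) * (len(x) + 1) / 2

with_board = [5, 4, 5]      # made-up atmosphere ratings
without_board = [3, 2, 4]
print(rank_sum_w(with_board, without_board))  # 8.5
```

In practice one would use a statistics package (e.g. R's wilcox.test or scipy.stats) to obtain the accompanying p-value.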
This suggests that the TinkerBoard fulfilled its goal to bridge the different
activities and facilitate the continuity of learning. It enabled class-wide activities
for a more playful and collaborative classroom by seamlessly transitioning from
the building phase to the debriefing phase and transitioning from group activity
to class activity. This board is more than just a monitoring or awareness tool. It
is a classroom orchestration tool, supporting both teachers and students at the
same time.

5.3 TinkerKey for Empowering Teachers


We observed the teachers using all TinkerKeys throughout the activity to
pose questions, encourage the students to reflect, and pause the groups to call
the attention of the class (Table 2). The ‘Pause Group’ and ‘Pause Class’ cards
were used extensively before every debriefing session or class-wide instruction.
They clearly helped the teachers gain full class attention compared to previous
studies.
Two specific TinkerKeys (‘Hide Current Stats’ and ‘Hide Saved Stats’) were
used by the teachers throughout the activity to hide statistics for individual
groups in order to pose a question. After hiding the stats, the teachers encouraged
the students to reflect on and discuss the layout before showing them the solution
with a TinkerKey card. They also used the ‘Allow Simulation’ card extensively.
We noted students predicting and discussing the simulation with the teacher,
because the groups were not authorized to run a simulation without him.

Table 2. Number of uses of each TinkerKey with TinkerLamp 2.0

TinkerKey              NoTinkerBoard  WithTinkerBoard
1. Hide Current Stats       10             26
2. Hide Saved Stats         12             11
3. Allow Simulation         27             22
4. Pause Group              10             25
5. Pause Class               4              6

In the interviews, we were able to confirm our observations that both teachers
used all TinkerKeys for the purposes for which they were designed. Teacher
comments included: “I can use the card to request the students to answer questions
and confirm if they’re correct. This allows me to vary the activity according to the
group’s expertise and the time available” and “Instead of losing time telling the
students to be quiet, (with the Pause cards) they have to turn to me and wait for
my instructions.”

5.4 TinkerBoard for Class Awareness and Debriefing


The teachers were observed looking at the TinkerBoard often, usually when
they finished discussing with a group. Both teachers confirmed this observation,
and added that the TinkerBoard was not distracting. They said they used the
TinkerBoard to see how much time each group spent building models, running
simulations, and saving layouts, and to balance the pace between groups.
We observed that the TinkerBoard enables discussion at all social levels at
any time, without any extra interaction with the system: the teacher can discuss
with the students just by walking up to the TinkerBoard and referring to the
layouts or events permanently and publicly shown on it. Having the TinkerBoard
in the class led to 5 more spontaneous debriefings during the activity compared
to the other classes.
The class debriefing at the end of the activity was prepared much faster than
with the traditional blackboard. The teachers simply dragged each group’s
chosen layout into the comparison zone on the TinkerBoard and started the
debriefing right after. In addition, the layout history was available during the
debriefing, making references to intermediate solutions and statistics possible.

5.5 TinkerQuiz for Class-Wide Comparison


Although the quizzes took place at the end of the activity for a limited amount
of time (about 10 minutes), the enthusiasm for them was notable, as the winning
groups always cheered. The students were very excited and the whole classroom
turned into a “field” for playful competition.
Both teachers reported that the use of the TinkerQuiz to move from group-
level to class-level activity was easy. Consistent with our statistics of class at-
mosphere, both teachers said that the class TinkerQuiz clearly increased the
students’ reflection and motivation, and maintained the flow and continuity of
activities compared to the previous version of TinkerLamp. They said that the
students really enjoyed it and were still talking about it in their next class.

6 Discussion and Conclusion


Our evaluations of the TinkerLamp 2.0 system showed that it fulfilled its design
goals. The findings showed that the system provided more options for class-
room orchestration by empowering the teachers (TinkerKey), supporting class
awareness and facilitating both group and class-wide debriefing (TinkerBoard),
as well as encouraging inter-group competition (TinkerQuiz). This orchestration
support offered many opportunities for reflection and discussion. The continual
transition between group- and class-wide activities supported by the Tinker-
Board component seemed to bring a more playful and collaborative atmosphere
into the classroom. Although these results still need to be confirmed with a larger
sample, it is likely that TinkerLamp 2.0 improved students’ learning outcomes
(compared to the other conditions) for these reasons.
Overall, the three orchestration tools presented in this paper are diverse in
terms of technology use and their orchestration goals. However, learning can be
improved if the activities are integrated and exploited at different levels [8]. The
combination of our three tools supported the continuity of learning workflow in
the classroom by giving the same resource (i.e. warehouse layouts) different
representations and circulating them in the classroom. We therefore recommend
that future work support orchestration by designing for the whole learning
workflow with an ecology of resources, rather than a stand-alone application.
We showed that orchestration and reflection are related. Supporting the teacher
with his classroom orchestration led to an improvement in reflection and learn-
ing in the classroom. Providing the teacher with appropriate tools that enable
him to interact with the group and the class more effectively and efficiently is a
way to encourage high-level discussion, at both the group and class level, which
is important for learning.
We support the idea that orchestration technologies need to be flexible and
minimal, among other features. These two principles allow teachers to adapt
their actions to unfolding events without adding more workload. Our

TinkerKey cards allow for flexible management of the classroom: the teacher
can use any TinkerKey at any time just by picking the card he needs from
his hand. Due to their minimalist design and light weight, the teacher can easily
carry them while touring the class. Similarly, TinkerBoard is flexible and minimal
in that it enables reflection at any time and provides basic but critical awareness
of each group’s progress in the class. Using this public display, the teacher can
spontaneously debrief with the class without any extra interaction.
This paper presents our effort in developing TinkerLamp 2.0, a learning
environment that explicitly supports orchestration and can be used in real
classrooms. The evaluation provides promising confirmation of our approach.
Supporting classroom orchestration not only helped the teacher deal with
multiple TinkerLamps in the classroom, but also seemed to improve students’
learning outcomes. We hope that our experience gives an early example of how
orchestration technologies can be developed and how they can impact learning
and classroom atmosphere in authentic settings.

Acknowledgments. This research is part of the Dual-T project funded by the


Swiss Federal Office for Professional Education and Technology. We would like
to thank Olivier Guedat, the teachers and students for their support.

References
1. AlAgha, I., Hatch, A., Ma, L., Burd, L.: Towards a teacher-centric approach for
multi-touch surfaces in classrooms. In: ACM ITS 2010, pp. 187–196 (2010)
2. Alavi, H., Dillenbourg, P., Kaplan, F.: Distributed Awareness for Class Orchestra-
tion. In: Cress, U., Dimitrova, V., Specht, M. (eds.) EC-TEL 2009. LNCS, vol. 5794,
pp. 211–225. Springer, Heidelberg (2009)
3. Alcoholado, C., Nussbaum, M., Tagle, A., Gomez, F., Denardin, F., Susaeta, H.,
Villalta, M., Toyama, K.: One mouse per child: interpersonal computer for indi-
vidual arithmetic practice. Journal of Computer Assisted Learning (2011)
4. Bruner, J.S.: Toward a Theory of Instruction. Belknap Press, Cambridge (1966)
5. de Jong, T.: The design of effective simulation-based inquiry learning environments.
In: Proc of Conf. on Learning by Effective Utilization of Technologies: Facilitating
Intercultural Understanding, pp. 3–6 (2006)
6. DiGiano, C., Patton, C.: Orchestrating handhelds in the classroom with SRI’s
ClassSync. In: Proc. of CSCL, pp. 706–707 (2002)
7. Dillenbourg, P., Jarvela, S., Fischer, F.: The evolution of research on computer-
supported collaborative learning. In: Technology-Enhanced Learning, pp. 3–19
(2009)
8. Dillenbourg, P., Jermann, P.: Technology for Classroom Orchestration. In: Khine,
M.S., Saleh, I.M. (eds.) New Science of Learning, pp. 525–552. Springer Sci-
ence+Business Media, New York (2010)
9. Dillenbourg, P., Zufferey, G., Alavi, H.S., Jermann, P., Do-Lenh, S., Bonnard, Q.,
Cuendet, S., Kaplan, F.: Classroom orchestration: The third circle of usability. In:
Proc. of CSCL, vol. 1, pp. 510–517 (2011)
10. Do-Lenh, S., Jermann, P., Cuendet, S., Zufferey, G., Dillenbourg, P.: Task Perfor-
mance vs. Learning Outcomes: A Study of a Tangible User Interface in the Class-
room. In: Wolpers, M., Kirschner, P.A., Scheffel, M., Lindstaedt, S., Dimitrova, V.
(eds.) EC-TEL 2010. LNCS, vol. 6383, pp. 78–92. Springer, Heidelberg (2010)

11. Horn, M.S., Solovey, E.T., Crouser, R.J., Jacob, R.J.: Comparing the use of tangible
and graphical programming languages for informal science education. In: CHI 2009,
pp. 975–984. ACM, New York (2009)
12. Hornecker, E., Buur, J.: Getting a grip on tangible interaction: a framework on
physical space and social interaction. In: CHI 2006, pp. 437–446 (2006)
13. Hornecker, E., Marshall, P., Dalton, N.S., Rogers, Y.: Collaboration, interference:
awareness with mice or touch input. In: CSCW 2008: Proc. of the ACM Conf. on
Computer Supported Cooperative Work, pp. 167–176 (2008)
14. Jermann, P., Zufferey, G., Dillenbourg, P.: Tinkering or Sketching: Apprentices’ Use
of Tangibles and Drawings to Solve Design Problems. In: Dillenbourg, P., Specht,
M. (eds.) EC-TEL 2008. LNCS, vol. 5192, pp. 167–178. Springer, Heidelberg (2008)
15. Kollar, I., Wecker, C., Langer, S., Fischer, F.: Orchestrating web-based collabora-
tive inquiry learning with small group and classroom scripts. In: TEI 2009: Proc.
of the 3rd Int. Conf. on Tangible and Embedded Interaction, pp. 77–84 (2011)
16. Moraveji, N., Kim, T., Ge, J., Pawar, U.S., Mulcahy, K., Inkpen, K.: Mischief: sup-
porting remote teaching in developing regions. In: CHI, pp. 353–362. ACM (2008)
17. Moraveji, N., Morris, M., Morris, D., Czerwinski, M., Henry Riche, N.: Classsearch:
facilitating the development of web search skills through social learning. In: CHI,
pp. 1797–1806. ACM, New York (2011)
18. Mulholland, P., Anastopoulou, S., Collins, T., Feisst, M., Gaved, M., Kerawalla, L.,
Paxton, M., Scanlon, E., Sharples, M., Wright, M.: nquire: Technological support
for personal inquiry learning. IEEE Transactions on Learning Technologies (2011)
19. Piaget, J.: The future of developmental child psychology. Journal of Youth and
Adolescence 3, 87–93 (1974)
20. Price, S., Falcao, T.P., Sheridan, J.G., Roussos, G.: The effect of representation
location on interaction in a tangible learning environment. In: Proc. of TEI, pp.
85–92. ACM, New York (2009)
21. Price, S., Rogers, Y.: Let’s get physical: the learning benefits of interacting in
digitally augmented physical spaces. Comput. Educ. 43(1-2), 137–151 (2004)
22. Prieto, L.P., Villagrá-Sobrino, S., Jorrı́n-Abellán, I.M., Martı́nez-Monés, A.,
Dimitriadis, Y.: Recurrent routines: Analyzing and supporting orchestration in
technology-enhanced primary classrooms. Comput. Educ. 57, 1214–1227 (2011)
23. Roschelle, J., Rafanan, K., Estrella, G., Nussbaum, M., Claro, S.: From handheld
collaborative tool to effective classroom module: Embedding cscl in a broader de-
sign framework. Computers & Education 55(3), 1018–1026 (2010)
24. Schneider, B., Jermann, P., Zufferey, G., Dillenbourg, P.: Benefits of a tangible
interface for collaborative learning and interaction. IEEE Transactions on Learning
Technologies 4(3), 222–232 (2011)
25. Stanton, D., Neale, H., Bayon, V.: Interfaces to support children’s co-present col-
laboration: multiple mice and tangible technologies. In: CSCL 2002: Conf. on Com-
puter Support Collaborative Learning, pp. 342–351 (2002)
26. Tomlinson, C.: The differentiated classroom: responding to the needs of all learners.
Association for Supervision and Curriculum Development (1999)
27. Zuckerman, O., Arida, S., Resnick, M.: Extending tangible interfaces for education:
digital montessori-inspired manipulatives. In: CHI 2005, pp. 859–868 (2005)
28. Zufferey, G., Jermann, P., Do-Lenh, S., Dillenbourg, P.: Using augmentations as
bridges from concrete to abstract representations. In: BCS HCI 2009, pp. 130–139
(2009)
Understanding Digital Competence in the 21st Century:
An Analysis of Current Frameworks

Anusca Ferrari*, Yves Punie, and Christine Redecker

Institute for Prospective Technological Studies (IPTS),
European Commission, Joint Research Centre,
Edificio Expo, C/Inca Garcilaso 3,
41092 Seville, Spain
{Anusca.Ferrari,Yves.Punie,Christine.Redecker}@ec.europa.eu

Abstract. This paper discusses the notion of digital competence and its
components. It reports on the identification, selection, and analyses of fifteen
frameworks for the development of digital competence. Its objective is to
understand how digital competence is currently understood and implemented. It
develops an overview of the different sub-competences that are currently taken into
account and builds a proposal for a common understanding of digital competence.

Keywords: Digital Competence, 21st century skills, Frameworks, Key


Competences.

1 Pinning Down Digital Competence


The rapid diffusion and domestication of technology [1] is transforming a core
competence such as literacy into a 'deictic' concept [2]: rapidly changing in meaning
as new technologies appear and new practices evolve. Today, it is argued, we read,
write, listen, and communicate differently than we did 500 years ago [3]. It is thus not
unreasonable, in our e-permeated society [4], to think of digital competence as a basic
need if we are to function in society [5], as an essential requirement for life [6], or
even as a survival skill [7]. The concept of digital competence is a multi-faceted
moving target. It is interpreted in various ways in policy documents, academic
literature, and teaching/learning and certification practices. Just within the European
Commission, initiatives and Communications refer to Digital Literacy, Digital
Competence, eLiteracy, e-Skills, eCompetence, use of IST underpinned by basic
skills in ICT, basic ICT skills, ICT user skills [8]. Academic papers add to this
already long list of terms with 'technology literacy' [9], 'new literacies' [3], or
'multimodality' [10]. They also underline how digital literacy is intertwined with
media and information literacy [11-14] and is at the core of 21st century skills [15].
This paper explores how the concept of digital competence is approached in fifteen
selected frameworks. The aim of this collection is to identify and analyse examples
where digital competence is fostered, developed, taught, learnt, assessed or certified
to understand which competences are taken into account. The paper is structured as

* The views expressed in this article are purely those of the authors and may not in any
circumstances be regarded as stating an official position of the European Commission.

A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 79–92, 2012.
© Springer-Verlag Berlin Heidelberg 2012
80 A. Ferrari, Y. Punie, and C. Redecker
follows. After this first introductory chapter, Chapter 2 reports on the main current
academic discourses around digital competence. Chapter 3 summarises the
methodology for the collection of the cases and lists the frameworks that have been
considered. Chapter 4 compares how the different cases define digital competence;
and Chapter 5 maps competence components. Chapter 6 offers some conclusions.

2 Digital Competence Rhetorics

According to the National Council for Curriculum and Assessment (NCCA) [16],
there are three frequently cited arguments for promoting ICT in education. The first
relates to the potential benefits of ICT for teaching and learning, including gains in
students' achievement and motivation. The second acknowledges the pervasiveness of
technologies in our everyday lives. As a consequence, the third argument warns
against low levels of Digital Competence that need to be tackled to allow all citizens
to be functional in our knowledge society [7]. These arguments fuel a series of digital
rhetorics [term elaborated from 17], i.e. received discourses built on an elaborated and
distinctive theoretical or ideological stance. Among the most notable digital rhetorics,
the following inter-twined discourses can be pulled out: the 'digital divide' rhetoric,
the 'digital native' rhetoric, the 'digital competence for economic recovery' rhetoric.
The term ‘digital divide’ came into use in the 90s and alludes to the differences in
access to ICT and the Internet [18]. As argued by Molnar [19], new types of digital
divide have emerged that go beyond access. In this line, Livingstone & Helsper built a
taxonomy of uses defining gradations of digital inclusion as a ladder of participation
[20]. Instead of delimiting a new binary divide – as was the case in the "Falling
through the Net" report [21], which splits haves and have-nots – Livingstone &
Helsper propose a continuum of use, which spreads from non-use of the internet to
low and more frequent use. A third perspective of the digital divide comes from
Erstad, who argues that digital inclusion depends more on knowledge and skills than
on access and use [22]. The second digital rhetoric strand builds on the notion of
'digital natives' introduced by Prensky [23] to bring forward the idea that today's
generation of young people have grown up surrounded by technologies rather than
books and should be taught through technological means rather than traditional ones.
The notion has not gone without criticism: from the fact that these assertions rest
on anecdotal evidence at best [24], to the fact that the metaphor has
been understood as a claim for the higher digital competence of younger people, who
in fact display a high variety of skills and knowledge regardless of the time spent
online [25]. The third rhetoric discourse highlighted here argues that to fully
participate in life people must be digitally competent [26], and that there is a need to
invest in digital skills enhancement for economic growth and competitiveness [27,
28]. Computer-related proficiency is claimed to be the key to employability and
improved life chances [26]. In the last decade, competences related to technologies
have started to be understood as "life skills", comparable to literacy and numeracy,
therefore becoming "both a requirement and a right" [29].
Understanding Digital Competence in the 21st Century 81

2.1 Digital Competence at the Convergence of Multiple Literacies


ICT usage is becoming more extensive across society: more people are using
technologies for more time and for different purposes. The extensiveness of use is
moreover derived from the digitalisation of society in general, as many of the
activities we undertake have a digital component. As society is becoming digitalised,
the competences needed are becoming manifold. For this reason, Digital Competence
is currently being defined as closely related to several types of literacy [7, 11, 26],
namely: ICT literacy, Internet Literacy, Media Literacy, and Information Literacy.
Analysing the repertoire of competences related to the digital domain requires an
understanding of these underlying aspects, which will be briefly explained here.
ICT literacy is generally understood as computer literacy and refers to the ability to
effectively use computers (hardware and software) and related technologies.
Simonson, Maurer, Montag-Torardi & Whitaker [30] define computer literacy as “an
understanding of computer characteristics, capabilities and applications, as well as an
ability to implement this knowledge in the skilful and productive use of computer
applications”. The different definitions of ICT literacy developed in the 80s are all
along the same lines and have survived unaltered for over twenty years [31].
Internet literacy refers to the proficient use of the Internet. Van Deursen [32] points
out that, even though the expression ‘Internet literacy’ refers to a specific tool or
medium, it presupposes a basic understanding of computer functioning, and the
ability to understand information and media and to communicate through the Internet.
For Hofstetter & Sine [33], Internet literacy relates to connectivity, security,
communication and web page development. It should be noted that Internet literacy is
quickly evolving: web page development is nowadays less central than the proficient
use of Web 2.0 tools.
Media literacy is the ability to analyse media messages and the media environment
[34]. It involves the consumption and creation of media products for television, radio,
newspapers, films and more recently the Internet. Media education is typically
concerned with a critical evaluation of what we read, hear and see through the media,
with the analyses of audiences and the understanding of the construction of media
messages [13]. It involves communication competences and critical thinking. For
Ofcom (the UK communication regulator), media literacy is "the ability to access,
understand and create communications in a variety of contexts" [35].
Though information literacy has many similarities with media literacy, and is now
extremely relevant for Internet use, it is built on the tradition of librarians and started
as the ability to retrieve information and understand it. The American Library
Association [36] defines it as ‘the ability to recognise when information is needed and
the ability to locate, evaluate, and use the needed information effectively’.

2.2 Digital Competence as a New Literacy


The above definitions of the different literacies and the digital ‘rhetorics’ outlined at
the beginning of this chapter highlight how discourses around digital competence
range from the “tautological to the idealistic”, as Livingstone put it [14], from
defining it as the ability to use a specific set of tools (e.g. internet literacy as the
ability to use the internet) to the understanding of digital competence as an
82 A. Ferrari, Y. Punie, and C. Redecker

unavoidable requirement [29] for life-fulfilment. Though the above literacies have
converged into digital competence, it is more than the sum of its parts: it is not
enough to state that digital competence involves what is required for internet literacy,
ICT literacy, information literacy and media literacy, as there are other components
that come into the picture of digital competence. Livingstone [14] states that digital
competence is not user-dependent but tool-dependent – or, it could be argued,
application-dependent. Reading a printed newspaper or an online one is not the same
experience and requires different skills, such as, for instance, the ability to move
through hyperlinked texts. Online text perusal requires a more dynamic approach [37]
and offers an augmented reading experience. Moreover, computers or smart-phones
are generally used through icon-based commands, hence higher cognitive mediation is
required [7], as symbolic utterances refer to a system of signs which may not be
familiar to everyone, and are underpinned by the ability to read images as texts.
Moreover, as Kress [10] argues, changes in the forms and functions of the text – here
including visual and audio texts – make the reader a designer of the reading
experience. Hyper and multimodal texts allow readers' engagement, as they choose
which threads or links to follow, which modes of reading to select. In addition, the
decoding and encoding processes take place at a faster speed, and texts – blogs,
newspaper articles, Wikipedia entries – encourage the reader to become an author.
Besides, writing is becoming part of the everyday life of ordinary people [38], as
many of us write emails, send SMS, and participate in social networks. In a way,
these practices – including the 'hyper-intensity' of text or Facebook messaging – can
be seen as a triumph of the domestication of technologies and their appropriation by
the user [39], who plays an active role, shifting from recipient to producer of
information and/or media content. Users are also becoming engaged in activities in
which they did not necessarily participate in the offline world (for example, the
sharing of news or music through social networks, thus acting as multipliers of information).

3 The Collection of Digital Competence Frameworks


Due to the different terms and understandings of Digital Competence, a literature
search was performed for each literacy type outlined above. The search engine
Google and the portal Google Scholar were chosen, and search terms included the
different literacy types linked to Digital Competence and combinations with the word
'frameworks'. In addition, searches were carried out in important educational and
academic databases (ERIC; Scopus). These were complemented with: browsing
through curricula in European countries; a review of reports of international
organisations working on ICT and learning (e.g. OECD; UNESCO); a review of EU
reports, initiatives or funding schemes; and suggestions from colleagues or
collaborators. The searches came up with a body of over a hundred cases, from which
all the cases that did not constitute a framework were excluded. Here, a framework is
understood to be an instrument for the development or assessment of the Digital
Competence of a specific target group, according to a set of descriptors of intertwined
competences, thus adapting CEDEFOP’s definition of framework to our scope [40].
Criteria were then established to limit the number of frameworks to be analysed,
namely: fair distribution of target groups; fair geographical distribution;
representation of a plurality of perspectives on digital competence; representation of a
plurality of initiative types (from school curricula, to academic papers, to certification
schemes). Fifteen frameworks were finally selected for full reporting and analyses
(see Table 1 for an overview). Of course, it is acknowledged that these cases represent
a partial and qualitative snapshot of how Digital Competence can be translated into
learning outcomes.

Table 1. Overview of Frameworks

ACTIC (target group: all citizens above 16)
ACTIC (Acreditación de Competencias en Tecnologías de la Información y la
Comunicación) certifies ICT competences.
http://www20.gencat.cat/portal/site/actic

BECTA's review of Digital Literacy (target group: children up to 16)
This review provides a model for learners at primary and secondary schools [41].
http://www.timmuslimited.co.uk/archives/117

CML MediaLit Kit (target group: adults)
The CML (Centre for Media Literacy) establishes a framework to construct and
deconstruct media messages [42].
http://www.medialit.org/cml-framework

DCA (target group: 15-16 year olds)
DCA (Digital Competence Assessment) is a framework linked to a series of tests for
secondary school students [43].
http://www.digitalcompetence.org/

DigEuLit (target group: general population)
A 2005-2006 project led by the University of Glasgow and funded by the European
Commission to develop a conceptual framework for Digital Competence [44].

ECDL (target group: adults)
The ECDL (European Computer Driving Licence) Foundation delivers worldwide a range
of certifications on computer literacy.
http://www.ecdl.org/programmes/index.jsp

eLSe-Academy (target group: senior citizens)
The eLSe-Academy (eLearning for Seniors Academy) is an online environment adapted
to the digital competence needs of senior citizens.
http://www.arzinai.lt/else/

eSafety Kit (target group: children aged 4-12)
This initiative aims to support children, their parents/tutors and teachers in safe
internet use. www.esafetykit.net

Eshet-Alkalai's framework (target group: general population)
This conceptual framework details the multiple literacies that are needed for people
to be functional in a digital era [7, 45].

IC3 (target group: students and job-seekers)
The Internet and Computing Core Certification by Certiport enhances the knowledge
of computers and the Internet. www.certiport.com/Portal/

iSkills (target group: adults)
This test from ETS assesses critical thinking and problem-solving skills in a
digital environment [46]. http://www.ets.org/iskills/

NCCA ICT framework – Ireland (target group: students)
This framework is a guide to embed ICT as a cross-curricular component in primary
and lower secondary education [16].
http://www.ncca.ie/en/Curriculum_and_Assessment/ICT/#1

Pedagogic ICT licence – Denmark (target group: teachers)
The Pedagogical ICT Licence offers Danish teachers the opportunity to upgrade their
ICT skills. www.paedagogisk-it-koerekort.dk

The Scottish ILP (target group: students)
The Scottish Information Literacy Project promotes the understanding and development
of information literacy in all education sectors [47]. http://caledonianblogs.net/nilfs/

UNESCO ICT CFT (target group: teachers)
The ICT Competency Framework for Teachers provides guidelines for courses for
teachers to integrate ICT in class [48].

The analysis of the content of the selected frameworks aims to answer the following
questions:
- How is Digital Competence defined or understood in the selected frameworks?
- What are the main competences that are developed in the selected frameworks?

4 Digital Competence: An Encompassing Definition


In the Communication on Key Competences for Lifelong Learning, the European
Commission proposes the following definition of digital competence: "Digital
competence involves the confident and critical use of Information Society
Technology (IST) for work, leisure and communication. It is underpinned by basic
skills in ICT: the use of computers to retrieve, assess, store, produce, present and
exchange information, and to communicate and participate in collaborative networks
via the Internet" [49]. As the concept of Digital Competence is much debated and
multifaceted, as shown above in the discussion of the literature, it comes as no
surprise that only two thirds of the selected frameworks provide a definition of digital
competence. The ten definitions presented in the frameworks have been compared
and their main elements have been merged to produce the following encompassing
definition of digital competence:
Digital Competence is the set of knowledge, skills, attitudes, abilities, strategies and
awareness that is required when using ICT and digital media to perform tasks; solve
problems; communicate; manage information; behave in an ethical and responsible
way; collaborate; create and share content and knowledge for work, leisure,
participation, learning, socialising, empowerment and consumerism.
This working definition has been produced by taking into account all the perspectives
of each framework. It can be noted that this definition bears similarities with the
European Commission’s definition. Moreover, the structure of all definitions provided
in the frameworks was found to be quite similar, i.e. assembled on the same building
blocks, namely: learning domains, tools, competence areas and purposes. Thus,
several cases define the learning domains [50] that are developed in their framework:
some frameworks add awareness and strategies to the more expected knowledge,
skills, and attitudes, which are the constituent parts of a competence [51]. Half the
frameworks that provide a definition insist on skills, while a third mentions
awareness. The tools generally include ICTs; only two frameworks explicitly mention
media. Regarding the competence areas that are foreseen in the definitions, certainly
"use" or "performing tasks" recur most, followed by communication and information
management. Finally, the purposes that emerge from this definition are in line with
commonly-agreed ones, see for instance the work on monitoring Digital Competence
carried out in the frame of the Digital Agenda Scoreboard.1 It should be stated,
however, that purposes should not be taken as a proxy for competences or
competence areas, but should be considered as the context in which the competence
may be applied. Although the different frameworks proposed a quite varied list of

1 http://ec.europa.eu/information_society/digital-agenda/scoreboard/docs/pillar/digitalliteracy.pdf

purposes, we felt that there were two missing elements to the picture: "consuming"
and "user empowerment". Online shopping is spreading, with 40% of EU citizens
buying goods online.2 However, it is of paramount importance that consumers are
aware of the risks connected with online purchases, for instance those resulting from
inadequate security settings. To transact safely [52], there are certain competence
requirements, which are recognised as a priority in the Digital Agenda [53, Action
61]. In addition, it has been noted that social computing practices allow for user
empowerment [54]. As a consequence, we added these two purposes to the working
definition as they were not present in the definitions of the frameworks.

5 Areas of Digital Competence


The NCCA [16] report claims that most approaches to Digital Competence see skills
as tool-dependent: they focus on the practical abilities to use specific software or
hardware. This reinforces common visions of digital literacy or media literacy [14].
Although tool-dependent approaches become outdated in no time, they have the
advantage of describing skills that are specific and easily measurable [16]. Indeed, the
collection provided here presents some frameworks which are oriented towards developing
skills rather than competences and which are structured around the most-used software
or tools. For instance, the European Computer Driving Licence (ECDL) core
programme consists of 13 modules which mainly aim to enable users to use a
specific application, though they are vendor neutral, i.e. not tied to any one brand of
software. These modules develop people’s skills in using databases, spreadsheets,
word processing tools, image editing and presentation software, to give but a few
examples. The certification for the "word processing" module includes tasks like
creating a new document, formatting text, creating tables, running the spell-check and
printing a document. In the same vein, and although it measures content topics
together with technology topics, the iSkills test assesses people’s ability to use the
web (email, instant messaging, bulletin board postings, browsers, search engines);
databases (data searches, file management); and software (word processing,
spreadsheet, presentations, graphics). The test is built around the assessment of seven
types of task, namely: define, access, evaluate, manage, integrate, create, and
communicate. An example of a "create" task, as available from the ETS website, is to
create a graph from a series of given data, and then answer questions related to the
interpretation of the graph. Even though this includes a cognitive component – the
interpretation of a graph – the main task is built around a common application, i.e. the
spreadsheet package. IC3 by Certiport provides another example of a tool-related
framework. The exams for this certification are explicitly based on Microsoft
Windows 7 and Office 2010. The framework is built around three modules, namely:
Computing Fundamentals, Key Applications and Living Online. The first module is
based on hardware, software and operating systems, thus reflecting a computer
engineering approach. The second module has topics on word processing,
spreadsheets and presentation software, plus a section covering features common to

2 http://ec.europa.eu/information_society/digital-agenda/scoreboard/docs/scoreboard.pdf

all applications. The third module is described as addressing "skills for working in an
Internet or networked environment"3 and is based on the use of distinctly recognisable
tools: online networks, emailing systems, Internet browsers. The section on "the
impact of computing and the Internet on Society" is the only one which goes beyond a
tool-related certification process, and mainly relates to risks connected to the use of
hardware, software and the internet.
It comes as no surprise that the above examples are taken from certification
frameworks, which have to satisfy the need for measurability and assessment. This
aspect could also be reinforced by the requirements of employers, who could demand
abilities in specific hardware/software packages. Although the need for specific skills
for employability could be a possible driver for application-oriented programmes,
tool-related operational skills are also central in eInclusion initiatives. An example is
the eLSe Academy, an eLearning environment aimed at senior citizens interested in
acquiring or further developing their competences in ICT. This course, too, is
structured around application-based modules: using the learning platform;
writing with a computer (word-processors, including word pads); communicating via
a computer (emails); and so on. Like the IC3 certification, this case is based on the
use of Microsoft Office packages and Windows. The UNESCO framework for
teachers, even though it is embedded in a more complex structure, includes parts
which are tool-oriented. The framework is not about Digital Competence per se, but
rather suggests entrenching ICT in every aspect of educational institutions from
policy to pedagogy to administration, thus proposing an innovative approach to using
technologies in education. However, when detailing the digital competence level
expected of teachers, the implementation guidelines suggest a typical application-
oriented approach [48]. Many frameworks build on a consolidated though relatively
recent tradition. As pointed out by Erstad [55], Digital Competence moved through
three main phases. After a first 'mastery phase' (1960s to the mid 80s) where
technologies were accessed by professionals who knew programming languages,
interfaces became more user-friendly from the mid 80s to the late 90s and were thus
opened up to society. This second 'application phase' gave rise to mass certification
schemes. As technologies became simpler, they also became more necessary, hence
increasing the population's need for specific skills in order to "tame" these new
tools – and therefore triggering courses targeted at these specific needs. Many
eInclusion/eLearning initiatives and digital literacy discourses are built upon this
stance, highlighting access and accessibility and tool-related operational skills as their
core. From the late 90s, we entered a third phase – the reflective phase – in which the
need for critical and reflective skills in the use of technology was widely recognised
[55]. Yet in 2004, the NCCA reported that most definitions and approaches to Digital
Competence did not take into account higher order thinking skills [16]. Our
framework collection cannot confirm this statement, as several of the cases we have
gathered here do in fact recognise the importance of reflective and critical uses.
However, the modes in which this is translated into learning objectives or
competences vary.

3 http://www.certiport.com/portal/common/documentlibrary/IC3_Program_Overview.pdf
The iSkills framework, although it has a central operational component, is an
example of an approach which acknowledges thinking skills for Digital Competence
and at the same time is still based on applications: "ICT literacy cannot be defined
primarily as the mastery of technical skills. The panel concludes that the concept of
ICT literacy should be broadened to include both critical cognitive skills as well as
the application of technical skills and knowledge" [46]. An example might illustrate
how the above-mentioned philosophy is translated into assessment of competences.
As explained above, the framework is built around seven competence areas. One of
these, "Access", implies the collection and/or retrieval of information in digital
environments, and therefore typically involves cognitive and critical demands.
The two sample tests provided on the website4 are based on searches within a
database, on accurate search terms and correct search strategies (for instance, using
Boolean operators or quotation marks). The cognitive dimension is certainly taken
into account, although we are left with the impression that this cognitive and critical
component is not far from an application-oriented skill. In other words, critical and
thinking skills seem to be seen as a means to a specific end, the end being a more
efficient use of computers. A similar competence, i.e. "access to information", can be
found in The Scottish Information Literacy Project, a complex framework where
competences are articulated around levels/target groups. For further and higher
education, the equivalents of the iSkills "access" competence are the following two
competences: "the ability to construct strategies for locating information" and "the
ability to locate and access information". These competences include: the articulation
of information needs, the development of a systematic method to answer
information needs, the development of appropriate searching techniques (e.g. use of
Boolean searches), the use of appropriate indexing and abstracting services, citation
indexes and databases, and the use of current awareness methods to keep up to date.
Similarities between the two approaches can be found, for instance, in the
development of search techniques to select the appropriate information retrieval
services (selecting, for instance, the appropriate database). However, the Scottish
Information Literacy Project, probably as a consequence of its focus on information
literacy rather than digital competence, involves higher order thinking skills and
cognitive approaches at a more advanced level.
The cognitive dimension is often associated with access to information. Another
case, the DCA, develops a competence which links access to information with
cognitive skills. The DCA is a test which was originally developed for high school
students aged 15-16 and which is currently under development for younger learners.
The cognitive dimension translates into the following learning objectives: being able
to read, select, interpret and evaluate data and information taking into account their
pertinence and reliability. Frameworks for compulsory schooling seem to show a
tendency to emphasise the cognitive dimension of digital competence. Newmann, in charge
of a review of digital literacy for children aged 0 to 16 for BECTA, in an attempt to
simplify the complex terminology this domain generates, proposes looking at digital
competence as applying critical thinking skills to technology use [41]. According to

4 See http://www.ets.org/s/iskills/flash/FindingItem.html and http://www.ets.org/s/iskills/flash/ComplexSearch.html

this reading, digital competence would require both technical skills and critical
thinking skills, which are seen as an attribute of information literacy. In the review,
Newmann clarifies that the focus is more on thinking skills than on technical ones. In
the NCCA framework, "thinking critically and creatively" is one of the four foreseen
areas of learning.5 Access to, and evaluation of, information are two important
learning outcomes. The novelty of this curriculum consists of its other two learning
outcomes: "express creativity and construct new knowledge and artefacts using ICT"
and "explore and develop problem-solving strategies using ICT". The NCCA website
proposes sample learning activities that could be used by teachers in different subjects
to develop these competences, such as organising a digital storytelling project or
recording a field trip using a digital camera.
A recurring competence area is what could be called "Ethics and responsibility"
and includes a safe, legal and ethical use of the Internet in particular and technologies
in general. The IC3 framework displays three application-oriented modules, the third one
being called "Living online". After three sections related to applications (Internet,
emails and communication networks), a fourth section is about "The Impact of
Computing and the Internet on Society" and aims to identify how computers are used
in different areas of work, school, and home; the risks of using computer hardware
and software; and how to use the Internet safely, legally, and responsibly. While in
the IC3 framework this area constitutes only a small part of the syllabus, in the
eSafety Kit this issue holds centre stage. Three of the four envisaged competences are
based around ethics and responsibility, as in fact this framework, developed for
children between the ages of 4 and 12, has the safe use of the internet as its primary
focus. Attention to the emotional aspect of dealing with cyber-bullying is a novelty of
this framework. Ethics and responsibility are also accounted for in the NCCA
framework. As part of the fourth competence area ("Understanding the social and
personal impact of ICT"), students should demonstrate an awareness of, and comply
with, responsible and ethical use of ICT.
Several frameworks include "communication" as a competence area. However, it
should be remarked that different frameworks do not necessarily agree on the ways
they translate this competence into learning outcomes. As a matter of fact, a marked
difference can be seen between application-oriented frameworks and more cognitive
approaches, as shown in Figure 1.

On the left hand side, the competence "Communicate" covers: online and off-line
identities; behaviour in chats and instant messaging; online privacy; safe online
profiles; sharing content; online and off-line networking. On the right hand side,
the competence reads: "Disseminate information tailored to a particular audience in
an effective digital format by: 1) Formatting a document to make it useful to a
particular group; 2) Transforming an email into a succinct presentation to meet an
audience's needs; 3) Selecting and organizing slides for presentations to different
audiences; 4) Designing a flyer to advertise to a distinct group of users."

Fig. 1. Two different ways to translate the competence "Communicate"

5 Together with "Creating, communicating and collaborating"; "Developing foundational knowledge, skills and concepts"; and "Understanding the social and personal impact of ICT".

The left hand side of Figure 1 deals with online and off-line identities, privacy, and
behaviour. In this framework, the needs for communication in an online environment
are interpreted as cognitive needs. At the same time, there is a focus on privacy and
security. In addition, there is an interest in comparing the online and off-line worlds,
as communicating is a competence that one develops in real as well as virtual
contexts. The framework depicted on the right hand side, on the other hand, perceives
"communication" as the targeting of information to different audiences through
specific software. Therefore, being able to communicate in a digital environment is
seen as the ability to format a document, to transform an email into a PowerPoint-like
presentation, to organise slides and to design a flyer. It goes without saying that being
able to communicate cannot be reduced to the formatting of a text.

6 Conclusions
Several of the frameworks selected for this analysis suggest that technical skills
constitute a central component of Digital Competence. In our opinion, having
technical skills at the core of a digital competence model obscures the multiple facets
of the domain. Digital Competence should be understood, as it is in many
frameworks, in its wider sense. The analysis of the 15 selected frameworks underlines
several aspects – or areas – of Digital Competence, which can be summarized as
follows:

Table 2. Areas of Digital Competence

Information Management: identify, locate, access, retrieve, store and organize
information.
Collaboration: link with others, participate in online networks and communities,
interact constructively.
Communication and Sharing: communicate through online tools, taking into account
privacy, safety, and correct online behaviour.
Creation of Content and Knowledge: integrate and re-elaborate previous content and
knowledge, construct new knowledge.
Ethics and Responsibility: behave in an ethical and responsible way, aware of legal
frames.
Evaluation and Problem-solving: identify digital needs, solve problems through
digital means, assess the information retrieved.
Technical Operations: use technology and media to perform tasks through digital
tools.

Each area presented in the table above has been taken from more than one
framework. We wish to suggest that technical operations should be treated like
any other component of the framework, and not be given the paramount importance
they currently receive. The analysis of the frameworks suggests yet another rhetoric strand:
digital competence as mainly based on technical operations. However, many
frameworks and initiatives are starting to move away from this perspective and
propose a model for the development of digital competence that takes into account
higher order thinking skills and that fits in a 21st century skills perspective.
90 A. Ferrari, Y. Punie, and C. Redecker

How CSCL Moderates the Influence of Self-efficacy
on Students’ Transfer of Learning

Andreas Gegenfurtner1, Koen Veermans2, and Marja Vauras2


1 TUM School of Education, Technical University of Munich, Munich, Germany
andreas.gegenfurtner@tum.de
2 Centre for Learning Research, University of Turku, Turku, Finland
{koen.veermans,marja.vauras}@utu.fi

Abstract. There is an implicit assumption in learning research that students
learn more deeply in complex social and technological environments. Deep
learning, in turn, is associated with higher degrees of students’ self-efficacy and
transfer of learning. The present meta-analysis tested this assumption. Based on
social cognitive theory, results suggested positive population correlation esti-
mates between post-training self-efficacy and transfer. Results also showed that
effect sizes were higher in trainings with rather than without computer support,
and higher in trainings without rather than with collaboration. These findings
are discussed in terms of their implications for theories of complex social and
computer-mediated learning environments and their practical significance for
scaffolding technology-enhanced learning and interaction.

Keywords: Computer-supported collaborative learning, self-efficacy, transfer
of learning, training, meta-analytic moderator estimation.

1 Introduction

Self-efficacy refers to beliefs in one’s capabilities to organize and execute the courses
of action required to produce given attainments [1]. Transfer of training is the use of
newly acquired knowledge and skills [2,3]. Research indicates that both self-efficacy
and transfer reflect students’ deep learning [4,5]. There is an implicit assumption in
the learning sciences that deep learning is more likely to occur in complex social and
technological environments [4]. If it is true that deep learning is associated with higher
degrees of self-efficacy [1] and transfer [2,3], then it follows that estimates of the
relationship between self-efficacy and transfer of training should be higher in those
conditions that afford computer supported collaborative learning (CSCL), because of
the positive effects of technology enhancement and social interaction. However, to
date, no study has examined the predictive validity of this assumption. As a remedy to
this gap, the present meta-analysis sets out to investigate whether higher population
correlation estimates between self-efficacy and transfer are found in training
conditions that afford computer support and collaboration when compared with other
training conditions.

A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 93–102, 2012.
© Springer-Verlag Berlin Heidelberg 2012

1.1 Self-efficacy and Transfer of Training


Efficacy beliefs are among the most widely documented predictors of achievement,
which has been shown in domains including sports, work, and education [2,6-7].
According to social cognitive theory [1], people with high self-efficacy set high and
demanding goals; these goals create negative performance discrepancies to be mas-
tered [1]. Expectations about the perceived efficacy of one’s capability to master
those discrepancies regulate whether effort is initiated, how much continuous effort is
expended, and whether effort is maintained or even increased in face of difficulties
during goal attainment. Because the power of self-efficacy to predict task achieve-
ment has been so widely documented [1,2,6-10], it seems reasonable to assume that
self-efficacy also predicts the initiation, expenditure, and maintenance of efforts to-
ward transfer of training.
If it is true that efficacy beliefs predict sufficient execution of effort to achieve
successful outcomes, it follows that efficacy beliefs should also predict successful
transfer of training. However, the literature shows mixed evidence. For example,
some investigations showed high correlation estimates between self-efficacy and
training transfer [11], while other investigations suggested that the magnitude of this
relationship is negligible [12]. One possible explanation for the mixed evidence is the
influence of sampling error and error of measurement [13] that may have induced
biases on the true score population correlation. Therefore, one aim of the present
study was to use meta-analytic methods to inquire whether performance self-efficacy,
after controlling for sampling error and error of measurement, exhibits a stable influ-
ence on transfer and whether this relationship would be higher after training than
before training. Another possible explanation for the mixed evidence is that popula-
tion correlation estimates have been moderated by different study conditions. Identifi-
cation of boundary conditions has important implications for testing the predictive
validity of social cognitive theory [1,2,8,14]. Therefore, a second aim of the study
was to identify and estimate the boundary conditions under which self-efficacy
and transfer correlate. Inquiring into these characteristics as boundary conditions
is significant, because it enables accounting for artifactual variance in the total
variance of a correlation, which, in turn, may explain some of the disagreements in
the existing literature. Two boundary conditions were analyzed: computer support and
collaboration.

1.2 Computer-Supported Collaborative Learning


The rationale for choosing CSCL as boundary conditions was derived from a belief in
the learning sciences that “deep learning is more likely in complex social and tech-
nological environments” [4]. Deep learning, in turn, is related to higher degrees of
transfer [2,3] and self-efficacy [1]. If these assumptions hold, it follows that
population correlation estimates of the relationship between self-efficacy and training
transfer should be higher in those conditions that afford CSCL. Conditions for CSCL
can be examined as (a) computer support and (b) collaboration. For the purpose
of this article, computer support was defined as technological material in learning
environments intended to promote understanding [15]; collaboration was broadly
How CSCL Moderates the Influence of Self-efficacy on Students’ Transfer of Learning 95

defined as the working-together of two or more individuals to attain the shared train-
ing goals and task at hand [16]. We acknowledge substantial variation in how the term
‘collaboration’ is defined in the literature [17-18]. We use the term ‘collaboration’
here for the sake of simplicity and do acknowledge gradual nuances in how key con-
ditions of the nature of joint working (e.g., shared goals, co-construction of know-
ledge, co-regulation, etc.) are reflected in prior literature to capture different sociop-
sychological processes of interpersonal coordination and their relation to actualizing
motivation [18-20]. An example of training having both computer support and colla-
boration is [21], who trained participants with a collaborative computer game. An
example of a training having computer support but no collaboration is [22], in which
participants were trained to use computer software individually and without social
interaction. An example of a training including no computer support but collaboration
is [23]’s description of nursing team training, which included group discussions,
brainstorming, and peer assessment. Finally, an example of a training program includ-
ing neither computer support nor collaboration was [24], in which participants were
trained in a speed-reading skill individually with paper handouts. If it is true that
complex social and technological environments promote self-efficacy and transfer [1-
4], then it follows that population correlation estimates should be higher in conditions
with computer support rather than in conditions without and in conditions with colla-
boration rather than without. Importantly, population correlation estimates should be
highest in conditions affording both computer support and collaboration. Figure 1
illustrates these conditions. The top-left quadrant represents training conditions that
include neither computer support nor collaboration; it is thus assumed to have no
particular positive effects. The bottom-left quadrant represents training conditions that
include computer support but no collaboration. The top-right quadrant represents
training conditions that include collaboration, but no computer support. Finally, the
bottom-right quadrant represents training conditions that include both computer sup-
port and collaboration.

                              Does the training design include
                              collaboration among trainees?

                                       No        Yes

Does the training design      No        o         +
include computer support?     Yes       +         ++
Fig. 1. Hypothesized effects of conditions on the relation between self-efficacy and transfer

1.3 The Present Study—Hypotheses


In summary, the focus of the present study was the relationship between performance
self-efficacy and transfer of training. The first aim was to cumulate previous research
in order to correct the size of true score population correlations. A second aim of the
study was to estimate the moderating effects of computer support and collaboration.
Two hypotheses were formulated. Based on social cognitive theory [1], we assumed
that transfer of training would be positively related with performance self-efficacy
(Hypothesis 1). Based on the assumption that deep learning is more likely to occur in
complex social and technological environments [4], we hypothesized that the relation-
ship between self-efficacy and transfer would be more positive in training conditions
affording computer support and collaboration (Hypothesis 2).

2 Method

2.1 Literature Searches and Criteria for Inclusion

To test these hypotheses, we used meta-analytic methods [14]. Studies that reported
correlations between post-training self-efficacy and transfer of training were located.
To be included in the database, a study had to report an effect size r or other effect
sizes that could be converted to r (β coefficient; Cohen’s d; F, t, or Z statistics).
Because the focus of inquiry was on self-efficacy as an individual capacity [1], the
database included studies that reported data on individuals. Studies reporting data on
group efficacy were omitted. Studies on children as well as animal studies were also
excluded, because they represent different premises on training and work perfor-
mance. Using these inclusion criteria, the literature was searched in three ways. First,
the PsycINFO, ERIC, and Web of Science databases were searched using the key-
words self-efficacy, behavior change, training application, training use, and transfer
of training. In addition, a manual search of journal issues covering a 25-year period
(from January 1986 through December 2010) was conducted. A total of 29 articles,
book chapters, conference papers, and dissertations that contributed at least one effect
size to the meta-analysis were included in the database. A full list of all included stu-
dies is available from the first author. The 29 studies offered a total of k = 33 inde-
pendent data sources. Total sample size was N = 4,203 participants.
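The conversion of d, t, or other statistics to r mentioned above follows standard formulas, which the paper does not reproduce. A minimal sketch (assuming equal group sizes for Cohen's d, and a t statistic with df degrees of freedom; the numeric inputs are illustrative only):

```python
import math

def d_to_r(d: float) -> float:
    """Convert Cohen's d to r, assuming equal group sizes."""
    return d / math.sqrt(d ** 2 + 4)

def t_to_r(t: float, df: int) -> float:
    """Convert a t statistic with df degrees of freedom to r."""
    return math.sqrt(t ** 2 / (t ** 2 + df))

print(round(d_to_r(0.5), 2))     # a medium d of 0.5 corresponds to r = 0.24
print(round(t_to_r(2.0, 38), 2))
```

Conversions for F(1, df) follow the same pattern as t, since t² = F in that case.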

2.2 Recorded Variables

To answer the study hypotheses, different characteristics were tabulated from the
selected research literature. Specifically, each study was coded for effect size esti-
mates, computer support, and collaboration. Effect size estimates included Pearson
product-moment correlation r of the self-efficacy–transfer relationship, Cronbach’s
reliability estimate α of the independent variables (self-efficacy), and Cronbach’s
reliability estimate α of the dependent variable (transfer). We also coded the first
author, publication year, the number of participants, their age (in years), and gender

(percentage of females). Computer support for learning afforded during training was
coded as 1 = computer support and 0 = no computer support. Collaboration among
participants afforded during training was coded as 1 = collaboration and 0 = no colla-
boration. Two independent raters first coded fifteen of the studies. Because intercoder
reliability was generally high (Cohen’s κ = .91), one rater continued to code the
remaining studies. If a study reported more than one effect size, a single composite
variable was created to comply with the assumption of independence. As an exception
to this rule, linear composites were not created for the theoretically predicted modera-
tor variables, as composite correlations would have obscured moderator effects and
prohibited further analysis.
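The compositing rule described above can be sketched as follows. The simple mean used here is only an approximation: an exact composite-score correlation would also require the intercorrelations among the measures, which primary studies rarely report, and the values below are invented for illustration.

```python
def composite_r(rs):
    """Naive within-study composite: the mean of the reported correlations.
    An exact composite correlation would additionally need the
    intercorrelations among the measures."""
    return sum(rs) / len(rs)

# One study reporting three conceptually equivalent effect sizes:
print(round(composite_r([0.30, 0.40, 0.35]), 2))  # 0.35
```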

2.3 Meta-analytic Methods Used


Analysis occurred in two stages. A primary meta-analysis aimed to estimate the true
score population correlation ρ of the pre- and post-training relationship between per-
formance self-efficacy and transfer of training. A meta-analytic moderator estimation
then aimed to identify moderating effects in those relationships.
The primary meta-analysis was done using the methods of artifact distribution me-
ta-analysis of correlations [14]. These methods provide an improvement from earlier
statistical formulae when information such as reliability estimates is only sporadically
reported in the original studies. First, study information was compiled on three distri-
butions: the distribution of the observed Pearson’s r of the transfer–self-efficacy
relationship, the distribution of Cronbach’s α of the independent variable, and the
distribution of Cronbach’s α of the dependent variable. Next, the distribution of Pear-
son’s r was corrected for sampling error. Note that the correction was conducted us-
ing a weighted average, not Fisher’s z transformation, since the latter was shown to
produce upwardly biased correlation estimates. The distribution corrected for sam-
pling error was then further corrected for error of measurement using the compiled
Cronbach’s α reliability estimates. This last step provided the final estimate of the
true score population correlations ρ between self-efficacy and transfer. Finally, stan-
dard deviations of the corrected observed correlation rc and of the population
correlation ρ were calculated; these were used to derive the percentage of variance
attributable to attenuating effects, the 95% confidence interval around rc, and the 80%
credibility interval around ρ.
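The steps above can be illustrated with a bare-bones sketch of an artifact-distribution correction. This is not the authors' actual computation; the study correlations, sample sizes, and reliability values below are invented, and the sampling-error term uses a common approximation.

```python
import math

def artifact_distribution_meta(studies, alphas_x, alphas_y):
    """Bare-bones sketch of an artifact-distribution meta-analysis of
    correlations. studies: list of (r, n) pairs; alphas_x / alphas_y:
    reliability (alpha) distributions of the independent / dependent
    variable."""
    n_total = sum(n for _, n in studies)
    # 1. Sample-size-weighted mean r (no Fisher z transformation).
    r_bar = sum(r * n for r, n in studies) / n_total
    # 2. Observed variance and (approximate) expected sampling-error variance.
    var_obs = sum(n * (r - r_bar) ** 2 for r, n in studies) / n_total
    var_err = (1 - r_bar ** 2) ** 2 * len(studies) / n_total
    # 3. Disattenuate using the mean reliabilities of both distributions.
    a = math.sqrt(sum(alphas_x) / len(alphas_x))
    b = math.sqrt(sum(alphas_y) / len(alphas_y))
    rho = r_bar / (a * b)
    sd_rho = math.sqrt(max(var_obs - var_err, 0.0)) / (a * b)
    # 4. 80% credibility interval around rho.
    cv = (rho - 1.28 * sd_rho, rho + 1.28 * sd_rho)
    return rho, sd_rho, cv

rho, sd_rho, cv = artifact_distribution_meta(
    [(0.30, 100), (0.40, 150), (0.34, 120)], [0.85, 0.90], [0.80])
print(round(rho, 2))  # the corrected estimate exceeds the weighted mean r
```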
The meta-analytic moderator estimation followed the primary meta-analysis.
Theory-driven nested sub-group analyses were used to estimate the moderating ef-
fects of computer support and collaboration. Nested sub-group analysis assumes that
the moderator variables are independent and additive in their effects [14]. A criticism
of the use of sub-groups is that it reduces the number of data sources per analysis,
resulting in second-order sampling error. Although the present study contained a large
number of data sources and participants, the possibility of second-order sampling
error cannot be completely ruled out. Where this caveat applies, it is therefore noted
when interpreting the results.
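Schematically, the nested sub-group step amounts to repeating the same estimation within each cell of the moderator design. A minimal sketch with hypothetical study records (the full analysis would additionally apply the artifact corrections within each cell):

```python
from collections import defaultdict

def subgroup_mean_r(records):
    """Group study records by their (computer_support, collaboration) coding
    and return an n-weighted mean r per cell. Records are
    (r, n, computer_support, collaboration) tuples."""
    cells = defaultdict(list)
    for r, n, cs, collab in records:
        cells[(cs, collab)].append((r, n))
    return {cell: sum(r * n for r, n in rs) / sum(n for _, n in rs)
            for cell, rs in cells.items()}

records = [  # illustrative values only, coded 1 = present, 0 = absent
    (0.31, 200, 1, 1), (0.62, 180, 1, 0),
    (0.30, 220, 0, 1), (0.25, 150, 0, 0),
]
print(subgroup_mean_r(records))
```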

3 Results

3.1 Primary Meta-analysis


Table 1 summarizes the number of studies, participants, and participant characteristics
by condition. The mean estimates in Table 1 are age in years and percentage of
females. Across all conditions, the uncorrected correlation coefficient r between per-
formance self-efficacy and transfer of training is 0.34 (k = 33, N = 4,158). The
population correlation estimate corrected for sampling error and error of measurement
is ρ = 0.39 (SDρ = 0.23; 80% CV = .10; .68). This estimate is in the positive direction,
thus supporting Hypothesis 1. The difference between r and ρ represents a depression
of the true score population correlation through sampling error and error of measure-
ment by 14.7%.
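As a quick arithmetic check, the 14.7% figure follows directly from the two reported coefficients:

```python
r_obs, rho = 0.34, 0.39           # values reported in Section 3.1
depression = (rho - r_obs) / r_obs  # proportional depression of the true score
print(f"{depression:.1%}")        # 14.7%
```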

Table 1. Number of studies, participants, and participant characteristics by condition1

                                                   Age            Gender
Conditions                               k   N     M      SD      M      SD
Computer support and collaboration       7   730   27.28   4.55   39.60  27.48
Computer support, but no collaboration   7   1,044 26.42   9.90   61.43  17.90
No computer support, but collaboration   17  2,172 31.51  10.11   50.88  15.38
No computer support, no collaboration    2   257   20.70   0.99   21.87  29.89

3.2 Meta-analytic Moderator Estimation

The specific hypothesis was that the relationship between self-efficacy and transfer is
moderated by computer support and collaboration. Four conditions were evaluated:
trainings with computer support and collaboration (condition 1), trainings with
computer support but no collaboration (condition 2), trainings with collaboration but
no computer support (condition 3), and trainings with neither computer support nor
collaboration (condition 4). There were no systematic age [χ2 (3,25) = 4.17, ns] or
gender [χ2 (3,21) = 2.77, ns] differences between conditions (computer support, no
computer support, collaboration, no collaboration). A nested sub-group analysis of
computer support and collaboration as confounding moderator variables signaled three
trends. First, computer support and collaboration were highly correlated, with
Spearman’s ρ = .44 (95% CI = .43; .46). Second, effect sizes were highest when the
training was computer-supported. Third, effect sizes were twice as high in
computer-supported trainings without collaboration (condition 2) compared to
computer-supported trainings with collaboration (condition 1). Table 2 summarizes
the results. However, unequal sample sizes and a small cell size for condition 4
warrant caution when interpreting these results.

1 k = number of studies, N = sample size, M = mean, SD = standard deviation.

Table 2. Nested moderator effects of computer support and collaboration


Conditions ρ SDρ 80% CV
Computer support and collaboration 0.31 0.03 0.27; 0.35
Computer support, but no collaboration 0.62 0.07 0.53; 0.71
No computer support, but collaboration 0.30 0.04 0.25; 0.35
No computer support, no collaboration 0.25 0.01 0.24; 0.26

4 Discussion

One aim of this meta-analysis was to cumulate the research of the past 25 years to
correct the relationship between performance self-efficacy and transfer of training for
sampling error and error of measurement. A second aim was to estimate the moderat-
ing effects of computer support and collaboration. The heterogeneity and disagree-
ment in the training literature ultimately led this study to seek a better understanding
of whether, to what extent, and under which conditions efficacy beliefs influenced
transfer.
The results of the primary meta-analyses suggested positive relationships between
post-training self-efficacy and training transfer. These estimates provide support for
Hypothesis 1 and are in line with previous literature reviews [2,7,8,24]. These find-
ings empirically support the theoretical assumption that efficacy beliefs influence
transfer [1-3], and are consistent with earlier conceptual frameworks in the training
literature, such as the integrative model of motivation to transfer training [25].
The results of the meta-analytic moderator estimation suggested systematic effects of
computer support and collaboration [2,3,24]. Specifically, computer-supported colla-
borative learning does not per se promote the relationship between self-efficacy and
transfer. The results showed that computer support played a more significant role than
collaboration among trainees. Trainings affording CSCL were not generally more
effective in promoting efficacy beliefs and transfer than trainings not affording CSCL
(see estimates in Table 2). One possible explanation for this unexpected finding may
be the form of collaboration [15-16] in the individual study reports. We had no infor-
mation on how social interaction emerged in the training situations, as the primary
studies reported correlation estimates only but did not engage in analyzing interaction
with methods currently available [26]. Nor did we have information on the degree to which the
collaborative learning situations were scaffolded or scripted in the original studies.
Without sufficient guidance and scaffolding of collaboration activities among training
participants, efforts toward collaboration may result in unequal or heterogeneous par-
ticipation [26], non-reciprocal interpretations of the learning situation [27], and/or
lack of co-regulation [18-20]. Future research may take this meta-analytic evidence to
test designs for scaffolding collaboration in technology-rich environments intended to
promote self-efficacy and transfer. In summary, analysis of the moderating effects of
computer support and collaboration illustrate boundary conditions for self-efficacy
and transfer in professional training.
Results of this study may have some practical value for the scaffolding of collabor-
ative learning. Specifically, the low confounded moderator effect of computer support

and collaboration tends to highlight the danger of ignoring adequate guidance and
scaffolding of participatory interactions among trainees in the learning environment.
This finding is reported in the literature elsewhere [18,27] and is now reiterated with a
special emphasis of how important scaffolded collaboration is for promoting self-
efficacy and transfer [2,5]. It can be speculated that the provision of networks can
scaffold trainees during training and that post-intervention enhancement of contacts
among trainees could facilitate transfer after training. Methodologically, it would be
interesting to follow these educational interventions with different methods, including
social network analysis, to trace how they influence processes of sharing and co-
construction during collaborative team learning and how they promote transfer to
typically practice-bound situations at work [7-8,24]. Application to different profes-
sional settings could further elucidate the generalizability of the findings in various
contexts, including, but not limited to, technology-enhanced medical visualizations
[28-32] and computer game-based learning [19].
This study has some limitations that should be noted. One limitation is that the
population correlation estimates were corrected only for sampling error and error of
measurement. This decision was based on the frequent reporting and availability of
sample size and reliability information. However, the original research reports may be
affected by additional biases, such as extraneous factors introduced by study proce-
dure [14]. Although the estimation of moderators sought to lessen this bias, the true
population estimates may be somewhat greater than those reported here. An addition-
al limitation is that some of the relationships in the nested subgroups were based on
small sample sizes. However, some authors have noted that correcting for bias at a
small scale mitigates sampling error compared to uncorrected estimates in individual
studies [14]. Still, although most of the cells contained sample sizes in the thousands,
some did contain fewer, which indicates underestimation of sampling error in those
few cases and that, therefore, computer-supported collaborative training may show
more positive correlation estimates. Finally, the study reports two moderator va-
riables. Although an analysis of the relationship between performance self-efficacy
and transfer of training under different boundary conditions clearly goes beyond pre-
vious meta-analytic attempts, selection of the boundary conditions was, of course,
eclectic and exclusively driven by an interest to better understand technological and
social affordances in CSCL environments for motivation and transfer [2,16,17,33,34].
More conditions exist that would warrant inclusion in the meta-analysis and in turn
raise concerns of the generalizability of the moderating effects. However, this limita-
tion can be addressed only by additional original research reports that systematically
vary different study conditions.
In conclusion, self-efficacy and transfer were assumed to be more positive in
complex social and technological environments [4]. The present meta-analytic study
sought to test the predictive validity of this assumption by examining the population
correlation estimates between self-efficacy and transfer in computer-supported and collaborative
training conditions. This examination was done by using meta-analysis to summarize
25 years of research on post-training self-efficacy, by cumulating 33 independent data
sources from 4,203 participants, and by examining two confounded moderator va-
riables (computer support and collaboration) on the self-efficacy–training transfer
How CSCL Moderates the Influence of Self-efficacy on Students’ Transfer of Learning 101

relationship. The findings seem to imply that computer support is more significant for
promoting self-efficacy and transfer than is collaboration. Future research is encour-
aged to extend these first steps reported here to the examination of social and technol-
ogical conditions moderating self-efficacy and transfer in other educational and
learning settings.

References
1. Bandura, A.: Self-Efficacy: The Exercise of Control. Freeman, New York (1997)
2. Gegenfurtner, A.: Motivation and Transfer in Professional Training: A Meta-Analysis of
the Moderating Effects of Knowledge Type, Instruction, and Assessment Conditions.
Educ. Res. Rev. 6, 153–168 (2011)
3. Gegenfurtner, A.: Dimensions of Motivation to Transfer: A Longitudinal Analysis of Their
Influences on Retention, Transfer, and Attitude Change. Vocat. Learn. 6 (in press)
4. Sawyer, R.K.: The New Science of Learning. In: Sawyer, R.K. (ed.) Cambridge Handbook
of the Learning Sciences, pp. 1–16. Cambridge University Press, Cambridge (2006)
5. Segers, M., Gegenfurtner, A.: Transfer of Training: New Conceptualizations through
Integrated Research Perspectives. Educ. Res. Rev. 8 (in press)
6. Trost, S.G., Owen, N., Bauman, A.E., Sallis, J.F., Brown, W.: Correlates of Adults’
Participation in Physical Activity: Review and Update. Med. Sci. Sports Exerc. 34, 1996–2001
(2002)
7. Gegenfurtner, A., Vauras, M.: Age-Related Differences in the Relation between Motiva-
tion to Learn and Transfer of Training in Adult Continuing Education. Contemp. Educ.
Psychol. 37, 33–46 (2012)
8. Van Dinther, M., Dochy, F., Segers, M.: Factors Affecting Students’ Self-Efficacy in
Higher Education. Educ. Res. Rev. 6, 95–108 (2011)
9. Gegenfurtner, A., Veermans, K., Vauras, M.: Effects of Computer Support, Collaboration,
and Time Lag on Performance Self-Efficacy and Transfer of Training: A Longitudinal
Meta-Analysis. Educ. Res. Rev. 8 (in press)
10. Zimmerman, B.J.: Attaining Self-Regulation: A Social Cognitive Perspective. In: Boe-
kaerts, M., Pintrich, P.R., Zeidner, M. (eds.) Handbook of Self-Regulation, pp. 13–35.
Academic Press, San Diego (2000)
11. Junttila, N., Vauras, M., Laakkonen, E.: The Role of Parenting Self-Efficacy in Children’s
Social and Academic Behavior. Eur. J. Psychol. Educ. 22, 41–61 (2007)
12. Yi, M.Y., Davis, F.D.: Developing and Validating an Observational Learning Model of
Computer Software Training and Skill Acquisition. Inf. Syst. Res. 14, 146–169 (2003)
13. Brown, T.C., Warren, A.M.: Distal Goal and Proximal Goal Transfer of Training Interven-
tions in an Executive Education Program. Hum. Res. Dev. Q. 20, 265–284 (2009)
14. Hunter, J.E., Schmidt, F.L.: Methods of Meta-Analysis: Correcting Error and Bias in
Research Findings. Sage, Thousand Oaks (2004)
15. Kanfer, R.: Self-Regulation Research in Work and I/O Psychology. Appl. Psychol. Int.
Rev. 54, 186–191 (2005)
16. Lehtinen, E., Hakkarainen, K., Lipponen, L., Rahikainen, M., Muukkonen, H.: Computer
Supported Collaborative Learning. In: Van der Meijden, H., Simons, R.J., De Jong, F.
(eds.) Computer Supported Collaborative Learning Networks in Primary and Secondary
Education. Project 2017. Final Report. University of Nijmegen (1999)
102 A. Gegenfurtner, K. Veermans, and M. Vauras

17. Dillenbourg, P.: What Do You Mean by Collaborative Learning? In: Dillenbourg, P. (ed.)
Collaborative Learning: Cognitive and Computational Approaches, pp. 1–19. Elsevier,
Oxford (1999)
18. Volet, S., Vauras, M., Salonen, P.: Psychological and Social Nature of Self- and Co-
Regulation in Learning Contexts: An Integrative Perspective. Educ. Psychol.-US 44, 1–12
19. Siewiorek, A., Gegenfurtner, A., Lainema, T., Saarinen, E., Lehtinen, E.: The Effects of
Computer Simulation Game Training on Participants’ Opinions on Leadership Styles
20. Vauras, M., Salonen, P., Kinnunen, R.: Influences of Group Processes and Interpersonal
Regulation on Motivation, Affect and Achievement. In: Maehr, M., Karabenick, S., Urdan,
T. (eds.) Social Psychological Perspectives. Advances in Motivation and Achievement,
vol. 15, pp. 275–314. Emerald, New York (2008)
21. Day, E.A., Boatman, P.R., Kowollik, V., Espejo, J., McEntire, L.E., Sherwin, R.E.:
Collaborative Training with a More Experienced Partner: Remediating Low Pretraining
Self-Efficacy in Complex Skill Acquisition. Hum. Factors 49, 1132–1148 (2007)
22. Gibson, C.B.: Me and Us: Differential Relationships Among Goal-Setting Training,
Efficacy and Effectiveness at the Individual and Team Level. J. Org. Behav. 22, 789–808
(2001)
23. Karl, K.A., O’Leary-Kelly, A.M., Martocchio, J.J.: The Impact of Feedback and Self-
Efficacy on Performance in Training. J. Org. Behav. 14, 379–394 (1993)
24. Gegenfurtner, A., Festner, D., Gallenberger, W., Lehtinen, E., Gruber, H.: Predicting
Autonomous and Controlled Motivation to Transfer Training. Int. J. Train. Dev. 13,
124–138 (2009)
25. Gegenfurtner, A., Veermans, K., Festner, D., Gruber, H.: Motivation to Transfer Training:
An Integrative Literature Review. Hum. Res. Dev. Rev. 8, 403–423 (2009)
26. Puntambekar, S., Erkens, G., Hmelo-Silver, C.: Analyzing Interactions in CSCL: Methods,
Approaches, and Issues. Springer, Berlin (2011)
27. Järvelä, S.: The Cognitive Apprenticeship Model in a Technologically Rich Learning
Environment: Interpreting the Learning Interaction. Learn. Instr. 5, 237–259 (1995)
28. Gegenfurtner, A., Lehtinen, E., Säljö, R.: Expertise Differences in the Comprehension of
Visualizations: A Meta-Analysis of Eye-Tracking Research in Professional Domains.
Educ. Psychol. Rev. 23, 523–552 (2011)
29. Helle, L., Nivala, M., Kronqvist, P., Gegenfurtner, A., Björk, P., Säljö, R.: Traditional
Microscopy Instruction versus Process-Oriented Virtual Microscopy Instruction: A
Naturalistic Experiment with Control Group. Diagn. Pathol. 6, S8 (2011)
30. Gegenfurtner, A., Jarodzka, H., Seppänen, M.: Promoting the Transfer of Expertise with
Eye Movement Modeling Examples
31. Gegenfurtner, A., Seppänen, M.: Transfer of Expertise: An Eye-Tracking and Think-Aloud
Experiment Using Dynamic Medical Visualizations
32. Gegenfurtner, A., Siewiorek, A., Lehtinen, E., Säljö, R.: Assessing the Quality of
Expertise Differences in the Comprehension of Medical Visualizations. Vocat. Learn. 6 (in
press)
33. Veermans, M.: Individual Differences in Computer-Supported Inquiry Learning – Motiva-
tional Analyses. Painosalama, Turku (2004)
34. Järvelä, S., Volet, S., Järvenoja, H.: Research on Motivation in Collaborative Learning:
Moving beyond the Cognitive-Situative Divide and Combining Individual and Social
Processes. Educ. Psychol.-US 45, 15–27 (2010)
Notebook or Facebook? How Students Actually Use
Mobile Devices in Large Lectures

Vera Gehlen-Baum and Armin Weinberger

Educational Technology, Saarland University, Saarbrücken, Germany, P.O. Box 151150


{v.gehlen-baum,a.weinberger}@mx.uni-saarland.de

Abstract. In many lectures, students use different mobile devices, like
notebooks or smartphones. But lecturers often do not know to what extent
students use these devices for lecture-related self-regulated learning strategies,
like writing notes or browsing for additional information. Unfortunately, mobile
devices also bear a potential for distraction. This article presents the results of
an observational study in five standard lectures in different disciplines and
compares them to students’ responses on computer use in lectures. The results
indicate a substantial divergence between students’ subjective stances on how
they use mobile devices for learning in lectures and the actually observed, often
lecture-unrelated behavior.

Keywords: Lectures, mobile devices, media use.

1 Mobile Devices – Learning Opportunities or Distractions?

More and more students use mobile devices in lectures, either actively, i.e. for writing
something down, or passively, i.e. with the mobile device being switched on and
stared at, but without any other notable human-machine interaction. To what extent
using mobile devices in lectures fosters learning is highly debated. On one hand, mo-
bile devices could support students in their self-directed learning [1] as students get
the chance to search for answers or to take notes on the slides. On the other hand,
there is a chance of distraction when students use the mobile devices for
lecture-unrelated activities like posting on Facebook or sending emails to friends [2, 3].
Unfortunately, lecturers do not know what their students use the mobile devices for
since their screens are too small to observe or turned away from the teacher. With
very little research on this issue, there is hardly any understanding of whether
notebooks should be allowed, banned or more actively integrated into lectures. To reduce
distraction and to make full use of notebooks for enhancing learning experiences in
lectures, gathering information on “lecture-related” and “lecture-unrelated” activities
with notebooks seems an important first step.
In this article, we discuss general principles of active learning and how technolo-
gies can foster those principles in lectures before taking a look at how students
actually use mobile devices in lectures and what students think or say they do with
mobile devices in large lectures.

A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 103–112, 2012.
© Springer-Verlag Berlin Heidelberg 2012

2 Active and Self-directed Learning

The lecture format is often criticized for fostering passive behavior and for being
inapt for maintaining student focus [4]. Findings of lecture research show that stu-
dents are required to listen and make notes most of the time [5]. Ideally, students en-
gage in a series of cognitive and metacognitive activities in a focused and active way
[6], such as processing and linking what is being taught to prior knowledge, elaborat-
ing the learning material with examples, taking notes and monitoring these learning
activities to ward off distractions and to continuously examine one’s understanding
[7]. But there are also other, lecture-unrelated activities, like talking to one’s neighbor,
doing homework or sleeping, which can be observed in lectures. Students sometimes
have difficulties identifying and focusing on the most important aspects of a lecture [7].
Especially students with little prior knowledge and dysfunctional learning strategies
find it hard to continuously focus on the most important aspects of a 90-minute
lecture.
There is a chance that mobile devices increase that problem and distract students.
Both passive and active use of mobile devices may consume learners’ cognitive
resources and draw attention away from what is being taught. Passive use may convey
stimuli that “catch the eye”; active, lecture-unrelated use may indicate that learners
are pursuing other goals than learning [8]. Even with the intention to use the notebook
for learning purposes, there is a chance that part of the students’ attention is consumed
by online activities, e.g. by visual indicators of friends being online. So, in order to
ignore or minimize the effects of these kinds of distractions, it seems important to
know how students monitor their own learning [e.g. 9, 10].

3 Mobile Devices to Foster Learning in Large Lectures

Mobile devices could foster self-regulated learning, but advanced technology is often
paired with simplistic pedagogical models [11]. There is a risk that students use their
mobile devices for lecture-unrelated activities and therefore attempt to multitask dur-
ing the lecture. Based on the idea that the primary task in lectures is to process new
information, multitasking here means to apply some of the cognitive resources to
additional tasks. As the working memory is limited, lecture-unrelated multitasking in
particular could have a negative impact on learning [12].
Fried [2] tested 137 psychology students over 20 lecture sessions with surveys re-
garding their use of notebooks in class and distraction in lectures and compared them
with the results of the American College Test (ACT) and high-school rank (HSR).
Her goal was to show that multitasking distraction by notebooks during lectures
would lead to lower learning results in the standardized tests. Almost two thirds of the
students (64.3%) reported using their computers at least once during the sessions and
multitasked an average of 17 minutes per 75-minute session. Using notebooks correlated
negatively with students’ focus and test results. Fried discussed the limitations
of these findings, given that only self-reported responses were included, and assumed
that due to social desirability effects students would underreport the number of
minutes they spend on multitasking.

Kraushaar and Novak [3] also found that distractive multitasking behavior has
negative effects on academic performance. They studied the
notebook use of 55 students in 30 standard lectures (75 minutes each) of one course by
using a questionnaire and installing spyware on students’ computers. They catego-
rized notebook use into productive (course-related) and distractive activities. The
distractive activities were further divided into surfing, email, instant messaging (IM),
PC-operations and miscellaneous. They confirmed that spending more time with dis-
tractive multitasking leads to lower academic performance. But for the subcategories
this could only be found for using IM during the lecture. One possible explanation for
this result is that the spyware did not register for how long the student actively used
the distractive environment. While it is possible that some students just opened a page
and started listening to the lecture again, synchronous social tasks like IM lead to
more distraction as they require continuous attention.

4 Research Questions

Even though former research indicated that multitasking with notebooks in lectures has
a negative influence on learning performance [2], so far little is known about how
frequently and which kinds of mobile devices are used in standard lectures.

─ RQ1: Which kinds of mobile devices do students use, and how often, during large
lectures?

So far, studies have mainly been conducted in single courses over a longer time period,
with the students knowing that their use of notebooks was assessed by questionnaire or
spyware [3]. There is a need to complement this research with covert observational data,
i.e. with students being unaware of the fact that their activities are being observed.
There is also a need to investigate to what extent mobile devices require all of students’
attention or are rather used as a background medium, as Kraushaar and Novak [3]
suspect.

─ RQ2: Which kind of activities do students engage in with their mobile devices in
large lectures?

Students’ impressions of how much time they spend on mobile devices, and for what
purpose, can give an insight into how well learners manage to self-regulate their
learning activities when bringing mobile devices to the lecture. Self-reported data
could thus reveal differences between what students think they do with mobile devices
and what they actually do. If students have metacognitive deficits in self-assessing
and monitoring their learning processes, their self-reports on their intentions and
the time spent on mobile devices should differ from what is observed during the
lecture.

─ RQ3: What reasons do students self-report for bringing mobile devices to the
classroom, and how do they actually use these devices?

5 Method

5.1 Participants
We conducted the study in five standard lectures in education (two lectures), computer
science (two lectures) and economics (one lecture), collecting data by questionnaire
and observation. We gathered 664 student questionnaires, in which 331 students reported
using technology in the lecture. Some of them used their laptop as well as their
smartphone.

Table 1. Observed and self-reported use of mobile devices in lectures


              Education             Computer science      Economics
              Observed (n = 26) /   Observed (n = 38) /   Observed (n = 27) /
              self-reported usage   self-reported usage   self-reported usage
              (n = 62)              (n = 136)             (n = 171)
Notebook      25 / 20               31 / 60               25 / 53
Smartphone     1 / 42                7 / 76                2 / 118

While all questionnaires were used to analyze whether students used mobile devices, for
further analysis we report only data from those students who stated using mobile devices.
We also covertly observed a total of 81 students with notebooks and 10 with smartphones.
Table 1 shows the distribution of mobile devices as observed across lectures
in education, computer science and economics.

5.2 Procedure
Before the lecture started, the five to seven investigators chose their seats so that each
could observe at least one, but most of the time two to four, notebook or smartphone
users. They sat next to or behind the students they observed, so that they
saw just the screen but did not get further information about the observed student.
The investigators also tried not to be noticed during the observation in order to capture
actual student practices. When the lecture started, the lecturers told their audience that
an investigation about lecture activities was taking place and that at the end of the
lecture a questionnaire would be handed to them. The fact that an observation took place
as well was only mentioned after the lecture. The investigators started making notes
every 30 seconds on the prepared sheets when the lecturer started talking to the class.

5.3 Instruments
Observation. A lecture of 90 minutes was divided into 180 segments, so that every 30
seconds the observer took a look at the observed mobile device and marked the
observed activity. The activities were classified into lecture-related activities, like
making notes or viewing lecture slides, and lecture-unrelated activities, like using social
networks, viewing websites with non-course materials, and watching videos (see
Table 2). When students downloaded something or the screensaver was activated,
this was also noted on the observer sheet, but these kinds of ambivalent activities
are not further discussed in this paper.
The activities were further divided into “active” and “passive”. When students
typed something, obviously read an online article, or used their mouse on a website,
the activity was marked as active, as the focus was on the mobile device at that time.
Passive use was coded whenever the focus was on the lecturer and his presentation
and there were no activities on the mobile device, i.e. the device was switched on, but
not interacted with. So, the distinction between active and passive use of mobile devices
does not concern whether a device is switched on or off, but whether the student
is interacting with and focusing on the device (active) or on the lecturer, someone else,
or something other than the mobile device (passive).
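The interval coding described above can be pictured as a simple tally over the 180 segments of one observation sheet. This is only a sketch; the category and mode labels below are illustrative stand-ins, not the authors' exact coding terms.

```python
from collections import Counter

SEGMENT_SECONDS = 30
N_SEGMENTS = 90 * 60 // SEGMENT_SECONDS  # a 90-minute lecture -> 180 segments

# Illustrative category labels (assumed, not the authors' coding scheme):
LECTURE_RELATED = {"slides", "notes", "related_website", "related_document"}
LECTURE_UNRELATED = {"unrelated_website", "social_network", "email",
                     "chat", "game", "video", "newspaper"}

def summarize(sheet):
    """Tally one observation sheet: a (category, mode) pair per 30-second
    segment, where mode is 'active' or 'passive'."""
    counts = Counter()
    for category, mode in sheet:
        if category in LECTURE_RELATED:
            counts[("related", mode)] += 1
        elif category in LECTURE_UNRELATED:
            counts[("unrelated", mode)] += 1
        else:  # e.g. downloads or screensaver: ambivalent, not reported
            counts[("ambivalent", mode)] += 1
    return counts

# A toy sheet for one student: mostly passive slide viewing,
# one stretch of Facebook, some note taking.
sheet = ([("slides", "passive")] * 100
         + [("social_network", "active")] * 60
         + [("notes", "active")] * 20)
assert len(sheet) == N_SEGMENTS
```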

Table 2. Categories of observed activities


Lecture-related activities   Lecture-unrelated activities   Ambivalent activities (not reported)
Slides                       Lecture-unrelated websites     Browsing the internet for unidentified information
Taking notes                 Lecture-unrelated documents    Downloading something
Lecture-related websites     Social networks                Doing some exercises
Lecture-related documents    Email                          Browsing the University website
                             Chat                           Desktop/screensaver
                             Games
                             Newspaper

As informing the students about the observation beforehand could have influenced their
behavior, we told them after the study, using the DGPS recommended practice on
ethics as a guideline. Also, we made sure that no connection between the questionnaires
and the observations could be established, as the observers did not note down
personal details like names, numbers, etc.
Questionnaire. The questionnaire was designed to get a more accurate impression of
students’ use of mobile devices as well as their own impression of what they used them
for. The students were asked to indicate which mobile devices they used and which
lecture-related or -unrelated activities they engaged in, like searching for further
information on the lecture, taking notes, playing computer games or surfing social
networking sites like Facebook. This was indicated on a five-point scale with values
from “not at all” to “very much”. In addition, students were asked to indicate for what
purpose they used a specific tool. The answers to these open items were coded and
categorized into the same categories as the observational data and then divided into
the subcategories of “lecture-related” and “lecture-unrelated” activities. New categories
were defined if the answers did not fit into one of the predefined categories, e.g. for yet
uncharted forms of distraction or communication. After the categories had been
established, inter-rater reliability was assessed.
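The paper does not say which inter-rater reliability statistic was used; for categorical codings like these, Cohen's kappa is a common choice. The sketch below assumes two raters assigning one category per answer, and the example codings are hypothetical.

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa for two raters: kappa = (p_o - p_e) / (1 - p_e),
    where p_o is observed agreement and p_e is chance agreement."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    p_o = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Two raters agreeing on 3 of 4 hypothetical answer codings:
kappa = cohens_kappa(["notes", "social", "slides", "notes"],
                     ["notes", "social", "slides", "social"])
```

Note that this sketch does not guard against the degenerate case where chance agreement is 1 (a single category used throughout by both raters).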

6 Results

With regard to RQ1 on which kinds of mobile devices students use in large lectures, the
questionnaire data indicate that half (49.85%) of the audience used a mobile device
at least once in a lecture. The number varied during the lectures, as students sometimes
came late or left early, so not all students who attended the lecture filled out a
questionnaire (n = 664).

Fig. 1. Self-reported use of mobile devices in lectures (questionnaire data)

The usage of smaller devices could not be counted by the observers, as it is hard to see
smartphones when one is not close to them. But the observed use of notebooks indicates
that the frequency of device use reported by participants (n = 133) (see Figure 1) is
consistent with what was observed by the researchers (n = 112). Here, too, the number
varies between different measurement points, as some students stored away their devices
for some time during the lecture. This result was also found during the observation of
the 81 students with notebooks.
With regard to RQ2, the observations indicate that students were engaged two
times more often (51.70%) in surfing lecture-unrelated websites and documents than
in lecture-related activities (see Figure 2). Whereas active use of mobile devices was
more strongly associated with lecture-unrelated activities (n = 4015), like communicating
through Facebook, lecture-related activity was mostly passive (n = 1460), like looking
at the slides of the lecture.

Fig. 2. Observed frequency of active and passive use of lecture-related and -unrelated activities
(30 sec intervals)

We differentiated between several categories of active use of mobile devices. The
most frequent lecture-related activities on mobile devices were taking a look at the
presented slides and taking notes. Students rarely browsed for lecture-related websites,
however (see Table 3).

Table 3. Frequency of lecture-related and -unrelated activities during the lecture (30 second
intervals)

Lecture-related activities
  slides                         351
  notes                          218
  lecture-related websites        81

Lecture-unrelated activities
  lecture-unrelated websites    1445
  social networks                679
  email                          153
  chat                            76
  games                          839
  videos                         170

The most frequent lecture-unrelated activities revolve around surfing the web.
Students frequently visit lecture-unrelated websites, like sports sites or different forums,
or search for content to download. The use of social networks was also common
during the lectures, with most of the students being members of Facebook. Some
of these activities, e.g. visiting Facebook, implied switching between active and passive
use for some of the students. These students focused on the lecture, but
checked the open Facebook page from time to time.
Other lecture-unrelated activities, like watching videos, were far more enthralling
and time consuming, as students constantly focused on the notebook screen. Wearing
headphones during the lecture seemed to clearly indicate that phenomenon. For instance,
one student started watching a TV show when the lecture began, then finished
some lecture-unrelated homework and started watching another episode of said TV
show before leaving the lecture early.
The duration of playing online games varied between the different games. Some
students took a look at their online simulation games, e.g. Farmville, at regular intervals
during the lecture, while other students played installed games during the whole
lecture.
With regard to RQ3 on how students self-report their use of mobile devices during a
lecture, most students stated that they mainly use their notebook or
smartphone for lecture-related activities like taking a look at lecture slides or taking
notes (see Table 4).

Table 4. Observed vs. self-reported use of mobile devices


                               Observed students         Students self-reporting
                               actively using mobile     using mobile devices
                               devices during lecture    during lecture
Lecture-related activities
  lecture slides               18 (19.6%)                46 (37.1%)
  taking notes                 38 (41.3%)                51 (41.1%)
  searching                    36 (39.1%)                47 (37.9%)
Lecture-unrelated activities
  social networks              44 (47.8%)                19 (15.3%)
  chat                         10 (10.9%)                 4 (3.2%)
  emails                       22 (23.9%)                15 (12.1%)
  communication                –                         28 (22.6%)
  unrelated websites           56 (60.9%)                –
  games                        14 (15.2%)                 7 (5.6%)
  video                         7 (7.6%)                  0 (0%)
  distraction                  –                         51 (41.1%)

The percentage of students observed taking notes corresponds to the self-reported one.
Many students also mentioned searching for additional information on
the lecture to attain deeper understanding. The overall observed number of students
doing further research on the internet is rather low (252) compared to other activities
like taking notes (805) or social networks (960). But the number of students who
report using mobile devices for looking at lecture slides differs from the observed
number. Although 37.1% of the students report displaying lecture slides on their
screens, only 19.6% were actually observed doing so.
In general, the number of students observed doing lecture-unrelated activities is
always higher than the self-reported one, although 51 students indicated that they use
mobile devices for some sort of distraction in general.

7 Discussion

The study shows that half of the university students use their mobile devices in lectures.
While most of the students use smartphones or notebooks, other mobile devices
like tablets are not very common in lectures today. The observational data show that
most of the students use their mobile devices for lecture-unrelated activities, mostly
for surfing on lecture-unrelated websites; this is consistent with prior findings [3].
Other, highly frequent lecture-unrelated activities observed are communicating
through social networks and emails. These kinds of lecture-unrelated activities pose a
risk of distraction which could hamper learning activities and therefore impoverish
learning results [2]. Analyzing active and passive use shows that most of the lecture-
related materials, like online slides, do not foster active behavior, e.g. taking notes. In
fact, lecture-related use of mobile devices is mostly passive. In contrast, lecture-
unrelated activities, like using games or social networks, are typically active. Still, not
all of the lecture-unrelated behavior was active. Obviously, students seem to manage
some degree of multitasking with passive lecture-unrelated activities, which may not
have adverse effects on learning [3]. Because the data were collected in real classroom
scenarios, this study aimed to describe how students use their mobile devices. We are
currently analyzing differences between students with mobile devices and those with-
out. Some of this data will be presented at the conference. Also, future research may
need to inquire how learners are actually dealing with these kinds of distractions suc-
cessfully with regard to cognitive load and learning outcomes.
There are interesting divergences between observational and questionnaire data.
Students may have no good explicit explanation for bringing computers to lectures or
may hide their true, lecture-unrelated intentions. Chances are that due to weak self-
monitoring strategies students do not entirely realize how much time they spend on
lecture-unrelated activities. Perhaps they sometimes do not realize their shift of atten-
tion at all.
Many universities install wireless LAN in their lecture halls to give students the
possibility to use mobile devices for learning and research. Our results indicate that
nearly half of the students accept that offer during lectures, most of them with their
smartphone. Even though many students do not use the devices in the intended way, it may be

problematic to ban these small mobile devices from lecture halls. Instructional
approaches are necessary to help students use mobile devices in lectures intentionally
for lecture-related activities [2]. Our future research addresses this issue by suggesting
not to ban, but to design for involving the devices students bring to the lecture [13]. In
such a scenario using an audience response system called Backstage, lecturers would
ask students to answer questions as with proprietary clicker systems and allow for
students to post lecture-related questions, comments and answers. In this way, students
might be supported in engaging more actively in, and better monitoring, lecture-related
activities.

References
1. Greene, J.A., Azevedo, R.: The Measurement of Learners’ Self-Regulated Cognitive
and Metacognitive Processes While Using Computer-Based Learning Environments.
Educational Psychologist 45(4), 203–209 (2010)
2. Fried, C.B.: In-class laptop use and its effects on student learning. Computers &
Education 50(3), 906–914 (2008)
3. Kraushaar, J.M., Novak, D.C.: Examining the Effects of Student Multitasking with
Laptops during the Lecture. Journal of Information Systems Education 21(2), 11 (2010)
4. Tippelt, R.: Vom projektorientierten zum problembasierten und situierten Lernen - Neues
von der Hochschuldidaktik? In: Reiber, K., Richter, R. (eds.) Entwicklungslinien der
Hochschuldidaktik. Ein Blick zurück nach vorn, pp. 135–157. Logos, Berlin (2007)
5. Lindroth, T., Bergquist, M.: Laptopers in an educational practice: Promoting the personal
learning situation. Computers & Education 54(2), 311–320 (2010)
6. Renkl, A.: Aktives Lernen = gutes Lernen? Reflektion zu einer (zu) einfachen Gleichung.
Unterrichtswissenschaft 39, 194–196 (2011)
7. Grabe, M.: Voluntary use of online lecture notes: Correlates of note use and note use as an
alternative to class attendance. Computers & Education (2005)
8. Yantis, S.: Stimulus-driven attentional capture. Current Directions in Psychological
Science 2(5), 156–161 (1993)
9. Garner, J.K.: Conceptualizing the relations between executive functions and self-regulated
learning. The Journal of Psychology 143(4), 405–426 (2009)
10. Hasselhorn, M.: Metacognition und Lernen. In: Nold, G. (ed.) Lernbedingungen und
Lernstrategien, pp. 35–64. Gunter Narr Verlag, Tübingen (2000)
11. Roschelle, J.: Keynote paper: Unlocking the learning value of wireless mobile devices.
Journal of Computer Assisted Learning 19(3), 260–272 (2003)
12. Ericsson, K.A., Kintsch, W.: Long-term working memory. Psychological Review 102(2),
211–245 (1995)
13. Gehlen-Baum, V., Pohl, A., Weinberger, A., Bry, F.: Backstage – Designing a
Backchannel for Large Lectures. In: Ravenscroft, A., Lindstaedt, S., Delgado Kloos, C.,
Hernández-Leo, D. (eds.) EC-TEL 2012. LNCS, vol. 7563, pp. 459–464. Springer,
Heidelberg (2012)
Enhancing Orchestration of Lab Sessions
by Means of Awareness Mechanisms

Israel Gutiérrez Rojas¹,², Raquel M. Crespo García¹, and Carlos Delgado Kloos¹

¹ Universidad Carlos III de Madrid, Avda. Universidad 30, E-28911 Leganés, Spain
² Institute IMDEA Networks, Avda. del Mar Mediterráneo 22, E-28918 Leganés, Spain
{igrojas,rcrespo,cdk}@it.uc3m.es

Abstract. Orchestrating learning is a complex task. In fact, it has been
identified as one of the grand challenges in Technology Enhanced Learning
(TEL) by the Stellar Network of Excellence. The objective of this article is to
provide teachers and students with a tool, based on awareness mechanisms, that
supports them in their effort of orchestrating learning. Applying these
mechanisms to lab sessions, we propose four aspects of orchestration as targets
for improvement: the management of the resources in the learning environment;
the interventions of the teacher and the provision of formative feedback; the
collection of evidence for summative assessment; and the re-design of the
activity, adjusting some parameters for future enactments. The proposal has
been tested in a real course on Multimedia Applications with junior (3rd-year)
students, measuring the benefits for orchestration.

Keywords: Orchestration, awareness, lab session, problem-based learning,
formative assessment.

1 Introduction
Many Higher Education courses, ranging from engineering to social sciences, have a
practical component; that is, the course is composed of theoretical sessions (lectures)
and practical sessions in the computer room, where the students have a computer
available to work on the proposed hands-on activity. These face-to-face sessions at
the computer lab (henceforth called lab sessions) have a common structure: the
teacher proposes a practical task (or set of tasks) to the students; the students work
on the proposed task by themselves at their computer; when they encounter a
difficulty that they cannot overcome by themselves, they raise their hand to indicate
to the teacher that they have a question; the teacher moves around the room
answering the questions of the students who raised their hands. Other aspects of the
lab session are specific to the particular activity: the students may work at the
computer individually, in pairs or in a group (individual/collaborative activity); there
may be one or several teachers (or teaching assistants); the development of the
activity may have implications for summative assessment or serve a purely formative
purpose; etc.
In some countries, education budgets have been cut, with the consequence of an
increase in the student/teacher ratio in lab sessions. When the ratio is higher than 20, several

A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 113–125, 2012.
© Springer-Verlag Berlin Heidelberg 2012
114 I. Gutiérrez Rojas, R.M. Crespo García, and C. Delgado Kloos

problems emerge, intrinsically related to the orchestration of the lab session: the
teacher is not able to provide feedback to the students at the rate at which new
questions appear; due to the scarcity of the teacher as a resource, the students compete for
the attention of the teacher (e.g., they stand up and wait near the teacher while she
is attending to other students, and when she finishes the explanation they grab her
attention to help them); the order in which the teacher provides feedback to the
students is unfair with respect to parameters such as the students' waiting time or their
progress in the assignment; and the teacher does not have enough time to check the
progress of all the students during the session, especially of those who did not ask for help.
In order to mitigate these problems, a tool has been designed that provides teachers
with awareness mechanisms in lab sessions. Using the awareness information,
teachers gain knowledge about the state of the class: the progress of all the students,
which students asked for help and when, etc. They are thus able to enhance the
orchestration of the lab session in several ways: the management of the resources in
the learning environment (e.g., feedback time); the interventions and provision of
formative feedback; the collection of evidence for summative assessment; and the re-design
of the activity, adjusting some parameters for future enactments.
The rest of this article is structured as follows: the next section introduces relevant
research on orchestrating learning and awareness; Section 3 is devoted to the
proposed tool that provides awareness mechanisms to the teacher; Section 4 presents
a validation of the tool, based on an experiment in a real setting; finally, Section 5
describes the conclusions of this work as well as some related lines of future work.

2 Relevant Literature

Orchestrating learning is a complex task. In fact, it has been identified as one of
the grand challenges in Technology Enhanced Learning (TEL) by the Stellar Network
of Excellence [1]. Moreover, the concept of “orchestrating learning” has different
definitions in the TEL community and therefore a different meaning depending on the
authors of a publication. In [2], a literature review of orchestrating learning in TEL is
carried out; emerging from the review, a conceptual framework is defined consisting
of 5+3 aspects of orchestration: 5 aspects about what orchestration is and 3 aspects
about how orchestration has to be implemented. Regarding the definition of
orchestration, the aspects described are (1) design/planning of the learning activities;
(2) regulation/management of these activities; (3) adaptation/flexibility/intervention
(adaptation of the learning flow to emergent events); (4) awareness/assessment of
what happens in the learning process; and (5) the different roles of the teacher
and other actors. Regarding how orchestration should be done, the identified aspects are (a)
pragmatism/practice as opposed to TEL expertise; (b) alignment/synergy with the intended
learning outcomes; and (c) models/theories that guide the learning orchestration.
In this work, we use the 5+3 aspects framework
to structure the contributions that enhance orchestration by means of
awareness mechanisms.
Enhancing Orchestration of Lab Sessions by Means of Awareness Mechanisms 115

The concept of awareness in the field of Computer Supported Cooperative Work
(CSCW) refers to the exchange of information regarding status, activity and
availability among workers engaged in a collaborative activity. In this work, we
apply the same principles to the context of teaching and learning, considering
awareness as a mechanism that can be used to deal with a complex learning
scenario built on different orchestration aspects, because it allows teachers and
students to get to know these aspects better.
In [3], Alavi et al. use the concept of awareness in the same way in the context of
technology-enhanced learning. They analyse the interactions between teaching
assistants and learners in recitation sections (sessions of problem-based activities
with teaching assistants), and use lamps as distributed awareness
mechanisms. The lamps are used by students to indicate progress (lamp
colour) and to request feedback from the teachers (lamp blinking). While the
interactions in a recitation section are quite similar to those in lab sessions, the two
works take very different approaches to the same orchestration problems:
Alavi’s work focuses on Human Computer Interaction (HCI) aspects (e.g.,
ambient displays for distributed awareness); instead, our work stresses the importance
of recording students’ traces and processing them to create information useful for the
teacher. Both approaches have advantages: in Alavi’s work, the groups of
students are aware of their colleagues’ progress and problems; in our work, the
information is processed to offer the teacher a personalised view of the interaction
data, and the information can be used after the class to review the session (note: in
the context of our work, it is assumed that the assignments are delivered to the
students in the form of a web page with which the students interact). Finally, both
works stress the importance of physical space in the orchestration of face-to-face
activities.
In [4], Dong and Hwang introduce the PLITAZ (Pause Lecture, Instant Tutor-Tutee
Match, and Attention Zone) system, which minimizes learning progress
differences. Their work is contextualised in software teaching classes that
alternate lecture and practice phases. During the practice phase, two strategies are
used to address students’ problems: tutor-tutee match (when a student finishes the
practice, she is asked to be a reviewer and, after acceptance, is assigned to
help a peer having problems with this practice) and attention zone (students with
problems in the practice who are surrounded by others with the same problem are
attended first by the teacher, in order to prevent isolation). Therefore, in Dong and
Hwang’s work the space is also a very important factor. Their awareness strategy is
very similar to ours, but their main objective is different: they try to minimize
the differences in progress among students, whereas our objective is to enhance
orchestration.
Regarding pedagogical concepts relevant to this research, we focus on
problem-based learning (PBL), since it is the methodology used in the lab sessions.
Collaborative PBL, usually considered an active learning
methodology [5], is an instructional method commonly used in engineering
courses: students are organized in small groups and presented with a challenging
problem to solve. In [5], Prince concludes that students do not obtain better
assessment results with PBL, but they are more motivated and may develop higher-level
skills such as information retention, problem solving and critical thinking. Barrows
[6] presents a taxonomy for problem-based learning and a set of four educational
objectives addressed by PBL that are closely related to this work: (a) knowledge about the
context; (b) practice and feedback; (c) self-directed skills; and (d) motivation and
challenge.
Finally, the concept of formative assessment is also relevant to our research, since
the objective of the interactions in the lab sessions is to provide formative feedback to
the students. The importance of such feedback for the awareness of both students [7]
and teachers [8] has been stressed in the literature.

Fig. 1. Awareness system technical architecture

3 Using Awareness Mechanisms for Orchestration

As stated before, we use the 5+3 aspects framework described in [2] to classify the
different orchestration aspects that we enhance with awareness mechanisms.
Four aspects have been identified as part of the orchestration:
the management of the resources in the learning environment; the
interventions of the teacher and the provision of formative feedback; the collection of
evidence for summative assessment; and the re-design of the activity, adjusting
some parameters for future enactments.

3.1 Awareness System Technical Architecture

The awareness system we have built assumes the context of problem-based learning
in lab sessions, with the assignment of the session delivered to the students as a web
page. The system is composed of two parts: one embedded in the web page of the
assignment and the other a tablet web interface for the teacher. Regarding
technologies (as shown in Figure 1), we have used websockets [9] to
implement real-time communication of events among students and teachers, since the
clients are web browsers. For an easy websocket implementation we used Node.js [10]
in the back-end (a server-side JavaScript solution), which is able to manage multiple
open connections with the browsers without performance problems. For the data,
we have used MongoDB [11], a NoSQL database that uses JavaScript as its query
language and JSON as its data format. In this way, JavaScript and JSON are used across
the whole stack (client, server, database), facilitating the integration of the developed
components.
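The event flow of such an architecture can be illustrated with a small sketch. The in-memory broker below stands in for the websocket layer; in the real system the events travel over websockets and are persisted to MongoDB. All class, field and event names here are illustrative assumptions, not the authors' actual code.

```javascript
// Minimal sketch of the student-to-teacher event flow. An in-memory
// broker substitutes for the websocket server; in the real system each
// published event would also be stored in MongoDB as a JSON document.
class EventBroker {
  constructor() {
    this.subscribers = []; // e.g. the teacher's tablet view
    this.log = [];         // stands in for MongoDB persistence
  }
  subscribe(fn) {
    this.subscribers.push(fn);
  }
  publish(event) {
    // timestamp the event, persist it, and push it to all subscribers
    const stamped = { ...event, ts: Date.now() };
    this.log.push(stamped);
    this.subscribers.forEach((fn) => fn(stamped));
  }
}

// Usage: a student pair asks for help; the teacher view is notified.
const broker = new EventBroker();
const teacherView = [];
broker.subscribe((e) => teacherView.push(e));
broker.publish({ type: 'help', pc: 7, question: 'How do I sync audio and video?' });
```

Because all layers speak JSON, the same event object can flow from browser to server to database without translation, which is the integration benefit the architecture aims for.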

3.2 Awareness System Interfaces

In a previous work [12], an exploratory study was presented on the usage
of websocket notifications as a communication backchannel in lab sessions. It also
introduced a preliminary version of the student and teacher interfaces. Nevertheless,
the design of both interfaces has since been changed, taking into account the feedback
provided by teachers, students and fellow researchers.
The assignment of the session is a problem-based assignment, composed of several
parts or sections. The client component developed for the students (henceforth, the
student component) analyses the web page of the assignment, detecting the sections in
the document, and constructs a table of contents. At the beginning of the session, it
presents a very simple interface to the students, composed of two parts:
the main and the aside components.

In the main part of the page (right side of Figure 2), the first section of the assignment
is presented (as the “current section” for the students to start with), as well as
the references section if it exists (a list of theoretical references that may be useful to
the students during the session); when the students indicate progress (i.e., that they have
finished the current section), the next section of the assignment is presented.
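The section detection performed by the student component could, for instance, look as follows. This is a minimal sketch assuming the assignment page marks its sections with `<h2>` headings and that the first section starts out as the current one; both are assumptions, since the paper does not specify the markup.

```javascript
// Hypothetical sketch: extract section headings from the assignment
// page's HTML to build the table of contents. Status values mirror the
// circle colours described for the aside (grey / amber / green).
function buildToc(html) {
  const toc = [];
  const headingRe = /<h2[^>]*>(.*?)<\/h2>/g;
  let match;
  while ((match = headingRe.exec(html)) !== null) {
    toc.push({ title: match[1], status: 'not-initiated' }); // grey circle
  }
  if (toc.length > 0) {
    toc[0].status = 'in-progress'; // amber: the current section
  }
  return toc;
}
```

A DOM-based implementation would query the document directly (e.g. `querySelectorAll`); the regular expression above just keeps the sketch self-contained.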
On the left side of the screen (the aside) there is a fixed part divided into three main
regions. (1) At the top: a table of contents for the assignment, containing all the
sections of the assignment and a little circle indicating their status (green: completed,
amber: in progress, grey: not initiated); the status circle is clickable for indicating
progress (on the current section when it is finished, and on the last finished section to undo
the progress); at the bottom of this region there is a progress bar indicating the
students' progress in the assignment. (2) In the middle: a red button used to ask
the teacher a question (equivalent to raising a hand); when help is requested, the
students are prompted to describe the question and the button turns blue; the
button can then be used to indicate that the doubt has been solved (by the teacher or by
the students themselves), and the students are prompted to describe the answer to their
solved question; the position in the queue is also shown to the students, because this
information is known when raising a hand (since one can observe the other students
who raised their hands too). (3) At the bottom: the names of the students working at the PC;
when accessing the assignment, the first thing they have to do is enter their
student ids; in this part there is also a button to change the students in case it is
necessary.

Fig. 2. Students assignment interface

The tablet web interface for the teacher is composed of two parts. The
first one (the general view, shown in Figure 3) is a representation of the physical
classroom where the lab session is carried out. It shows a set of icons representing the
PCs in the classroom, which are used as the context for the information about the
students working at each PC. The icons show several types of awareness information.
First, the background colour of the icon indicates: grey, PC not in use; blue,
students working at the PC; or orange to red, the students at the PC asked for help
(the colour starts orange and turns gradually into red over 10 minutes). Second,
the number in the middle of the icon indicates the current section of the assignment
for the students at that PC. Finally, a red square around the icon of a PC
indicates the students who have been waiting for the longest time.

There are also two indicators at the top of the screen: the one in the top-left corner
informs about the state of the connection to the server (red: not connected,
green: connected); the one in the top-right corner indicates whether the teacher has
students waiting for help (red-BUSY: some students asked for help;
green-FREE: no students are waiting for help).
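The orange-to-red transition of a waiting PC's icon can be sketched as a linear interpolation over the 10-minute window described above. The RGB endpoints and the linear ramp are assumptions; the paper only states that the colour changes gradually.

```javascript
// Hypothetical colour ramp for a PC icon whose students asked for help:
// orange at the moment of the request, fully red after 10 minutes.
function helpColour(waitingMinutes) {
  const t = Math.min(waitingMinutes / 10, 1); // clamp progress to [0, 1]
  const orange = [255, 165, 0];
  const red = [255, 0, 0];
  // interpolate each RGB channel between the two endpoints
  const rgb = orange.map((c, i) => Math.round(c + t * (red[i] - c)));
  return `rgb(${rgb[0]}, ${rgb[1]}, ${rgb[2]})`;
}
```

A ramp like this lets the teacher judge at a glance, without reading numbers, roughly how long each group has been waiting.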

Fig. 3. Teacher interface: general view

The detailed view (shown in Figure 4) appears in the tablet interface when the
teacher touches the icon of a PC that is being used by students. In this view, the
pictures and names of the students are shown, as well as the description of the last
question they asked. Besides that, there is a timer and a button for the teacher to
indicate that she is providing feedback to these students. The timer lets the
teacher be aware of the time devoted to this feedback interaction. When the teacher
finishes her intervention, she pushes the button again to stop the timer (or simply pushes
the back button to return to the general view).

Fig. 4. Teacher interface: detailed view

3.3 Awareness System Workflows

The workflow of the teacher during the session consists of the following steps:

1. connect to the awareness system, indicating the course and session
2. while there are no students waiting for help, observe the progress of the
students in the general view, and provide feedback to the students proactively
3. when one or more students ask for help, the BUSY/FREE indicator turns red,
and a red square indicates the next step (i.e., the students who have been waiting
longest for help)
4. when the teacher decides to provide feedback to a group of students, she enters
the detailed view to check the names and pictures of the students and the
question they have asked
5. when the feedback starts, she presses the button to activate the timer
6. once the feedback has been provided, she presses the button again and checks the
general screen for the next group to be attended
The workflow of the students during the session consists of the following steps:

1. connect to the assignment and enter the student id so that the system can identify
them
2. start working on the first section of the assignment
3. when they finish a section, indicate progress using the progress indicators
(coloured circles)
4. when they need feedback from the teacher, push the HELP button and enter
the description of the question
5. when a question is solved (by the teacher or by themselves), push the
SOLVED? button and describe the solution to the question
6. if they indicated progress incorrectly, use the progress indicator of the
last finished section to undo the progress and go back to the previous section

3.4 Awareness System Benefits

The benefits of the introduced awareness system for enhancing the orchestration of
lab sessions are presented following the 5+3 aspects framework defined above.
The aspects are grouped into two categories: aspects used for
orchestration during the session (live) and after the session.

• adaptation/flexibility/intervention: during the session, the awareness information
about the students' progress and help requests is used by the teacher to plan and execute
interventions for feedback provision
• regulation/management: during the session, the awareness information about the
time devoted to a group is used by the teacher to manage the timing of the
session
• awareness/assessment: after the session, the teacher uses the students'
progress at the end of the session to determine which ones worked as expected
during the session and rewards them with a positive grade
• design/planning: after the session, the teacher reviews the questions and answers
of the students and plans the future enactment of the activity

4 Validation of the Awareness System

The system has been validated in a real setting, a course on Multimedia Applications,
in 5 sessions with 4 different teachers (2 authors of this work and 2 outsiders). The
number of students in each session ranged from 20 to 30 (10 to 15 groups of 2 students),
working in pairs (2 students per PC) on the assignment of the lab session. About 800 events
were captured, of the following types:

• connection: students connect to the assignment web page
• finishSection: students mark a section as completed
• undoFinishSection: students undo the completion of a section
• help: students ask for help
• solved: students indicate that a question has been solved
• initHelp: the teacher starts helping a group of students
• endHelp: the teacher finishes helping a group of students
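Such an event stream can be replayed into the per-PC state that the teacher's general view displays. A minimal sketch follows; the field names and the replay logic are assumptions, since the paper does not detail how the events are processed.

```javascript
// Illustrative replay of captured events into per-PC awareness state:
// current section number and whether the group is waiting for help.
function replayEvents(events) {
  const state = {};
  for (const e of events) {
    // create the entry on first sight of this PC (e.g. a 'connection' event)
    const s = state[e.pc] || (state[e.pc] = { section: 1, waiting: false });
    switch (e.type) {
      case 'finishSection':
        s.section += 1;
        break;
      case 'undoFinishSection':
        s.section -= 1;
        break;
      case 'help':
        s.waiting = true;
        s.since = e.ts; // drives the orange-to-red icon colour
        break;
      case 'initHelp':
        s.waiting = false;
        break;
    }
  }
  return state;
}
```

Because the full log is stored, the same replay can be run after class to reconstruct any moment of the session, which is what enables the post-hoc review uses described in Section 3.4.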

In a previous work [13], a set of metrics was presented for evaluating an
awareness system. One of these metrics is the waiting factor of the lab sessions (the ratio
between the waiting and the tutoring times). Applying this metric to the collected
events, the results obtained were not the expected ones (see Figure 5).

Fig. 5. Tutoring and waiting times per session

The waiting times are always higher than the tutoring times and, therefore, the
waiting factor is always greater than 1. Analysing the data of the sessions (shown in
Table 1), we found that the sessions can be categorised into two groups: sessions 1 to 4
correspond to very simple assignments, and therefore very few questions arose
among the students; instead, session 5 consisted of a complicated assignment and the
teacher was solving doubts throughout the session (34 interventions). The data show that the waiting
factor per se cannot be used to determine the time efficiency of the sessions, since
session 5 was very efficient but its waiting factor is the worst. Therefore, a new metric
should be defined that combines the measures of time and interventions in order to
characterise the efficiency of a session. The definition of such a metric is future
work for this contribution.
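The waiting factor could be computed from the captured events roughly as follows. The pairing of `help`, `initHelp` and `endHelp` events per PC is an assumption about how the authors processed their logs.

```javascript
// Sketch of the waiting-factor metric: total waiting time (help request
// until the teacher starts helping) divided by total tutoring time
// (initHelp until endHelp), accumulated over all PCs in a session.
function waitingFactor(events) {
  let tutoring = 0;
  let waiting = 0;
  const helpAt = {}; // pc -> timestamp of the pending help request
  const initAt = {}; // pc -> timestamp of the ongoing feedback
  for (const e of events) {
    if (e.type === 'help') {
      helpAt[e.pc] = e.ts;
    } else if (e.type === 'initHelp') {
      initAt[e.pc] = e.ts;
      if (helpAt[e.pc] !== undefined) {
        waiting += e.ts - helpAt[e.pc];
        delete helpAt[e.pc];
      }
    } else if (e.type === 'endHelp' && initAt[e.pc] !== undefined) {
      tutoring += e.ts - initAt[e.pc];
      delete initAt[e.pc];
    }
  }
  return { tutoring, waiting, factor: waiting / tutoring };
}
```

On the data in Table 1, session 1 gives 123.59 / 34.94 ≈ 3.54, matching the reported waiting factor.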

Table 1. Quantitative data collected in Multimedia Applications

Session number | Tutoring time (min.) | Waiting time (min.) | Waiting factor | Help requests | Interventions
1 | 34.94 | 123.59 | 3.54 | 17 | 17
2 | 18.13 | 29.11 | 1.61 | 13 | 13
3 | 9.77 | 28.76 | 2.94 | 14 | 11
4 | 28.03 | 93.33 | 3.33 | 12 | 12
5 | 38.34 | 140.46 | 3.66 | 37 | 34

All in all, other factors may have conditioned the values obtained. For
example, using the system may require a learning curve, and thus the
parameters could be better in future experiments. Another issue is that, in order
to measure the tutoring time, the teachers had to press the “INIT HELP” button and,
on some occasions, the teachers admitted to having forgotten to do so.

Besides the quantitative data collected, students were asked to fill out
a survey about the sessions in which they used the awareness system. The teachers
(only the two outsiders) were also interviewed regarding the dynamics of the usage of the
tool, its good/bad features and their general impressions.

The main highlights of the interviews are summarised in two good and two bad
comments about the system.
The interviewed teachers identified as the best features of the system that...

• they could find out at a glance which students in the class were working on the
session and their progress
• they could be fairer in the distribution of feedback, attending to those students
who had waited longer

They also identified as the worst problems of the system that...

• they did not read the students' questions in the tool beforehand, but
directly asked the students to tell them
• the UI of the teacher interface could be improved by better adapting it to a portable
touch device (size of elements, buttons, etc.)
From the student surveys (points on a 1-5 Likert scale), it can be stated that when using
the tool...

… the tutoring time of the teacher is more fairly distributed (mean 4.38)
… the order of attention is fairer (mean 4.46)
… students concentrate more on solving the problem than on seeking
the teacher's attention (mean 4.07)
… students trust that the teacher is going to help them even though they do not
raise their hands (mean 4.23)
… the student interface is clear and easy to use (mean 4.00)
… students do not like to write down their questions because they prefer to ask the
teacher directly

5 Conclusions and Future Work

In this work, a set of enhancements for learning orchestration in lab sessions based on
awareness mechanisms has been presented. The enhancements have been organised
following the 5+3 framework for orchestration, and are the following:

• during the lab session, the teacher is informed of the students' progress and
help requests in order to plan and execute interventions for feedback
provision
• during the session, the teacher is informed about the time devoted to a group in
order to manage the timing of the session
• after the session, the teacher can use the students' events recorded during the
session as evidence for summative assessment
• after the session, the teacher reviews the questions and answers of the students
and plans the future enactment of the activity

The aforementioned enhancements have been validated in a real setting, in a course on
Multimedia Applications, demonstrating the effectiveness of the proposed system by
means of quantitative (Likert scale surveys) and qualitative (interviews) data.
As future work, many lines could be addressed, the most relevant being:

• defining metrics that characterise a time-efficient system
• integrating formative assessment on the completion of a section of the
assignment: this kind of integration would validate the students' progress;
they would be asked to deliver a piece of work that proves their progress
or to answer some multiple choice questions for self-assessment
• shared questions: a widget for the students to share their questions during
the session; students could follow a peer's question, allowing the teacher to be
aware of the most followed questions. A widget based on this principle won the
3rd ROLE widget competition and is being implemented for the ROLE
infrastructure.
• new visualizations of the data collected during the sessions, for teachers to better
review a session after its completion
• providing the students with information about the progress of the class (the mean of
all students' progress) so they can compare it with their individual progress
• implementing new strategies for recommending the next step to the teacher: instead
of always choosing the students who have waited longest, different algorithms could be
designed following the same principles that a CPU scheduler uses for allocating time to
processes (FCFS, SJF, Round-Robin)
• a gamification strategy based on the collected data could be implemented to help
students engage more in the session
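The scheduler-inspired recommendation strategies mentioned among these future lines could be sketched as follows. The queue entries and the estimated feedback time used by the SJF variant are hypothetical, since the current system only implements longest-wait-first.

```javascript
// Hypothetical next-group recommendation strategies, borrowing from
// CPU scheduling policies as suggested in the future-work list.
function nextGroup(queue, strategy) {
  const q = [...queue]; // avoid mutating the caller's queue
  switch (strategy) {
    case 'FCFS':
      // first come, first served: the longest-waiting group goes first
      return q.sort((a, b) => a.askedAt - b.askedAt)[0];
    case 'SJF':
      // shortest job first: smallest estimated feedback time goes first
      return q.sort((a, b) => a.estMinutes - b.estMinutes)[0];
    default:
      // e.g. Round-Robin would cycle through the PCs in a fixed order
      return q[0];
  }
}
```

As with CPU scheduling, SJF would maximise the number of groups helped per unit time at the risk of starving groups with long questions, which is exactly the fairness trade-off the teachers commented on.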

Acknowledgments. This research has been partially supported by the project
“Learn3: Towards Learning of the Third Kind” (TIN2008-05163/TSI) of the Spanish
“Plan Nacional de I+D+i", the Madrid regional project “eMadrid: Investigación y
Desarrollo de tecnologías para el e-learning en la Comunidad de Madrid”
(S2009/TIC-1650) and the “EEE project” (TIN2011-28308-C03-01) of the Spanish
“Plan Nacional de I+D+i".

References
[1] Orchestrating learning, Stellar Network of Excellence,
http://www.stellarnet.eu/d/1/1/Orchestrating_learning (last accessed April 02, 2012)
[2] Prieto, L.P., Holenko Dlab, M., Gutiérrez, I., Abdulwahed, M., Balid, W.: Orchestrating
technology enhanced learning: a literature review and a conceptual framework.
International Journal of Technology Enhanced Learning 3(6), 583–598 (2011)
[3] Alavi, H.S., Dillenbourg, P., Kaplan, F.: Distributed Awareness for Class Orchestration.
In: Cress, U., Dimitrova, V., Specht, M. (eds.) EC-TEL 2009. LNCS, vol. 5794, pp. 211–
225. Springer, Heidelberg (2009)
[4] Dong, J.-J., Hwang, W.-Y.: Study to minimize learning progress differences in software
learning class using PLITAZ system. Educational Technology Research and
Development (243) (2012), doi:10.1007/s11423-012-9233-x
[5] Prince, M.: Does Active Learning Work? A Review of the Research. Journal of
Engineering Education 93, 223–231 (2004)
[6] Barrows, H.S.: A taxonomy of problem-based learning methods. Medical
Education 20(6), 481–486 (1986), doi:10.1111/j.1365-2923.1986.tb01386.x
[7] Dillenbourg, P., Järvelä, S., Fischer, F.: The Evolution of Research in Computer-
Supported Collaborative Learning: from design to orchestration. In: Balacheff, N.,
Ludvigsen, S., de Jong, T., Lazonder, A., Barnes, S. (eds.) Technology-Enhanced
Learning. Springer (2009)
[8] Watts, M.: The orchestration of learning and teaching methods in science education.
Canadian Journal of Science, Mathematics and Technology Education (2003),
http://www.informaworld.com/index/918899916.pdf (retrieved)
[9] Websockets, http://dev.w3.org/html5/websockets/ (last accessed April 02,
2012)
[10] NodeJS, http://nodejs.org (last accessed April 02, 2012)
[11] MongoDB, http://mongodb.org (last accessed April 02, 2012)
[12] Gutiérrez Rojas, I., Crespo García, R., Delgado Kloos, C.: Orchestration and Feedback in
Lab Sessions: Improvements in Quick Feedback Provision. In: Kloos, C.D., Gillet, D.,
Crespo García, R.M., Wild, F., Wolpers, M. (eds.) EC-TEL 2011. LNCS, vol. 6964, pp.
424–429. Springer, Heidelberg (2011)
[13] Gutiérrez Rojas, I., Crespo García, R.: Towards efficient provision of feedback in lab
sessions. In: Proceeding of the International Conference on Advanced Learning
Technologies, ICALT 2012 (accepted, 2012)
Discerning Actuality in Backstage
Comprehensible Contextual Aging

Julia Hadersberger, Alexander Pohl, and François Bry

Institute for Informatics, University of Munich
{hadersberger,pohl}@pms.ifi.lmu.de, bry@lmu.de
http://pms.ifi.lmu.de

Abstract. The digital backchannel Backstage aims at supporting active
and socially enriched participation in large class lectures by improving
the social awareness of both lecturer and students. For this purpose,
Backstage provides microblog-based communication for fast information
exchange among students as well as from the audience to the lecturer. Rating
enables students to assess the relevance of backchannel messages for the lecture.
Based on the ratings, a ranking of messages can be determined and immediately
presented to the lecturer. However, relevance is of a temporal nature. Thus,
the relevance of a message should degrade over time, a process called aging.
Several aging approaches can be found in the literature. Many of
them, however, rely on physical time, which only plays a minor role
in assessing relevance in lecture settings. Rather, the actuality of relevance
should depend on the progress of the lecture and on backchannel
activity. Besides, many approaches are quite difficult in terms of
comprehensibility, interpretation and handling. In this article we propose an
approach to aging that is easy to understand and to handle and therefore
more appropriate in the setting considered.

Keywords: Enhanced Classroom, Backchannel, Relevance, Aging.

1 Introduction
Lectures with large audiences are a prominent feature of modern education. In large class lectures students seldom participate actively, despite the fact that active participation is vital for learning success. Several circumstances that favor passivity are provoked by large class lectures [1]: students are often inhibited from speaking in front of many peers. They are wary of interrupting the lecturer because their question might be of only minor relevance to the others and would thus merely disturb the lecture; they are also afraid of appearing incompetent when asking many questions [2]. Students also often have difficulties formulating a question or a comment, especially when dealing with a largely unfamiliar topic. When lectures proceed at a high pace, students have little time to think about the topic and few opportunities to ask or comment. Besides, in the lecture hall only one person can speak at a time; whenever several group members engage in a joint discussion, moderation is necessary.

A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 126–139, 2012.
© Springer-Verlag Berlin Heidelberg 2012
To remedy the shortcomings of large class lectures, much effort has been put into investigating the use of CMC1 and social media for learning (e.g. [3–5]). We argue that the synchronous use of CMC in the form of a digital backchannel carefully designed for use in lectures may help to improve the social experience in the classroom. For example, a student may assess the relevance of her question and request social support; exchanging on the backchannel allows her to gain the confidence to raise a hand. The lecturer, too, can utilize the communication on the backchannel to lower barriers and to stay connected with the audience. The system Backstage [6, 7] is a digital backchannel specifically tailored for use in large class lectures, developed as part of a research project that aims at advances in both e-learning and social media. Backstage provides carefully designed microblog-based communication by which students can rapidly exchange opinions, questions and comments (cf. Section 2).
Communication on a backchannel can quickly become confusing and incomprehensible without further structuring and filtering, even when the number of participants is small. Furthermore, relevance and quality may vary, entailing the need to filter out irrelevant messages. Therefore, students may rate, i.e. approve or reject, messages. Rating plays an important role for the lecturer: because of her exposed role and the short time spans during which she can pay attention to the backchannel while lecturing, it is hardly possible for her to get a meaningful overview of the backchannel communication without the help of the audience. Rating makes it possible to provide her with a top-k ranking of the relevant messages. Rating is also important for the students because it serves as an instrument to collectively direct the lecturer's attention to what they find particularly relevant for their reception of the lecture.
However, the relevance of lecture-related messages sent during the lecture is of a temporal nature. Thus ratings, and rankings for that matter, should depend on time. As the lecture proceeds, topics might change and some questions or comments might become obsolete with respect to the progress of the lecture (while staying relevant and available for discussions and exchange after the lecture). Therefore, some kind of aging is needed: the importance of messages should gradually degrade over time. With aging, attention during the lecture is directed to recent and active messages. However, determining age on the basis of physical time does not seem reasonable for our purposes, since lectures vary in their progress. For example, introductory slides might be presented at a much higher pace than a difficult mathematical proof. Aging should rather depend on a lecture-specific measure of time such as the activity on the backchannel. The approaches found in the literature seem too involved for our needs and difficult to handle in the context of a backchannel for large class lectures. In this article we present an approach to aging that is based on backchannel activity and that is highly focused on ease of comprehension and handling. It should be noted that although our approach is specifically conceived for Backstage, it might also be interesting for other microblogging platforms like Twitter2, which is discussed in Section 5.

1 Computer-Mediated Communication
2 http://www.twitter.com
2 A Short Overview of Backstage

Backstage is a digital backchannel for use in large class lectures. The central part of Backstage is CMC on the basis of microblogs akin to Twitter. Microblogs are short messages comprising only a few words. They seem apt for synchronous use during lectures, since they contain only one item of information and can be read and written quickly. Unlike common microblogging, Backstage requires messages to be assigned to predefined categories, e.g. Question or Answer. One rationale behind categories is to convey to the students the kind of communication sought on the backchannel. As mentioned above, messages may be rated by the students to express acceptance or rejection of a message in terms of quality and relevance for the lecture.
A major goal of Backstage is to provide communication and promote student-to-student as well as student-to-lecturer interaction conducive to learning. For this reason, Backstage guides the user's interactions [8]. To provide context on Backstage, the presentation slides are integrated into the users' dashboards (cf. Figure 1).

Fig. 1. The lecturer’s dashboard on Backstage: the message stream is shown at the
left-hand side. The slides are displayed at the center with the categories of messages
on top. At the right-hand side the aggregated topic overview is displayed.

To align the backchannel communication with the slides, the creation of a message is a deliberate process, simple and intuitive to perform, that is in a manner inspired by scripts [9]. It is realized by an iconic drag-and-drop onto the slides that directs users towards messages profitable for learning: to write a message the student has to be aware of what she wants to say (both in terms of category and content), and of which part of the slide the message refers to
(cf. [8]). That is, on Backstage messages annotate slides, which is also referred to as explicit referencing [10]. As already mentioned, messages on Backstage can also be read, and possibly answered, both by students and by the lecturer at any time after the lecture.
Backstage also provides means to improve the lecturer's awareness. Since, due to the script-based user interface, every message is necessarily assigned to some predefined category (e.g. Question, Answer, Remark, Too Fast), an aggregated overview showing the distribution of the communication over the categories can be given. For example, such an overview makes it possible for the lecturer to quickly become aware during the lecture that many students are getting lost, which presumably results in a notable increase in Question- and Too Fast-messages. Besides such a topic-related overview, a top-k ranking of messages can be generated showing the k messages that the audience finds particularly relevant. Such a ranking is based on the students' ratings. Thus, rating allows students to direct the lecturer's attention to what they find relevant. Both kinds of overview, the distribution of messages over the categories and the content-related overview given by a top-k ranking, support the lecturer in staying attached to the backchannel.
To support active participation, Backstage allows the conduct of quizzes reminiscent of audience response systems (e.g. [11, 12]). Recently, audience response systems have gained much attention. They not only allow to playfully assess students' retention but also help to structure the lecture and activate students on a regular basis. When a quiz is conducted on Backstage, students can only answer the quiz; other functionalities are disabled. After the quiz is finished, the results are integrated as ordinary slides that can be annotated and viewed. That is, quizzes can be used to introduce some gamification into the lecture, thus providing a kind of break and sustaining the students' attention.

3 Related Work

Prior to presenting our approach to ranking with aging, it is reasonable to provide the reader with a short overview of the field. As mentioned above, we want to determine a ranking of messages based on the students' ratings. What is understood as rating and ranking is in many cases not so clear, however. Making matters worse, rating and ranking often occur interleaved, since rankings are frequently determined on the basis of ratings. We distinguish between rating and ranking as follows: rating refers to the process of assigning some concrete value to a single message, e.g. "plus" and "minus" or "approve" and "reject". Ranking, in turn, relates two or more messages to each other, thereby specifying a relative (strict) order, for example by pairwise comparisons of the form "Message A is more relevant than message B".

3.1 Rating

Rating has been applied in various situations. On the Internet it is especially known for its use on commercial websites (rating products or sellers) and in
Web 2.0 applications, to get feedback and find high-quality content [13, 14]. Basically, rating schemes can be divided into two main groups, namely explicit and implicit rating.
The first group – explicit rating – comprises all schemes that necessitate an intentional vote of a user, which means she is conscious of her evaluation [15, 16]. This kind of obvious rating forces the user to actively think about her judgment, but it can also be seen as an effort, so that the user might be discouraged if there is no kind of reward for it [17]. The simplest form of explicit rating is solely giving users the possibility to "like" an item by voting for it [16], perhaps even on a five-star rating scale. Normalization is often used to keep the score within a certain range. The downside of normalization is that the reliability of the average score is not apparent to the users. For example, an item with an average rating of two of five stars voted by only one person does not seem as bad as an item with the same average rating voted by, say, twenty people. However, the opinion of the larger group seems more reliable. Another explicit form of rating gives the possibility to not only vote positively for an item, but also negatively, or even to express neutrality [18–21]. Negative ratings are sometimes desired to give users the possibility to "punish" inappropriate items or behavior. A disadvantage is that calculations with negative values can become complicated; chances are that positive and negative values cancel each other out. As a result, no received votes and a balanced average of votes might be observed as the same overall score.
The second group – implicit rating – extracts rating information from non-rating interactions or data that is nevertheless interpreted as votes. The user is often unconscious of her influence, since rating happens in the course of using the application [15, 16]. Implicit rating helps to overcome data sparsity, since the user does not need to be motivated to explicitly provide ratings. Different kinds of interactions can be chosen as a source of ratings, depending on the context. Clicks on links or items can be seen as interest and positive feedback, but there could also be "misclicks" which are then misinterpreted [13, 18]. Other interactions might be more reliable, like adding an item to one's favorites, printing or buying an item, or even measuring the time spent on an item [17]. Furthermore, answering a question on a discussion board can also be considered as interest in an item and thus as a positive vote.
Although implicit rating seems more complex to handle, since much data has to be analyzed and stored, it allows more reliable data to be collected in a more timely fashion. On the other hand, explicit rating is the only way to make the user really consciously form an opinion about an item.

3.2 Ranking
Ranking can be found in various situations, for example online for listing the best game players or for ordering search results. For Backstage a ranking is needed that merges the opinions of the users into one single ranking. Basically, there are two different ways to obtain a collective ranking: aggregating individual ratings or
aggregating individual rankings. Furthermore, collective ranking approaches can be split into two groups, namely non-parametric and parametric solutions.
Non-parametric solutions do not rely on any externally set parameters or weights. The first way, obtaining a collective ranking by aggregating individual ratings, comprises some simple mathematical solutions that were already mentioned in Section 3.1, like summing up values or calculating the arithmetic mean. As already mentioned, these basic solutions entail different disadvantages, for example that positive and negative values cancel each other out. Furthermore, the arithmetic mean makes it easier for new items to get a better overall rating than older ones, since an item can only receive the highest positive overall rating if every rating was that high [14]. More complex ideas entail more complex problems, like finding experts in question-answer portals. Although it seems a good idea to count the number of people a user has already helped, the problem remains that it is not known whether she only answered lay people or also other experts [22]. To get individual rankings that can be aggregated to a collective ranking, users can be asked to directly order the items according to their opinion. As it is very challenging for users to order many items, comparison-based methods are frequently used. Here, two or more items are shown to the user, who has to decide which one she prefers. Repeated comparison of the winning item against the other alternatives until no items are left results in an individual ranking. This can also be done implicitly; for example, while browsing a website with several links on it, choosing one link can be interpreted as a preference for the clicked link over the other ones [15]. The so-called Hasse method [23] offers the opportunity to create a ranking of items by directly comparing two or more of their properties. A disadvantage is that items can be incomparable if neither is better or worse in all properties. Hence, the Hasse method might result only in a partial order, which can afterwards be ranked according to the average positions. The so-called Copeland Score [23] combines the idea of the Hasse method with the direct comparison of items. As in the Hasse method, items with several properties are compared against each other. The Copeland Score of an item denotes the number of wins minus the number of defeats (incomparability counts as zero) when it is compared to all alternatives. Afterwards the items are ranked by descending Copeland Score. It has to be noted that the Copeland Score results in a total, but not necessarily a strict, order, which means there can be two items with the same score.
Each of the above-mentioned non-parametric solutions can be combined with, and influenced by, parameters and hence become a parametric solution. Setting the parameters is crucial and can influence the overall computation significantly [23]. Therefore, many experiments are required to find the right configuration.
Two interesting projects using individual ratings to obtain a collective ranking shall be mentioned here. The Backchan.nl project [20] is similar to Backstage and includes a formula that combines a so-called voteFactor with an ageFactor. The voteFactor is based on the proportion of positive votes for a message and the number of votes the message received compared to all other messages. This solution is already designed for a very specific context, as it does not only
reward positive items but also highly discussed ones. Another algorithm is the Real-Life-Rating [14], an extension of the so-called Bayesian Rating, which is briefly explained in Section 4. In addition to the rating itself, this algorithm involves the expertise of users in certain domains and the friendship between users. The Real-Life-Rating algorithm is very elaborate, but also very specific. It seems adequate to rather make use of the Bayesian Rating to keep things simple. The last example shows individual ratings and rankings being aggregated at the same time to obtain a collective ranking. The ranking algorithm for microblog search [24] is based on three different properties. First, the FollowerRank, which denotes the number of followers of a user normalized by the total number of her followers and the users she follows. Second, the LengthRank, which is the percentage comparison of a message's length to the longest message within the search results. Finally, the URLRank, which is set to a positive constant if the message contains a URL and to zero otherwise. Although this solution can be criticized, as containing a link or being very long does not necessarily constitute a good message, it is a very good example of the smooth transition between rating and ranking. Although all three properties seem to be rankings due to their names, URLRank and FollowerRank are in fact independently set or calculated values without any comparison to other items.

3.3 Aging
In most projects aging is a negative process of losing influence as time passes. Aging is therefore naturally expressed as some kind of weight that decreases over time and expresses a remaining relevance: the older an item, the lower its influence on the overall score. As we will see in Section 4, the notion of the term "age" is important.
One solution is based on the half-life parameter as known from the modeling of nuclear decay processes. A time-dependent, monotonically decreasing function f(t) is included in the algorithm [25], for example the exponential or the logistic function. The authors define the time function as f(t) = e^(−λt), where λ is the decay rate 1/T0. This algorithm depends on the setting of T0, which specifies how long it takes to reduce the weight by half. The lower T0, the faster the decay of the weight and the lower the influence. Another algorithm, concerning the freshness of items on social tagging sites [26], divides the timeline into discrete and equi-distant time intervals. The time function a^(m−s) is included in the formula, where a denotes a decay factor between zero and one, m counts the number of time slices up to now, and s is an index ranging from one to m; m = s denotes the current time slice. The fresher a tagging, the smaller the exponent and the bigger the whole factor; fresher items thus have a bigger influence.
In contrast to the above-mentioned algorithms, the ageFactor of the system Backchan.nl [20] is not so clear. As the ageFactor is combined with the voteFactor by multiplication, it seems obvious at first sight that aging here is once again some kind of weight. Examples show, however, that voteFactor and ageFactor are inconsistent with one another. We therefore solely focus on the ageFactor formula here. The age of a message is defined by the average age of the last five
votes the message received. The authors use a constant τ = 10^4 by which the average age is divided to reduce the influence of the age factor. This solution seems very intuitive, but it has several drawbacks. First of all, the parameter τ has to be set individually for each context. Moreover, the ageFactor can become larger than one if enough time goes by. The inconsistency of this algorithm lies in the fact that the ageFactor increases with the age of an item. Using the ageFactor as defined in [20], the influence of a post thus increases with its age instead of being reduced.

4 Discerning Actuality in the Ranking of Messages

For the presentation of aging we first assume that the rating procedure is a black box that yields numeric values for the backchannel messages. According to these ratings, messages are sorted in order to obtain a ranking. Various rating schemes of different complexities and requirements may be employed. For the big picture, however, we present in a few words the rating currently used in Backstage. Users may rate a message positively (approval) or negatively (rejection), and only once. The overall rating r(m) of a message m is calculated by the following weighted average (e.g. cf. [14]):
r(m) = (1 / (NR + nr(m))) · (NR · R + nr(m) · (pos(m) / nr(m)))

In the formula above, NR denotes the average number of ratings over all messages, nr(m) = max(1, pos(m) + neg(m)) denotes the total number of ratings for the message m, pos(m) is the number of positive ratings the message m received, neg(m) the number of negative ratings for m, and R denotes the average rating of all messages. As can be seen, positive and negative ratings do not cancel each other out; rather, the negative ratings weaken the influence of the positive ratings. If the total number of ratings for a message nr(m) is much smaller than the average number of ratings NR, the message's rating is dominated by the average rating R, meaning that not much credit is given to the users who rated the message m. Conversely, if the number of ratings for m is greater than the average number of ratings per message, the rating for m is dominated by the users who rated it. Thus, the rating scheme is biased inasmuch as it favors the appraisal of the collective over that of the few. Whether this rating scheme is appropriate for Backstage needs to be investigated in an experiment in the near future.
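In code, this weighted average can be sketched as follows. This is a minimal Python sketch; the function and parameter names are ours, and the global averages NR and R are assumed to be maintained elsewhere.

```python
def rating(pos, neg, avg_num_ratings, avg_rating):
    """Weighted average rating r(m) of a message m.

    pos, neg         -- positive and negative ratings of m
    avg_num_ratings  -- NR, the average number of ratings over all messages
    avg_rating       -- R, the average rating over all messages
    """
    nr = max(1, pos + neg)  # nr(m), guarded against division by zero
    return (avg_num_ratings * avg_rating + nr * (pos / nr)) / (avg_num_ratings + nr)

# A message with a single positive rating is pulled towards the global
# average ...
few = rating(pos=1, neg=0, avg_num_ratings=20, avg_rating=0.5)     # ~0.52
# ... while a heavily rated message is dominated by its own raters:
many = rating(pos=80, neg=20, avg_num_ratings=20, avg_rating=0.5)  # 0.75
```

Note that negative ratings do not enter the numerator directly; they only enlarge nr(m) and thereby dilute the positive ratings, exactly as described above.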

4.1 Measuring Time and Age in Backstage

To better reflect the progress of a lecture, we propose to use the backchannel activity during the lecture to promote aging. The logical time on the backchannel advances after every n-th interaction on Backstage. Both the number n and the specification of what is considered as activity are defined by the lecturer. Activities may comprise the sending of messages of certain categories as well as rating. Since
on Backstage rating is performed by (automatically) sending messages of a special rating category, specifying activity amounts to nothing else than selecting categories. Both the number of interactions after which the time advances and the specification of activity on Backstage are very intuitive parameters that can easily be handled by the lecturer, even during a lecture.
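A minimal sketch of such an activity-driven logical clock (a Python sketch; the class and parameter names are ours):

```python
class LogicalClock:
    """Logical backchannel time that advances after every n-th counted
    interaction.  Which categories count as activity (e.g. Question,
    Answer, rating messages) is chosen by the lecturer."""

    def __init__(self, n, counted_categories):
        self.n = n
        self.counted = set(counted_categories)
        self.interactions = 0
        self.time = 0  # current logical time

    def record(self, category):
        """Record one interaction; return True if the clock ticked."""
        if category not in self.counted:
            return False
        self.interactions += 1
        if self.interactions % self.n == 0:
            self.time += 1
            return True
        return False

clock = LogicalClock(n=3, counted_categories={"Question", "Rating"})
events = ["Question", "Remark", "Rating", "Question", "Rating"]
ticks = [clock.record(c) for c in events]
# "Remark" is not counted, so the third counted interaction
# (the second "Question") triggers the first and only tick:
print(ticks)        # [False, False, False, True, False]
print(clock.time)   # 1
```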
A first idea for measuring the age of messages would be to calculate the difference between the current time and the time of creation. However, this solution is inappropriate, since messages that are regularly rated, i.e. active messages, would age at the same pace as messages that are disregarded by the audience. On Backstage, active messages shall age at a lower pace than inactive messages. Thus, it is reasonable to also consider the age of a message's ratings. Aging then depends on the attention a message receives: it is promoted when the audience's focus on a message recedes. A naive approach to determining age might be to calculate the difference between the current time and the time of the most recent rating a message received. This is problematic, though. Imagine, for example, that many students rated a post a long time ago, i.e. the message is actually obsolete; if one student then revives the message by rating it, the message would suddenly, and inexplicably, rejuvenate.
The arithmetic mean over all ratings would solve this issue. However, it is very sensitive to outliers: many ratings at the same time would be needed to ensure that the age of a message can be considered robust. To overcome these difficulties we favor the use of the median as the average age of a message. The median of a frequency distribution is the sampled value of an (artificial) instance that bisects the distribution. For a sequence (x1, x2, ..., xk) of k sampled values sorted in ascending order, the median x̃ is computed as follows:

⎨x k+1 if k is odd
x= 2 
⎩1 xk + xk otherwise
2 2 +12

The median is an interesting representative of the central tendency, since it is quite robust against outliers but likewise sensitive enough to reflect relevant changes in the data (cf. [27]). Thus, to determine the age of a message, we compute the median over the ages of its ratings and the age of the message's creation. Considering the time of creation is necessary for messages that have not received any ratings yet; otherwise such messages would not be considered by aging.
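The resulting age computation is small enough to state directly. This is a Python sketch using the standard library's median; the function and parameter names are ours.

```python
from statistics import median

def message_age(now, created_at, rating_times):
    """Age of a message: the median over the ages of its ratings and
    the age of its creation, all measured in logical time.

    Including the creation time ensures that a message without any
    ratings still participates in aging."""
    events = [created_at] + list(rating_times)
    return median(now - t for t in events)

# One late rating does not rejuvenate an otherwise obsolete message:
print(message_age(now=20, created_at=2, rating_times=[3, 4, 5, 19]))  # 16
# A message without ratings ages from its creation time:
print(message_age(now=10, created_at=4, rating_times=[]))             # 6
```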

4.2 Aging in Backstage

After every n interactions on Backstage, aging is promoted and the ranking is updated. We therefore propose the procedure given as pseudo-code in Algorithm 1.
As can be seen in the given procedure, the rating score is obtained by multiplying the positions of a post in the two rankings built upon age and ratings. Since two posts may be assigned the same rating, they may share the same position in the respective ranking. The final top-k ranking is
Algorithm 1. AgingRank: Ranking with Aging

Require: the number k of messages that constitute the ranking
Require: the interaction counter n
if clockTick(n) then
  candidatePosts := getCandidatePosts()
  {promote aging}
  for all post in candidatePosts do
    updateAge(post, calculateMedianAge(post))
  end for
  rankingByAge := sortDescendingByAge(candidatePosts)
  rankingByRating := sortDescendingByRating(candidatePosts)
  {we assume lists to be 1-indexed}
  for indexAge := 1 to maxIndex(rankingByAge) do
    post := getElement(indexAge, rankingByAge)
    {get the index of the post in the ranking by rating}
    indexRating := getIndex(post, rankingByRating)
    updateScore(post, indexAge * indexRating)
  end for
  {sort the candidates by the just updated score values}
  relevantPosts := sortDescendingByScore(candidatePosts)
  resolved := resolveConflicts(relevantPosts)
  result := firstElements(k, resolved)
  updateRanking(result)
end if

then computed by sorting the list of relevant messages according to the messages' scores. However, the given procedure may result in conflicts. For example, two messages, say m1 with indexRating = 2 and indexAge = 3, and m2 with the positions reversed, that is indexRating = 3 and indexAge = 2, would receive the same score 6. Both messages m1 and m2 would be assigned the same position in the final ranking. Thus, conflict resolution is necessary.
We propose a simple but suitable approach to conflict resolution: we let the lecturer decide which of the conflicting messages should get higher priority. To this end, the lecturer specifies in her profile whether she favors a conservative ranking, i.e. older messages stay in the ranking, or a progressive ranking, in which older messages are replaced by newer ones whenever possible. In case of further remaining conflicts we may eventually establish a strict order by resorting to the physical age, since the conflicting messages can then be considered equal in terms of relevance and logical age.
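The score computation and conflict resolution can be sketched as follows. This is one plausible reading of the pseudo-code, in which position 1 denotes the youngest, respectively best-rated, post and a smaller rank product is better; messages are plain dicts and all names are ours.

```python
def aging_rank(posts, k, progressive=True):
    """Top-k ranking by the product of a post's position in the
    ranking by age and in the ranking by rating.

    Ties (equal rank products) are resolved by the lecturer's
    preference: a progressive ranking favors younger posts,
    a conservative ranking favors older ones."""
    by_age = sorted(posts, key=lambda p: p["age"])          # youngest first
    by_rating = sorted(posts, key=lambda p: -p["rating"])   # best rated first
    age_pos = {p["id"]: i for i, p in enumerate(by_age, start=1)}
    rating_pos = {p["id"]: i for i, p in enumerate(by_rating, start=1)}
    for p in posts:
        p["score"] = age_pos[p["id"]] * rating_pos[p["id"]]
    tie = (lambda p: p["age"]) if progressive else (lambda p: -p["age"])
    return sorted(posts, key=lambda p: (p["score"], tie(p)))[:k]

posts = [{"id": "m1", "age": 3, "rating": 0.9},
         {"id": "m2", "age": 1, "rating": 0.7},
         {"id": "m3", "age": 2, "rating": 0.8}]
# m1 and m2 both get score 3; the progressive strategy ranks the
# younger m2 first, while a conservative one would prefer m1:
print([p["id"] for p in aging_rank(posts, k=2)])  # ['m2', 'm1']
```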
To determine the follow-up ranking it is not necessary to consider the entire message stream. It suffices to determine a set of candidates, the size of which depends on the number of interactions n by which aging is promoted. Certainly, the messages listed in the current ranking are also candidates for the follow-up ranking. However, other messages may be candidates as well. For this purpose, consider the example timeline in Figure 2.
Fig. 2. Example timeline of interactions. The rectangles illustrate the points in time at which messages are sent, the circles the points in time at which messages are rated. The dotted arrows connect the ratings with the rated messages. The dots at the timeline indicate further interactions.

Between two ticks of the logical clock, n interactions are carried out by the users. These interactions may comprise the creation of x ≤ n new messages and y = n − x ratings for existing messages. The ratings may refer to up to y messages created during the recent or some earlier time span. All these messages have recently been in the focus of the audience. Thus, besides the currently ranked messages, both the newly created and the newly rated messages are candidates for the follow-up ranking. In total, the set of candidate messages comprises no more than k + n messages.

5 Discerning Actuality in Twitter-Based Tools

Although we conceived timely ranking by aging for the digital backchannel Backstage, it might also be of interest for other microblogging platforms based on Twitter. To employ our approach it is sufficient to provide means to determine relevance ratings, to measure activity, and to set the strategy for updating the rankings. This section illustrates possible applications in both e-learning and non-e-learning fields.
Twitter is the most prominent publicly available generic microblogging service and has gained much attention, not only from e-learning researchers. Twitter allows users to relate microblog messages, so-called tweets, by hashtags. Thus, hashtags make it possible to retrieve a coherent line of communication on a topic. Users can follow other users, i.e. become their followers. The tweets of the followed users are displayed in one's own message stream. One may forward messages of followed users to one's own followers by a special form of citation, so-called re-tweets: the original message is copied and prefixed with the keyword "RT" followed by the originating user. Thus, a retweet is usually of the form "RT @originUser [original text]".
Twitter provides a rich API3 upon which custom microblogging applications
can be built. One e-learning backchannel similar to Backstage is Twitterwall4
[28]. The platform allows the retrieval and display of multiple message streams
3 Application Programming Interface
4 http://twitterwall.tugraz.at
by specifying hashtags. Furthermore it extends Twitter in that it provides rating


of tweets. To extend Twitterwall with aging, the rating scheme that is already
integrated can be used. Activity can be measured by the number of messages
containing certain hashtags. As for each hashtag Twitterwall displays a separate
message stream, it might also be interesting to provide rankings for each of those
streams that underly distinct aging.
Also, discerning actuality in tweet rankings directly on Twitter can be ac-
complished in much the same way as is proposed for Backstage. As mentioned
above, Twitter does not provide rating of tweets. However, rating of a tweet can
be mimicked by counting the number of users retweeting the tweet. That is, a
tweet that is frequently retweeted is heavily focused on by users and may thus
be considered relevant. On Twitter, activity can be measured by the number of
messages containing certain hashtags and by the number of retweets of those
messages. Obtaining a timely ranking of tweets may provide interesting insights
into trends in social news broadcast on Twitter.
Another quite interesting field of application might be stock microblogging,
e.g. TweetTrader5 [29]. Among other things, users of TweetTrader estimate in
tweets the performance of stock quotations. Using a special, machine-processable
syntax, these tweets are evaluated and aggregated to determine the collective estima-
tion of near-future stock developments. Discerning actuality in a ranking of those
estimations might be of great interest for stock microblogging. Ratings in this
case might be based on the content of the tweets, i.e. the users’ assessments of
the stock development. The activity may be specified by the number of tweets
sent, for example. A progressive update strategy is likely to be preferred for a
ranking in order to always be aware of the most recent estimations. Also, timely
ranking of stock quotations might yield interesting outcomes in the analysis
of trends.
6 Conclusion and Future Work

This article proposes an intuitive and easy-to-handle approach to discerning
actuality in Backstage, a backchannel carefully designed for use in large
class lectures. We show how aging can be used to provide the lecturer with a
ranking that considers actuality. The approach favors the activity on the
backchannel as the time measure according to which messages age, since
physical time plays only a minor role in determining a lecture's progress.
Potential fields of application are sketched. Since Backstage has undergone
several changes during the development of the presented approach, the
integration is not yet finished. Furthermore, its usefulness needs
to be investigated in an experimental setting. Promoting aging also seems to
be valuable for other purposes. For example, further functionalities that aim
at supporting the awareness of students and lecturer and making interactions
more personal and affectionate are currently under development. Some of these
5 http://tweettrader.net
138 J. Hadersberger, A. Pohl, and F. Bry

functionalities also depend on a sort of time and might also require aging. These
topics are going to be discussed in a forthcoming paper.

References

1. van de Grift, T., Wolfman, S.A., Yasuhara, K., Anderson, R.J.: Promoting Interac-
tion in Large Classes with a Computer-Mediated Feedback System. In: Proceedings
of the International Conference CSCL, pp. 119–123 (2003)
2. Schworm, S., Fischer, F.: Academic Help Seeking. In: Handbuch Lernstrategien
[Handbook of Learning Strategies], pp. 282–293 (2006) (in German)
3. Ebner, M., Schiefner, M.: Microblogging – More than Fun? In: Arnedillo-Sánchez,
I., Isaías, P. (eds.) Proceedings of the IADIS Mobile Learning Conference 2008,
pp. 155–159 (2008)
4. Ebner, M.: Introducing Live Microblogging: how Single Presentations can be En-
hanced by the Mass. Journal of Research in Innovative Teaching 2(1), 108–111
(2009)
5. Holotescu, C., Grosseck, G.: Using Microblogging to Deliver Online Courses. Case-
Study: Cirip.ro. Procedia - Social and Behavioral Sciences 1(1), 495–501 (2009);
World Conference on Educational Sciences. New Trends and Issues in Educational
Sciences, Nicosia, North Cyprus (February 4-7, 2009)
6. Bry, F., Gehlen-Baum, V., Pohl, A.: Promoting Awareness and Participation in
Large Class Lectures: The Digital Backchannel Backstage. In: Proceedings of the
IADIS International Conference e-Society, Avila, Spain, pp. 27–34 (March 2011)
7. Pohl, A., Gehlen-Baum, V., Bry, F.: Introducing Backstage – A Digital Backchan-
nel for Large Class Lectures. Interactive Technology and Smart Education 8(4),
186–200 (2011)
8. Baumgart, D., Pohl, A., Gehlen-Baum, V., Bry, F.: Providing Guidance on Back-
stage, a Novel Digital Backchannel for Large Class Teaching. In: Education in a
Technological World: Communicating Current and Emerging Research and Tech-
nological Efforts, Formatex, Spain, pp. 364–371 (2012)
9. King, A.: Scripting Collaborative Learning Processes: A Cognitive Perspective.
In: Fischer, F., Kollar, I., Mandl, H., Haake, J.M. (eds.) Scripting Computer-
Supported Collaborative Learning, pp. 13–37. Springer US, Boston (2007)
10. Suthers, D., Xu, J.: Kükäkükä: An Online Environment for Artifact-Centered Dis-
course. In: Proceedings of the 11th World Wide Web Conference (2002)
11. Golub, E.: On Audience Activities During Presentations. Journal of Computing
Sciences in Colleges 20(3), 38–46 (2005)
12. Kay, R.H., LeSage, A.: Examining the Benefits and Challenges of using Audience
Response Systems: A Review of the Literature. Computers & Education 53(3),
819–827 (2009)
13. Bian, J., Liu, Y., Agichtein, E., Zha, H.: Finding the Right Facts in the Crowd:
Factoid Question Answering over Social Media. In: Proceedings of the 17th Inter-
national Conference on World Wide Web, WWW 2008, pp. 467–476. ACM, New
York (2008)
14. Marmolowski, M.: Real-Life Rating Algorithm. Technical Report, DERI (2008)
15. Das Sarma, A., Das Sarma, A., Gollapudi, S., Panigrahy, R.: Ranking Mechanisms
in Twitter-Like Forums. In: Proceedings of the 3rd ACM International Conference
on Web Search and Data Mining, WSDM 2010, pp. 21–30. ACM, New York (2010)

16. Lerman, K.: Dynamics of a Collaborative Rating System. In: Zhang, H.,
Spiliopoulou, M., Mobasher, B., Giles, C.L., McCallum, A., Nasraoui, O., Sri-
vastava, J., Yen, J. (eds.) WebKDD 2007. LNCS, vol. 5439, pp. 77–96. Springer,
Heidelberg (2009)
17. Nichols, D.M.: Implicit Rating and Filtering. In: Proceedings of the 5th DELOS
Workshop on Filtering and Collaborative Filtering, pp. 31–36 (1998)
18. Agichtein, E., Castillo, C., Donato, D., Gionis, A., Mishne, G.: Finding High-
Quality Content in Social Media. In: Proceedings of the International Conference
on Web Search and Web Data Mining, WSDM 2008, pp. 183–194. ACM, New York
(2008)
19. Bian, J., Liu, Y., Agichtein, E., Zha, H.: A Few Bad Votes too Many? Towards
Robust Ranking in Social Media. In: Proceedings of the 4th International Workshop
on Adversarial Information Retrieval on the Web, AIRWeb 2008, pp. 53–60. ACM,
New York (2008)
20. Harry, D., Gutierrez, D., Green, J., Donath, J.: Backchan.nl: Integrating Backchan-
nels with Physical Space. In: Extended Abstracts on Human Factors in Computing
Systems, CHI 2008, pp. 2751–2756. ACM, New York (2008)
21. Kamvar, S.D., Schlosser, M.T., Garcia-Molina, H.: The Eigentrust Algorithm for
Reputation Management in P2P Networks. In: Proceedings of the 12th Interna-
tional Conference on World Wide Web, WWW 2003, pp. 640–651. ACM, New
York (2003)
22. Zhang, J., Ackerman, M.S., Adamic, L.: Expertise Networks in Online Communi-
ties: Structure and Algorithms. In: Proceedings of the 16th International Confer-
ence on World Wide Web, WWW 2007, pp. 221–230. ACM, New York (2007)
23. Al-Sharrah, G.: Ranking Using the Copeland Score: A Comparison with the Hasse
Diagram. Journal of Chemical Information and Modeling 50(5), 785–791 (2010)
24. Nagmoti, R., Teredesai, A., De Cock, M.: Ranking Approaches for Microblog
Search. In: Proceedings of the 2010 IEEE/WIC/ACM International Conference
on Web Intelligence and Intelligent Agent Technology, WI-IAT 2010, pp. 153–157.
IEEE Computer Society, Washington, DC (2010)
25. Ding, Y., Li, X.: Time Weight Collaborative Filtering. In: Proceedings of the
14th ACM International Conference on Information and Knowledge Management,
CIKM 2005, pp. 485–492. ACM, New York (2005)
26. Huo, W., Tsotras, V.J.: Temporal Top-k Search in Social Tagging Sites Using
Multiple Social Networks. In: Kitagawa, H., Ishikawa, Y., Li, Q., Watanabe, C.
(eds.) DASFAA 2010, Part I. LNCS, vol. 5981, pp. 498–504. Springer, Heidelberg
(2010)
27. Garcin, F., Faltings, B., Jurca, R.: Aggregating Reputation Feedback. In: Proceed-
ings of the First International Conference on Reputation: Theory and Technology,
Gargonza, Italy, vol. 1, pp. 62–67 (2009)
28. Ebner, M.: Is Twitter a Tool for Mass-Education? In: Proceedings of the 4th In-
ternational Conference on Student Mobility and ICT, Vienna (2011)
29. Sprenger, T.: TweetTrader.net: Leveraging Crowd Wisdom in a Stock Microblog-
ging Forum. In: Proceedings of the 5th International Conference on Weblogs and
Social Media, Barcelona, Spain (2011)
Tweets Reveal More Than You Know:
A Learning Style Analysis on Twitter

Claudia Hauff1, Marcel Berthold2, Geert-Jan Houben1,
Christina M. Steiner2, and Dietrich Albert2

1 Delft University of Technology, The Netherlands
{c.hauff,g.j.p.m.houben}@tudelft.nl
2 Knowledge Management Institute, Graz University of Technology, Austria
{marcel.berthold,christina.steiner}@tugraz.at

Abstract. Adaptation and personalization of e-learning and technology-
enhanced learning (TEL) systems in general have become a key factor
for learning success with such systems. In order to provide adaptation,
the system needs to have access to relevant data about
the learner. This paper describes a preliminary study with the goal of
inferring a learner's learning style from her Twitter stream. We selected the
Felder-Silverman Learning Style Model (FSLSM) due to its validity and
widespread use and collected ground truth data from 51 study partici-
pants based on self-reports on the Index of Learning Style questionnaire
and tweets posted on Twitter. We extracted 29 features from each sub-
ject’s Twitter stream and used them to classify each subject as belonging
to one of the two poles for each of the four dimensions of the FSLSM.
We found better-than-chance agreement for only a single dimension:
active/reflective. Further implications and an outlook are presented.

1 Introduction
Over the last decade, personalization and adaptation have become mainstream
components of E-learning systems. Such adaptations provide learners
with a personalized learning experience that is either unique to each individual
or unique to a particular group of learners. The goals are clear: to keep the learn-
ers motivated and engaged, to decrease the learners’ frustration, to provide an
optimal learning environment and, of course, to increase the learners’ expertise
in a particular subject.
In order to provide adaptation, the system needs to have access to relevant
data about the learner. What is deemed relevant in this context depends on
the facilities that are provided by the system. Adaptation can be provided on a
number of levels with varying granularity. It can be based on gender [1], on the
learners’ level of expertise [2, 3], on the learners’ culture [4] or on the learners’
learning styles [5].
The latter, adaptation according to the learners’ learning styles, is also the
focus of this paper. We note that there is controversy surrounding the learning
style hypothesis [6], which states that enabling a learner to learn with material

A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 140–152, 2012.

c Springer-Verlag Berlin Heidelberg 2012
Tweets Reveal More Than You Know 141

that is tailored to her own learning style will outperform a learner who learns
the material tailored to a learning style that is not her own. As of today no
studies have conclusively shown that this hypothesis actually holds for a wide
range of people. Although learning styles may not yield improved results with
respect to objective measures (such as testing the increase in learner expertise),
learning styles are of importance for E-learning systems to improve the learners'
satisfaction with the material and to keep them engaged by offering them learning
that is appropriate for their self-perceived learning style.
At the same time, a question arises: do Twitter users actually provide infor-
mation about their learning style, or about how they learn? In [7], the authors
investigate why people continue using Twitter. Among other things, they show
that users keep using Twitter because of positive content gratification, which
comprises information sharing and self-documentation (the way users learn,
keep track of what they are doing, and document their lives). It can therefore
be argued that some tweets intentionally report on their authors' learning
behaviour. In addition, the data mining in this paper is also based on phrases
derived from an existing questionnaire, which should also cover non-intentional
statements about learning behaviour.
Over the years, a number of learning style models have been proposed, among
them Kolb’s Experiental Learning Theory [8], Fleming’s VARK learning styles
inventory [9] and Felder-Silverman-Learning-Style-Model (FSLSM) [10, 11]. In-
dependent of the particular model chosen, the procedure to determine a learner’s
learning style is always the same: the learner fills in a standardized questionnaire
(specific to the model) and based on the answers given the different dimensions
of the model are determined. One of the problems with this approach is that
the learner may be unwilling to spend a lot of effort on this procedure1 . More
importantly though, learners cannot be expected to repeatedly fill in such a ques-
tionnaire, which, if a system is used for a long time may become necessary, as
there is evidence that learning styles change over time [12]. Thus, an automatic
approach to infer the learning style of a learner is likely to be more precise in
the long run.
Ideally, we are able to determine the learner’s learning style without asking
the learner for explicit feedback. One potential solution to this problem lies
in the social Web whose rise has made people not merely consumers of the
Web, but active contributors of content. Widely adopted social Web services,
such as Twitter2 , Facebook3 and YouTube4 , are frequented by millions of active
users who add, comment or vote on content. If a learner is active on the social
Web, a considerable amount of information about her is available on the Web
and, depending on the particular service used, most of it is publicly accessible.
We envision E-learning systems in the future to simply ask the learner about
her username(s) on various (publicly accessible) social Web services where the

1 The ILS questionnaire for instance consists of 44 questions.
2 http://www.twitter.com/
3 http://www.facebook.com/
4 http://www.youtube.com/
142 C. Hauff et al.

learner is active on. Then, based on the learner’s “online persona”, aggregated
from the social Web, the system can automatically infer the learner’s learning
style. We have already shown in previous work [13] that it is possible to derive a
basic profile of the learner’s knowledge in a particular domain from the learner’s
activities on the microblogging platform Twitter. In this work, we are interested
in whether, and to what extent, it is possible to derive information about a
learner's learning style from the same social Web stream.
In the EU project ImREAL (Immersive Reflective Experience-based Adaptive
Learning), intelligent services are developed to augment and improve simulated
learning environments, among other things by bringing real-world user data,
e.g. content retrieved from tweets, into the simulation in order to link real-world
experiences to the simulation. In this paper the following hypothesis is investigated: the information
the learner can provide in the learning style questionnaire is already implicitly
available in the learner’s utterances in the social Web. If this is indeed the case,
the research question then becomes how to extract this implicit information
and transform it into the different dimensions of the learning style models.
We consider the combination of machine learning and psycho-pedagogical
approaches presented here a preliminary study: if we are able to show success
in predicting a learner's learning style based on the learner's tweets with a
number of simple features, we have evidence that this is a path that is worth
investigating further.
The remainder of the paper is organized as follows: in Section 2 related work is
presented. Section 3 describes our pilot study and the setup of the experiments.
The results are then presented in Section 4 and the paper is concluded with a
discussion and an outlook to future work in Section 5.

2 Related Work

We first describe previous work that sheds light on why people use Twitter.
Then, we turn to previous works that have attempted what we set out to do
too: to infer a learner's learning style from implicit information available about
the learner, that is, without letting the learner fill in a questionnaire.

2.1 The Use of Twitter in Scientific Research

Two questions that have been investigated by a number of researchers in the
past are what motivates people to use Twitter and what people actually post
about. Java et al. [14] determined four broad categories of tweets: daily chat-
ter (the most common usage of Twitter), conversations, shared information/URLs
and reported news. Naaman et al. [15] derived a more detailed categorization with
nine different elements: information sharing, self promotion, opinions, statements
and random thoughts, questions to followers, presence maintenance, anecdotes
about me and me now. Moreover, they also found that approximately eighty
percent of the users on Twitter focus on themselves (they are so-called "Meform-
ers”), while only a minority of users are driven largely by sharing information (the
“Informers”). Westman et al. [16] performed a genre analysis on tweets and iden-
tified five common genres: personal updates, direct dialogue (addressed to certain
users), real-time sharing (news), business broadcasting and information seeking
(questions for mainly personal information). Finally, Zhao et al. [17] conducted in-
terviews and asked people directly about their motivations for using Twitter; sev-
eral major reasons surfaced: keeping in touch with friends and colleagues, pointing
others to interesting items, collecting useful information for one’s work and spare
time and asking for help and opinions. These studies show that a lot of tweets
are concerned with the user herself; we hypothesize that among these user cen-
tred tweets, there are also useful ones for the derivation of the learner’s knowledge
profile.
A number of Twitter studies also attempt to predict user characteristics
from tweets. While we aim to extract a learner's learning style, Michelson
et al. [18] derive topic profiles of Twitter users, which are hypothesized to
be indicative of the users' interests and expertise. In a number of other works,
e.g. [19–21], elementary user characteristics are inferred from Twitter, including
gender, age, political orientation, regional origin and ethnicity.

2.2 Learning Style Investigations


A number of previous works exist that infer learners’ learning styles based on
their behaviour within the learning environment. In [22] the outline of such a
system is sketched, though no experiments are reported. Garcia et al. [23]
investigated, for a class of Artificial Intelligence students, to what extent it is
possible to infer a learner's learning style (specifically the ILS variant) from the
learner's interaction with a Web-based E-learning system. They relied on a number
of features that model the students’ behaviour on the learning system. Some
examples of the chosen features are the type of reading material (concrete or ab-
stract), the amount of revision before an exam, the amount of time spent on an
exam, the active participation on message boards and chats within the learning
environment, the number of work examples accessed and the exam result. The
approach was evaluated on 27 students with promising results; the most accu-
rate prediction was possible for the perception dimension (intuitive vs. sensing)
with a precision of 77%, followed by the understanding dimension (sequential vs.
global) with 63% precision and the processing dimension (active vs. reflective)
with 58% precision. The input dimension (visual vs. verbal) was not investigated
in this study. In contrast to this work, the features in our experiments are at
a lower level - we aim to utilize features that are independent of a particular
learning environment and also do not require a specific amount of interaction
with the environment first before the learning style can be predicted.
Sanders and Bergasa-Suso [24] also developed a Web-based learning system
that monitors user activity to infer the learning styles. Features include the
amount of data copied and dragged, the length of the page text, the ratio of
text to images, the presence or absence of tables, mouse movements, etc. While
initially their predictions did not perform much better than a naive predictor
that assigns the majority class to all instances [25], after a number of data
post-processing steps, they achieved accuracies well above such a naive predictor
for the active/reflective and the visual/verbal dimension5 .
Finally we note that instead of inferring the learning style from the learner’s
actions within the learning environment, a number of works have also inves-
tigated to infer the learning style from other user characteristics such as the
Big-Five personality model, e.g. [26].
Our work differs from these previous works in two ways. First of all, our
approach is independent of a particular learning environment. We rely on traces
the learner left in the past on the social Web. This has the distinct advantage
that when a learner starts using a novel E-learning system the learning style
can be computed immediately, while in [22–24] a certain amount of interaction
is required on part of the learner before the learning style can be inferred. This
can also mean that by the time the system has identified the learning style of the
learner and is ready to provide material according to the learner’s preferences,
the learner has already turned away to a better fitting learning system. Secondly,
the features we use in our pilot study are very low-level compared to the features
in the previous works; we rely on features that can be extracted from any Twitter
stream and as such, the results we report here will be the lower boundary of what
is possible.

3 Methodology
In line with previous works, in particular [23, 24], we use the following method-
ology and procedure to investigate our hypothesis: between November 2011 and
March 2012, the web link to a new ILS online version was distributed via different
social web channels such as Twitter, Facebook and LinkedIn, as well as large
e-mail lists of different EU projects and universities (e.g. the University of Graz
and Graz University of Technology). In a late stage of this process (end of
February), people who had tweeted at least once that they were a certain type
of learner, e.g. "I am an active learner", were directly contacted via Twitter and
asked to participate in the survey. Each participant was requested to read the
introduction and fill in some personal information such as gender, age, level of
education and the degree to which they were familiar with the term learning
style. In addition, they were asked to provide their Twitter username and to fill
in the ILS items. The instructions included information about the purpose of the
study and stated that the data would be treated anonymously and that each
participant had the chance to win one of three 20 Amazon.com vouchers. Filling
in all required data took about 15 minutes.
We then evaluate these questionnaires; the resulting learning styles of the
users are our ground truth, which we try to predict in the next stage. We crawl
the tweets of the respective Twitter accounts and derive features from them. Then,
5 Please note that the results of different papers are not directly comparable due
to differences in the precision formula employed and the number of classes present
for each dimension: [23] include a NEUTRAL class for each dimension which is
absent in [25] and [24].

we employ a machine learning algorithm to classify each user into the different
dimensions based on these features.
Next, we first introduce the learning style model we selected in more detail and
then we outline how we derived the features and the machine learning approach.

3.1 The Felder-Silverman Learning Style Model


One of the most popular learning style models is the Felder-Silverman Learning
Style Model (FSLSM) [10, 11] which describes the most prominent learning style
differences between engineering students on four dimensions:

– Sensing/intuitive: Sensing learners are characterized by preferring to learn
facts and concentrate on details. They also tend to stick to concrete learning
materials, as well as known learning approaches. They like to solve problems
by concrete thinking and by applying routine procedures. Intuitive learn-
ers on the other hand prefer to learn abstract concepts and theories. Their
strengths lie in discovering the underlying meanings and relationships. They
are also more creative and innovative compared to sensing learners.
– Visual/verbal: This dimension distinguishes learners preferences in mem-
orizing learning material. The visual learner prefers the learning material
to be presented as a visual representation, e.g. pictures, diagrams or flow
charts. In contrast, verbal learners prefer written and spoken explanations.
– Active/Reflective: This dimension covers the way of information process-
ing. Active learners prefer the ‘learning by doing’ way. They enjoy learning
in groups and are more open to discuss ideas and learning material. On the
contrary, reflective learners favour to think about ideas rather than work
practically. They also prefer to learn alone.
– Sequential/Global: On this dimension learners are described according to
their way of understanding. Sequential learners learn in small steps and have
a linear learning process, focusing on detailed information. Global learners,
however, follow a holistic thinking process where learning happens in large
leaps. At first, it seems that they learn material almost randomly without
finding connections and relations between different areas, but in a later stage,
they perceive the whole picture and are able to solve complex problems.

3.2 The Index of Learning Style


The ILS [11] is a self-assessment instrument based on the Felder-Silverman
Learning Style Model [10, 11]. Participants are asked to provide answers to 44
forced-choice questions with two answer options. Each of the four learning style
dimensions is covered by 11 items, with an 'a' or 'b' answer option corresponding
to one of the poles of the continuum of the corresponding learning style
dimension, e.g. active (a) vs. reflective (b). It is suggested to count the frequency
of 'a' responses to get a score between 0 and 11 for each dimension. This method
allows a fine gradation of the continuum, ranging from e.g. 0-1, representing a
strong preference for reflective learning, to 10-11, a strong preference for active
learning. Therefore, a preference for a pole of the given dimension may be mild,
moderate or strong. Reliability as well as validity analyses revealed acceptable
psychometric values: internal consistency reliabilities ranging from 0.55 to 0.77
across the four learning style scales of the ILS were found by [27]. Furthermore,
factor analysis and direct feedback from students on whether the ILS score
represents their learning preferences provided sufficient evidence of construct
validity for the ILS.
For the presented study, a new online version of the ILS was created to
incorporate a new design and instructions, and to add text fields and check-boxes
for required information, such as the Twitter username and some demographic data. We
distributed the call for participation on various channels, including university
mailing lists and Twitter. In total, 136 people responded and filled in the ques-
tionnaire. In a post-processing step we removed subjects: (i) whose Twitter ac-
count is protected6 , (ii) whose Twitter account listed less than 20 public tweets,
(iii) who provided an invalid or no Twitter ID, and (iv) who did not complete
the ILS questionnaire. After this data cleaning process, a total of 51 subjects
remained whose learning styles are predicted across all experiments reported in
this paper.

3.3 Twitter-Based Features

We derived a set of 29 features from the Twitter stream of each subject. They are
listed in Table 1 and can be ordered into four broad classes: features derived from
the account information (e.g. number of followers and total number of tweets),
features derived from individual tweets whose scores are aggregated (e.g. the
percentage of tweets with URLs, the percentage of tweets directed at another
user, the average number of nouns or adjectives used by a user), features based
on tweet semantics (e.g. the percentage of tweets containing terms indicating
anger or joy) and features derived from the external pages that were linked to
by the users in their tweets (e.g. the fraction of content words vs. non content
words in those pages).
We relied on a number of existing toolkits and resources to derive those fea-
tures. The tweet processing pipeline is shown in Figure 1. The following steps
are executed:

– A Language Detection library7 is relied upon to determine the language a
tweet is written in.
– If the tweet is not in English, the Bing Translation web service8 is used to
translate the text into English.
– The Stanford Part-of-Speech Tagger9 , a library that tags English text with
the respective parts of speech (noun, adjective, etc), is relied upon to deter-
mine the tweeting style.
6 Tweets of users with a protected user account are not publicly accessible.
7 http://code.google.com/p/language-detection/
8 http://api.microsofttranslator.com
9 http://nlp.stanford.edu/software/tagger.shtml

Table 1. Overview of the 29 features used as input for the classifiers

Twitter-account based:  #tweets, #favourites, #listings, #friends, #followers,
                        #friends/#followers
Tweet style & behavior: %tweets with URLs, #languages used, %directed tweets,
                        %retweets, %tweets with hashtags, average (av.) and
                        standard deviation (std.) of #terms per tweet, av. and
                        std. of #tagged terms per tweet, av. #nouns per tweet,
                        av. #proper nouns per tweet, av. #adjectives per tweet
Tweet semantics:        av. #anger terms, av. #surprise terms, av. #joy terms,
                        av. #disgust terms, av. #fear terms, av. #sadness terms,
                        %emotional tweets
External URLs:          av. #images in external URLs,
                        av. #content words/#non-content words in external URLs

– Boilerpipe10 is a library that parses web pages that the subjects referred
to in their tweets. The output of running Boilerpipe distinguishes between
content parts of a web page and non-content parts (copyright notices, menus,
etc.). We rely on it to determine the actual amount of text (versus
images) on a web page.
– Finally, we determine the sentiment of the user by relying on WordNet Af-
fect [28]: it is a set of affective English terms that indicate a particular
emotion; there are 127 anger terms (e.g. mad, irritated ), 19 disgust terms
(e.g. detestably), 82 fear terms (e.g. dread, fright), 227 joy terms (e.g. tri-
umphantly, appreciated), 123 sadness terms (e.g. oppression, remorseful) and
28 surprise terms (e.g. fantastic, amazed). Each tweet is matched against
this dictionary and the number of emotional tweets for each dimension is
recorded.
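The sentiment step, for instance, reduces to a dictionary lookup per tweet. A minimal sketch with a tiny illustrative lexicon standing in for the WordNet-Affect term lists:

```python
# Tiny illustrative stand-in for the WordNet-Affect term lists
EMOTION_TERMS = {
    "anger": {"mad", "irritated"},
    "joy": {"triumphantly", "appreciated"},
    "surprise": {"fantastic", "amazed"},
}

def emotion_counts(tweets):
    """For each emotion, count the tweets containing at least one
    term from that emotion's lexicon."""
    counts = {emotion: 0 for emotion in EMOTION_TERMS}
    for tweet in tweets:
        tokens = set(tweet.lower().split())
        for emotion, terms in EMOTION_TERMS.items():
            if tokens & terms:
                counts[emotion] += 1
    return counts

tweets = ["So mad right now", "What a fantastic lecture", "amazed and appreciated"]
print(emotion_counts(tweets))  # → {'anger': 1, 'joy': 1, 'surprise': 2}
```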

3.4 Classification Approaches

Since our goal is an initial study on the feasibility of determining one’s learning
style from a number of tweets, we use two common machine learning approaches:
Naive Bayes and AdaBoost11 . Due to the small number of users, we rely on k-1
cross-validation for training and testing. Furthermore, as the two classes in each
dimension are not distributed equally, we set up a cost-sensitive evaluation where
an error for the less likely class per dimension was punished with a factor of 5 (the
error is punished with a score of 1 for the majority class). The results are reported
in terms of the classification precision, recall, F1 and Cohen’s Kappa [29] (κ).
10 http://code.google.com/p/boilerpipe/
11 We use the Weka Toolkit for our experiments.
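The cost-sensitive setup can be illustrated with a small standalone sketch. The paper uses Weka's cost-sensitive evaluation; the function below only shows the asymmetric error costs, and the class labels are invented examples.

```python
# Illustrative sketch of the cost-sensitive scoring described above. The paper
# uses Weka's cost-sensitive evaluation; this standalone function only shows
# the asymmetric error costs, and the class labels are invented examples.
def cost_sensitive_error(y_true, y_pred, minority_label, minority_cost=5.0):
    """Total error, where a mistake on the minority class costs 5, else 1."""
    total = 0.0
    for truth, pred in zip(y_true, y_pred):
        if truth != pred:
            total += minority_cost if truth == minority_label else 1.0
    return total

# One missed 'verbal' (minority) subject outweighs several 'visual' misses.
err = cost_sensitive_error(
    ["visual", "verbal", "visual", "visual"],
    ["visual", "visual", "verbal", "verbal"],
    minority_label="verbal",
)  # 5.0 + 1.0 + 1.0 = 7.0
```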
148 C. Hauff et al.

Fig. 1. Tweet processing pipeline

We focus on the last evaluation measure in particular, as it measures the
inter-annotator agreement while taking into account the element of chance
agreement. Here, the ground truth and the predicted learning style act as the two
annotators of the data. A κ ≈ 0 indicates that the annotators agree as often as
they would by chance, a value below zero indicates an agreement that is lower than
chance, and values above 0 indicate different levels of agreement that are better
than random agreement. A κ ∈ (0, 0.2] indicates a slight agreement, while
κ ∈ (0.2, 0.4] indicates a fair agreement, and so on. In general, the larger the
value of κ, the larger the agreement; when κ = 1 the agreement is perfect.
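As a minimal sketch, κ for one dimension can be computed directly from the ground-truth poles and the predicted poles; the label sequences below are illustrative.

```python
# Minimal Cohen's kappa for the two-"annotator" setup described above:
# ground-truth pole vs. predicted pole. The label sequences are illustrative.
from collections import Counter

def cohens_kappa(annotator_a, annotator_b):
    n = len(annotator_a)
    observed = sum(x == y for x, y in zip(annotator_a, annotator_b)) / n
    ca, cb = Counter(annotator_a), Counter(annotator_b)
    # Chance agreement: both annotators pick the same label independently.
    expected = sum(ca[label] * cb[label] for label in ca) / (n * n)
    return (observed - expected) / (1 - expected)

truth = ["active"] * 6 + ["reflective"] * 4
pred = ["active"] * 5 + ["reflective", "active",
                         "reflective", "reflective", "reflective"]
kappa = cohens_kappa(truth, pred)  # observed 0.8, expected 0.52 -> kappa = 7/12
```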

4 Results
4.1 Generating the Ground Truth
Due to the odd number of questions in the ILS questionnaire for each dimension,
a subject can always be assigned to one of the two opposite ends of the spectrum.
In this pilot study, we ignore the strength of the association and we simply assign
each subject to the pole with the greater score. The distribution of the subjects
across the four dimensions proposed in the ILS approach is presented in Table 2.
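A sketch of this assignment rule, assuming the 11 forced-choice items per dimension are coded as 'a'/'b' answers; the pole names stand for one example dimension.

```python
# Sketch of the ground-truth assignment described above: with 11 forced-choice
# items per dimension, ties are impossible, so each subject is assigned to the
# pole that received more answers. The pole names are one example dimension.
def assign_pole(answers, pole_a="visual", pole_b="verbal"):
    """answers: 'a'/'b' choices for one dimension's 11 ILS questions."""
    a_count = answers.count("a")
    b_count = len(answers) - a_count
    return (pole_a, a_count) if a_count > b_count else (pole_b, b_count)

pole, score = assign_pole(["a"] * 7 + ["b"] * 4)  # -> ("visual", 7)
```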

Table 2. Distribution of our 51 subjects across the four dimensions of the ILS ques-
tionnaire. We report the number of subjects that fall into each category, as well as the
mean (μ) and standard deviation (σ) with respect to the score. For comparison, we
also report the distributions that were reported in other user studies.

                             ILS-Twitter study          [25]    [30]
                          #subjects   %     μ     σ      %       μ
Input          visual        42      82%  7.31  2.44    76%    8.14
               verbal         9      18%  3.67  2.45    24%    2.86
Processing     active        31      61%  6.07  2.35    57%    5.99
               reflective    20      39%  4.91  2.34    43%    5.01
Understanding  global        36      71%  6.64  2.41    66%    5.00
               sequential    15      29%  4.34  2.40    34%    6.00
Perception     intuitive     35      69%  6.69  2.67    48%    4.32
               sensing       16      31%  4.29  2.68    52%    6.68
Tweets Reveal More Than You Know 149

It is evident that the split between subjects in the two opposite poles of each
dimension is not uniform. To place this distribution in context, we also report
the distributions that were found in [25] and [30]. While the visual/verbal and
active/reflective dimensions are robust to the subject population, we observe
considerable differences among the three studies in the global/sequential and
intuitive/sensing dimensions.
Based on the absolute scores, which show the clearest distinction in the
visual/verbal dimension as well as the intuitive/sensing dimension, we
hypothesize that the classifier will perform better on those dimensions than on
the others.

4.2 Results on the Classification Process


In Table 3 we now report the performance our classifiers achieved when clas-
sifying the subjects according to the four ILS dimensions. We note, not sur-
prisingly, that classification into the majority class results in high precision and
recall values, though if we consider κ we also note that only for a single di-
mension, namely active/reflective, can we say with relative certainty that the
classification approaches perform better than agreement by chance. This holds
for both classifiers. The other dimensions show marginally significant results
for at most one of the two classifiers, never both. Thus, we have to conclude that
the simple features we introduced are sufficient for the active/reflective dimen-
sion, though they are not indicative for any of the other dimensions in the ILS
framework.

Table 3. Results of predicting the different learning style dimensions for our data set

              active reflective   visual verbal   global sequential   intuitive sensing
Naive  Prec.  0.644  0.667        0.833  0.333    0.668  0.000        0.688     0.333
Bayes  Recall 0.935  0.200        0.952  0.111    0.917  0.000        0.943     0.063
       F1     0.763  0.308        0.889  0.167    0.786  0.000        0.795     0.105
       κ          0.1547              0.086          -0.109               0.007
Ada-   Prec.  0.697  0.556        0.814  0.125    0.733  0.364        0.649     0.214
Boost  Recall 0.742  0.500        0.833  0.111    0.725  0.267        0.686     0.188
       F1     0.719  0.526        0.842  0.118    0.806  0.308        0.667     0.200
       κ          0.2463             -0.058          0.0783              -0.131

5 Conclusions
Twitter-based learning style analysis could be used to complete user profiles with
respect to learning preferences and, as a result, could enable more efficient
adaptation and personalization of simulators, e-learning systems and other
technology-enhanced learning software. Providing feedback to learners about their
learning preferences could be helpful, but it should be relied upon with caution.
It has to be made explicit that a learning style is a tendency toward certain
preferences and that the assessment does not overrule one's own judgment [11];
rather, it can be seen as advice or a suggestion. Bearing this in mind, the Twitter
analysis of learning styles could lead to a smoother, non-invasive assessment of
personal learning preferences.
In this paper, we have performed a first study with the goal of inferring a
learner's learning style from her Twitter stream. We selected the ILS model due to its
validity and widespread use and collected ground truth data from 51 study
participants. We extracted 29 features from each subject’s Twitter stream and
used them to classify each subject as belonging to one of the two poles for each
of the four dimensions of the ILS model.
We found above-chance agreement only for a single dimension: active/reflective.
Here, the agreement was slight to fair, while for the other three dimensions no
agreement between the prediction and the ground truth beyond chance was found.
Moreover, there are some limitations inherent in ILS which need to be taken
into account. Felder and Spurlin [11] point out the limitations of learning style
assessment and the purposes for which it should be used.
We conclude that, while there is some evidence that a Twitter signal contains
useful information (as evident in the classification results of the active/reflective
dimension), such a classification in general is hard and more complex features
need to be derived. Thus, future work will focus on deriving more complex fea-
tures that are more in agreement with the different learning dimensions, instead
of relying on low-level features that can only be somewhat indicative when viewed
in isolation.

References

1. Burleson, W., Picard, R.: Gender-specific approaches to developing emotionally


intelligent learning companions. IEEE Intelligent Systems 22(4), 62–69 (2007)
2. Kalyuga, S., Sweller, J.: Rapid dynamic assessment of expertise to improve the
efficiency of adaptive e-learning. Educational Technology Research and Develop-
ment 53, 83–93 (2005), doi:10.1007/BF02504800
3. Chen, C.M., Lee, H.M., Chen, Y.H.: Personalized e-learning system using item
response theory. Computers & Education 44(3), 237–255 (2005)
4. Blanchard, E., Razaki, R., Frasson, C.: Cross-cultural adaptation of elearning con-
tents: a methodology. In: International Conference on E-Learning (2005)
5. Stash, N., Cristea, A., Bra, P.D.: Adaptation to learning styles in e-learning: Ap-
proach evaluation. In: Proceedings of World Conference on E-Learning in Corpo-
rate, Government, Healthcare, and Higher Education 2006, pp. 284–291 (2006)
6. Pashler, H., McDaniel, M., Rohrer, D., Bjork, R.: Learning styles. Psychological
Science in the Public Interest 9(3), 105–119 (2008)

7. Liu, I., Cheung, C., Lee, M.: Understanding Twitter Usage: What Drive People
Continue to Tweet? In: Proceedings of Pacific-Asia Conference on Information
Systems, Taipei, Taiwan (2010)
8. Kolb, D.A.: Experiential learning: Experience as the source of learning and devel-
opment. Prentice Hall, Englewood Cliffs (1984)
9. Leite, W.L., Svinicki, M., Shi, Y.: Attempted validation of the scores of the vark:
Learning styles inventory with multitrait-multimethod confirmatory factor analysis
models. Educational and Psychological Measurement 70(2), 323–339 (2009)
10. Felder, R.M., Silverman, L.K.: Learning and teaching styles in engineering educa-
tion. Journal of Engineering Education 78(7), 674–681 (1988)
11. Felder, R.M., Spurlin, J.: Applications, reliability and validity of the index of learn-
ing styles. International Journal of Engineering Education 21(1), 103–112 (2005)
12. Geiger, M.A., Pinto, J.K.: Changes in learning style preference during a three-year
longitudinal study. Psychological Reports 69(3), 755–762 (1991)
13. Hauff, C., Houben, G.J.: Deriving Knowledge Profiles from Twitter. In: Kloos,
C.D., Gillet, D., Crespo Garcı́a, R.M., Wild, F., Wolpers, M. (eds.) EC-TEL 2011.
LNCS, vol. 6964, pp. 139–152. Springer, Heidelberg (2011)
14. Java, A., Song, X., Finin, T., Tseng, B.: Why we twitter: understanding microblog-
ging usage and communities. In: Proceedings of the 9th WebKDD and 1st SNA-
KDD 2007 Workshop on Web Mining and Social Network Analysis, pp. 56–65.
ACM (2007)
15. Naaman, M., Boase, J., Lai, C.H.: Is it really about me?: message content in social
awareness streams. In: CSCW 2010, pp. 189–192 (2010)
16. Westman, S., Freund, L.: Information interaction in 140 characters or less: genres
on twitter. In: IIiX 2010, pp. 323–328 (2010)
17. Zhao, D., Rosson, M.B.: How and why people twitter: the role that micro-blogging
plays in informal communication at work. In: GROUP 2009, pp. 243–252 (2009)
18. Michelson, M., Macskassy, S.A.: Discovering users’ topics of interest on twitter: a
first look. In: AND 2010, pp. 73–80 (2010)
19. Hecht, B., Hong, L., Suh, B., Chi, E.H.: Tweets from justin bieber’s heart: the
dynamics of the location field in user profiles. In: CHI 2011, pp. 237–246 (2011)
20. Mislove, A., Lehmann, S., Ahn, Y.Y., Onnela, J.P., Rosenquist, J.N.: Understand-
ing the Demographics of Twitter Users. In: ICWSM 2011 (2011)
21. Rao, D., Yarowsky, D., Shreevats, A., Gupta, M.: Classifying latent user attributes
in twitter. In: SMUC 2010, pp. 37–44 (2010)
22. Graf, S., Kinshuk: An approach for detecting learning styles in learning manage-
ment systems. In: Sixth International Conference on Advanced Learning Technolo-
gies, pp. 161–163 (July 2006)
23. Garcia, P., Amandi, A., Schiaffino, S., Campo, M.: Evaluating bayesian networks
precision for detecting students learning styles. Computers & Education 49(3),
794–808 (2007)
24. Sanders, D., Bergasa-Suso, J.: Inferring learning style from the way students in-
teract with a computer user interface and the www. IEEE Transactions on Edu-
cation 53(4), 613–620 (2010)
25. Bergasa-Suso, J., Sanders, D., Tewkesbury, G.: Intelligent browser-based systems
to assist internet users. IEEE Transactions on Education 48(4), 580–585 (2005)
26. Zhang, L.F.: Does the big five predict learning approaches? Personality and
Individual Differences 34(8), 1431–1446 (2003)

27. Litzinger, T., Lee, S., Wise, J., Felder, R.: A psychometric study of the index
of learning styles. Journal of Engineering Education 96(4), 309–319 (2007)
28. Strapparava, C., Valitutti, A.: WordNet-Affect: an affective extension of WordNet.
In: Proceedings of the 4th International Conference on Language Resources and
Evaluation, pp. 1083–1086 (2004)
29. Landis, J.R., Koch, G.G.: The measurement of observer agreement for categorical
data. Biometrics 33(1), 159–174 (1977)
30. Zywno, M.: A contribution to validation of score meaning for Felder-Soloman's
index of learning styles. In: Proceedings of the 2003 American Society for
Engineering Education Annual Conference and Exposition (2003)
Motivational Social Visualizations
for Personalized E-Learning

I.-Han Hsiao and Peter Brusilovsky

School of Information Sciences, University of Pittsburgh, USA


{ihh4,peterb}@pitt.edu

Abstract. A large number of educational resources are now available on the Web


to support both regular classroom learning and online learning. However, the
abundance of available content produces at least two problems: how to help
students find the most appropriate resources, and how to engage them into using
these resources and benefiting from them. Personalized and social learning have
been suggested as potential methods for addressing these problems. Our work
presented in this paper attempts to combine the ideas of personalized and social
learning. We introduce Progressor+, an innovative Web-based interface that
helps students find the most relevant resources in a large collection of self-
assessment questions and programming examples. We also present the results
of a classroom study of Progressor+ in an undergraduate class. The data
revealed the motivational impact of the personalized social guidance provided
by the system in the target context. The interface encouraged students to
explore more educational resources and motivated them to do some work ahead
of the course schedule. The increase in diversity of explored content resulted in
improving students’ problem solving success. A deeper analysis of the social
guidance mechanism revealed that it is based on the leading behavior of the
strong students, who discovered the most relevant resources and created trails
for weaker students to follow. The study results also demonstrate that students
were more engaged with the system: they spent more time working with
self-assessment questions and annotated examples, attempted more questions, and
achieved higher success rates in answering them.

Keywords: social visualization, open student modeling, visualization,


personalized e-learning.

1 Introduction
A large number of educational resources are now available on the Web to support both
regular classroom learning and online learning. However, the abundance of available
content produces at least two problems: how to help students find the most
appropriate resources, and how to engage them in using these resources and
benefiting from them. To address these problems a number of projects have explored
personalized and social technologies. Personalized learning has been suggested as an
approach to help every learner find the most relevant and useful content given the
learner’s current state of knowledge and interests [1]. Social learning was explored as

A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 153–165, 2012.
© Springer-Verlag Berlin Heidelberg 2012
a potential solution to a range of problems, including student motivation to learn


[2-5]. In our group’s earlier work, these approaches were explored in two systems,
QuizGuide [6] and Knowledge Sea II [7]. QuizGuide provides topic-based adaptive
navigation support for personalized guidance for programming problems. Knowledge
Sea II uses social navigation support to help students navigate weekly reading
assignments. These and similar systems demonstrated the value and effectiveness of
personalized learning and social learning in E-Learning. However, the combination of
these powerful approaches has not been seriously investigated. The work presented in
this paper attempts to explore the value of a specific combination of personalized
learning and social learning to guide students to the most relevant resources in a
course-sized volume of educational content.

2 Related Work

2.1 Open Student Modeling


The research on open student modeling explores the value of making students' models
visible to, and even editable by, the students themselves. There are two main streams
of work on open student modeling. One stream focuses on visualizing the models
supporting students’ self-reflection and planning; the other one encourages students to
participate in the modeling process, such as engaging students through the negotiation
or collaboration on construction of the model [8]. Representations of the student
models vary from displaying high-level summaries (such as skill meters) to complex
concept maps or Bayesian networks. A range of benefits have been reported on
opening the student models to the learners, such as increasing the learner’s awareness
of knowledge development, difficulties and the learning process, and students’
engagement, motivation, and knowledge reflection [8-10]. Dimitrova et al. [11]
explored interactive open learner modeling by engaging learners to negotiate with the
system during the modeling process. Chen et al. [12] investigated active open learner
models in order to motivate learners to improve their academic performance. Both
individual and group open learner models were studied and demonstrated increased
reflection and helpful interactions among teammates. Bull & Kay [13] developed a
framework to apply open user models in adaptive learning environments and provided
many in-depth examples. Studies also show that students have a range of preferences
for how open student modeling systems should present their own knowledge.
Students highly value having multiple viewing options and being able to select the
one with which they are most comfortable. Such results are promising for potentially
increasing the quality of reflection on their own knowledge [14]. In our own work on
the QuizGuide system [6] we combined open learning models with adaptive link
annotation and demonstrated that this arrangement can remarkably increase student
motivation to work with non-mandatory educational content.

2.2 Social Navigation and Visualization for E-Learning


According to Vygotsky’s Social Development Theory [15], social interactions affect
the process of cognitive development. The Zone of Proximal Development, where
learning occurs, is the distance between a student’s ability to perform a task under
adult guidance and/or with peer collaboration and the student’s ability to solve the
problem independently. Research on social learning has confirmed that it enhances
the learning outcomes across a wide spectrum, including: better performance, better
motivation, higher test scores and level of achievement, development of high level
thinking skills, higher student satisfaction, self-esteem, attitude and retention in
academic programs [16-18].
To support social learning, a visual approach is a common technique used to
represent or organize multiple students' data in an informative way. One instance is
social navigation, a set of methods for organizing users' explicit and implicit
feedback to support information navigation [19]. Such a technique attempts to
support a known social phenomenon where people tend to follow the “footprints” of
other people [7, 20, 21]. The educational value has been confirmed in several studies
[22-24]. Group performance visualization has been used to support collaboration
between learners in the same group, and to foster competition in
groups of learners [25]. Vassileva and Sun [25] investigated the community
visualization in online communities. They found that social visualization allows peer-
recognition and provides students the opportunity to build trust in others and in the
group. CourseVis [26] pioneered extensive graphical performance visualization for
teachers and learners. This helps instructors to identify problems early on, and to
prevent some of the common problems in distance learning. A promising, but rarely
explored approach is social visualization of open student and group models. Bull and
Britland [27] used OLMlets to research the problem of facilitating group collaboration
and competition. The results demonstrated that selectively showing the models to
their peers increases the discussion among students and encourages them to start
working sooner. Our work presented below attempts to further advance this approach.

2.3 Social Comparison


According to social comparison theory [28], people tend to compare their
achievements and performance with people who they think are similar to them in
some way. There are three motives that drive one to compare him/herself to others,
namely, self-evaluation, self-enhancement, and self-improvement. The occurrence of
these three motives depends on the comparison targets, they are respectively lateral
comparison, downward comparison and upward comparison. Earlier social
comparison studies [29] demonstrated that students were inclined to select
challenging tasks among easy, challenging and hard tasks by being exposed to the
proper social comparison conditions. Feldman and Ruble (1977) [30] argued that age
differences resulted in different competence and skills in terms of social comparison.
As young children grow older, they become more assured of the general competence
of their social comparing skills [30]. Later studies showed that social comparison,
prompted by the graphical feedback tool, decreases social loafing and increases
productivity [31]. A synthesis review of years of social comparison studies concluded
that upward comparisons in the classroom often lead to better performances [32].
Among fifty years of social comparison theory literature, most of the work has been
done with qualitative studies by interviews, questionnaires and observation. In this
research, we develop a set of quantitative measures for investigating social
comparison theory in our target context.

Fig. 1. Progressor+: the tabular open social student modeling visualization interfaces. The open
social student model visualization allows collapsing the visualization parts that are out of focus
(bottom left) and also provides direct content access (bottom right).

3 Progressor+ - An Open Social Student Modeling Interface


In past studies, we explored two open social student modeling interfaces, QuizMap
[33] and Progressor [34], to examine the feasibility and the impact of a combined
social visualization and open student modeling approach. Both systems use open
social student modeling to provide personalized access to one specific kind of
learning content – parameterized programming questions for Java. The use of a single
kind of context allowed us to ignore the potential complexity of diverse learning
content and focus on exploring critical aspects of open social student modeling. At the
same time, this meant were unable to explore the scalability of the approach, i.e., its
ability to work in a more typical e-learning context where many kinds of learning
content may be used in parallel. The goal of Progressor+ was to bring our earlier
findings up to scale and explore the feasibility of open social student modeling in the
context of more diverse learning content. To achieve this goal, we piloted a new
scalable tabular interface to accommodate diverse content. The Progressor+ system
interface is presented in Fig. 1. Each student’s model is represented as several rows of
a large table with each row corresponding to one kind of learning content and each
column corresponding to a course topic. The study presented in this paper has been
performed with two kinds of learning content – Java programming questions and Java
code examples (thus Figure 1 shows two rows for each student - quiz progress row
and example progress row), however, the tabular nature of the proposed interface
allows adding more kinds of content when necessary. Each cell is color-coded to
show the student's progress on the topic. We used a ten-color scheme to represent
the percentile of progress. The use of color-coding allows collapsing table rows that
are out of focus thus making it possible to present a progress picture of a large class in
a relatively small space. This feature was inspired by the TableLens visualization,
which is known as highly expressive and scalable [35]. While the interface of
Progressor+ was fully redesigned, it implemented most critical successful features
discovered in our past studies that we review below.
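One plausible reading of the ten-color percentile scheme is a simple bucketing of a topic-progress value into ten bands; the palette names below are invented for illustration and are not the system's actual colors.

```python
# One plausible reading of the ten-color scheme: a topic-progress value in
# [0, 1] is bucketed into ten percentile bands, each mapped to one color.
# The palette names are invented for illustration.
PALETTE = [f"shade_{i}" for i in range(10)]  # lightest (0) to darkest (9)

def cell_color(progress):
    """Map a progress value in [0, 1] to one of ten color buckets."""
    bucket = min(int(progress * 10), 9)  # progress == 1.0 stays in top bucket
    return PALETTE[bucket]
```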

Sequence: The sequence of the topics provides direction for the students to progress
through the course. It also provides flexibility to explore further topics or redo already
covered topics. In the QuizMap study [33], the topic arrangement in the treemap
visualization was non-sequential. A key issue that emerged was that students had
difficulty connecting the course structure and the treemap layout. We improved the
design by providing a clear sequence in progressing through the topics in Parallel
IntrospectiveViews [36] and Progressor [34] studies. We discovered that students
benefited from the guidance offered by the course structure and explored more
diverse topics that were appropriate for them at the moment. From these studies we
also learned that topic-based personalization in open social student modeling worked
more effectively when a sequence feature was implemented. In addition, we have also
found that strong students tended to explore ahead of the class and weak students
tended to follow them, even for the topics that were beyond the current scope.
Therefore, we decided to maintain the “sequence” as one of the important features in
Progressor+.

Identity: Identity captures all the information belonging to the student. It is the
representation of the student’s unique model as well as one of the main entrances to
interaction with the domain content. From the QuizMap study [33], we learned that
distinguishing aspects of student’s own model from the rest of the student models is
not enough. This addressed the differences between the student herself and the rest of
the class, but it did not carve out a clear model unit that belonged to the student. As
we discovered, it is also important to offer a holistic view of individual student
progress. In the Parallel IntrospectiveViews [36] study, we utilized the concept of
unity, which proposed that perception of identity is higher if the model represents
unity. This concept makes the students identify themselves with the model and allows
them to easily compare themselves with each other [12, 13]. In Progressor+, we believe that
the simple rows & columns table representation is cohesive and can be easily shown
in fragments and recognized as units. Such characteristics could promote the notion of
students’ identity when interacting with the system.

Interactivity: Interactivity in the visualization of the user model can be implemented


in several forms. Based on past studies, we knew that students benefited a lot from
accessing content by directly clicking on the student’s own model. The idea is simple
but effective; the visualization of the user model is not a secondary widget but the
main entrance allowing the students to access content directly. Moreover, students are
also enabled to interact with content through their peers’ models, or interact with their
peers by comparing and sorting their performances. In Progressor+, the core
interactivity is to allow the students to access the content resources directly by
clicking on the students’ models - the table cells. Meanwhile, other interactivity
features are, for example, a collapse-and-expand function allowing the user model
visualization to deal with the complexity and the large topic domains [37], or a
manipulation function allowing the user to feel in control over his/her model [38].

Comparison: Letting students compare themselves with each other is the key for
encouraging more work and better performance [32]. In [33, 34, 39], we found
evidence that students interacted through their peers’ models. Moreover, the same
principle stems from the underlying supporting theory of Social Comparison. We
believe that socially exposing models implicitly forces the students to perform
comparison cognitively. We also learned that lowering the cognitive loads for
comparisons could encourage more interactions. Thus, we capitalize on our past
successful experiences and implement different levels of comparisons: macro- and
micro-comparisons. Macro-level comparison allows students to view their own
models while at the same time seeing thumbnails of their peers’ models. It provides a
high level of comparisons, allowing fast mental overlapping of the colored areas
between models. Micro-level comparisons occur at the moment a student clicks on
any peer model. Progressor+ enters the comparison mode by collapsing the rest of
the table rows and displaying the selected peer model with all its details. Both levels
of comparison allow students to perform social comparisons at their own free will.

4 Evaluation and Results

To assess the impact of our technology, we have conducted the evaluation in a


semester-long classroom study. The study was performed in an undergraduate Object-
Oriented Programming course offered by the School of Information Sciences,
University of Pittsburgh in the Spring semester of 2012. The system was introduced
to the class at the third week of the course and served as a non-mandatory course tool
over the entire semester period. Out of 56 students enrolled in the course, 3 withdrew
early and 38 out of the remaining 53 were actively using the system. All student
activity with the system was recorded. For every student attempt to answer a question
or explore an example, the system stored a timestamp, the user’s name, the session
ids, and content reference (question id and result for questions, example id and
explored line number for examples). We also recorded the frequency and the timing
of student model access and the peer comparisons. Pre-test and post-test were
administered at the beginning and the end of the semester to measure students’ initial
knowledge and knowledge gain.
Following our prior experience with open student modeling in JavaGuide [40] and
Progressor [34], we hypothesized that the ability to view students’ models would
motivate the students to have more interactions with the system. In particular, we
expected that the motivation to work with learning content would extend to both
kinds of educational content, as with the earlier observed increase in the context
of a single-kind content collection. To evaluate these hypotheses, we compared the
student content usage in three semester-long classes that used three kinds of
interfaces to access the
same collection of annotated examples and self-assessment questions: (1) a
combination of a traditional course portal for example access with an adaptive
hypermedia system JavaGuide for question access (Column 1 in Table 1); (2) a
combination of a traditional course portal for example access and social visualization
(Progressor) for question access (Column 2 in Table 1); and (3) an open social student
modeling visualization to access both examples and questions through Progressor+
(Column 3 in Table 1). To discuss the impact on students’ motivation and problem
solving success, we measure the quantity of work (the amount of examples, lines and
questions), Course Coverage (the distinct numbers of topics, examples, lines and
questions) and Success Rate (the percentage of correctly answered questions). Table 1
summarizes the system usage for the same set of examples and quizzes in three
different conditions.

Table 1. Summary of system usage for three different technologies

                               JavaGuide   Progressor   Progressor+
Example   N                        20           7            35
Quantity  Example                19.75       28.71         27.37
          Line                   116.6      219.71        184.18
          Session                 5.35        5.50          4.94
Coverage  Distinct Topics         9.15       12.28         12.20
          Distinct Examples       17.3       25.13         27.37
          Distinct Lines          67.1      115.22         141.5
Quiz      N                        22          30            38
Quantity  Attempt               125.50      205.73        190.42
          Success               58.31%      68.39%        71.20%
          Session                 4.14         8.4          5.18
Coverage  Distinct Topics        11.77       11.47         12.92
          Distinct Questions     46.18        52.7         61.84

4.1 Effects on System Usage


Among 53 registered students, 35 students explored the annotated examples and 38
students worked with self-assessment questions through Progressor+. On average,
students explored 27.37 examples; accessed 184.18 annotated lines and answered
190.42 questions. Compared to JavaGuide, students in Progressor+ explored 38.58%
more examples and 57.95% more lines, and answered 51.73% more questions. Although
we did not register a significant increase in usage in Progressor+, this still
shows that access through the open social student modeling visualization is at
least as good as knowledge-based adaptive navigation support, which is considered
a gold standard of personalized information access. As we anticipated, we did not
find significant differences in the amount of work done between Progressor and
Progressor+. This demonstrates that Progressor+ was as engaging as Progressor,
i.e., the registered increase in the usage of annotated examples did not cause a
decrease in self-assessment quiz usage. Instead, the
overall volume of work increased. The quantity results show that open social student
modeling that integrates several kinds of content is a valid approach to providing
navigational support for multiple kinds of educational content.
In order to demonstrate that our approach is not only valid but also capable of
delivering added value, we used other parameters to measure students’ learning
quality. First, we calculated the number of distinct topics, examples, lines and
questions attempted by the student to measure the Course Coverage. We found that
students were able to explore more topics, examples, lines and questions by using
Progressor+ than with the other two systems. In fact, students explored significantly more
distinct lines in Progressor+ than in the JavaGuide condition, F(1, 53) = 9.72, p < .01. This
suggests that the inclusion of the additional content (examples) into the open social
student modeling visualization generated the expected increase in motivation to work
with examples while maintaining the motivation to work with questions. However,
was it necessary for students to be exposed to more educational content? Was the
new technology able to guide students to the right content at the right time? To
answer these questions, we have to examine the impact of this technology on
students’ learning.

4.2 Impacts on Students’ Learning and Problem Solving Success


To evaluate students’ learning activities, we measured students’ pre- and post-test
scores for knowledge gain and used the Success Rate to gauge students’ problem
solving success. Progressor+ was provided as a non-mandatory tool for the course, and
students could also learn from other activities, such as assignments, lab exercises, etc.
Thus, for our target content, it is important to use another parameter to infer students’
learning. We chose to measure students’ problem solving success. Note that problem
solving is an important skill acquired through learning. It has been demonstrated that it
can enhance the transfer of concepts to new problems, yield better learning results, and
make acquired knowledge more readily available and applicable (especially in new
contexts) [42, 43].
We found that the students who used Progressor+ achieved significantly higher
post-test scores (M=15.0, SD=0.6) than their pre-test scores (M=3.2, SD=0.5), t(37)=
17.276, p<.01. In addition, we also found that the more example lines the students
explored, the higher level of knowledge they gained (r=0.492, p<.01). With open
social student modeling visualization, students also achieved better Success Rate. The
Motivational Social Visualizations for Personalized E-Learning 161

Pearson correlation coefficient indicated that the more diverse questions the students
tried, the higher success rate they obtained (r=0.707, p<.01). Similarly, the more
diverse examples the students explored, the higher success rate they obtained
(r=0.538, p<.01). We also looked at the value of repeated access to questions,
examples and lines. We discovered that the more often the students repeated the same
questions, and the more often they repeated studying the same lines, the higher the
success rate they obtained (r=0.654, p<.01; r=0.528, p<.01).
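The correlations reported above are Pearson product-moment coefficients computed over per-student totals. A self-contained version in pure Python, so as not to assume any particular statistics package (the example numbers are illustrative, not the study data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# e.g. per-student distinct questions tried vs. success rate
r = pearson_r([10, 25, 40, 55], [0.4, 0.55, 0.7, 0.8])
```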

4.3 Evidence of Social Guidance

To obtain a deeper understanding of open social student modeling as a navigation
support mechanism, we plotted all the students’ interactions with Progressor+ (Figure 2).
We categorized the students into two groups based on their pre-test scores (ranging
from a minimum 0 to a maximum 20). Due to the pre-test scores being positively
skewed, we split the two groups by setting the threshold at score 7. Strong students
scored 7 points or higher (7~13) and weak students scored less than 7 (0~6). We
color-coded the activities in two colors, orange and blue. Orange dots represent the
activities generated by strong students and blue dots those of the weak students. The time of
the action is marked on the X-axis and the question complexity on the Y-axis, from
easy to complex. We found 4 interesting zones within this plot. Zone “A” contains the
current activity that students performed along the lecture stream of the course.
Students had been working with the system very consistently throughout the first ten
weeks. Zone “B” represents the region of after the tenth week. Zone “C” contains all
of the attempts to explore earlier content, which the system motivated students to do
to achieve mastery of the subject. Zone “D” contains the attempts which students
performed ahead of the course schedule.

Fig. 2. Time distribution of all example and question attempts performed by the students
through Progressor+. The X axis is time; the Y axis is the complexity of the course. Blue dots
represent strong students’ actions; orange ones are the weaker students’ actions. Zone “A” –
lecture stream; zone “B” – final exam cut (after week 10); zone “C” – work with material from
earlier lectures; zone “D” – navigating ahead.

It is not surprising that a lot of the student
interactions with Progressor+ occurred in Zone A. More interesting are Zones C & D.
A substantial proportion of the interactions occurred in Zone C. This indicates that the
students were self-motivated to go back to achieve better mastery on already
introduced topics. Moreover, based on Zone D in the figure, we found that the strong
students who already achieved mastery on the current topics were able to use the
visual interface to explore topics ahead of the course schedule. In addition, the plot
shows that strong students generally explored the content ahead of the weak ones.
This phenomenon provided evidence that strong students worked on new topics in
Progressor+ first and left implicit traces, which the interface visualized to provide
guidance for the weaker students. It also demonstrated that the system was actually
inviting students to challenge themselves to move a little ahead of the course pace
instead of passively progressing.
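The zone classification used in this analysis can be reproduced mechanically from each action’s week and the week of the topic it targets. A sketch under the parameters stated in the text (the week-10 cut and the pre-test threshold of 7); the function and parameter names are ours:

```python
FINAL_CUT_WEEK = 10   # zone "B": work after the tenth week
STRONG_THRESHOLD = 7  # pre-test score >= 7 -> "strong" student

def student_group(pretest_score):
    """Split students by the pre-test threshold from the study."""
    return "strong" if pretest_score >= STRONG_THRESHOLD else "weak"

def zone(action_week, topic_week):
    """Classify one action relative to the course schedule.

    action_week -- week in which the student performed the action
    topic_week  -- week in which the topic is lectured
    """
    if action_week > FINAL_CUT_WEEK:
        return "B"   # exam-preparation period
    if topic_week < action_week:
        return "C"   # revisiting earlier material
    if topic_week > action_week:
        return "D"   # working ahead of the schedule
    return "A"       # following the lecture stream
```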

5 Summary

This paper described an innovative tabular interface, Progressor+, which was designed
to help students to find the most relevant resources in a large collection of diverse
educational content. The interface provides progress visualization and content access
through the open social student modeling paradigm. Students were able to navigate
through all their peers’ models and to compare them with one another. An
exploratory study was conducted. We found that students used Progressor+ heavily,
despite the non-mandatory nature of the system. We also confirmed the
motivational value of the social guidance provided by Progressor+. The results
showed that the interface encouraged students to explore more topics, examples, lines
and questions and motivated them to do some work ahead of the course schedule. The
increased diversity helped to improve students’ problem solving success. A deeper
analysis of the social guidance mechanism revealed that the strong students
successfully led the way in discovering the most relevant resources, and provided
implicit trails that were harvested by the system and served to provide social guidance
for the rest of the class. The study results also demonstrated that open social
student modeling increased student engagement with the learning content. The
students working with Progressor+ spent more time working with annotated examples
and self-assessment questions, attempted more questions, and achieved a higher
success rate.
While the results in this study were encouraging, we believe that the current
approach has not yet reached its full potential. For example, given that students were
able to discover more topics and questions by following implicit trails from the
stronger students, could we take a proactive role and recommend trails to weak
students instead of letting them follow the trails by themselves? According to our past
work, providing adaptive navigation support significantly increases the quality of
student learning and student motivation to work with non-mandatory learning content.
We plan to have a richer integration of open social student modeling with adaptive
navigation support. Furthermore, we are motivated to investigate more deeply the issues of
data sharing and model comparisons in open social student modeling interfaces.

References
1. Kay, J.: Lifelong Learner Modeling for Lifelong Personalized Pervasive Learning. IEEE
Transactions on Learning Technologies 1(4), 215–228 (2008)
2. Vassileva, J.: Toward Social Learning Environments. IEEE Transactions on Learning
Technologies 1(4), 199–214 (2008)
3. Barolli, L., et al.: A web-based e-learning system for increasing study efficiency by
stimulating learner’s motivation. Information Systems Frontiers 8(4), 297–306 (2006)
4. Méndez, J.A., et al.: A web-based tool for control engineering teaching. Computer
Applications in Engineering Education 14(3), 178–187 (2006)
5. Vassileva, J., Sun, L.: Evolving a Social Visualization Design Aimed at Increasing
Participation in a Class-Based Online Community. International Journal of Cooperative
Information Systems (IJCIS) 17(4), 443–466 (2008)
6. Brusilovsky, P., Sosnovsky, S., Shcherbinina, O.: QuizGuide: Increasing the Educational
Value of Individualized Self-Assessment Quizzes with Adaptive Navigation Support. In:
World Conference on E-Learning, E-Learn 2004. AACE, Washington, DC (2004)
7. Brusilovsky, P., Chavan, G., Farzan, R.: Social Adaptive Navigation Support for Open
Corpus Electronic Textbooks. In: De Bra, P.M.E., Nejdl, W. (eds.) AH 2004. LNCS,
vol. 3137, pp. 24–33. Springer, Heidelberg (2004)
8. Mitrovic, A., Martin, B.: Evaluating the Effect of Open Student Models on Self-
Assessment. International Journal of Artificial Intelligence in Education 17(2), 121–144
(2007)
9. Bull, S.: Supporting learning with open learner models. In: 4th Hellenic Conference on
Information and Communication Technologies in Education, Athens, Greece (2004)
10. Zapata-Rivera, J.-D., Greer, J.E.: Inspecting and Visualizing Distributed Bayesian Student
Models. In: 5th International Conference Intelligent Tutoring Systems (2000)
11. Dimitrova, V., Self, J.A., Brna, P.: Applying Interactive Open Learner Models to Learning
Technical Terminology. In: Bauer, M., Gmytrasiewicz, P.J., Vassileva, J. (eds.) UM 2001.
LNCS (LNAI), vol. 2109, p. 148. Springer, Heidelberg (2001)
12. Chen, Z.-H., et al.: Active Open Learner Models as Animal Companions: Motivating
Children to Learn through Interacting with My-Pet and Our-Pet. International Journal of
Artificial Intelligence in Education 17(2), 145–167 (2007)
13. Bull, S., Kay, J.: Student Models that Invite the Learner: The SMILI Open Learner
Modelling Framework. International Journal of Artificial Intelligence in Education 17(2),
89–120 (2007)
14. Mabbott, A., Bull, S.: Alternative Views on Knowledge: Presentation of Open Learner
Models. In: Lester, J.C., Vicari, R.M., Paraguaçu, F. (eds.) ITS 2004. LNCS, vol. 3220, pp.
689–698. Springer, Heidelberg (2004)
15. Vygotsky, L.S.: Mind and society: The development of higher mental processes. Harvard
University Press, Cambridge (1978)
16. Cecez-Kecmanovic, D., Webb, C.: Towards a communicative model of collaborative Web-
mediated learning. Australian Journal of Educational Technology 16(1), 73–85 (2000)
17. Johnson, D.W., Johnson, R.T., Smith, K.A.: Cooperative Learning Returns to College:
What Evidence is There That it Works? Change: The Magazine of Higher Learning 30(4),
26–35 (1998)
18. Koedinger, K.R., Corbett, A.: Cognitive Tutors: Technology bringing learning science to
the classroom. In: Sawyer, K. (ed.) The Cambridge Handbook of the Learning Sciences.
Cambridge University Press, New York (2006)

19. Dieberger, A., et al.: Social navigation: Techniques for building more usable systems.
Interactions 7(6), 36–45 (2000)
20. Dieberger, A.: Supporting social navigation on the World Wide Web. International Journal
of Human-Computer Interaction 46, 805–825 (1997)
21. Wexelblat, A., Maes, P.: Footprints: history-rich tools for information foraging. In:
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: the
CHI is the Limit, pp. 270–277. ACM, Pittsburgh (1999)
22. Brusilovsky, P., Sosnovsky, S., Yudelson, M.: Addictive links: The motivational value of
adaptive link annotation. New Review of Hypermedia and Multimedia 15(1), 97–118
(2009)
23. Farzan, R., Brusilovsky, P.: AnnotatEd: A social navigation and annotation service for
web-based educational resources. New Review in Hypermedia and Multimedia 14(1),
3–32 (2008)
24. Kurhila, J., Miettinen, M., Nokelainen, P., Tirri, H.: EDUCO - A Collaborative Learning
Environment Based on Social Navigation. In: De Bra, P., Brusilovsky, P., Conejo, R., et al.
(eds.) AH 2002. LNCS, vol. 2347, pp. 242–252. Springer, Heidelberg (2002)
25. Vassileva, J., Sun, L.: Using Community Visualization to Stimulate Participation in Online
Communities. e-Service Journal. Special Issue on Groupware 6(1), 3–40 (2007)
26. Mazza, R., Dimitrova, V.: CourseVis: A graphical student monitoring tool for supporting
instructors in web-based distance courses. International Journal of Human-Computer
Studies 65(2), 125–139 (2007)
27. Bull, S., Britland, M.: Group Interaction Prompted by a Simple Assessed Open Learner
Model that can be Optionally Released to Peers. In: Conati, C., McCoy, K., Paliouras, G.
(eds.) UM 2007. LNCS (LNAI), vol. 4511, Springer, Heidelberg (2007)
28. Festinger, L.: A theory of social comparison processes. Human Relations 7, 117–140
(1954)
29. Veroff, J.: Social comparison and the development of achievement motivation. In: Smith,
C.P. (ed.) Achievement Related Motives in Children. Sage, New York (1969)
30. Feldman, N.S., Ruble, D.N.: Awareness of social comparison interest and motivations: A
developmental study. Journal of Educational Psychology 69(5), 579–585 (1977)
31. Shepherd, M.M., et al.: Invoking social comparison to improve electronic brainstorming:
beyond anonymity. J. Manage. Inf. Syst. 12(3), 155–170 (1995)
32. Dijkstra, P., et al.: Social Comparison in the Classroom: A Review. Review of Educational
Research 78(4) (2008)
33. Brusilovsky, P., Hsiao, I.H., Folajimi, Y.: QuizMap: Open Social Student Modeling and
Adaptive Navigation Support with TreeMaps. In: Kloos, C.D., Gillet, D., Crespo García,
R.M., Wild, F., Wolpers, M. (eds.) EC-TEL 2011. LNCS, vol. 6964, pp. 71–82. Springer,
Heidelberg (2011)
34. Bakalov, F., et al.: Progressor: Personalized visual access to programming problems. In:
2011 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC),
Pittsburgh, PA (2011)
35. Rao, R., Card, S.K.: The table lens: merging graphical and symbolic representations in an
interactive focus + context visualization for tabular information. In: Proceedings of the
SIGCHI Conference on Human Factors in Computing Systems: Celebrating
Interdependence, pp. 318–322. ACM, Boston (1994)
36. Hsiao, I.-H., Bakalov, F., Brusilovsky, P., König-Ries, B.: Open Social Student Modeling:
Visualizing Student Models with Parallel IntrospectiveViews. In: Konstan, J.A., Conejo,
R., Marzo, J.L., Oliver, N. (eds.) UMAP 2011. LNCS, vol. 6787, pp. 171–182. Springer,
Heidelberg (2011)

37. Shneiderman, B.: The eyes have it: A task by data type taxonomy for information
visualizations. In: Symposium on Visual Languages. IEEE Computer Society,
Washington, DC (1996)
38. Kay, J.: Learner know thyself: Student models to give learner control and responsibility.
In: International Conference on Computers in Education, ICCE 1997, Kuching, Sarawak,
Malaysia (1997)
39. Bakalov, F., König-Ries, B., Nauerz, A., Welsch, M.: IntrospectiveViews: An Interface for
Scrutinizing Semantic User Models. In: De Bra, P., Kobsa, A., Chin, D. (eds.) UMAP
2010. LNCS, vol. 6075, pp. 219–230. Springer, Heidelberg (2010)
40. Hsiao, I.-H., Sosnovsky, S., Brusilovsky, P.: Guiding students to the right questions:
adaptive navigation support in an E-Learning system for Java programming. Journal of
Computer Assisted Learning 26(4), 270–283 (2010)
41. Lindstaedt, S.N., Beham, G., Kump, B., Ley, T.: Getting to Know Your User – Unobtrusive
User Model Maintenance within Work-Integrated Learning Environments. In: Cress, U.,
Dimitrova, V., Specht, M. (eds.) EC-TEL 2009. LNCS, vol. 5794, pp. 73–87. Springer,
Heidelberg (2009)
42. Dolmans, D.H.J.M., et al.: Problem-based learning: future challenges for educational
practice and research. Medical Education 39(7), 732–741 (2005)
43. Melis, E., et al.: ActiveMath: A Generic and Adaptive Web-Based Learning Environment.
International Journal of Artificial Intelligence in Education 12, 385–407 (2001)
Generator of Adaptive Learning Scenarios: Design
and Evaluation in the Project CLES

Aarij Mahmood Hussaan1 and Karim Sehaba2

Université de Lyon, CNRS
1 Université Lyon 1, LIRIS, UMR5205, F-69622, France
2 Université Lyon 2, LIRIS, UMR5205, F-69679, France
aarij-mahmood.hussaan@liris.cnrs.fr
karim.sehaba@liris.cnrs.fr

Abstract. The objective of this work is to propose a system which generates
learning scenarios for serious games, taking into account the learners’ profiles,
pedagogical objectives and interaction traces. We present the architecture of
this system and the scenario generation process. The proposed architecture
should be, insofar as possible, independent of the application domain, i.e. the
system should be suitable for different domains and different serious games.
That is why we identified and separated different types of knowledge (domain
concepts, pedagogical resources and serious game resources) in a multi-layer
architecture. We also present the evaluation protocol used to validate the
system, in particular the method used to generate a learning scenario and the
knowledge models associated with the generation process. This protocol is based
on a comparative method that compares the scenarios generated by our system
with those of an expert. The results of this evaluation, conducted with a domain
expert, are also presented.

Keywords: Scenario generator, serious games, adaptive system, evaluation
protocol.

1 Introduction

Our work is situated in the context of the adaptive generation of learning scenarios. We
define a learning scenario as a sequence of structured pedagogical activities generated by
the system for a learner, taking into account his/her profile, in order to achieve one or
more educational goals. We are more specifically interested in learning scenario
generation in serious games [1]. In this area, we propose a system capable of
dynamically generating learning scenarios with the following properties:

• The ability to be utilized in any serious game, taking into account its specificities.
• The use of interaction traces as knowledge sources in the adaptation process.

Along with the above-mentioned properties, we also aim for our system to be reusable
with different learning domains and different games. Therefore, the different
kinds of knowledge present in the system are organized and separated in a
A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 166–179, 2012.
© Springer-Verlag Berlin Heidelberg 2012

multi-layer architecture. These layers represent the learning domain in the form of:
domain concepts, pedagogical resources required to teach these concepts and serious
game resources that are used to present pedagogical resources to the learner. This
separation means that the aspects of any particular layer can be modified without
necessarily modifying the other layers, hence rendering the system more reusable.
A trace [2] is defined as a history of the learner’s actions, collected in real time while
the learner is using the serious game. It is considered to be the primary source for
updating a learner’s profile and the domain knowledge. It also serves as a knowledge
source in the scenario generation process. Formally, a trace is a set of temporally
located observed elements [2][3]. Each observed element represents a learner action
in the computer environment, such as interacting with an educational resource, clicking
on a hyperlink, etc.
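Under this definition, a trace is simply a time-ordered collection of observed elements. One possible encoding is sketched below; the class and field names are illustrative and not taken from the CLES implementation:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Observed:
    """One observed element: a learner action located in time."""
    timestamp: float   # e.g. seconds since session start
    action: str        # e.g. "click_link", "open_resource"
    target: str        # the resource or hyperlink acted upon

@dataclass
class Trace:
    """History of one learner's actions, collected while playing."""
    learner: str
    elements: list = field(default_factory=list)

    def record(self, obs: Observed):
        self.elements.append(obs)

    def ordered(self):
        """Observed elements in temporal order."""
        return sorted(self.elements, key=lambda o: o.timestamp)
```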
The idea of automatically generating learning/pedagogical scenarios is not new and has
been investigated previously by many authors [4][5][6]. However, these systems focus
only on the pedagogical aspects of the problem and do not consider serious games as
a potential medium for delivering these scenarios to the learner. Furthermore, not
every system clearly defines the separation between the conceptual layer and the
pedagogical resource layer, which makes them difficult to reuse. Likewise, these
systems generally do not exploit the learner’s traces in the generation process.
Our contribution is situated in the context of the project CLES (Cognitive
Linguistic Elements Stimulation). CLES aims to develop a serious game environment,
accessible online, which evaluates and trains the cognitive abilities of children with
cognitive disabilities. In the context of CLES, we conducted an evaluation aimed at:

1. Validating the working of the learning scenario generator, and
2. Validating the knowledge models that are used by the system to represent different
kinds of knowledge.

The learning scenario generator is evaluated to confirm the algorithm used to select
the different resources (concepts, pedagogical resources and game resources). Moreover,
the knowledge models are evaluated to verify their functionality in the generation.
The rest of the paper is organized as follows: in Section 2 we detail the project
CLES; in Section 3 a literature review on course generators and serious games is
presented. Section 4 gives a brief presentation of our system architecture and Section 5
presents the scenario generation process. Section 6 details the evaluation protocol for
the knowledge models and the working of the generator. We present the results of the
evaluation in Section 7. The last section presents the discussion and conclusions.

2 Application Context
The work on the project CLES (Cognitive Linguistic Elements Stimulation,
http://liris.cnrs.fr/cles) was conducted in collaboration with different partner
laboratories. These partners specialize in serious games development for children
with cognitive disabilities, ergonomic design and the study of cognitive mechanisms.
This project aims to provide a serious game for the training and evaluation of
cognitive functions. Eight functions are considered in CLES: perception, attention,
memory, visual-spatial, logical reasoning, oral language, written language and
transversal competencies.
The serious game developed in the context of CLES is called “Tom O’Connor
and the sacred statue”. It is an adventure game. The protagonist is a
character named Tom, whose task is to search for the sacred statue hidden in a mansion.
Depending on the session, Tom is placed in one of the many rooms of the mansion.
Each room contains many objects (chair, table, screen, etc.). Hidden behind these objects
are challenges in the form of mini-games. The user has to interact with these objects
to start the mini-games. To move from one room to another and progress in the
game, the user has to discover all the mini-games in the room.
Thus, for each of the eight cognitive functions, we have about a dozen mini-games,
and for each mini-game we have nine levels of difficulty. A more detailed description of
the games developed in this project is presented in [7].
The role of the scenario generator is to select (according to the learner’s profile,
his/her interaction traces and his/her therapeutic goals for the session) the mini-
games with appropriate difficulty levels, and to associate these games with the
objects in the different rooms of the mansion. This generator should therefore take
into account:

• What the practitioner has prescribed for his/her patients
• The knowledge base of the available treatments for the pathology
• The history of the previous exercises of the learner, stored in the form of traces
• The specificities of the serious game
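Stated as code, the generator’s task described above reduces to filtering the mini-game base by the practitioner’s prescription and picking a difficulty level from the trace history. The following is a deliberately simplified sketch; the selection heuristic and all names are our assumptions, not the CLES algorithm:

```python
def select_minigames(prescribed_functions, minigames, history, max_level=9):
    """Pick one session entry per mini-game of each prescribed function.

    prescribed_functions -- cognitive functions targeted for this session
    minigames -- mapping: cognitive function -> list of mini-game names
    history -- mapping: mini-game name -> last difficulty level passed
    max_level -- CLES mini-games have nine difficulty levels
    """
    scenario = []
    for function in prescribed_functions:
        for game in minigames.get(function, []):
            # naive heuristic: resume one level above the last level passed
            level = min(history.get(game, 0) + 1, max_level)
            scenario.append((game, level))
    return scenario
```

The returned (mini-game, level) pairs would then be attached to objects in the rooms of the mansion.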

The module we developed has to be validated on its theoretical properties (meta-models,
models and processes) in the context of the project CLES (see Sections 6 and 7).

3 Literature Review

The purpose of this section is to present the existing approaches to the generation
of pedagogical scenarios and serious games, and to show what is lacking in these
approaches and where we contribute. This literature review is done keeping in
mind, among others, the following characteristics of our system, namely:

• A general architecture independent of the pedagogical domain and application,
• Usability with serious games, and
• The use of interaction traces for updating the learner profile and for adaptation.

This section is organized in two subsections. The first presents course generators
and the second presents serious games for learning.

3.1 Course Generators


Learning scenario generation can be divided into two broad categories: course
sequencing and course generation. The former selects the best possible pedagogical
resource at any time given the performance of the learner, whereas the latter generates
a structured course in a single go before presenting it to the learner [8]. A course
sequencer by the name of DCG (Dynamic Courseware Generator) is presented in [9].
DCG selects the next pedagogical resource (HTML pages) dynamically according to
the current performance of the learner. DCG is heavily dependent on web-based
resources and is not suitable for other media, such as serious games. In WINDS [4],
the learner has to either manually navigate through the course or choose from the
recommendations offered by the system. However, a complete learning path is not
generated for a particular learner, which is required in games like CLES. An expert-
system type approach is presented in [10]; it requires all the rules to be entered
beforehand, making it difficult to maintain for a large knowledge base. Statistical
techniques are employed in [11] in order to generate a course most suitable for the
learner; however, in addition to the relations between concepts, relations between
different resources must also be maintained. The relations between pedagogical
resources are necessary for different resources to be included in the same scenario.
This requirement is a limitation where different pedagogical resources are not related
(as in the project CLES). Case-based reasoning is used in a web-based system called
PIXED (Project Integrating eXperience in Distance Learning) [12]. PIXED uses the
learners’ interaction traces, gathered as learning episodes, to provide contextual help
for learners trying to navigate their way through an ontology-based Intelligent
Tutoring System (ITS). It relies on the learners to annotate their traces, which can be
difficult for cognitively handicapped persons.
A system which combines the techniques of course sequencing and course generation
is presented in [13], called “Paigos”. The authors use HTN planning and formalized
scenarios to deliver adaptive courses. The manner of construction of Paigos makes it
difficult for persons unfamiliar with HTN techniques to use it.
In general, course generators focus on the pedagogical aspects and do not target
serious games for delivering their courses. Therefore, it is difficult to use them with
serious games. Moreover, the interaction traces are generally not used for updating
the learner profile and domain knowledge.

3.2 Serious Games


A system for the planning and management of business simulation games is proposed
in [14]. The pedagogical scenario is presented as a tree, providing adaptation
according to different learner actions. The construction of the tree becomes difficult
as the scenario becomes complex. An authoring tool for the creation of
2-dimensional adventure games is presented in [15], where personalization is done by
predefining a decision tree. A pedagogical dungeon to teach fractions in a collaborative
manner is presented in [16]. The interaction traces are used here in the adaptation
process. The scenarios are static, and the tight coupling between the pedagogical
scenario and the gaming interface deprives the approach of reusability. The C
programming language is taught in [17]: the teachers present to the learner a sequence
of learning activities in a Bomberman-type game. The manual preparation of learning
activity sequences is not practical in the case of hundreds of learners. A role-playing
game has also been proposed for the purpose of osteopathic diagnosis [18]. This game
also relies heavily on manual teacher intervention.
These systems tightly couple the pedagogical aspects with the gaming aspects, i.e.
neither the pedagogical nor the gaming aspects can be reused with other games or
pedagogical domains. Furthermore, a structured pedagogical scenario is mostly not
well defined; therefore, a personalized pedagogical scenario is not generated either.
The learners’ interaction traces are also generally not exploited.

4 System Architecture

In this section we present the different kinds of knowledge used in our system and
how we have organized them in order to increase reusability. Furthermore, the modeling
of this knowledge is presented, along with the general working of the system.

4.1 Knowledge Representation

As mentioned earlier, our objective is to develop a generic system capable of
dynamically generating adaptive learning scenarios, taking into account the learners’
profiles (including their interaction traces) and the specificities of serious games. For
this, we propose to organize the domain knowledge in a multi-level architecture. We
have considered three types of knowledge (as shown in Figure 1): domain concepts,
pedagogical resources and serious game resources. The separation of this knowledge
into three layers allows the aspects of one layer to be changed without necessarily
changing the other layers.
As the name indicates, the first layer contains the domain concepts. These concepts
are organized in the form of a graph where the nodes represent the concepts and the
edges represent the relations between the concepts.

Fig. 1. Knowledge Layer



Formally, the domain knowledge is modeled as <C, R>, where 'C' is the set of concepts of the domain and 'R' represents the relations between the concepts. Each concept 'C' is defined by <Id, P>, where 'Id' is a unique identifier and 'P' is the set of properties that describe the concept, such as the author, the date of creation, the description of the concept, etc. 'R' is defined by <CFrom, T, RC+>, where 'CFrom' is the origin concept of the relation, 'T' is the type of the relation, and 'RC' (Relation Concepts) = <CTo, F, Value>, where 'CTo' is the target concept of the relation (the direction of the relation is from CFrom to CTo), 'F' is a function that propagates information through the graph in order to update the learner profile (the semantics of the function may differ depending on the type of relation), and 'Value' is the weight between the concepts of the relation, used as a default in the absence of the function 'F'.
We have also defined several types of relations [7]. For example, we present here two of them:

• Has-Parts (x, y1 … yn): indicates that the target concepts y1 … yn are sub-concepts of the super-concept x. For example: Has-Parts (Perception, visual perception, auditory perception).
• Required (x, y): indicates that to study concept y it is necessary to have sufficient knowledge of concept x. For example: Required (Perception, Oral Language).

In the context of the project CLES, the domain concept layer models the eight cognitive functions and the relationships that may exist between them.
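To make the formalization concrete, the following sketch encodes concepts and typed relations, with a default value and an optional propagation function 'F'. All class, attribute and example names are our own illustrations, not the CLES implementation.

```python
# Illustrative sketch of the domain-knowledge layer: concepts as graph nodes,
# typed relations as edges <CFrom, T, <CTo, F, Value>>. Names are assumptions.

class Concept:
    def __init__(self, cid, **properties):
        self.id = cid                  # unique identifier 'Id'
        self.properties = properties   # 'P': author, creation date, description, ...

class Relation:
    def __init__(self, c_from, rel_type, c_to, value, func=None):
        self.c_from = c_from           # origin concept CFrom
        self.type = rel_type           # 'T': e.g. "Has-Parts", "Required"
        self.c_to = c_to               # target concept CTo
        self.value = value             # default weight between the concepts
        self.func = func               # optional propagation function 'F'

    def propagate(self, mastery_of_from):
        """Propagate profile information from CFrom to CTo."""
        if self.func is not None:
            return self.func(mastery_of_from)
        return self.value * mastery_of_from  # fall back to the default value

# Example: Has-Parts(Perception, visual perception, ...)
perception = Concept("perception", description="Perception function")
visual = Concept("visual-perception")
r1 = Relation(perception, "Has-Parts", visual, value=0.5)
print(r1.propagate(0.8))  # 0.4
```

The propagation function, when present, overrides the default value, which mirrors the model's rule that 'Value' is only used in the absence of 'F'.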
The second layer contains the pedagogical resources. In general, a pedagogical resource is an entity used in the process of teaching or training that allows learners to acquire, convey or understand pedagogical concepts. Pedagogical resources can be of different natures: the definition of a concept, an example, a theorem, an exercise, etc. Formally, each pedagogical resource is defined by a unique identifier, a type, parameters, an evaluation function, and a set of characteristics (such as name, description, author, etc.). As shown in Fig. 1, each resource can be related to one or more domain concepts and vice versa. This relation indicates that the resource can be used to understand the concept with which it is related. In the context of the project CLES, the pedagogical resource layer contains the mini-games.
The third and final layer contains the game resources. They are static objects that are initialized with dynamic or proactive behavior. In our model, we only consider game objects that are related to a pedagogical resource. Formally, each game resource is defined by an identifier, its relations with the pedagogical resources, and a set of characteristics such as name, description, etc. In the context of the project CLES, these resources are the objects of the serious game that are used to hide the pedagogical resources (mini-games).

4.2 System Working

The architecture of our system is shown in Fig. 2.



Fig. 2. System Architecture

The system operates as follows. (1) The domain expert(s) feed the system with the domain knowledge, structured according to our proposed models, and with the learners' profiles; these models were presented in the previous section. In each learning session, the system is fed with pedagogical goals. These goals are either selected by the learner or predefined by the system from his/her profile. (2) According to the selected goals and the learner's profile, the system selects the appropriate concepts from the domain model. This selection is done by the module 'Concept Selector'. The output of this module is the 'Conceptual Scenario', which comprises the concepts along with the competences required to achieve the pedagogical goals.
(3) The conceptual scenario is sent as input to the module 'Pedagogical Resource Selector'. The purpose of this module is to select, for each concept in the conceptual scenario, the appropriate pedagogical resources. These resources are selected according to the 'Presentation Model' and the learner's profile. The latter is represented by a set of properties in the form of <attribute, value> pairs, where the attribute represents a domain concept and the value represents the learner's mastery of that concept. The purpose of the presentation model is to organize the pedagogical resources presented to the learner; for example, a scenario may start by presenting two definitions, followed by an example and an exercise. This model can be chosen either by the learner or by the teacher (expert) on the learner's behalf. The structure of the scenario model can fit the form defined in [13].
Furthermore, the pedagogical resources are then adapted according to the ‘Adapta-
tion Knowledge’. The adaptation knowledge is used to set the parameters of pedagog-
ical resources according to the learner’s profile and pedagogical goals. The output of
this module is a ‘Pedagogical Scenario’. This scenario comprises pedagogical re-
sources with their adapted parameters.
(4) The pedagogical scenario is sent as input to the module ‘Serious Game Re-
source Selector’. This module is responsible for associating the pedagogical resources
with the serious game resources. This association is done based on the ‘Serious Game
Model’. The ‘Serious Game Model’ is used to associate the type of serious game re-
source with the types of pedagogical resource. This module produces the ‘Serious
Scenario’ (5).
The learner interacts with the learning scenario via the serious game. These interactions generate the learner's interaction traces, which are stored in the learner profile and used to update the profile and, if necessary, to modify the learning scenario accordingly.

5 Scenario Generator

As mentioned in Section 4, the process of generating a learning scenario from the pedagogical goals and the learner's profile is handled by three modules, namely the 'Concept Selector', the 'Pedagogical Resource Selector' and the 'Serious Game Resource Selector'. The general functionality of these modules was defined in Section 4; in this section we give a textual description of how their algorithms work.

5.1 Concept Selector


The purpose of this module is to generate the list of domain concepts required to achieve the learning goals, taking into account the learner's profile. The learning goals are defined as a set of target (domain) concepts along with the required competence for each concept. The generated list of domain concepts is called the 'conceptual scenario' in our system. The generation process works as follows. First, for each target concept (TC), the module checks (by consulting the learner profile) whether this TC is already sufficiently known by the learner. If it is, the TC is ignored and the next TC is examined.
The module then checks whether the TC has concepts related to it. Some of these related concepts can be selected to be added to the conceptual scenario. This selection depends on the type of relation between the concepts; in fact, we have identified a selection strategy for each type of relation. For example, if a learner has chosen a target concept A, and A is in a relation of type 'Required' with another concept B (Required (B, A)), then the generator verifies whether the learner knows concept B sufficiently. If not, the generator also includes concept B in the conceptual scenario.
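The selection strategy for the 'Required' relation can be sketched as follows. The function name, the profile encoding (concept → mastery in [0, 1]) and the mastery threshold are illustrative assumptions on our part, not the actual CLES code.

```python
# Hedged sketch of the Concept Selector: builds a conceptual scenario from
# target concepts, pulling in insufficiently mastered 'Required' prerequisites.

def select_concepts(target_concepts, profile, required_by, threshold=0.6):
    """required_by[c] lists concepts b with Required(b, c), i.e. prerequisites of c."""
    scenario = []
    for tc in target_concepts:
        if profile.get(tc, 0.0) >= threshold:
            continue                      # TC already sufficiently known: skip it
        for prereq in required_by.get(tc, []):
            # Required(B, A): include B if the learner does not know it well enough
            if profile.get(prereq, 0.0) < threshold and prereq not in scenario:
                scenario.append(prereq)
        if tc not in scenario:
            scenario.append(tc)
    return scenario

profile = {"oral-language": 0.2, "perception": 0.3}
required_by = {"oral-language": ["perception"]}  # Required(Perception, Oral Language)
print(select_concepts(["oral-language"], profile, required_by))
# ['perception', 'oral-language']
```

Note that the prerequisite is placed before the target concept, so the conceptual scenario respects the direction of the 'Required' relation.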

5.2 Pedagogical Resource Selector

The purpose of this module is to select the appropriate resources for every concept in the 'Conceptual Scenario', given a 'Presentation Model' (PM) and the learner profile. The selection is output in the form of a 'Pedagogical Scenario', which contains a list of resources associated with each concept, along with their appropriate parameters. The selection process goes as follows. First, for each concept in the conceptual scenario, the process searches for the resources of type 'T' as described in the PM. If
there is more than one pedagogical resource of type 'T' associated with the concept, then a resource that the learner has not already seen, or does not yet sufficiently master, is added to the list. The process also consults the adaptation knowledge to set the parameters of the resources (in order to adjust the level of difficulty).
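A minimal sketch of this selection loop, with invented data structures (a resource as a dict with 'id' and 'type', a set of seen resource identifiers), could look like the following; it is not the CLES implementation.

```python
# Sketch of the Pedagogical Resource Selector: for each concept, pick a resource
# of the type requested by the presentation model that the learner has not yet
# seen or has not yet sufficiently mastered. All names are illustrative.

def select_resources(conceptual_scenario, presentation_model, resources_by_concept,
                     seen, profile, threshold=0.6):
    pedagogical_scenario = []
    for concept in conceptual_scenario:
        for rtype in presentation_model:          # e.g. ["definition", "exercise"]
            for res in resources_by_concept.get(concept, []):
                if res["type"] != rtype:
                    continue
                unseen = res["id"] not in seen
                not_mastered = profile.get(concept, 0.0) < threshold
                if unseen or not_mastered:
                    # adaptation knowledge would set difficulty parameters here
                    pedagogical_scenario.append((concept, res["id"]))
                    break                         # one resource of this type
    return pedagogical_scenario

resources = {"perception": [{"id": "mg1", "type": "exercise"},
                            {"id": "mg2", "type": "exercise"}]}
print(select_resources(["perception"], ["exercise"], resources,
                       seen={"mg1"}, profile={"perception": 0.3}))
# [('perception', 'mg1')]
```

In this sketch a seen-but-unmastered resource can be selected again, one plausible reading of the rule above; the adaptation step is left as a comment since its knowledge base is not specified here.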

5.3 Serious Game Resource Selector


This module associates the pedagogical resources in the 'Pedagogical Scenario' with the serious game resources, according to the learner's profile and the Serious Game Model (SGM). The result of its execution is a list of game resources called the 'Serious Scenario', which contains the resulting concepts and the serious game resources initialized with the pedagogical resources and their parameters. The module works as follows. First, for each pedagogical resource in the pedagogical scenario, the serious game resources related to that resource are retrieved. Then, for each retrieved serious game resource, the process consults the learner profile to verify whether the resource is appropriate for the learner. If so, the resource is added to the list.
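This final association step can be sketched as a simple filter; the appropriateness check is abstracted as a predicate, and all names and example values are our own assumptions rather than the CLES code.

```python
# Sketch of the Serious Game Resource Selector: wrap each pedagogical resource
# in a game resource deemed suitable for the learner. Illustrative structures.

def select_game_resources(pedagogical_scenario, game_resources_for, is_appropriate):
    serious_scenario = []
    for concept, ped_res in pedagogical_scenario:
        for game_res in game_resources_for.get(ped_res, []):
            if is_appropriate(game_res):         # consult the learner profile
                # the game object "hides" the mini-game and carries its parameters
                serious_scenario.append((concept, ped_res, game_res))
    return serious_scenario

scenario = [("perception", "mg1")]
game_resources = {"mg1": ["treasure-chest", "locked-door"]}
ok = lambda g: g != "locked-door"                # e.g. filter by age suitability
print(select_game_resources(scenario, game_resources, ok))
# [('perception', 'mg1', 'treasure-chest')]
```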

6 Evaluation Protocol

The first evaluation of our system was conducted in the presence of a domain expert who has been a practitioner of cognitive sciences for more than 20 years. The objective of our evaluation, as mentioned earlier, is the validation of:

• The scenario generator's working: more precisely, this means validating the concept selection strategy that we have defined for each type of relation, and
• The knowledge models: that is, validating the concepts and the relations that we have introduced into the system in the context of the project CLES.

For this, the basic strategy that we adopted is comparative evaluation [19], i.e. comparing the learning scenarios created manually by the domain expert with the learning scenarios generated automatically by the system for the same input. This input corresponds to the domain knowledge and the profile types. Furthermore, during the evaluation process we conducted an elicitation interview [20] with the expert. The purpose of this interview is to help the expert explicate (as much as possible) his/her thinking process, i.e. how s/he reasons while creating a learning scenario.
Before conducting the interview, we designed an evaluation protocol. This protocol guides us in conducting the evaluation, helps us validate our models, and helps us identify any problems and their sources.
The flow of this protocol is depicted in Fig. 3. At first, the expert is asked to create a certain number of learner profiles (1). As the expert has vast experience in his/her field, s/he can provide profiles that are close to those of real learners. Ideally, we would like the expert to create several different profiles: the more profiles there are, the more beneficial it is for our evaluation. Furthermore, the profiles should also be diverse, i.e. different profiles should contain different
competencies. This will help us determine whether our system can handle diverse cases. Apart from these profiles, we ask the expert to fix some learning objectives for the profiles. Afterwards, we ask the expert to create learning scenarios for each learning objective and each profile.

Fig. 3. Evaluation Protocol

Once the expert has identified the profiles and the objectives, we introduce them into the system in order to generate the learning scenarios. The two sets of scenarios are then compared by the expert (2). This comparison is accompanied by an elicitation interview in which we ask the expert to verbalize his/her thoughts. The expert is filmed during the whole evaluation process.
As a result of this comparison, the expert will find the scenarios either similar or not. If the expert is sufficiently satisfied with the similarity of the scenarios (3), then the scenarios will be presented to real learners; ideally, these learners should have the same profiles as those entered into the system. If possible, the learners should be filmed during their interactions with the scenarios, and they should be asked how difficult they find the scenarios. The learners' interaction traces will also help us answer this question: by analyzing the traces, we can determine that a learner finds the scenario very difficult if s/he constantly fails the exercises; similarly, if the learner answers the exercises quickly and correctly, we can conclude that s/he finds them very easy to solve.
If the learners report that the scenarios are too easy or too difficult (5), this implies that either the knowledge entered into the system by the expert can be improved or the system is not generating the scenarios properly. In either case, the protocol to resolve the problem is defined next.
If, as a result of the comparison, the expert does not find the scenarios similar enough (4), then two cases are possible: (1) the system's generator is not working properly (6); (2) the knowledge entered into the system by the expert is not correct (7).
If the system’s generator is not working properly, then we review the following:

1. Concept selection strategy: we have to review the selection of concepts based on the different relations and the calculation of the percentages based on them. Currently we have four kinds of relations.
2. Pedagogical resource selection strategy: currently, we select, according to the presentation model, all the resources related to a concept. We then verify whether a particular resource has already been seen and mastered by the learner; if so, we ignore that resource and proceed to the next one.
If none of these cases applies, then the expert may have made an error in entering the knowledge into the system. In this case, we can tell the expert that either some relations are missing between the domain concepts or some relations have the wrong type, e.g. a relation that should be of type Has-Parts is marked as Required in the model.
We conducted our evaluation following the protocol described above.

7 Experiments and Results

We started the evaluation by introducing the domain models into the system. Since the original model of the project CLES is very large, the expert would have found the evaluation of the whole model quite tedious: there are 8 super-concepts, each super-concept has at least 5 sub-concepts, and each sub-concept has at least 5 pedagogical resources, plus the serious game resources associated with the pedagogical resources. Therefore, we created three mini-models of the original model. All these mini-models contain the eight main domain concepts of CLES. The initial arrangement of these concepts is shown in Fig. 4. All the links between the super-concepts are of the type Required; the relation between Perception and its sub-concepts is of the type Has-Parts.
These super-concepts are present in each of the mini-models. In each mini-model, one concept is further detailed in addition to the super-concepts. The detailed concepts are: written language, perception and memory. We also prepared six profiles for each model: Profile 1: 8 years, no deficiency in concept x; Profile 2: 8 years, deficiency in concept x; Profile 3: 14 years, no deficiency in concept x; Profile 4: 14 years, deficiency in concept x; Profile 5: 18 years, no deficiency in concept x; Profile 6: 18 years, deficiency in concept x. The concept 'x' is the detailed concept in each model. The choice of these 18 profiles is not arbitrary: the project CLES targets children between 6 and 18 years of age, so the choice covers almost all the age groups. The expert agreed with our choice of profiles.
Afterwards, we asked the expert to assign suitable values to the concepts in each profile. The expert defined the values taking into account the type of the profile, for
example: lower values are assigned to profiles with a deficiency than to profiles without deficiencies. Afterwards, the objectives for each profile were also fixed. These objectives are somewhat higher for the profiles without deficiency, and vice versa.

Fig. 4. One of the mini-model on the concept Perception

With his permission, the expert was filmed the whole time he was fixing the values of the profiles. We also asked him questions regarding how he was assigning the values and why. Afterwards, we asked the expert to fix the pedagogical objectives for each profile; during this process we again asked questions about how and why he was choosing them. As a result of this questioning we learned a great deal about the modeling of the domain model and how to select the right pedagogical objectives for a profile.
Once the profiles were created and the objectives set, we introduced them into the system and generated the scenarios. In the meantime, we asked the expert to create the learning scenarios manually, asking him how and why he selected the concepts and the pedagogical resources for every profile. Afterwards, we asked the expert to compare the scenarios he had created manually with those generated automatically.
The film made during the experimentation process was then analyzed with the video analysis and annotation tool ADVENE (http://liris.cnrs.fr/advene) [21]. The film was about two hours long; we watched it repeatedly, annotating the important events. These annotations were then analyzed, and as a result we discovered some very useful information: several modifications to be performed in the domain model, as well as some problems in the concept selection strategy. In the domain model, we added 5 new relations between concepts, for example a prerequisite relation between Memory and Oral Language. We also modified some of the concept selection strategies.
Furthermore, we found that our system only takes the learner's profile into account when setting the pedagogical resources' levels, whereas the expert also considered the gap between the profile and the pedagogical objectives. As a result of this evaluation, we updated the knowledge models and corrected the problems in our system. Finally, when the results were shown to the expert, he seemed sufficiently satisfied with them, as well as with the working of our generator.

8 Conclusion and Perspectives

In this paper we presented the architecture and operation of our system. The system is conceived to generate dynamically adaptable learning scenarios for serious games, taking into account the learner's profile, the learner's traces and the specificities of serious games. The learner's interaction traces are used in the scenario generation and adaptation process and also in updating the learner's profile. This work took place in the project CLES, whose objective is to develop a serious game for children with cognitive disabilities. In this context, we conducted an evaluation of the scenario generation process, guided by the evaluation protocol that we presented.
Our evaluation was based on a comparative strategy and is designed to identify, when the expert is not satisfied with a scenario, whether the problem lies in the expert's knowledge as introduced into the system or in the generation of the scenario. The problem may, of course, exist in both; in this evaluation we were able to pinpoint the problems correctly, but we may face this difficulty in future evaluations.
In future evaluations, we would like to repeat the process with a number of experts to further verify the system. Tests with real learners will also be conducted to generate real learner traces, which will then be used to update their profiles and, if necessary, to adapt the scenarios.

Acknowledgements. The authors would like to thank Mr. Philippe Revy, expert
speech and language therapist and the director of the society GERIP.

References
[1] Zyda, M.: From visual simulation to virtual reality to games. Computer 38(9), 25–32
(2005)
[2] Clauzel, D., Sehaba, K., Prié, Y.: Enhancing synchronous collaboration by using interac-
tive visualisation of modelled traces. Simulation Modelling Practice and Theory 19(1),
84–97 (2011)
[3] Settouti, L., Prié, Y., Marty, J.-C., Mille, A.: A Trace-Based System for Technology-Enhanced Learning Systems Personalisation. In: Ninth IEEE International Conference on Advanced Learning Technologies, pp. 93–97 (2009)
[4] Specht, M., Kravcik, M., Pesin, L., Klemke, R.: Authoring adaptive educational hyper-
media in WINDS. In: Proceedings of ABIS 2001, Dortmund, Germany, vol. 3(3), pp. 1–8
(2001)
[5] Karampiperis, P., Sampson, D.: Adaptive learning resources sequencing in educational
hypermedia systems. Educational Technology & Society 8(4), 128–147 (2005)
[6] Sangineto, E., Capuano, N., Gaeta, M., Micarelli, A.: Adaptive course generation through
learning styles representation. Universal Access in the Information Society 7(1-2), 1–23
(2007)
[7] Hussaan, A.M., Sehaba, K., Mille, A.: Tailoring Serious Games with Adaptive Pedagogi-
cal Scenarios: A Serious Game for Persons with Cognitive Disabilities. In: 11th IEEE In-
ternational Conference on Advanced Learning Technologies, pp. 486–490 (2011)
[8] Brusilovsky, P., Vassileva, J.: Course sequencing techniques for large-scale web-based
education. International Journal of Continuing Engineering Education and Life-long
Learning 13(1/2), 75–94 (2003)
[9] Vassileva, J.: Dynamic courseware generation: at the cross point of CAL, ITS and au-
thoring. In: Proceedings of ICCE, vol. 95, pp. 290–297 (December 1995)
[10] Libbrecht, P., Melis, E., Ullrich, C.: Generating personalized documents using a presen-
tation planner. In: ED-MEDIA 2001-World Conference on Educational Multimedia,
Hypermedia and Telecommunications (2001)
[11] Karampiperis, P., Sampson, D.: Adaptive learning resources sequencing in educational
hypermedia systems. Educational Technology & Society 8(4), 128–147 (2005)
[12] Heraud, J.-M., France, L., Mille, A.: Pixed: An ITS that guides students with the help of
learners ’ interaction logs. In: 7th International Conference on Intelligent Tutoring Sys-
tems, pp. 57–64 (2004)
[13] Ullrich, C., Melis, E.: Complex Course Generation Adapted to Pedagogical Scenarios
and its Evaluation. Educational Technology & Society 13(2), 102–115 (2010)
[14] Bikovska, J.: Scenario-Based Planning and Management of Simulation Game: a Review.
In: 21st European Conference on Modelling and Simulation, vol. 4 (Cd.) (2007)
[15] Moreno-Ger, P., Sierra, J., Martínez-Ortiz, I., Fernández-Manjón, B.: A documental approach to adventure game development. Science of Computer Programming 67(1), 3–31 (2007)
[16] Carron, T., Marty, J.-C., Heraud, J.-M.: Teaching with Game-Based Learning Management Systems: Exploring a Pedagogical Dungeon. Simulation & Gaming 39(3), 353–378 (2008)
[17] Chang, W.-C., Chou, Y.-M.: Introductory C Programming Language Learning with
Game-Based Digital Learning. In: Li, F., Zhao, J., Shih, T.K., Lau, R., Li, Q., McLeod,
D. (eds.) ICWL 2008. LNCS, vol. 5145, pp. 221–231. Springer, Heidelberg (2008)
[18] Bénech, P., Emin, V., Trgalova, J., Sanchez, E.: Role-Playing Game for the Osteopathic
Diagnosis. In: Kloos, C.D., Gillet, D., Crespo García, R.M., Wild, F., Wolpers, M. (eds.)
EC-TEL 2011. LNCS, vol. 6964, pp. 495–500. Springer, Heidelberg (2011)
[19] Vartiainen, P.: On the Principles of Comparative Evaluation. Evaluation 8(3), 371–459
(2002)
[20] Bull, G.G.: The Elicitation Interview. Studies in Intelligence 14(2), 115–122 (1970)
[21] Aubert, O., Prié, Y.: Advene: active reading through hypervideo. In: ACM Hypertext
2005 (2005)
Technological and Organizational Arrangements
Sparking Effects on Individual, Community
and Organizational Learning

Andreas Kaschig1, Ronald Maier1, Alexander Sandow1, Alan Brown2, Tobias Ley3,
Johannes Magenheim4, Athanasios Mazarakis5, and Paul Seitlinger6
1 University of Innsbruck, Austria
{Andreas.Kaschig,Ronald.Maier,Alexander.Sandow}@uibk.ac.at
2 University of Warwick, United Kingdom
alan.brown@warwick.ac.uk
3 Tallinn University, Estonia
tley@tlu.ee
4 University of Paderborn, Germany
jsm@uni-paderborn.de
5 FZI Research Center, Germany
mazarakis@fzi.de
6 Graz University of Technology, Austria
paulchristian.seitlinger@edu.uni-graz.at

Abstract. Organizations increasingly recognize the potential of, and need for, supporting and guiding the substantial individual and collaborative learning efforts made in the workplace. Many interventions have been made to leverage resources for organizational learning, ultimately aimed at improving the effectiveness, innovation and productivity of knowledge work in organizations. However, information on the effects of such interventions is scarce. This paper presents the results of a multiple-case study, consisting of seven cases, that investigates the measures organizations have taken in order to spark effects considered beneficial in leveraging resources for organizational learning. We collected a number of reasons why organizations deem themselves to outperform others in leveraging individual, collaborative and organizational learning, measures that are perceived as successful, as well as richly described relationships between those levers and seven selected effects that these measures have caused.

Keywords: Community, knowledge maturing, knowledge work, multiple-case study, organizational learning.

1 Introduction
Although many concepts, models, methods, tools and systems have been suggested
for enhancing learning and the handling of knowledge in organizations [1], there is
only scarce information on the effects of these technological and organizational
arrangements on the effectiveness of knowledge work [2-7]. While the share of know-
ledge work [8] has risen continuously during recent decades [9] and knowledge work

A. Ravenscroft et al. (Eds.): EC-TEL 2012, LNCS 7563, pp. 180–193, 2012.
© Springer-Verlag Berlin Heidelberg 2012
can be found in all occupations, the question remains open whether we can design IT-
supported instruments that create positive effects on knowledge work independent of
their field of application.
In this paper, we report on how organizations employ IT and organizational
instruments to support knowledge work. We analyze these technological and organi-
zational arrangements as levers that are employed and the effects they achieve. In line
with dynamic models of organizational learning and knowledge creation, such as the
spiral model of knowledge creation [10], the 4I framework [11] or the knowledge
maturing (KM) model [12], we put a special focus on how organizations deal with
critical knowledge they develop and maintain across the individual, community and
organizational level. With a multiple case study that focused on organizations
perceiving themselves as successful in sparking positive effects on learning on an
individual, community and organizational level, we aim at answering the research
question: What successful measures (IT-based and organizational) are applied to
evoke positive effects on learning on an individual, community and organizational
level?
In pursuing this aim, we relied on qualitative and interpretive methods, based par-
ticularly on observation and face-to-face interviews at the work places of the inter-
viewees. The study was conceptualized as a case study with multiple instances, the investigation of which relied on a single, coordinated framework of study topics and a common design. We gained multiple perspectives by interviewing several individuals in each case study, who together provided rich empirical material on interventions
into three levels of learning, traversed when knowledge is passed from individuals’
learning and expressing of ideas over informal collectives such as communities to the
formal level of organizations.

2 Individual, Community and Organizational Learning

Knowledge is socially constructed and part of workplace practices. Therefore, top-down approaches that view knowledge as a decontextualized entity have often met with little success, and it is not surprising that many theories and models start out with learning at the level of individuals. Personal knowledge is defined as the contribution individuals bring to situations, which enables them to think, interact and perform [13]. The “objects” of individual learning include personalized versions of
public codified knowledge, everyday knowledge of people and situations, know-how
in the form of skills and practices, memories of episodes and events, self-knowledge,
attitudes and emotions. The development of practice is reflective, forward-looking
and dynamic and seems to work best within a culture that acknowledges the impor-
tance of developing practice, expertise and analytical capabilities in an inter-related
way so as to be able to support the generation of new forms of knowledge. Those
involved in such developments need to have a continuing commitment to explore,
reflect upon and improve their practice [14]. At the same time, they play a key role in
generating new knowledge and applying it when working in teams with colleagues
with different backgrounds and different kinds of expertise [15].

A number of models that connect individual learning at the workplace with (supposed) effects on a community and organizational level have been proposed and
discussed in the literature [16]. The 4I framework [11] conceptualizes organizational
learning as a dynamic process. It consists of four categories (4Is) of social and psy-
chological processes on different levels: intuiting (individual), interpreting (individu-
al), integrating (group), and institutionalizing (organizational). One premise of the
model is that organizational learning includes a tension between exploration (assimi-
lating new learning) and exploitation (using what has already been learned).
The concept of Communities of Practice (COP) has been established as a linking
mechanism between individual practice and organizational learning [17]. Individual
and collective learning is to a large extent informal based on a continuous negotiation
of meaning that takes place within the community [18]. This negotiation captures the
way that individuals in the community make sense out of their experiences. Meaning
of their experiences is not defined by any external authority, but it is constructed in
the COP and constantly negotiated through collaborative processes. Recently, the
authors suggested a number of community tools that support these processes [19].
The spiral model on organizational knowledge creation [10], [20] claims that
knowledge creation is a social process moving and transforming knowledge from the
individual level into communities of interaction that cross organizational boundaries.
The KM model [12] frames a similar stance on this process as goal-oriented learn-
ing on a collective level. The model describes knowledge development as a sequence
of phases. In its early phases, expressing ideas and appropriating ideas, the model is
concerned with learning on an individual level. Similar to [17], the KM model views communities as the main connection between the individual and organizational levels. In communities, learning takes place in informal activities (the distributing phase), yet it might also involve artifacts such as boundary objects [21], created in the formalizing phase, specifically when the boundaries of such communities are to be crossed. Communities sometimes also provide the social constellation of choice for ad-hoc training
and piloting of new products, processes or practices. Finally, on the organizational
level, the model depicts formal training as well as institutionalizing, and ultimately,
standardizing. The KM model has been iteratively developed based on evidence
gained in an ethnographically-informed study of KM processes and the individual and
collaborative activities that happen at the workplace [22] and a survey of a large sample of European companies [23]. The latter study was also used to identify examples of successful KM and companies that were particularly successful at it.
The present study was conducted in order to gain an in-depth understanding of why and how these cases were successful. This was done mainly by introducing interviewees to the model, guiding them in relating the model to phenomena in their own organizations, and then conducting the interview based on these perceived and concrete occurrences of KM. Because the model resonated well with respondents, as the previous studies on the KM model [22], [23] and a pretest had shown, we were able to elicit rich stories about concrete cases of KM that were perceived as successfully fostered by deliberately applied technological and organizational arrangements. The
results of this analysis should act as a guideline for organizations willing to support
KM appropriately.
Technological and Organizational Arrangements Sparking Effects 183

3 Study Design and Data Collection


To study technological and organizational arrangements, we agreed upon the following topics as the main focus: (1) reasons for performing KM better than others; (2) organizational measures that are deemed to support KM; (3) ways to overcome barriers to KM; (4) IT-oriented measures that are deemed to support KM. As the four topics were deemed rather complex and context-specific, we chose the case study approach [24], [25]. For detailed, in-depth data collection, multiple sources of information were used, in our case interviews and observations as well as documents and reports [26]. We followed a holistic multiple-case study approach, which is deemed more robust than a single-case study design and whose evidence is often seen as more compelling [25], [27].
We followed a purposeful sampling approach [28] by choosing organizations identified as successful through our previous studies. The unit of analysis is individuals who work and learn in a collective towards a common goal. The plural is important, as we did not focus on a single person but, in line with the definition of KM, on goal-oriented learning at a collective level. This allowed us to triangulate practices within the targeted collective of people and to get a multi-faceted picture of the studied organization. Six European organizations and one network of organizations were investigated. Between two and 15 representatives took part in each case study. The studied cases varied with respect to country, size and sector (see table 1).

Table 1. Studied cases, for classification criteria see OECD and EUROSTAT [29]
Case Sector Size Country No. of Participants
C1 Service small Austria 3
C2 Service large Germany 5
C3 Service large Poland 7
C4 Service [network] United Kingdom 14
C5 Industry large Germany 15
C6 Industry large Germany 5
C7 Industry large Germany 7

Each case study concentrated on collectives of individuals working across departments, subsidiaries or even organizations. To get access to these collectives, interviewees were selected based on snowball sampling [28]. We defined criteria that interviewees needed to fulfill, which helped us gain valuable data from people who had a broad and informed view of their organization. Interviewees had to have, e.g., a high share of knowledge work; experience in different organizational settings; access to a variety of technical systems; good command of conceptual and management tasks; and strong communication, coordination and cooperation needs.
Data collection was done face-to-face at the workplaces of participants wherever
possible. This allowed for direct observation of phenomena in the context of partici-
pants’ workplaces [28]. We intended (1) to provide cues for participants about
important facets surrounding support of KM by technological and organizational ar-
rangements (e.g. by observable artifacts in the participants’ work environments), (2)
to support the researchers’ understanding of the work environments of participants as
well as (3) to facilitate joint meaning-making of the technological and organizational
arrangements between participant and researcher. To facilitate data collection on the
agreed four topics, an interview guideline was developed and adopted by case study
teams investigating different organizations. The first page of the interview guideline
supported the interviewer in explaining the concept of KM and contained a figure
depicting the KM model [12] which was discussed with the interviewees in the con-
text of their organizations. The second page was dedicated to the four topics which
were investigated in the sequence described above. The semi-structured interviews
were recorded where permitted, transcribed, and analyzed using qualitative content analysis.
Besides interviewing, in some cases further methods for data collection, such as focus
groups [28], were employed. Between the authors, several face-to-face meetings and
teleconferences provided opportunities to exchange lessons learned on case selection,
data collection and data analysis.
After conducting the field work, each team analyzed the collected data and created
an individual case report structured according to a common template. Once the main
findings were summed up and each case study team was aware of results from all case
studies, we jointly developed cross-case conclusions, again in a series of teleconferences and face-to-face meetings.

4 Levers, Effects and Their Relationships


In the first topic of the interview, we asked participants about reasons for performing KM better than others, thereby triggering reflection on the preconditions that the represented organization meets for performing KM successfully. In multiple rounds of joint data analysis [25], we distilled seven effects of interventions for learning on an individual, community and organizational level (see table 2).

Table 2. Effects

Individual
Increased willingness to share knowledge (C2, C3, C4, C5, C6, C7). Comprises a communicative environment as well as an attitude of being open-minded towards colleagues' requests and an active provision of knowledge possibly needed by others.
Openness to change (C1, C3). Describes an organizational culture that prevents the development of permanent consensus. Comprises de-frozen thought patterns, overcoming rigidity of thinking, and avoiding adherence to convenient but ineffective action patterns.
Positive attitude towards knowledge maturing itself (C3, C5, C6, C7). Employees across the organizational hierarchy reflect on the potential benefits of putting effort into KM, which is deemed to depend greatly on the employees involved in daily work activities and their attitude towards and reflexiveness about it.

Community
Improved accessibility of knowledge (C3, C4, C6, C7). Quick accessibility and easy retrieval of knowledge is deemed to positively affect the goal-oriented and non-redundant transfer of knowledge within and across communities.
Strengthened informal relationships (C1, C2, C3, C4, C6). Denote personal ties between colleagues that are usually used to circumvent or shortcut formal procedures in the hierarchical structure. Informal relationships help collaborative reflection upon learning processes and the distribution of ideas and information about current activities.

Organization
Availability of different channels for sharing knowledge (C1, C4, C6, C7). Availability of different methods or systems used for sharing knowledge that can be related to IT or to organizational measures.
Improved quality of workflows, tasks or processes (C3). Process improvement instruments, such as best-practice process descriptions, are applied in order to gain improvements with respect to cost, time and quality.
By analyzing the measures that respondents perceived as causing these effects, we surfaced the levers in the sense of technological and organizational arrangements
positively impacting the performance of KM. As a result of the cross-case analysis,
we structured these levers into five groups (see table 3): (1) soliciting, i.e. levers that
trigger employees to provide solutions and ideas for addressing issues present in the
organization; (2) guiding, i.e. levers that increase awareness for best practices and
standard operating procedures and/or influence the direction or quality of knowledge
work; (3) converging, i.e. endorsement of further development of a topic by legitimat-
ing to allocate time to an initiative or project where knowledge stemming from differ-
ent origins can be amalgamated; (4) regular sharing, i.e. the recurring endorsement of
sharing knowledge in a defined procedure that could be implemented as a recurring
event or as a permanent measure; (5) transferring, i.e. support transmission of know-
ledge from one group or community to another, for re-use.

Table 3. Levers

Soliciting
Acting as 'claimant' (C1, C7). A new idea often needs support from someone creating a demand for it and pulling it towards realization. Ideally, this role is performed by a person who has the capability and authority to stress his/her demand and thus can be a proponent for the new idea.
Fostering competition-based idea management (C3). Employees present contributions electronically or in an exhibition, and the best ideas are awarded. Thus, all members of the organization learn about other projects and ideas.

Guiding
Maintaining a best practice database (C3). A database providing a collection of best-practice process descriptions that have been approved according to a quality assurance concept aimed at improving tasks or processes.
Enabling awareness and orientation (C2, C3). Continuously documenting (all) business processes and thereby providing transparency for these processes. This is in line with requirements imposed by quality management initiatives.
Offering guidance by supervisors and management (C3, C7). Raising employees' awareness of knowledge management in general and integrating KM-related topics into the process of management by objectives.
Performing benchmarks (C2, C3, C5, C7). By performing benchmarks, different units within and across organizations are compared against each other to identify gaps, foster competition and foster discussion on possible future measures aimed at improvement.
Providing organizational guidelines (C2, C3, C7). Shared sets of rules regarding common ways of performing knowledge work. This includes, e.g., rules for organizing and naming documents and folders on file shares and for approving business-process-related documents.

Converging
Allocating competence in projects (C1, C3, C7). Having people with different backgrounds work together is perceived as very fruitful because time and legitimation of action are provided, which empowers project teams to pursue project goals and introduce changes.
Conducting workshops on specific topics (C2, C3, C7). Topic-oriented meetings where selected employees are brought together to drive a specific topic or to focus on developing a specific skill set.
Enabling collaborative learning (C1, C4, C5, C6, C7). Providing tools and services for creating, presenting, discussing, tagging and collecting resources, enabling synchronous and asynchronous sharing of information.
Table 3. (continued)

Regular sharing
Conducting regular (team) meetings (C2, C3, C5, C7). Knowledge transfer is supported by an established procedure of regular team meetings, ensuring fast and target-group-oriented diffusion of knowledge in both directions along the hierarchy.
Offering formal trainings at regular intervals (C2, C3, C7). Topics of training courses are typically selected with respect to identified gaps between employees' competence profiles and the needs of the organizational units or projects the employee works for.
Providing a flexible working space (C2). Employees who want or need to work together (e.g. working on a new product or discussing issues) have the possibility to choose their working place and thereby increase communication effectiveness.

Transferring
Employing technology-enhanced boundary objects (TEBOs) (C3, C4). TEBOs (i.e. software-based interactive digital media which support mediating knowledge sharing across organizational boundaries) are conceived as tools that support situated learning.
Fostering communities of practice (C1, C2, C3, C4, C6). Regular topic-based meetings, created by employees, for exchanging lessons learnt. These communities of interest are mostly based on informal relationships between members.
Fostering reflection by enabling purpose-oriented task groups (C2, C3, C4, C6). Groups operating at the boundaries between different communities help in extending and deepening communication. Thus, they enable 'boundary crossing' of knowledge.
Improving access to documented knowledge (C1, C2, C6, C7). Providing better transparency for finding knowledge contained in documents stored on network drives and creating a "knowledge library" (e.g., in a wiki) that is easy for knowledge workers to access.
Installing one supervisor for teams in different subsidiaries (C7). Having one supervisor lead teams responsible for performing similar tasks in different subsidiaries fosters the transfer of knowledge between them and the development of COPs, and facilitates benchmarking.

The relationships between levers and effects that we present in the following were interpreted as causal based on the perceptions of several interviewees across cases and on a number of supporting stories we obtained. For each effect, we provide one selected story from one case study, describing in detail the levers perceived to cause the effect. Moreover, we provide further evidence with one short story from a selected additional case where the effect was also observed. Finally, we discuss the lever-effect relationship in the light of related research.
Increased Willingness to Share Knowledge. In C6, topics are fostered by community of practice meetings. Attending a high number of such meetings is accepted by the employees of this organization as necessary and helpful for creating an efficient working environment and a joint understanding. Open discussions are allowed and supported by different tools such as forums and blogs. This technological support is strengthened by giving individuals and teams more responsibility for their projects, for example by negotiating various budget allocations, and offers opportunities to discuss more work- and project-related ideas in the forums within communities. This effect was also observed in case C2. Employees who were more willing to answer colleagues' requests and took part in community of practice meetings, fostered by the organization, were assumed to perform "better" with respect to KM. These observations are in line with experiences from other big companies, where knowledge sharing became part of the organizational culture and thus led to more efficiency [30]. A less hierarchical and formal organizational structure also means that not following organizational rules goes unpunished, which is likewise beneficial [31].
Openness to Change. In the software company investigated in C1, an internal wiki acts as a mindtool that reveals and relates the thoughts of different people. Originally implemented to improve access to documented knowledge, the wiki additionally provides software support that enables collaborative learning processes in the early phases of software development. When the wiki was introduced, management acted as a claimant, influencing employees to externalize their ideas and problem solutions in the form of wiki entries. Workshops on specific topics have proven to de-freeze organizational thought patterns, as they connect employees with different perspectives and opinions. The wiki-based distribution of ideas among the organization's members uncovers different perspectives, fostering divergent thinking during work and preparing employees for constructive discussion in project meetings. In C3, a wiki and a competition-based idea management scheme are used to collect innovative ideas and to put them into practice via discussion and reflection. As an effect, employees' attitudes towards continuous improvement and open-mindedness towards organizational changes are fostered. De-freezing thought patterns by means of the wiki is driven by the complementary processes of accommodation and assimilation [32]. Revealing different perspectives in the form of wiki entries positively affects interpersonal conflicts at a cognitive level. If the different perspectives of individuals come into conflict, accommodative processes become operative: an existing conception of a particular problem gets extended and differentiated.
Positive Attitude towards Knowledge Maturing Itself. In C7, senior management actively communicated interest in the prospects of KM. Through guidance by supervisors, the number of KM-related ideas and projects grew. The attitude of supervisors and (middle) management of the organization was also positively affected, resulting in evolving projects related to knowledge management. In this respect, senior management enabled middle and lower-level managers to act as claimants for the further development of selected ideas. In C6, a positive attitude towards KM was evident, expressed by both staff and senior management. In particular, an effort was made by fostering topics through community of practice meetings and by fostering reflection through enabling purpose-oriented task groups. That an individual's attitude affects his or her intention to act is also discussed in the literature. Gee-Woo, Zmud [33] show that attitudes affect individuals' intention to share knowledge. They relied on the theory of reasoned action [34], which states that an individual's decision to engage in a specific behavior is determined by his or her intention to perform that behavior, which in turn is also determined by his or her attitude towards it.
Improved Accessibility of Knowledge. In C3, it was possible to improve access to documented knowledge through a company-wide wiki, which is available to all employees and contains unrestricted information about all business activities of the company, allowing quick access to information. A positive effect
arose through allocating competence in projects appropriately, through fostering topics by conducting community of practice meetings and by fostering reflection by enabling purpose-oriented task groups. In C6, knowledge bases and Web 2.0 tools were perceived as essential for enabling collaborative learning by fostering the exchange of information about project- and company-related aspects. The organization improved
access to documented knowledge by making project-related information accessible to
other departments. The accessibility of knowledge is subject to many individual, organizational and technical obstacles [35]. Besides general cultural and hierarchical issues [31], the quality of social networks is a key factor for the ability to interact with others and therefore to gain access to others' knowledge [36].
Strengthened Informal Relationships. The organization in case C2 provides office spaces for flexible use by its employees. Employees are encouraged to choose office spaces close to colleagues with whom they often need to communicate or from whom they want to learn. Hence, employees got to know more colleagues, which improved their social networks and helped build (informal) relationships. Improved communication channels meant quicker and less bureaucratic answers, so that employees are able to ask directly for comments on issues. This is
also supported by the organization fostering topics by allowing employees to conduct
community of practice meetings that took place between a number of employees
working on similar topics and exchanging lessons learnt and best practices. In C7,
workshops on specific topics are conducted to foster the creation of informal relation-
ships. Supervisors of different departments or subsidiaries meet regularly to identify
employees who might have similar interests/roles/tasks. During one-day workshops, a
topic is further developed, participants get to know each other and build informal
relationships. Informal relationships are generally considered to be important for informal learning in organizations. The success of the activities that are mainly responsible for informal learning (participation in group activities, working alongside others, tackling challenging tasks and working with clients) is dependent on the quality of relationships in the workplace [13]. These informal relationships can be seen as individual social capital, which is considered to be the basis of the social capital of the organization [37].
Availability of Different Channels for Sharing Knowledge. In C7, IT-related and
organizational measures are performed to provide different channels for sharing
knowledge. From an IT perspective, all subsidiaries are connected via a network and employees are equipped with laptops to improve access to documented knowledge, as well as with cell phones and software and hardware for conducting voice and video calls via the intranet and Internet, hence providing software support for collaborative learning. From an organizational perspective, knowledge transfer between different levels of the hierarchy is supported by an established procedure of regular team meetings. Furthermore, workshops on specific topics are used as another medium for sharing knowledge.
In C4, IT-related and cross-organizational measures were used to provide a range of
different channels for sharing knowledge. Use of collaborative software and providing
spaces for cross-organizational meetings represented the provision of tools and ser-
vices for creating, presenting, discussing, tagging and collecting resources enabling
synchronous and asynchronous sharing of information. Setting up measures aimed at this effect aligns well with the implications drawn by [38], who emphasizes that different channels are needed for sharing different types of knowledge.
Improved Quality of Workflows, Tasks or Processes. In C3, the only case study that provided us with evidence on this effect, there exists a variety of activities to improve the clients' business processes according to a Business Process Model and a
best practice model. The best practice database maintained by the organization pro-
vides a collection of process-descriptions that were approved according to quality
assurance procedures. The transition (i.e. the outsourcing) of business processes of the
organization’s clients is organized according to a highly formalized procedure that is
based on experiences of former engagements with other clients. For quality analysis
of the revised business processes, key performance indicators are provided. These
enable performing benchmarks across similar business processes at different clients
and were used to identify differences in the performance of different projects with
regard to efficiency, effectiveness, value, control, etc. Using best practices as an instrument for transferring knowledge between individuals in an organization in order to improve organizational processes is also mentioned, for example, in [1].

5 Discussion and Limitations

An aggregated view of levers and sparked effects is provided in figure 1. The outer
columns depict the levers, grouped according to the five dimensions. The middle col-
umn presents the seven effects, mapped to individual, community and organizational
level. The arrows represent selected relationships between levers and effects and reflect
the stories described in section 4. The levers and effects we suggest here may be misunderstood as simple cause-and-effect relationships. The case descriptions in section 4 show, though, that the levers form an intricate network of cause-and-effect relationships, each of which depends on the other measures that have been taken. In this sense, all measures need to be carefully designed in coordination with the other levers.

Fig. 1. Levers, effects and relationships
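The mapping depicted in figure 1 can also be read as a small bipartite graph between levers and effects. As a rough illustration only, the sketch below encodes a subset of the lever-effect relationships reported in the stories of section 4; the data structure, function names and encoding are ours, not part of the study's instruments.

```python
# Illustrative sketch: a subset of the lever-effect relationships reported in
# the case stories of section 4, encoded as effect -> [(lever, case)] pairs.
# The encoding and names are ours, not part of the study's instruments.

RELATIONSHIPS = {
    "increased willingness to share knowledge": [
        ("fostering communities of practice", "C6"),
        ("enabling collaborative learning", "C6"),
        ("fostering communities of practice", "C2"),
    ],
    "openness to change": [
        ("improving access to documented knowledge", "C1"),
        ("acting as 'claimant'", "C1"),
        ("conducting workshops on specific topics", "C1"),
        ("fostering competition-based idea management", "C3"),
    ],
    "strengthened informal relationships": [
        ("providing a flexible working space", "C2"),
        ("fostering communities of practice", "C2"),
        ("conducting workshops on specific topics", "C7"),
    ],
    "improved quality of workflows, tasks or processes": [
        ("maintaining a best practice database", "C3"),
        ("performing benchmarks", "C3"),
    ],
}


def levers_for(effect: str) -> list[str]:
    """Return the distinct levers reported as sparking a given effect."""
    return sorted({lever for lever, _case in RELATIONSHIPS.get(effect, [])})


def effects_of(lever: str) -> list[str]:
    """Return the effects that a given lever was perceived to cause."""
    return sorted(
        effect
        for effect, pairs in RELATIONSHIPS.items()
        if any(name == lever for name, _case in pairs)
    )
```

For example, `effects_of("conducting workshops on specific topics")` yields both "openness to change" and "strengthened informal relationships", reflecting the point made above that one lever can participate in several cause-and-effect relationships at once.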


Organizations perceiving themselves as successful support persistent collective learning across individual, community and organizational levels. The levers and arrangements that establish persistent learning across these levels can also be viewed to
move a collective such as a community or an entire organization between different
poles, e.g., (a) participation and reification, (b) togetherness and separation, (c) indi-
vidual and group [19], (d) grassroots developments and organizational guidance as
well as (e) opening up and filtering. The studied organizations seem to be successful
in bridging these typical polarities. For example, cases C1, C3 and C6 illustrate bridg-
ing participation and reification (a) when they combine improvements of the accessi-
bility of knowledge through databases or wikis (reification) with formal and informal
COP meetings and topical workshops (participation). C7 illustrates bridging together-
ness and separation (b) by offering a broad range of synchronous and asynchronous
means of communication. And the focus on flexible working and seating arrange-
ments in C2 nicely illustrates flexibly balancing individual and group focus (c).
Concerning the poles organizational guidance and grassroots developments (d),
various measures of organizational guidance had an important role in aligning and
structuring organizational practices and processes, and can be a result of formal organizational arrangements as well as a consequence of informal leadership, e.g. through
installing best practice guidelines in C3. At the same time, these same organizations
strike a balance with measures that allow grassroots developments and the emergence
of new ideas, e.g. with idea competition also in C3. Levers and effects on opening up
and filtering (e) also resonate with the polarity between diverging and converging
ideas, e.g., in C1 and C3. Knowledge seems to mature along a meandering process
between these poles, starting out with opening up for new ideas, filtering those that
are handed on to a community, opening up in the community for developing them
evolutionarily, filtering those that are formalized into boundary objects, opening up
for a competition of good practices identified in several communities and filtering
those that are institutionalized as organizational processes.
We did not specifically ask about roles that are seen as supportive for KM. However, the levers need to be handled by people, and a number of roles were explicitly described in the interviews. The role of promoter was mentioned, stressing the importance of having management support for levers; moreover, management involvement in levers was highlighted in several cases and is also reflected by some levers, e.g., installing one supervisor for teams in different subsidiaries and offering guidance by supervisors and management. The only role that was directly mentioned in one case
was the ‘claimant’. Furthermore, we found evidence for people acting as boundary
spanners in several cases. These roles are formally implemented in the organization,
for example in case of one supervisor for teams in different subsidiaries where a sin-
gle employee functions as a boundary spanner. In contrast, the role of a ‘claimant’ is
performed voluntarily without any formal implementation. Interestingly, no dedicated
knowledge management roles, of the type outlined for example by Davenport and
Prusak [39], were mentioned. After a period of heightened attention to the institutionalization of knowledge management in projects or separate departments, these dedicated organizational units seem not to play an important role in leveraging resources for individual, community and organizational learning. Instead, every employee was
seen to be responsible for handling knowledge efficiently, and differences within this egalitarian take on knowledge management can be attributed to the primary roles that employees play with respect to the business processes and work practices performed in the organizations.
Although we relied on a sound method and compared the results in a comprehensive cross-case analysis, a few limitations need to be acknowledged. Generally, the limitations are in line with those of comparable empirical studies using purposeful and convenience sampling [28], interviews and observations for data collection, and qualitative methods for data analysis. As the number of seven cases is small, the results
of the study are not representative. However, the topics of this study were developed based upon the results of a previous study, which involved 139 organizations throughout Europe [23]. Each case study aimed at (parts of) organizations, and only a limited
number of participants could take part in the study. In this respect, the participants’
personal scopes (e.g., responsibilities, interests) may have influenced their percep-
tions. However, even selecting one person representing a whole organization is a
common practice in business and management studies [40]. We relied on at least two
interviews per case and selected only interviewees who had a good command of
knowledge and learning management in their organization and had gained experience
through work based on offering and applying expertise in different organizational settings. By following these selection criteria, we ensured that we gained multiple perspectives on the state of play of performed or planned levers intended to positively affect KM.

6 Conclusions and Future Work


This paper presents the results of seven case studies from four countries using a
common set of instruments in order to explore potentials of deliberately applicable
technological and organizational arrangements and perceived effects of these levers
on individual and organizational learning. The validity of these claims rests on select-
ing cases that were previously identified as perceiving themselves as particularly
successful. The strength of rich stories gathered in interviews conducted directly at the workplaces of multiple carefully selected individuals per setting is considered a
vital aspect for understanding KM processes. Yet the focus on seven cases means that
the ways the processes operate in the different contexts are necessarily underplayed.
This provides an avenue for future research: testing, on the one hand, the validity of the levers and effects across organizational settings and, on the other hand, investigating what contextual factors explain differences in the effectiveness of technological and organizational arrangements between organizational settings. The stories report on
levers and the effects they spark on learning on an individual, community and organi-
zational level and thus help organizations to select concrete measures to improve
individual to organizational learning that are postulated as beneficial if not necessary
in a number of theories and models [10-12]. The identification of a temporal order of
how to introduce such arrangements of levers that fit well together and ideally intensi-
fy their positive effects as well as more in-depth knowledge about how to navigate
communities and organizational knowledge bases between the identified poles are
further encouraging aspects to be covered in future work.

Acknowledgement. This work was co-funded by the European Commission under
the Information and Communication Technologies (ICT) theme of the 7th Framework
Programme (FP7) within the Integrating Project MATURE (contract no. 216356).

References
1. Alavi, M., Leidner, D.E.: Review: Knowledge Management and Knowledge Management
Systems: Conceptual Foundations and Research Issues. MIS Quarterly 25(1), 107–136 (2001)
2. Drucker, P.F.: Landmarks of Tomorrow. Harper, New York (1959)
3. Kelloway, E.K., Barling, J.: Knowledge Work as Organizational Behavior. International
Journal of Management Reviews 2(3), 287–304 (2000)
4. Davis, G.B.: Anytime/Anyplace Computing and the Future of Knowledge Work.
Communications of the ACM 45(12), 67–73 (2002)
5. Schultze, U.: On Knowledge Work. In: Holsapple, C.W. (ed.) Handbook on Knowledge
Management 1 - Knowledge Matters, pp. 43–58. Springer, Berlin (2003)
6. Thomas, D.M., Bostrom, R.P., Gouge, M.: Making Knowledge Work in Virtual Teams.
Communications of the ACM 50(11), 85–90 (2007)
7. Arthur, M.B., DeFillippi, R.J., Lindsay, V.J.: On Being a Knowledge Worker.
Organizational Dynamics 37(4), 365–377 (2008)
8. Blackler, F.: Knowledge, Knowledge Work and Organizations: An Overview and
Interpretation. Organization Studies 16(6), 1021–1046 (1995)
9. Wolff, E.: The Growth of Information Workers. Communications of the ACM 48(10), 37–