VERITAS
Virtual and Augmented Environments and Realistic User
Interactions To achieve Embedded Accessibility DesignS
247765
Status F (Final)
Table of contents
Version History table .......................................................................................... 3
Table of contents ................................................................................................ 4
List of Figures ..................................................................................................... 6
List of Tables ...................................................................................................... 7
Abbreviations List ............................................................................................... 8
Executive summary ............................................................................................ 1
1 Introduction .................................................................................................. 2
1.1. Physical Impairments ............................................................................ 3
1.1.1 Motor Impairments .......................................................................... 3
1.1.2 Visual Impairments ......................................................................... 3
1.1.3 Speech Impairments ....................................................................... 3
1.1.4 Hearing Impairments ...................................................................... 4
1.2. Cognitive Impairments........................................................................... 4
1.3. Behavioural and Psychological Impairments ......................................... 5
1.4. Overview of Virtual Reality Tools .......................................................... 6
1.4.1. Introduction to VR and multi-sensorial systems .............................. 6
1.4.2. Visualization technologies .............................................................. 6
1.4.3. Sound in Virtual Environments........................................................ 8
1.4.4. Haptic Sensorial Channel ............................................................. 10
1.4.5. Devices for Interaction .................................................................. 11
2 Specification of Interaction Tools ............................................................... 14
2.1 Physical Interaction Tools ................................................................... 14
2.1.1 Visual Impairment IT ..................................................................... 14
2.1.2 Kinematic Functional Limitation IT Impl.1 ..................................... 18
2.2 Physical IT through innovative VR tools .............................................. 21
2.2.1 Kinematic Functional Limitation: Wearable Vibrotactile ................ 22
2.2.2 Dynamic Functional Limitation IT: Haptics .................................... 24
2.3 Cognitive interaction tool ..................................................................... 27
2.3.1 Conceptual specification of the simulation tool: The n-back task .. 29
2.3.2 Technical specification and integration of the cognitive simulation tool ............. 30
2.4 Behavioural & Psychological interaction tools ..................................... 32
2.5 Specification of the tool for Stress Induction ....................................... 33
2.5.1 Prior to performing the VERITAS tasks: using GUI....................... 33
2.5.2 Specification of the tool for Emotion Elicitation ............................. 36
3 Architecture of the Interaction Tools .......................................................... 39
3.1 Overall considerations ......................................................................... 39
3.2 Visual Impairment IT ........................................................................... 40
3.3 Interaction Manager ............................................................................ 41
List of Figures
Figure 1 Stereo image with polarization filters (left) and 4 wall CAVE
visualization system (right) ................................................................................. 8
Figure 2 Examples of haptic interfaces: Arm Exoskeleton and Hand
Exoskeletons (left) Desktop Haptic interface (right) .......................................... 10
Figure 3 Hands data glove (left) and markers for wrist optical sensor (right) ... 12
Figure 4 Vision cone ......................................................................................... 15
Figure 5 Eye point locations (left, right, middle)................................................ 16
Figure 6 Head orientation, vision line and fixation point ................................... 16
Figure 7 Rotation varying opening angles ........................................................ 17
Figure 8 Scaled human model.......................................................................... 19
Figure 9 Virtual markers ................................................................................... 19
Figure 10 Marker measurement stream (motion capture) ................................ 20
Figure 11 Joint angle limits ............................................................................... 20
Figure 12 First prototype of the vibrotactile bracelet equipped with 4 vibrating
motors without motion tracking functionalities .................................................. 24
Figure 13 GRAB Haptic user interface is able to deliver a force along any
wanted orientation in the 3d space ................................................................... 25
Figure 14 Custom Force-Feedback steering wheel developed by PERCRO
laboratory within the European project VIRTUAL
(http://vrlab.epfl.ch/Projects/virtual.html) .......................................................... 26
Figure 15 LogitechClearChat wireless USB headset with microphone. ............ 31
Figure 16 Graphical user interface of the Montreal Imaging Stress Task (MIST,
Dedovic et al., 2005). From top to bottom, the figure shows the performance
indicators (top arrow = average performance, bottom arrow = individual
subjects performance), the mental arithmetic task, the progress bar reflecting
the imposed time limit, the text field for feedback, and the rotary dial for the
response submission. ....................................................................................... 34
Figure 17 Visual Impairment Tool Architectural Scheme .................................. 42
Figure 18 Kinematic Functional Limitation IT (1) Architectural Scheme .......... 45
Figure 19 Motor Impairment Kinematics Tool Architectural Scheme ............... 47
Figure 20 Dynamic Functional Limitation IT Architectural Scheme .................. 49
Figure 21 Control Functional Limitation IT Architectural Scheme ..................... 51
Figure 22 Cognitive IT Architectural Scheme ................................................... 53
Figure 23 Behavioural Stress Induction IT Architectural Scheme ..................... 55
Figure 24 Behavioural Stress Induction Offline IT Architectural Scheme ........ 56
Figure 25 Behavioural Emotion Elicitation IT Architectural Scheme ................. 58
Figure 26 Behavioural Emotion Elicitation Offline IT Architectural Scheme ..... 59
December 2010 vi PERCRO
VERITAS D2.7.1 PU Grant Agreement # 247765
List of Tables
Table 1 Visual Impairment Tool Data Specification .......................................... 40
Table 2 Interaction Manager Data Specification ............................................... 41
Table 3 Kinematic Functional Limitation It (Impl.1) Data Specification ............. 43
Table 4 Interaction Manager Data Specification ............................................... 44
Table 5 Kinematic Functional Limitation It (Impl.2) Data Specification ............. 46
Table 6 Interaction Manager Data Specification ............................................... 47
Table 7 Dynamic Functional Limitation It Data Specification ........................... 48
Table 8-Interaction Manager Data Specification............................................... 49
Table 9 Control Functional Limitation It Tool Data Specification ...................... 50
Table 10 Interaction Manager Data Specification ............................................. 50
Table 11 Cognitive It Data Specification ........................................................... 52
Table 12 Interaction Manager Data Specification ............................................. 53
Table 13 Behavioural Stress Induction It Data Specification ............................ 54
Table 14 Interaction Manager Data Specification ............................................. 55
Table 15 Behavioural Emotion Elicitation It Data Specification ........................ 57
Table 16 Interaction Manager Data Specification ............................................. 58
Table 17 Interaction Manager Data Specification ............................................. 62
Abbreviations List
Abbreviation Explanation
AD Alzheimer Disease
CAVE Cave Automatic Virtual Environment
CPU Central Processing Unit
CRT Choice Reaction Time
CSV Comma-Separated Values
DOF Degree of Freedom
DoW Description of Work
FMRI Functional Magnetic Resonance Imaging
FFSW Force Feedback Steering Wheel
GUI Graphical User Interface
HI Haptic Interface
HMD Head Mounted Display
HRTF Head Related Transfer Function
ISP Immersive Simulation Platform
IADS International Affective Digital Sounds
IAPS International Affective Picture System
IM Interaction Manager
IT Interaction Tools
LCD Liquid Crystal Display
MIST Montreal Imaging Stress Task
PBM Physical Based Modelling
PD Parkinson Disease
PE Parameter Encoder
PET Positron emission tomography
SAPI Speech Application Programming Interface
VE Virtual Environment
VR Virtual Reality
WM Working Memory
WVB Wireless Vibrotactile Bracelet
Executive summary
This document shall provide an overview of the software architecture and the
specifications of the VERITAS Interaction Tools for Designer Experience that
will be developed within the WP 2.7.
VERITAS Interaction Tools are interfaces developed to enhance the designer's
experience, to the point that he will be able to feel or intuitively understand
the user's disabilities. The main underlying concept is that the novel
interaction tools shall provide either sensorial stimulations altered according
to the disability to be simulated, or other kinds of information able to
reproduce a stress equivalent to that experienced by disabled users. The aim is
to let designers directly experience a specific disability and thereby gain a
deeper awareness and understanding of disabled users' needs, helping them
conceive products with better accessibility and usability. The designer is thus
guided through the accessibility assessment of designed products and services
in an intuitive way, becoming conscious of the user's impairments.
The first section of the document is dedicated to the description of the general
approach that has been assumed for the choice of simulating certain disabilities
and for the definition of the basic functionalities of the different tools. A brief
overview of the different types of disabilities categorized in Physical
Impairments, Cognitive Impairments and Behavioural and Psychological
impairments is presented also in this section.
The second section of this document provides guidelines on how the novel
interaction tools, developed in VERITAS in order to enhance the designers
experience by transmitting information able to simulate or to let them feel a
certain disability, relate to each other and follow a common view and design
guidelines. Detailed specifications are then provided.
The third section of this document is dedicated to the definition of the
software architecture of each interaction tool. It describes the type of data
required by the different tools for interacting with the Immersive Simulation
Platform and the relationship with the Interaction Manager.
For each of the Interaction Tools the scheme of its integration within the
platform is provided.
1 Introduction
In VERITAS SP2, the development of an Immersive Simulation Platform (ISP) is
foreseen, in which the designer will be involved in first person in testing the
designed system or service. She/he will be immersed in a Virtual Environment
system and be able to select the type of user. The designer will then embody
the end-user and will be able to feel the functional limitations associated
with the end-user's characteristics.
The aim of the work developed in WP2.7 is therefore to provide innovative
tools (Interaction Tools, IT) that allow the designer to become conscious of
the end-user's impairments through direct experience.
Since not all impairments can be experienced directly by the designer, we
consider two different approaches for the impairment simulation:
Direct Experience:
A subset of the functional limitations correlated with the impairments
included in the VERITAS platform can be directly replicated within the ISP. In
this case the designer can directly experience the functional limitation that
affects the end-user. An example is the replication of the visual field
limitation, in which the designer is immersed in a virtual scene purposely
modified to replicate the user's visual impairment as closely as possible.
Augmented Experience:
Not all functional limitations can be directly replicated on the designer. For
example, it would not be acceptable to replicate the pain due to arthritis or
similar motor impairments. In many of these cases, however, purposely developed
advanced interaction tools can be used to augment the simulation experience.
The purpose of the interaction tool is to help the designer become conscious
of the user's impairment. Following this approach, for example, the limited
range of motion of an articular joint can be simulated by visualizing warnings
when a joint limit is reached.
lisps are the most common examples. In some rare cases, people are totally
unable to speak (mutism).
New Cognitive ITs for designer experience will aim at altering some of these
cognitive functions in order to reduce the designer's cognitive performance to
the level of the impaired user.
Memory is the ability to store, retain, and recall information, and is thus
also a crucial aspect of cognitive performance. User modelling for the
application domains and disability types within VERITAS requires the following
types of memory to be modelled: semantic, episodic, procedural, and working
memory. Deficits in different memory types are observed in elderly,
Parkinsonian, and Alzheimer patients.
Perception (visual, auditory, and haptic) may be impaired in the elderly. On
the other hand, haptic and auditory perception play a more important role for
vision-impaired people as a substitute for visual perception.
Cognitive flexibility is the ability to switch attention from one aspect of an
object to another, or to shift the attentional set, as investigated in
set-shifting tasks like the Wisconsin Card Sorting Test. Patients with
Parkinson's disease are characterized by a deficit in set shifting, expressed
by a difficulty in suppressing a prepotent response.
New Behavioural and Psychological ITs for designer experience will aim at
altering some of these facets in order to impose on the designer a behavioural
and psychological state comparable to that of the end-user.
Fatigue: disabled and elderly people may get tired more rapidly. To maintain
safety and optimal usability, the user's fatigue should be quantified and
included as a design parameter to be kept below a certain level.
Emotional state: the user's satisfaction is reflected in his or her emotions,
which can be quantified using the techniques of affective computing and
entered as design parameters in order to ensure a positive emotional response
to using the improved products.
For this to happen, a VR environment must obey all the laws of physics that we
are commonly used to and live with daily. A VR system, having to accomplish
functionalities such as the perception of the user's movements, their control,
and the sensorial response, needs a set of technologies, among which we can
find:
3D stereoscopic display
3D sound
The principle of stereoscopy is based on the fact that humans have two eyes,
each giving a different perspective view of the surrounding world. By
combining these two bi-dimensional perspective images, it is possible to
recover information on the missing third dimension (depth).
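The depth recovery mentioned above can be sketched with the standard pinhole-camera relation z = f·b/d (depth from binocular disparity). The numeric values below (focal length, baseline, disparity) are illustrative assumptions, not VERITAS parameters:

```python
# Depth recovery from binocular disparity (pinhole-camera sketch).
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Return the depth z = f * b / d for a matched point pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: focal length 800 px, inter-eye baseline 6.5 cm, disparity 20 px.
z = depth_from_disparity(800.0, 0.065, 20.0)  # -> 2.6 m
```

The smaller the disparity between the two images, the farther away the point: this is exactly the depth cue that stereoscopic displays reproduce.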
Basically, two different hardware technologies are available for the
perception of stereo images: active stereo and passive stereo. In active
stereo, the two images of each frame are projected sequentially; there is
therefore a continuous high-frequency (about 120 Hz) switching between images
for the right eye and images for the left eye. Users wear a special active
device, shutter glasses, which is synchronized with the image switcher and
able to make each lens opaque or transparent. When the image for the right eye
is displayed, the left lens is completely opaque; otherwise it is transparent.
The same happens for the right lens. The human brain receives a sequence of
images, but they are presented and switched so quickly that it perceives them
as simultaneous. In other words, the brain merges the images and can
reconstruct depth from them.
In passive stereo, both images are projected at the same time but, thanks to a
filtering system (the most widely known uses colour filters or polarization
filters, which exploit the phenomenon of light polarization), only the correct
image (Figure 1.1) reaches each eye. There are advantages and disadvantages to
both technologies: active stereo is more expensive and requires dedicated
hardware, while passive stereo presents the problem of ghosting (or stereo
crosstalk), meaning that each eye also perceives a small fraction of the image
intended for the other eye.
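The frame-sequential alternation of active stereo can be sketched as follows; the even/odd eye assignment is an illustrative assumption (at ~120 Hz overall, each eye effectively receives ~60 Hz):

```python
# Frame-sequential (active stereo) scheduling sketch.
def eye_for_frame(frame_index: int) -> str:
    """Even frames go to the left eye, odd frames to the right."""
    return "left" if frame_index % 2 == 0 else "right"

def shutter_state(frame_index: int) -> dict:
    """Shutter glasses: only the lens of the eye being served is
    transparent (True); the other lens is opaque (False)."""
    eye = eye_for_frame(frame_index)
    return {"left": eye == "left", "right": eye == "right"}

sequence = [eye_for_frame(i) for i in range(4)]
```

The synchronization between the image switcher and the glasses corresponds to calling `shutter_state` with the same frame index used for projection.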
The HMD does not suffer from this problem because each eye has an LCD panel
directly in front of it; however, it is expensive and has a limited field of
view. Autostereoscopic displays are 3D displays that do not require wearing
special glasses or helmets, allowing the viewer to see a different perspective
of a scene with each eye. Typically, these displays use lens arrays or special
polarization to direct the left and right images to the appropriate eye. The
drawback of this type of display is that it requires the head to be positioned
within a narrowly controlled space, although recent technological developments
are expanding the space in which the head may be positioned.
Figure 1 Stereo image with polarization filters (left) and 4 wall CAVE visualization system
(right)
In the field of portable devices, retinal displays may represent the near
future of HMD see-through displays. They use scanned-beam technology and
optically guide the computer-generated image directly to the user's eye. The
specially coated ocular piece is optimized to reflect the image into the
user's eye while simultaneously allowing the user to continue to see the
outside world unhindered. Users can adjust the optical focus of the image,
placing it precisely at their working distance. The result is good clarity of
the combined image data and the real world.
In order to produce the right feedback to the sensorial system and allow a
natural interaction with the Virtual Environment (VE), stereoscopy must be
combined with a tracking system that, at every instant, reads the position of
the user's head.
The typical way to recreate a sampled audio field is to convert sound waves
into electrical signals. Depending on the desired level of directionality, it
is possible to have mono recordings (involving only one channel from one
direction), stereo recordings (involving two channels from different
locations/directions) or surround capture (involving more channels).
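As a minimal illustration of directionality over two channels, a constant-power panning law (a common audio technique, assumed here for illustration rather than taken from the VERITAS specification) distributes a mono source between the left and right channels:

```python
import math

# Constant-power stereo panning sketch: a mono source is distributed
# over two channels according to a pan value in [-1, 1].
def pan_gains(pan: float) -> tuple:
    """Return (left_gain, right_gain); pan=-1 is full left, +1 full right.
    The sum of squared gains stays 1, so perceived loudness is constant."""
    theta = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    return (math.cos(theta), math.sin(theta))

left, right = pan_gains(0.0)   # centred source: equal gains on both channels
```

Surround capture generalizes the same idea to more than two channels placed at different directions.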
Without doubt, visual and acoustical rendering are the most important and,
consequently, the most commonly implemented feedbacks in a VR system. Among
the various motivations, it can be considered that a consolidated base of
hardware devices already existed, even if used in other sectors: projectors,
loudspeakers, monitors and headphones were already largely available
independently of VR purposes. An increasing importance, however, is being
assumed by haptic feedback, as haptic interfaces become more affordable and
widespread. Haptic Interfaces (HI), originally created for tele-operation
purposes, are robotic devices able to interactively exert stimuli which induce
tactile perception. The main functionalities of a HI are to exert a well-known
force or torque (achieved by means of actuators) and to acquire the position
of the part of the user's body involved in the interaction (achieved by means
of sensors).
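The sense/actuate loop of a HI is often closed with a simple penalty-based (spring) contact model. The sketch below assumes a flat virtual surface and an illustrative stiffness value; it is not the control law of any specific VERITAS device:

```python
# Haptic rendering sketch: penalty-based contact force for a flat surface.
# The HI reads the probe position (sensors) and commands a force (actuators).
def contact_force(probe_z: float, surface_z: float = 0.0,
                  stiffness: float = 500.0) -> float:
    """Spring force (N, along z) pushing the probe out of the surface;
    zero when the probe is above the surface (no contact)."""
    penetration = surface_z - probe_z
    return stiffness * penetration if penetration > 0 else 0.0

f = contact_force(-0.002)   # 2 mm of penetration
```

In a real device this computation runs in a high-rate loop (typically around 1 kHz) so that contact feels stiff and stable.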
Figure 2 Examples of haptic interfaces: Arm Exoskeleton and Hand Exoskeletons (left)
Desktop Haptic interface (right)
collision detection tasks (which are crucial for the correct interaction among
virtual objects and between the user and the VE) or in deeper calculations on
the dynamics of VE objects.
As PBM usually needs huge amounts of memory and CPU time to achieve an
interactive real-time simulation, the PBM module often runs on a separate
computing node. Generally speaking, all the rendering modules for the various
sensorial channels are considered distinct (even if they run on the same
computing node) and exchange information by means of network communication
(messages) or shared memory. This often leads to redundancy in the database,
because the same object may be modelled in different representations or
resolutions according to the particular rendering module. Moreover, when
several of these modules coexist, the problem of co-locating the stimuli
generation arises: as an example, when a virtual hand visually touches a
virtual object, the possible haptic feedback should be generated at the same
time and with the same properties, so that the user does not perceive an
unnatural or, even worse, disturbing feeling.
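The message-based exchange between rendering modules can be sketched with a simple in-process queue standing in for the network or shared-memory transport; module roles and message fields here are hypothetical:

```python
import queue

# In-process stand-in for the message transport between rendering modules:
# the physics (PBM) node publishes object poses, and each sensorial
# rendering module consumes them into its own representation of the same
# object (hence the data redundancy noted above).
bus = queue.Queue()

def publish_pose(object_id, position):
    """Called by the PBM node after each simulation step."""
    bus.put({"id": object_id, "pos": position})

def consume_pose():
    """Called by a rendering module (visual, haptic, ...)."""
    return bus.get_nowait()

publish_pose("cube", (0.1, 0.0, 0.3))
msg = consume_pose()
```

Keeping the stimuli co-located amounts to ensuring all modules render the same message (the same pose, at the same time) for a given simulation step.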
Motion capture is the process of acquiring spatial data related to the user's
body. The position and posture of the body, fingers and limbs, or the sequence
of these postures combined to form an animation, are acquired by tracking
devices. These devices may acquire single-point 6-DOF data (position and
orientation) or more structured data (such as the joint angles of body
postures). Several technologies exist, based on different physical principles
(magnetic fields, ultrasonic waves, kinematic chains) or on different data
acquisition methodologies (range measurements, marker recognition,
motion-related data). Each one has its pros and cons and, depending on the
specific task, can be used appropriately.
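A single-point 6-DOF sample can be represented minimally as position plus orientation; the roll/pitch/yaw convention below is one common choice, assumed here purely for illustration:

```python
from dataclasses import dataclass

# Minimal container for single-point 6-DOF tracking data: position
# (x, y, z, in metres) plus orientation, here as roll/pitch/yaw in degrees.
@dataclass
class Pose6DOF:
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float

# One hypothetical tracked sample, e.g. of a wrist marker.
sample = Pose6DOF(x=0.2, y=1.1, z=0.4, roll=0.0, pitch=0.0, yaw=90.0)
```

More structured data (full body postures) would instead carry one joint-angle set per articulation.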
low latency (to avoid erroneous behaviours that may even lead to sickness)
low encumbrance (to ensure wearability and avoid hampering the user's motion)
Figure 3 Hands data glove (left) and markers for wrist optical sensor (right)
The acquired efferent data are subsequently processed to allow the actual
interaction, with the user modifying the virtual environment according to
his/her actions. In simpler cases (for instance, updating the point of view)
the process is straightforward. In other cases, the most suitable interaction
metaphors must be carefully chosen to achieve the expected result.
The interaction may be direct (the user performs actions in first person,
seeing his/her real body interacting with the VE) or mediated (either in first
or third person, seeing an avatar).
The tool described in this section focuses on the simulation of visual
impairment and provides the designer with a means to perceive how a
beneficiary suffering from visual impairment may perceive the environment.
[1] See http://en.wikipedia.org/wiki/Visual_impairment
[2] Standard symbol used for testing vision
modelled through a cone with the corresponding opening angle at the eye
position and aligned to the vision line (Fig.2.1). In order to cover
additional aspects such as gaze and visual perception ranges, more general
cones have to be provided, based on opening angles that vary along the cone
rotation around the vision line and on the head orientation.
This geometrical construct is the basis for the visual interaction tool. The
cone is generated from the input parameters eye position, vision line and
opening angles. The opening angles are set by the selected virtual user model,
while the other position parameters have to be provided according to the
designer's behaviour.
The integration of this tool in the immersive platform will support the
following online interaction process:
The designer's eye position and vision line are continuously tracked by a
specific head-eye tracking device.
Based on the tracked eye and vision data, the corresponding vision cones are
visualized in the virtual environment through a specific head-video device.
In addition, the space outside the vision cone is disturbed or hidden
completely.
When this process is applied to the designer acting in the virtual
environment, he can only see and recognize the environment within the vision
range restricted by the selected visual impairment. This allows a direct
experience of these impairments by the designer.
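The restriction of the visible range can be sketched as a point-in-cone test: geometry whose direction from the eye deviates from the vision line by more than the opening angle would be disturbed or hidden. This is a simplified sketch assuming a single cone with a fixed half-angle:

```python
import math

# Decide whether a scene point lies inside the vision cone, so that
# geometry outside the cone can be disturbed or hidden completely.
def inside_cone(eye, vision_dir, point, opening_deg: float) -> bool:
    """vision_dir must be a unit vector; opening_deg is the cone half-angle."""
    v = [p - e for p, e in zip(point, eye)]
    norm = math.sqrt(sum(c * c for c in v))
    if norm == 0.0:
        return True  # the eye position itself is trivially "visible"
    cos_angle = sum(a * b for a, b in zip(v, vision_dir)) / norm
    return cos_angle >= math.cos(math.radians(opening_deg))

# A point nearly straight ahead is inside a 10-degree cone.
ahead = inside_cone((0, 0, 0), (0, 0, 1), (0.1, 0, 2.0), 10.0)
```

In the tool itself the opening angle varies with the rotation angle around the vision line, as described above; the constant angle here is only for illustration.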
(see http://en.wikipedia.org/wiki/Landolt_C)
The coordinates of the left and right eyes, or the middle point between the
eyes, are given (Fig.2.2). At least one eye position has to be provided.
This input parameter is optional and only necessary for specific vision cone
IDs (see below). The orientation is given by the head upward and transversal
vectors (Fig.2.3).
Vision cone ID
This input parameter defines which kind of vision cone should be generated. A
potential list is: acuity, gaze, and visual perception.
Each vision cone requires a specific set of parameters, which control the
geometrical representation of the cone. These parameters are taken from the
given virtual user model. They can be absolute (i.e. opening angle in degrees)
or relative (reduction of the normal/average opening angles in percentage).
Based on the input parameters, the interaction tool performs the following
process:
When the selected vision cone (ID) requires the head orientation, the current
vision line is rotated to be perpendicular to the head upward and transversal
vectors.
The vision cone is generated as a rotation object around the vision line. For
each rotation angle between 0° and 360°, the corresponding opening angle is
extracted from a database according to the selected vision cone ID. The
opening angles are modified according to the given control parameters. The
cone origin is the given eye position and the height is equal to the given
vision line length.
One vision cone for each given eye position (left, right, middle) as a
graphical surface representation (e.g. a B-Spline surface)
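The cone construction described above can be sketched as follows; a constant opening angle stands in for the database lookup keyed by vision cone ID, and the vision line is assumed to lie along +z from the eye position:

```python
import math

# Generate points on the far rim of the vision cone: for each rotation
# angle around the vision line, an opening angle is looked up and a
# surface point is emitted at the given vision-line length.
def cone_points(eye, length: float, opening_deg_for, steps: int = 36):
    pts = []
    for i in range(steps):
        rot = 2.0 * math.pi * i / steps  # rotation angle around vision line
        half_angle = opening_deg_for(math.degrees(rot))
        r = length * math.tan(math.radians(half_angle))
        pts.append((eye[0] + r * math.cos(rot),
                    eye[1] + r * math.sin(rot),
                    eye[2] + length))
    return pts

# Constant 30-degree opening angle as a stand-in for the database lookup.
rim = cone_points((0.0, 0.0, 0.0), 1.0, lambda a: 30.0, steps=4)
```

Varying the returned angle with the rotation angle yields the more general, non-circular cones needed for gaze and visual perception ranges.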
This articulated digital human model is the basis for the mobility interaction
tool. Based on a set of body point positions, the human model is moved
accordingly by inverse kinematics methods. These methods calculate proper
joint angles to match the given body point positions (motion tracking). The
calculation takes the specific joint angle ranges into account. When a motion
forces a joint to move beyond its limits, the joint angle is set to the limit,
which stops this joint movement. The joint angle ranges are set by the
selected virtual user model, while the body point position parameters have to
be provided according to the designer's behaviour. This is done using a motion
capture system to measure marker positions during the motion.
The integration of this tool in the immersive platform will support the
following online interaction process:
Based on the tracked body point positions, the human model is moved
correspondingly and visualized in the virtual environment through a specific
head-video device. The motion calculation takes the given joint angle limits
into account.
In addition, the human model posture is checked for joint angles blocked at
their limits. In this case the joint is marked on the human model
visualization. Alternatively, the blocked joints can be displayed as a list
through the video device.
When this process is applied to the designer acting in the virtual
environment, he sees his virtual body representation moving and receives
visual feedback when his movement is not possible due to the limited mobility.
Through corresponding motion behaviour modifications, the designer can imitate
and experience the restricted motion caused by the mobility impairment.
The digital human model is scaled to represent the designer's kinematics and
body dimensions (Fig.2.5) in order to allow proper motion tracking.
Each joint of the human model is equipped with joint angle limits restricting
the joint kinematics to produce anatomically reasonable motions (Fig.2.8). The
limits can be set to standard values (average people) or to specific values.
These specific values can be provided absolutely or calculated by factors from
the standard values. The absolute values or the factors are extracted from the
selected virtual user model.
Based on the input parameters, the interaction tool performs the following
process:
The current marker position set is provided by the continuous measurement
stream (motion capture).
The digital human model, which is scaled to the designer's body dimensions and
equipped with proper virtual markers, is moved using inverse kinematics. In
the final calculated posture, the virtual marker positions match the measured
marker positions.
When the inverse kinematics method leads to joint angles beyond the
corresponding limits, the joint motions are blocked at these limits. These
blocked joints are marked internally.
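The limit-clamping step of the inverse-kinematics update can be sketched as follows; the joint names and ranges are illustrative, not taken from a VERITAS virtual user model:

```python
# Clamp joint angles to their allowed ranges and mark blocked joints.
def apply_joint_limits(angles: dict, limits: dict):
    """angles: {joint: angle_deg}; limits: {joint: (min_deg, max_deg)}.
    Returns the clamped angles and the list of joints blocked at a limit."""
    clamped, blocked = {}, []
    for joint, angle in angles.items():
        lo, hi = limits[joint]
        if angle < lo or angle > hi:
            blocked.append(joint)  # marked internally for visualization
        clamped[joint] = min(max(angle, lo), hi)
    return clamped, blocked

# Hypothetical case: elbow range reduced to 120 degrees by the user model.
angles, blocked = apply_joint_limits(
    {"elbow": 150.0, "wrist": 10.0},
    {"elbow": (0.0, 120.0), "wrist": (-60.0, 60.0)})
```

The returned `blocked` list corresponds to the joints that would be marked on the human model visualization or listed through the video device.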
Joint Limitations: Many common diseases associated with aging and several
specific pathologies can cause limitations in the movements of the arm, leg,
neck, backbone, etc. Among these, the VERITAS VR tools mainly address arm and
leg movements.
Force Exertion Limitation: Common diseases associated with aging and other
specific pathologies can cause a reduced ability in force exertion. The
VERITAS VR tools will be able to simulate these symptoms, focusing on the
user's arm forces.
Reduced Motor Control: Common diseases associated with aging and other
specific pathologies (for example tremor) can cause a reduced ability to
control the movements of the limbs. The VERITAS VR tools will be able to
simulate these symptoms, focusing on the user's arm movements.
The next sections describe, for each VR device, the details of its principles
and functionality, and its integration in the VERITAS ISP.
In several applications, such devices are arranged in the form of bracelets or
suits and are used to communicate directional information; e.g., while driving
a car, the device can inform the driver about the direction of a possible
imminent collision.
To track the position of the user's wrist in 3D space, in order to provide the
OSP with the information needed to animate the virtual human model.
2.2.1.3 Functionalities:
The Wireless Vibrotactile Bracelet, represented in Fig.2.9 and already in
development at the PERCRO Laboratory [Diederichs2009], is a wireless device
with rechargeable batteries and on-board Bluetooth communication, able to
provide the user with programmable pulsating vibrotactile feedback on multiple
points around the wrist. This device has been applied to support navigation in
real and virtual environments.
Typical physical functional limitations of elderly and disabled users involve
the limitation of joint movements due to arthritis and calcifications, or as a
consequence of injuries. The movements of the joints are limited by the
stiffening of ligaments or by inflammations, and the user feels pain when
reaching certain configurations.
These VERITAS VR devices will focus on the movements of the human arm and leg. Other body parts, such as the neck and hip, are also critical, but their inclusion is outside the application fields considered in the VERITAS project.
The new WVB is being designed so that the designer will receive vibrations around his wrist indicating the progressive approach of the arm to critical positions while performing the tasks required by the activity in the simulation environment.
The WVB will be equipped with a motion tracker that measures in real time the position of the wrist and computes, through an inverse kinematics algorithm, the positions of the shoulder and forearm joints of the virtual user.
Figure 12 First prototype of the vibrotactile bracelet equipped with 4 vibrating motors
without motion tracking functionalities
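The inverse kinematics step can be illustrated, for a simplified planar two-link arm (upper arm length l1, forearm length l2), with the classical law-of-cosines solution; this is a sketch under those simplifying assumptions, not the algorithm actually used by the WVB:

```python
import math

def planar_arm_ik(x, y, l1, l2):
    """Inverse kinematics of a planar two-link arm: given the wrist
    position (x, y) relative to the shoulder, return the shoulder and
    elbow angles in radians (one of the two possible solutions)."""
    d2 = x * x + y * y
    # Law of cosines for the elbow angle; clamp for numerical safety.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow
```

The real tracker works in 3D and must also resolve the redundancy of the human arm, which this planar sketch ignores.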
The VR tool will require the exchange of user impediment information provided
by the Integrated Interaction Tool.
The GRAB device (Fig. 2.10) is a haptic interface able to exert controlled forces on the user's hands and fingers. Its functionalities are basically equivalent to those of the commercial Phantom device from Sensable Corp., but its workspace is much larger and covers almost completely the workspace of the human arm.
2.2.2.3 Functionalities:
The GRAB haptic user interface is a 3-DoF (Degrees of Freedom) haptic device able to exert forces along any 3D direction with a magnitude of up to 5 N. A detailed description of the device is provided in [Aviz03].
Figure 13 The GRAB haptic user interface is able to deliver a force along any desired orientation in the 3D space
This kind of device shows very good performance as far as realism and quality of the feedback are concerned, but obviously lacks versatility. In the case of the FFSW, the application is restricted to a car simulation environment.
In the case of driving a car, the vehicle direction is controlled by applying forces to and controlling the rotation of the steering wheel. In particular, the functional limitations that will be considered are those related to disabilities that cause loss of muscle tone and reduced muscular mass, i.e. typical disabilities associated with aging and with other specific pathologies such as ulnar neuropathy, or arising as a consequence of lordosis.
2.2.2.8 Functionalities
In particular, the FFSW device will allow programming the force response of the steering wheel according to the dynamic model of the simulated car. The force feedback is programmed by default to simulate the behaviour of a real steering wheel; when the designer asks for the simulation of one of the above-mentioned impairments, the force response of the steering wheel is altered. In particular, the steering force will be increased in order to make the designer feel the equivalent effort of the end-user.
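A minimal sketch of how the steering force could be scaled with an impairment level follows; the linear gain and its 3x maximum are assumptions for illustration, not VERITAS specifications:

```python
def steering_force(base_force, impairment_level):
    """Scale the default steering-wheel force feedback so that the
    designer feels the equivalent effort of an impaired end-user.

    impairment_level is in [0, 1], 0 meaning no impairment; the linear
    gain (up to 3x) is purely illustrative."""
    if not 0.0 <= impairment_level <= 1.0:
        raise ValueError("impairment_level must be in [0, 1]")
    return base_force * (1.0 + 2.0 * impairment_level)
```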
The tool requires real-time speed and trajectory data in order to compute the force that has to be provided to the user.
The Cognitive Interaction Tool will address working memory limitations (e.g.,
older people, patients with Parkinson disease (PD) or Alzheimer disease (AD)).
Depending on the disability to be experienced, the tool will artificially impose
cognitive load on the designer to simulate a cognitive restriction. The tool will
further allow pre-selecting and adjusting the degree of cognitive load imposed on the designer, depending on the extent of the respective target disability.
The cognitive disabilities and users that will be addressed in the scope of the
cognitive impairment interaction tool are described in more detail in D1.4.1.
In the scope of the VERITAS use cases, as described so far in the DoW,
cognitive impairments are mentioned in particular within the automotive
application area (here related to reaction time) and concerning the design of
user interfaces in the areas of infotainment, office workspace and personal
healthcare systems. VERITAS allows the simulation of end-user groups and their disabilities. This in turn allows designers to discover errors early and automatically, and to incorporate the gained knowledge about problems and areas for improvement in the respective application areas.
Later testing of developments through the VERITAS simulation platform with a set of virtual users (here in particular through the application of the cognitive user models) will ideally cover a wide range of the target end-user group. Thus, low- and high-fidelity prototypes of accessible, high-quality applications, together with all the issues addressed and identified by VERITAS with regard to the specific cognitively impaired user group, can be tested by developers in a time- and cost-efficient manner.
Considering all the information provided through the VERITAS cognitive user models, use cases and application scenarios, the cognitive simulation tool developed here should focus on the simulation and experience of memory limitations, as these explain most of the characteristics common to many cognitive impairment disabilities. Memory limitations encompass a wide range of the attributes pointed out, such as slow reaction times, difficulties in sustaining attention and orientation issues, as well as problems (on a higher level) in decision-making processes. This is particularly relevant for the application areas described above.
Dual task studies frequently use n-back tasks to examine the influence of
divided attention on a primary task (e.g., Baddeley, Hitch & Allen, 2009;
McKinnon & Moscovitch, 2007). Researchers have also compared verbal and
spatial n-back to determine the involvement of verbal and visuo-spatial
components of WM (e.g., Baddeley et al., 2009). However, n-back tasks
typically involve visual rather than auditory presentation (even for verbal
materials) and, to facilitate data analysis, almost invariably involve button presses as opposed to spoken responses. This prevents the n-back paradigm from
being used in combination with primary tasks that involve vision and action (in
VERITAS for instance the evaluation of user interfaces or simulations of
physical environments). This is a challenge that needs to be addressed through
the cognitive interaction tool.
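The core bookkeeping of an n-back task, identifying the positions at which the current stimulus matches the one presented n steps earlier, can be sketched as follows (illustrative Python; the function name is an assumption):

```python
def nback_targets(stimuli, n):
    """Return the indices at which the current stimulus matches the one
    presented n steps earlier (the targets of an n-back task)."""
    return [i for i in range(n, len(stimuli))
            if stimuli[i] == stimuli[i - n]]
```

For the 2-back digit sequence 7, 2, 5, 2, 9, 2, the targets are at (0-based) indices 3 and 5.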
The software used for recognizing the designer's spoken numbers will use a fixed set of single-token Speech Recognition Grammars 3, one for each of the numerals one to nine. The speech recognizer will be configured to perform recognition only against these grammars, preventing it from trying to match any other possible speech. This should make the software application more robust against recognition errors. Time boundaries are set to determine whether the spoken digit occurred within the parameters of the n-back task being performed.
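The combination of a restricted digit vocabulary and time boundaries can be sketched as follows (illustrative Python; the 2-second window and the function name are assumptions, not the tool's actual parameters):

```python
# The nine single-token digit grammars, as spoken-word tokens.
ALLOWED_DIGITS = {"one", "two", "three", "four", "five",
                  "six", "seven", "eight", "nine"}

def accept_recognition(token, spoken_at, stimulus_at, window=2.0):
    """Accept a recognizer hypothesis only if it is one of the nine
    digit tokens and was spoken within the response window of the
    current stimulus (times in seconds; the 2-second window is an
    illustrative value)."""
    in_vocabulary = token in ALLOWED_DIGITS
    in_window = stimulus_at <= spoken_at <= stimulus_at + window
    return in_vocabulary and in_window
```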
The user interface of the software will offer designers a menu and toolbar with
controls, allowing them to start and stop the n-back process and to configure
the degree of cognitive load to be imposed. The results of the simulation session can be stored for later analysis as a Comma-Separated Values (CSV) text file, a format widely supported by analysis software tools.
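Storing session results in CSV form is straightforward with standard library support; the column layout below is an assumed example, not the tool's actual schema:

```python
import csv
import io

def save_session(rows, fileobj):
    """Write n-back session results as CSV, a format widely supported
    by analysis software tools. Each row: (trial, stimulus, response,
    correct); the column layout is an assumed example."""
    writer = csv.writer(fileobj)
    writer.writerow(["trial", "stimulus", "response", "correct"])
    writer.writerows(rows)

# Writing to an in-memory buffer; the real tool would open a .csv file.
buf = io.StringIO()
save_session([(1, 7, 7, True), (2, 3, 5, False)], buf)
```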
The cognitive interaction tool will be a software application that can be installed
and run on the common operating systems Windows XP, Vista and Windows 7,
and as such can be easily integrated with the other VERITAS interaction tools
(IIT) of the VERITAS platform.
3 See http://en.wikipedia.org/wiki/Speech_Recognition_Grammar_Specification
The Behavioural & Psychological ITs will be devoted to letting the designer experience, via the VERITAS ISP, the expected psychological reactions of VERITAS end-users while performing the tasks considered in the VERITAS scenarios.
Stress, fatigue, motivation, and emotions are closely related to cognition and
influence cognitive performance. Therefore, behavioural and psychological
interaction tools should be closely related to the cognitive interaction tools
developed within A2.7.3.
The emotional state of the user can be manipulated via audiovisual stimuli:
videos (Gross & Levenson, 1995; Kreibig et al., 2007), pictures (e.g. IAPS
images, http://csea.phhp.ufl.edu/media.html), music (Bishop et al. 2009,
Schmidt & Trainor, 2001, Krumhansl, 1997) or affective digitized sounds (IADS,
http://csea.phhp.ufl.edu/media.html). Visual stimuli (videos, pictures) can be
used to elicit emotions in the user prior to performing the VERITAS tasks in the
immersive environment, while auditory stimulation (music) can be used also in
the immersive environment while performing the VERITAS tasks.
In view of the above, the behavioural and psychological interaction tools should
mainly provide means for inducing mental stress and manipulating the
emotional state of the user (emotion elicitation).
- Perceptual tunnelling (Wickens et al., 1998; Wickens & Hollands, 2000), which
manifests itself in constriction of the effective visual perceptual field while the
items in the periphery are less attended. The focus is usually on the perceived
stressor and other items (e.g. at the periphery of the visual field) are less
available to cognitive processing.
- Cognitive tunneling (Wickens et al., 1998; Wickens & Hollands, 2000), when a
limited number of possibilities are considered by central cognition.
The capacity of working memory is impaired under stress (Wickens et al., 1998)
as it seems to be less available for saving and rehearsing information, and less
useful for attention-demanding tasks. Wickens et al. (1998) also state that long-
term memory seems to be little affected. The effect of stress on cognition
manifests itself with decreased attention (Hancock, 1986, Wickens et al., 1998).
Figure 16 Graphical user interface of the Montreal Imaging Stress Task (MIST; Dedovic et al., 2005). From top to bottom, the figure shows the performance indicators (top arrow = average performance, bottom arrow = individual subject's performance), the mental arithmetic task, the progress bar reflecting the imposed time limit, the text field for feedback, and the rotary dial for response submission.
In the condition of stress induction, a time limit is enforced for each task; the elapsed time is displayed by a progress bar moving from left to right on the computer screen, with the exact time allowed for each task depending on the user's previous performance. The program can be written in the C# programming language on the Microsoft .NET Framework for Microsoft Windows.
The basic algorithm of the program should create mental arithmetic tasks using up to 4 numbers ranging from 0 to 99 and up to 4 operators (+ for addition, - for subtraction, * for multiplication and / for division). The algorithm should create tasks for which the solution is an integer between 0 and 9, such that a single keystroke is needed for the response.
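A possible sketch of such a task generator, using rejection sampling until the result is a single-digit integer, is shown below (illustrative Python, not the C# program described above; the function name is an assumption):

```python
import random

def make_mist_task(rng=None):
    """Generate a MIST-style mental arithmetic task: up to 4 numbers in
    0..99 combined with +, -, *, /, resampled until the result is an
    integer between 0 and 9 (single-keystroke answer)."""
    rng = rng or random.Random()
    ops = ["+", "-", "*", "/"]
    while True:
        count = rng.randint(2, 4)
        nums = [rng.randint(0, 99) for _ in range(count)]
        expr = str(nums[0])
        for num in nums[1:]:
            expr += rng.choice(ops) + str(num)
        try:
            # Safe here: expr contains only digits and + - * /.
            value = eval(expr)
        except ZeroDivisionError:
            continue
        if value == int(value) and 0 <= value <= 9:
            return expr, int(value)
```

Note that this sketch applies standard operator precedence; whether the original MIST evaluates expressions left to right or by precedence is not specified here.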
The user selects a number on the rotary dial either by pressing the left or right arrow keys on the keyboard or by pressing the left or right mouse buttons. Pressing the left arrow key or left mouse button moves the highlighted number on the rotary dial of the program's user interface counter-clockwise, whereas pressing the right arrow key or mouse button moves it clockwise (see Fig. 2.13). Pressing the down arrow key or the middle mouse button submits the highlighted number on the rotary dial as the subject's response to the arithmetic task. This response is then compared with the correct answer for the task, and the appropriate feedback (correct or incorrect) is presented in the feedback field on the computer screen. If no response is recorded within the time limit, the response timeout is displayed.
During each session of stress induction, the program sets a time limit that is 10% less than the user's average response time; this approach induces a high failure rate. In addition, the program continuously records the subject's average response time and the number of correct responses. If the user answers a series of 3 consecutive mental arithmetic tasks correctly, the program reduces the time limit to 10% less than the average time for the 3 correctly solved tasks. Conversely, if the user answers a series of 3 consecutive tasks incorrectly, the program increases the time limit for the following tasks by 10%. In this way, a range of about 20% to 45% correct answers is enforced.
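The adaptive rule can be sketched as a small controller function (illustrative; the function name and the exact points at which the update is applied are assumptions):

```python
def update_time_limit(avg_response_time, streak_correct, streak_wrong,
                      current_limit):
    """Adapt the MIST time limit (seconds): after 3 consecutive correct
    answers, set it to 10% below the average response time of those
    answers; after 3 consecutive errors, relax the current limit by
    10%; otherwise leave it unchanged."""
    if streak_correct >= 3:
        return 0.9 * avg_response_time
    if streak_wrong >= 3:
        return 1.1 * current_limit
    return current_limit
```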
Individual runs can last between 2 and 6 minutes, as in the original approach for fMRI or PET imaging (Dedovic et al., 2005). During the runs, the colour bar at the top of the screen shows 2 performance indicators: the user's own performance and the average performance of all users. The average-performance arrow appears in the green area, at the right of the screen, while the user's individual performance usually appears in the red area, to the left of the screen. Between the runs, the user can be informed about his or her performance and reminded that the average performance is about 80%-90% correct answers. The user is then reminded that there is a required minimum performance and that his or her individual performance must be close or equal to the average performance of all users so that the simulation can be realistic.
To reduce interference with the VERITAS task, which requires using the keyboard and/or mouse or other input devices of the VR environment, the answer to the arithmetic task should be received via microphone. The experiencing designers have to wear a wireless headset to allow free movement. The designer's microphone picks up what they say, and speech recognition software records each spoken digit with a time stamp, as in subsection 2.2.2.
Further examples on how video can be used for elicitation of negative emotions
can be found, e.g., in (Kreibig et al., 2007) where fear and sadness were
induced using two different film clips for each of the two emotions. The
differences in emotions were reliably detected using multichannel physiological
recordings and discriminant analysis. The clips were about 10-12 minutes long. Exact details on the films and the scenes included in the clips for emotion elicitation can be found in (Kreibig, 2004).
December 2010 36 PERCRO
VERITAS D2.7.1 PU Grant Agreement # 247765
The tool will include a repository of multiple film clips. Each time the interaction
tool is started, a random clip should be presented.
In (Schmidt & Trainor, 2001), the musical stimuli comprised four orchestral
excerpts that reflected different affective valence (i.e., pleasant vs. unpleasant)
and intensity (i.e., intense vs. calm): intense-unpleasant emotion (e.g., fear),
Peter and the Wolf by Prokofiev, wolf excerpt; intense-pleasant (e.g., joy),
Brandenburg Concerto No. 5 by Bach, first movement; calm-pleasant emotion
(e.g., happy), Spring by Vivaldi, second movement; and calm-unpleasant
emotion (e.g., sadness), Adagio by Barber.
In the following, a brief description summarizing the main features of each tool is provided, together with tables listing the main input/output data exchanged and a scheme of the related architecture.
This tool (see figure 3.1) implements a simulation of disabilities related to visual acuity impairment; the simulation is based on the generation of a vision cone, limiting the designer's field of view, whose parameters (see section 2.1.1) are determined as follows:
- the opening angle of the vision cone is computed by the Parameter Encoder after receiving the User Model data;
- the designer's eye position and gaze direction are continuously tracked by a specific head-eye tracking device.
Based on the tracked eye and vision data the designer will perceive the space
outside the vision cone as disturbed or hidden completely; alternatively, the
corresponding vision cone can be visualized in the virtual environment as a
geometrical shape (placed a short distance ahead of the designer in order to
give a visual feedback of the limited field of view). This tool assumes the
existence of a specific purposely developed module in the ISP (Visual Cone
Reshaper), in charge of altering in real-time the displayed portions of the scene
according to the computed vision cone and/or to display the vision cone as a 3D
geometrical shape.
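A point-in-cone test of this kind can be sketched as follows (illustrative Python; the actual Visual Cone Reshaper operates on the rendered scene rather than on individual points):

```python
import math

def inside_vision_cone(point, eye, gaze, opening_angle_deg):
    """Test whether a scene point lies inside the vision cone defined by
    the eye position, the gaze direction and the cone opening angle."""
    v = tuple(p - e for p, e in zip(point, eye))
    norm_v = math.sqrt(sum(c * c for c in v))
    norm_g = math.sqrt(sum(c * c for c in gaze))
    if norm_v == 0.0 or norm_g == 0.0:
        return True  # degenerate case: point at the eye position
    dot = sum(a * b for a, b in zip(v, gaze))
    cos_angle = max(-1.0, min(1.0, dot / (norm_v * norm_g)))
    # Inside if the angle from the gaze axis is within the half-angle.
    return math.degrees(math.acos(cos_angle)) <= opening_angle_deg / 2.0
```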
Data | I/O | Attributes | Class Type | Linked Entity
The role of the IM for this tool is to translate, via the Parameter Encoder, the
User Model data (according to age and disability) into the opening angle of the
visual cone, and to transmit to the tool the data received from the Immersive
Simulation Platform related to the eye (position and orientation) and to the head
(orientation) which are used to completely define the visual cone parameters.
The IM will also transmit to the ISP the visual cone parameters dynamically
computed by the IT.
Data | I/O | Attributes | Class Type | Linked Entity
This tool (see figure 3.2) implements a simulation of disabilities related to limited mobility impairment; the simulation is based on providing a visual feedback when the designer's movements are not possible due to the constraints posed on the kinematic joint angles of the body. The tool, based on the values of the joint angles retrieved by a Motion Capture system, will visually signal a warning when the measured joint angles exceed the allowed ranges. If the ISP operates a Head Mounted Display, these joint angles are used to update the pose of the avatar in accordance with these limits.
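The comparison between measured and allowed joint angles can be sketched as follows (illustrative; the joint names and the dictionary layout are assumptions):

```python
def check_joint_limits(joint_angles, allowed_ranges):
    """Compare measured joint angles (from motion capture) against the
    allowed ranges of the simulated user; return the names of the
    constrained joints so that a visual warning can be raised."""
    return [name for name, angle in joint_angles.items()
            if not allowed_ranges[name][0] <= angle <= allowed_ranges[name][1]]
```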
The tool produces as relevant data for logging, via the IM logger module, the
constrained Joint angles and an ID of the generated warning.
The logger module logs both the warning ID and the constrained joint angles
computed by the IT.
Data | I/O | Attributes | Class Type | Linked Entity
Warning ID | Input / Output | Dynamic | int | IT / ISP, LOG
This tool (see figure 3.3) implements a simulation of disabilities related to limited mobility impairment; the simulation is based on providing a vibration feedback when the designer's movements are not possible due to the constraints posed on the kinematic joint angles of the body. The tool, based on the values of the hand positions retrieved by a Tracking system, generates a vibration on a bracelet worn by the designer whenever the hand position exceeds the allowed range. The intensity and the direction of the vibration can provide additional information related to the amount and direction of the movement overflow.
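The mapping from position overflow to a vibration command can be sketched as follows (illustrative; the saturation distance is an assumed value, not a VERITAS specification):

```python
import math

def vibration_command(hand_pos, limit_pos, max_overflow=0.2):
    """Map the hand-position overflow beyond the allowed limit to a
    vibration intensity (0..1) and a unit direction vector for the
    bracelet. max_overflow (metres) is an illustrative saturation
    distance at which the intensity reaches its maximum."""
    dx = [h - l for h, l in zip(hand_pos, limit_pos)]
    dist = math.sqrt(sum(d * d for d in dx))
    if dist == 0.0:
        return 0.0, (0.0, 0.0, 0.0)
    intensity = min(1.0, dist / max_overflow)
    direction = tuple(d / dist for d in dx)
    return intensity, direction
```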
Data | I/O | Attributes | Class Type | Linked Entity
The logger module stores the resulting steering angle and the computed force
estimation.
Data | I/O | Attributes | Class Type | Linked Entity
Data | I/O | Attributes | Class Type | Linked Entity
3.8 Cognitive IT
* The extra cognitive load is a function not only of the model of the simulated user but also of the cognitive model of the designer; therefore such a characterization of the specific designer should be made available before using the platform. To this purpose, specific activities need to be carried out in order to provide the means to perform this characterization.
Data | I/O | Attributes | Class Type | Linked Entity
The required level of stress can also be induced by making the designer use this tool offline (see figure 3.8), prior to the immersive experience. In this case the tool is self-standing and a typical desktop setup is used; the interaction takes place with standard tools such as a visual GUI and the use of mouse, keyboard and a monitor. The IM is used only to encode parameters and to log relevant data.
Data | I/O | Attributes | Class Type | Linked Entity
The logger module stores the generated formulas, the results and the possible
warning ID.
Data | I/O | Attributes | Class Type | Linked Entity
In the second case, the tool assumes the existence of a database of music fragments, accessible by the ISP Audio Manager, whose playback can be triggered upon reception of the ID of the fragment.
The required level of emotion can also be induced by making the designer use this tool offline (see figure 3.10), prior to the immersive experience. In this case the tool is self-standing and a typical desktop setup is used; the particular emotion is induced by displaying selected videos or images. The IM is used only to encode parameters.
Data | I/O | Attributes | Class Type | Linked Entity
Given the high heterogeneity of the Interaction Tools, it is not possible to realize an actual physical integration of the various tools, also because they are very likely to operate separately from each other. Nevertheless, the Interaction Tools can be thought of as logically integrated through the Interaction Manager, which serves as an abstraction layer to achieve the communication between the Interaction Tools and the Immersive Simulation Platform. In the following, a complete list of the data exchanged between the Interaction Manager, the Interaction Tools and the Immersive Simulation Platform is provided, together with a summary scheme describing the overall logical architecture.
Data | I/O | Attributes | Class Type | Linked Entity
Hand movement direction (if available) | Input | Dynamic | vector | ISP
Visual Cone Parameters | Input / Output | Dynamic | array | IIT / ISP
Constrained joint angles 2 | Input / Output | Dynamic | array | IT / LOG, ISP
Steering angle 2 | Input / Output | Dynamic | float | IT / LOG, ISP (app)
Avatar body parameters 1 | Output | Static | array | IIT
Finger Controlled position | Output | Dynamic | vector | ISP, LOG, IIT
Degree of scrambling 1 | Output | Static | int | IIT
1 These data are gathered in the following scheme under the name Encoded Parameters.
2 These data, when exiting from the IIT, are gathered in the following scheme under the name Computed data.
3 These data, transmitted by the ISP to the IIT via the IM, are gathered in the following scheme under the name Bridged data.
* All the output data linked to the LOG entity are gathered in the following scheme under the name Logged data.
4 Conclusions
This document provides an overview of the software architecture and the specifications of the VERITAS Interaction Tools for Designer Experience that will be developed within WP 2.7.
The VERITAS Interaction Tools are interfaces developed to enhance the designer's experience, to the point that he or she will be able to feel or intuitively understand the user's disabilities. The main underlying concept is that the novel interaction tools shall provide either sensorial stimulation altered according to the disability to be simulated, or other kinds of information able to reproduce a stress equivalent to that experienced by disabled users.
The first section of this document is dedicated to the description of the general approach assumed for the simulation of certain disabilities and to the definition of the basic functionalities of the different tools. The concepts of simulation through Direct Experience and through Augmented Experience are explained, and a brief overview of the different types of disabilities is given through a categorization into Physical Impairments, Cognitive Impairments, and Behavioural and Psychological Impairments.
In the second section, the details related to each novel interaction tool are provided, and the work done for the definition of specifications and functionalities is reported.
A summary table of the different tools and their specification is also provided in
Annex A of this document.
Visual Impairment IT
The tool will also provide a 3rd-person mode that will allow the designer to look at his avatar in the VE from a 3rd-person point of view in order to better analyze the scenario.
Batch mode option: No batch mode; it can run only within the ISP.
Ability to access input & output interfaces: High-level interaction with the ISP; direct access to interfaces is not foreseen.
Short description: The tool is the first of the two kinds of tools that focus on the simulation of kinematic functional limitations; it provides the designer a means to intuitively perceive the limitations in the articular motion of the impaired user. The tool is developed to be implemented inside an immersive Virtual Environment (VE). The movements of the designer's body will be tracked by a motion tracking system. The designer will receive warnings on the video if he is performing movements that are not compatible with the disability of the selected user.
Batch mode: No batch mode; it can run only within the ISP.
Ability to access input & output interfaces: High-level interaction with the ISP; direct access to interfaces is not foreseen.
Short description: The tool is the second of the two kinds of tools that focus on the simulation of kinematic functional limitations; it provides the designer a means to intuitively perceive the limitations in the articular motion of the impaired user. The tool is developed to be implemented inside an immersive Virtual Environment (VE). The movements of the designer's body will be tracked by a motion tracking system. A specific software module will constantly keep track of the joint positions of the designer's body and compare them to the joint rotation limitations of a specific beneficiary. When a limitation is reached, a vibratory warning is transmitted to the designer's wrist, indicating also the direction of approach to the joint limitation.
Hardware specifications: Bracelet; wireless Bluetooth connection (50 m range); weight: 100 g.
Ability to access input & output interfaces: Low-level control of the bracelet, imposing the frequency of the pulsating vibration and the duty cycle.
Ability to access input & output interfaces: Low-level control of the force feedback steering wheel.
Hardware specifications: DoF: 3; force: 5 N.
Batch mode: Yes; in batch mode the device will simply impose oscillatory forces, and the trajectory will be recorded and stored.
Ability to access input & output interfaces: Low-level direct access to the GRAB haptic interface.
Literature references or website: http://www.percro.org/index.php?pageId=GRAB&page=desc_1
Table A 6 - Cognitive IT
User interface: Online version: interaction takes place in the ISP through a headset.
User interface: Online version: interaction takes place in the ISP through a headset.
References
Albert, M. L., Feldman, R. G., & Willis, A. L. (1974). The subcortical dementia of
progressive supranuclear palsy. Journal of Neurology, Neurosurgery and
Psychiatry, 37: 121-130.
Bäckman, L., Small, B.J., & Fratiglioni, L. (2001). Stability of the preclinical episodic memory deficit in Alzheimer's disease. Brain, 124(1): 96-102.
Baddeley, A.D., Bressi, S., Della Sala, S., Logie, R., & Spinnler, H. (1991). The decline of working memory in Alzheimer's disease: A longitudinal study. Brain, 114(6): 2521-2542.
Baddeley, A.D., Hitch, G.J., & Allen, R.J. (2009). Working memory and binding in sentence recall. Journal of Memory and Language, 61: 438-456.
Brand, M., Labudda, K., Kalbe, E., Hilker, R., Emmans, D., Fuchs, G., Kessler, J., & Markowitsch, H.J. (2004). Decision-making impairments in patients with Parkinson's disease. Behavioural Neurology, 15(3-4): 77-85.
Cassell, K., Shaw, K., & Stern, G. (1973). A computerized tracking technique for
the assessment of Parkinsonian motor disorders. Brain, 96: 815-826.
Cerella, J. (1990). Aging and information processing rate. In: J.E. Birren, K.W. Schaie (Eds.): Handbook of the psychology of aging, 3rd ed., San Diego: Academic Press, pp. 201-221.
Cerella, J., Poon, L.W., & Williams, D.M. (1980). Age and the complexity hypothesis. In: L.W. Poon (Ed.): Aging in the 1980s, Washington: American Psychological Association, pp. 332-340.
Cronin-Golomb, A., Corkin, S., Rizzo, J.F., Cohen, J., Growdon, J.H., & Banks, K.S. (1991). Visual dysfunction in Alzheimer's disease: relation to normal aging. Ann. Neurol., 29: 41-52.
Delazer, M., Sinz, H., Zamarian, L., & Benke, T. (2007). Decision-making with explicit and stable rules in mild Alzheimer's disease. Neuropsychologia, 45: 1632-1641.
Dewick, H.C., Hanley, J.R., Davies, A.D.M., Playfer, J., & Turnbull, C. (1991). Perception and memory for faces in Parkinson's disease. Neuropsychologia, 29(8): 785-802.
Duncan, J. Emslie, H., Williams, P., Johnson, R., Freer, C. (1996) Intelligence
and the frontal lobe: The organisation of goal-directed behaviour. Cognitive
psychology, 30, 257-303.
Giffard, B., Desgranges, B., Nore-Mary, F., Lalevee, C., de la Sayette, V.,
Pasquier, F., & Eustache, F. (2001). The nature of semantic memory deficits in
Alzheimer's disease: new insights from hyperpriming effects. Brain 124:1522-
1532.
Giovannetti, T., Schmidt, K., Sestito, N., Libon, D., & Gallo, J. (2006). Everyday action in dementia: Evidence for differential deficits in Alzheimer's Disease
Giovannetti, T., Schwartz, M.F., & Buxbaum, L.J. (2007). The Coffee Challenge: A new method for the study of everyday action errors. Journal of Clinical and Experimental Neuropsychology, 29: 609-705.
Giovannetti, T., Bettcher, B. Magouirk, Brennan, L., Libon, D.J., Kessler, R.K., & Duey, K. (2008). Coffee with jelly or unbuttered toast: Omissions and commissions are dissociable aspects of everyday action impairment in Alzheimer's disease. Neuropsychology, 22: 235-245.
Glisky, E.L. (2007). Changes in Cognitive Function in Human Aging In: D.R.
Riddle (Ed.): Brain Aging Models, Methods, and Mechanisms. Frontiers in
Neuroscience, Wake Forest University School of Medicine, Winston-Salem, NC
Boca Raton (FL): CRC Press.
Gordon, B. & Carson, K. (1990). The basis for choice reaction time slowing in Alzheimer's disease. Brain and Cognition, 13: 148-166.
Gunzelmann, G., Gross, J.B., Gluck, K.A., & Dinges, D.F. (2009a). Sleep deprivation and sustained attention performance: Integrating mathematical and cognitive modeling. Cognitive Science, 33: 880-910.
Hayes, A.M., Davidson, M.C., Keele, S.W., & Rafal, R.D. (1998). Toward a functional analysis of the basal ganglia. Journal of Cognitive Neuroscience, 10(2): 178-198.
Heikkilä, V.-M., Turkka, J., Korpelainen, J., Kallanranta, T., & Summala, H. (1998). Decreased driving ability in people with Parkinson's disease. J Neurol Neurosurg Psychiatry, 64: 325-330.
Heim, S., Tschierse, J., Amunts, K., Wilms, M., Vossel, S., Willmes, K.,
Grabowska, A., & Huber, W. (2008). Cognitive subtypes of dyslexia. Acta
Neurobiol Exp (Wars). 68(1): 73-82.
Jaeggi, S.M., Buschkuehl, M., Jonides, J., & Perrig, W.J. (2008). Improving fluid intelligence with training on working memory. Proceedings of the National Academy of Sciences, 105(19): 6829-6833.
Jaeggi, S.M., Buschkuehl, M., Perrig, W. J., Meier, B (2010) The concurrent
validity of the N-back task as a working memory measure. Memory, 18, 394-
412.
Kane, M.J., Conway, A.R., Miura, T.K., & Colflesh, G.J. (2007). Working memory, attentional control, and the N-back task: A question of construct validity. Journal of Experimental Psychology: Learning, Memory and Cognition, 33: 615-622.
Knight, R.G., Godfrey, H.P.D., & Shelton, E.J. (1988). The psychological deficits associated with Parkinson's disease. Clinical Psychology Review, 8: 391-410.
Lima, S.D., Hale, S., & Myerson, J. (1991). How general is general slowing? Evidence from the lexical domain. Psychol Aging, 6: 416-425.
Lovett, M.C. (2002). Modeling selective attention: Not just another model of
Stroop (NJAMOS). Cognitive Systems Research, 3(1): 67-76.
McGuinness, B., Barrett, S.L., Craig, D., Lawson, J., & Passmore, A.P. (2008). Attention deficits in Alzheimer's disease and vascular dementia. J Neurol Neurosurg Psychiatry, 81: 157-159.
Meyer, D.E., Glass, J.M., Mueller, S.T., Seymour, T.L., & Kieras, D.E. (2001).
Executive-process interactive control: A unified computational theory for
answering 20 questions (and more) about cognitive ageing. European Journal
of Cognitive Psychology, 13(1/2): 123-164.
Miyake, A., Friedman, N.P., Emerson, M.J., Witzki, A.H., Howerter, A., &
Wager, T.D. (2000). The unity and diversity of executive functions and their
contributions to complex "frontal lobe" tasks: A latent variable analysis.
Cognitive Psychology, 41(1): 49-100.
Monacelli, A.M., Cushman, L.A., Kavcic, V., & Duffy, C.J. (2003). Spatial
disorientation in Alzheimer's disease: The remembrance of things passed.
Neurology, 61: 1491-1497.
Nystrom, L.E., Braver, T.S., Sabb, F.W., Delgado, M.R., Noll, D.C., & Cohen, J.D.
(2000). Working memory for letters, shapes, and locations: fMRI evidence
against stimulus-based regional organisation in human prefrontal cortex.
Neuroimage, 11: 424-446.
Owen, A.M., McMillan, K.M., Laird, A.R., & Bullmore, E. (2005). N-back working
memory paradigm: A meta-analysis of normative functional neuroimaging
studies. Human Brain Mapping, 25: 46-59.
Paas, F., Renkl, A., & Sweller, J. (2003). Cognitive load theory and instructional
design: Recent developments. Educational Psychologist, 38(1): 1-4.
Perry, M.E., McDonald, C.R., Hagler, D.J., Gharapetian, L., Kuperman, J.M.,
Koyama, A.K., Dale, A.M., & McEvoy, L.K. (2009). White matter tracts associated
with set-shifting in healthy aging. Neuropsychologia, 47: 2835-2842.
Perry, R.J. & Hodges, J.R. (1999). Attention and executive deficits in
Alzheimer's disease: A critical review. Brain, 122: 383-404.
Pignatti, R., Rabuffetti, M., Imbornone, E., Mantovani, F., Alberoni, M., Farina,
E., & Canal, N. (2005). Specific impairments of selective attention in mild
Alzheimer's disease. Journal of Clinical and Experimental Neuropsychology,
27: 436-448.
Schwartz, M.F., Montgomery, M.W., Buxbaum, L.J., Lee, S.S., Carew, T.G.,
Coslett, H.B., et al. (1998). Naturalistic action impairment in closed head injury.
Neuropsychology, 12: 13-28.
Smith, E.E., Jonides, J., & Koeppe, R.A. (1996). Dissociating verbal and spatial
working memory using PET. Cerebral Cortex, 6: 11-20.
Stemme, A., Deco, G., & Busch, A. (2007). The neuronal dynamics underlying
cognitive flexibility in set shifting tasks. J Comput Neurosci, 23: 313-331.
Wilson, R.S., Kaszniak, A.W., Klawans, H.L., & Garron, D.C. (1980). High
speed memory scanning in Parkinsonism. Cortex, 16: 67-72.
Wood, J.M. & Troutbeck, R. (1995). Elderly drivers and simulated visual
impairment. Opt Vis Sci, 72(2): 115-124.
ID1.3.1: Report on the study of the state of the art of human physical models
ID1.4.1: Report on the study of the state of the art of cognitive models
ID1.5.1: Report on the study of the state of the art of behavioural and
psychological models