Professional documents
Cultural documents
to first-person I/we statements.
Observations of people doing the tasks that the intended system will support.
Secondary Evidence - Instructions from a project manager: We want the system
to allow those people to do these tasks in this way [3rd-person statements].
Reports from people who have gathered primary evidence. They may have
filtered the evidence, merged interviews with many users, or biased the report
due to their own preconceptions of the problem.
Contents of the project plan. Plans are sometimes developed at a high level by
people who do not necessarily understand the tasks and issues.
Functional requirements - these deal with the functions of a product/system,
showing what it must be able to do and possibly how it performs these
functions.
Non-functional requirements - these sometimes deal with the wider context of
using the product/system, for example its usability over a particular range of
users.
Multiple stakeholders - There are often multiple stakeholders involved in
developing an interactive system.
They might have similar requirements
They might have differing sets of requirements, depending on the role they have
in the interactive system
They might have conflicting requirements
They might not fully understand what the system is intended to do
They might not fully understand the implications of the requirements that they
themselves are asking for
Evaluation in HCI - Evaluation is a key part of the process of developing interactive
systems and also a key part of conducting HCI experiments. What sort of
evaluation you do and how you go about doing it depends on the overall context
of your work and on the particular question you are trying to answer. An
important aspect is that you are dealing with unpredictable humans who will be
using your interactive system.
Evaluation - Creating a deep understanding of the evaluation situation involves
(among other things) deciding:
What specific things do you want to evaluate?
Will you do an expert heuristic evaluation or will you have participants test
your system?
How large (and expensive) should the evaluation be?
Who will be your participants; will they be laboratory rats or intelligent
participants?
For each of these situations, discuss:
What do we want to find out?
What evaluation activity will we do?
What data will we collect?
How will we process that data?
What might we do next as a result of that data?
Are there any special issues involved in what you propose to do?
What will your proposed evaluation cost?
Many ways to evaluate in HCI - Examples:
1. Checking that your first working prototype matches what they said they want,
then what they actually want [medical records]
2. Investigating if your interaction idea actually works in an application context
[freeze-frame]
3. Deciding which is the better of two different designs for a particular interaction
[wheelchair]
4. Testing whether two quite different methods achieve the same outcome
[asthma]
5. Finding out if a new technology can be applied to a specific field of work
6. Conducting an early-stage trial (could be called a pilot study) of an
almost-complete system [does it work for real, what do the participants think of it,
what have we discovered RCH]
7. Conducting an evaluation of an operational system [how well does it work, what
have we discovered]
Staying with the concept of the big picture:
When we evaluate an HCI system or component we are essentially conducting an
HCI experiment
We follow, either explicitly or implicitly, a standard experimental approach
Aim
Background
Method
Results
Discussion
Conclusion
Randomised Controlled Trials (RCT) - Randomised controlled trials are
considered to be the gold standard for evaluating stable interactive systems
where there are measurable outcomes from using those systems. RCTs can be
expensive to run, so you would normally only run an RCT on a system if there was
real value in having the result.
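To make the "randomised" part concrete, here is a minimal sketch of balanced random assignment of participants to conditions. The function name, condition labels, and participant IDs are invented for illustration; real trials also need blinding, pre-registration, and power analysis.

```python
import random

def assign_conditions(participants, conditions=("control", "treatment"), seed=None):
    """Randomly assign each participant to a condition, keeping group sizes balanced."""
    rng = random.Random(seed)  # seed makes the assignment reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    # Deal the shuffled participants round-robin into the conditions,
    # so group sizes differ by at most one.
    return {p: conditions[i % len(conditions)] for i, p in enumerate(shuffled)}

groups = assign_conditions(["P1", "P2", "P3", "P4", "P5", "P6"], seed=42)
```

Shuffling before a round-robin deal gives both randomness (who ends up where) and balance (equal group sizes), which a plain coin flip per participant would not guarantee.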
Heuristic evaluation - You can think of a heuristic as a design principle
recast as a question in the past tense:
The designer should provide immediate and appropriate feedback whenever the
user interacts with the system
Have the designers provided immediate and appropriate feedback whenever the
user interacts with the system?
You can apply a heuristic evaluation at different stages of the system's
development:
Early design with sketches or paper prototype
Early working prototype
More complete working prototype
Typically, you are looking for problems at the design stage rather than at the late
implementation stage.
Choosing appropriate heuristics.
You can start a heuristic evaluation by deciding which heuristics you want to
consider for that evaluation.
Whether a particular heuristic is appropriate will depend on the application and
also on the state of the development of that application.
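Once the heuristics are chosen, evaluators need a systematic way to record each problem against them. A minimal sketch of such a record-keeping helper, assuming the heuristic names listed below and a 0-4 severity scale as commonly used in heuristic evaluation; the function names and record structure are invented for illustration:

```python
# Candidate heuristics chosen for this evaluation (here: the usual ten).
HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognize, diagnose, and recover from errors",
    "Help and documentation",
]

def record_finding(findings, heuristic, location, note, severity):
    """Append one problem report; severity 0 (not a problem) .. 4 (catastrophe)."""
    if heuristic not in HEURISTICS:
        raise ValueError(f"unknown heuristic: {heuristic}")
    if not 0 <= severity <= 4:
        raise ValueError("severity must be between 0 and 4")
    findings.append({"heuristic": heuristic, "location": location,
                     "note": note, "severity": severity})

def worst_first(findings):
    """Order findings so the most severe problems are addressed first."""
    return sorted(findings, key=lambda f: -f["severity"])
```

Validating the heuristic name on entry keeps every finding traceable to one of the heuristics you decided to apply, which matters when several evaluators merge their reports.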
Usability Heuristics for User Interface Design (Nielsen's ten heuristics)
Visibility of system status - The system should always keep users informed about
what is going on, through appropriate feedback within reasonable time.
Match between system and the real world - The system should speak the users'
language, with words, phrases and concepts familiar to the user, rather than
system-oriented terms. Follow real-world conventions, making information appear
in a natural and logical order.
User control and freedom - Users often choose system functions by mistake and
will need a clearly marked "emergency exit" to leave the unwanted state without
having to go through an extended dialogue. Support undo and redo.
Consistency and standards - Users should not have to wonder whether different
words, situations, or actions mean the same thing. Follow platform conventions.
Error prevention - Even better than good error messages is a careful design which
prevents a problem from occurring in the first place. Either eliminate error-prone
conditions or check for them and present users with a confirmation option before
they commit to the action.
Recognition rather than recall - Minimize the user's memory load by making
objects, actions, and options visible. The user should not have to remember
information from one part of the dialogue to another. Instructions for use of the
system should be visible or easily retrievable whenever appropriate.
Flexibility and efficiency of use - Accelerators -- unseen by the novice user -- may
often speed up the interaction for the expert user such that the system can cater
to both inexperienced and experienced users. Allow users to tailor frequent
actions.
Aesthetic and minimalist design - Dialogues should not contain information which
is irrelevant or rarely needed. Every extra unit of information in a dialogue
competes with the relevant units of information and diminishes their relative
visibility.
Help users recognize, diagnose, and recover from errors - Error messages should
be expressed in plain language (no codes), precisely indicate the problem, and
constructively suggest a solution.
Help and documentation - Even though it is better if the system can be used
without documentation, it may be necessary to provide help and documentation.
Any such information should be easy to search, focused on the user's task, list
concrete steps to be carried out, and not be too large.
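One of the heuristics above, "User control and freedom", asks for undo and redo. A minimal two-stack sketch shows one common way to implement it; the Editor class is invented for illustration, and real systems typically store commands or diffs rather than whole snapshots:

```python
class Editor:
    """Toy text editor with undo/redo implemented as two stacks of snapshots."""

    def __init__(self):
        self.text = ""
        self._undo = []  # past states, most recent last
        self._redo = []  # undone states, available for redo

    def type(self, s):
        self._undo.append(self.text)  # save the state before the change
        self._redo.clear()            # a new action invalidates redo history
        self.text += s

    def undo(self):
        if self._undo:
            self._redo.append(self.text)
            self.text = self._undo.pop()

    def redo(self):
        if self._redo:
            self._undo.append(self.text)
            self.text = self._redo.pop()
```

The key design point, clearing the redo stack on every new action, is what gives users the "clearly marked emergency exit": undo always steps back through exactly what they did, with no surprising branches.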
Qualitative data analysis
Looking for concepts in your data (codes)
Looking for common concepts across many items of data. Example: if you
interview many people and most of them mention a specific thing then that
specific thing would be something you pay attention to.
Looking for unusual concepts. Example: in interviews many people might like a
particular thing but one person strongly dislikes it. Find out why this is so.
Keep track of your data and your analysis in a systematic way
Some software tools let you link all the stages of qualitative data analysis back to
the original data so that you can check and reread
Progressively write about your understanding of the data (memos) so that you
don't forget and so that you can show them to other people
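The "common concepts" and "unusual concepts" steps above can be sketched as a simple frequency count over hand-coded interviews. The codes and participants here are invented for illustration; real analysis tools do far more, but the counting idea is the same:

```python
from collections import Counter

# Each interview has been coded by hand into a list of concept labels.
coded_interviews = {
    "participant_1": ["navigation_confusing", "likes_search", "slow_login"],
    "participant_2": ["likes_search", "slow_login"],
    "participant_3": ["slow_login", "wants_dark_mode"],
}

# Count how many participants mention each code (set() so a participant
# who repeats a code still counts once).
mentions = Counter(code for codes in coded_interviews.values() for code in set(codes))

# Codes raised by several participants are the common concepts to pay
# attention to; codes raised by only one person are the unusual concepts
# worth following up individually.
common = [c for c, n in mentions.items() if n >= 2]
unusual = [c for c, n in mentions.items() if n == 1]
```

Counting participants rather than raw mentions keeps one talkative interviewee from dominating the picture, which mirrors the advice above about attending to what most people say while still chasing up the outliers.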
Interaction - Creating a deep understanding of the interaction situation involves
(among other things) understanding:
The context of the interaction
The robustness of the situation
The types of users and their mental models of what they are doing.
Study the project plan/design brief/written or spoken instructions
Research what other people have done
Explore the big picture
Pay attention to the users: observe them, interview them, study their
demographics
Refine and test your requirements
Iterate over prototypes, testing them with actual users
Communication - Treat both your design work and your evaluation work as
exercises in communication.
Tailor what you write/show/draw to your particular audience (colleagues, other
designers, implementers, bosses, clients)
Structure your work so that it flows logically, everything you say is important and
it matches the style that your audience expects
Proofread, spell-check and grammar-check everything.
Be inventive in the ways that you communicate your work:
Use concrete examples
Demonstrate rather than lecture
Use images and videos
Mobile HCI issues:
Requirements and design for new features and applications (example: cameras
on mobile phones)
Riding the wave of new ways that people will find to use the features that they
find on their mobile computers
Dealing with constraints imposed by mobile computing: small screens, one-handed
touch interactions, indoor/outdoor ambient lighting
Practicalities: limited battery power, intermittent internet connections
Conceptual models: matching features (cameras, boarding passes) to users'
conceptual models or vice versa.
Software development issues:
Specialised operating systems
Focus on apps
Assumptions about being connected to the internet
Assumptions about geolocation information
Low price but large volume sales