Designing for Life
A Human Perspective on Technology Development

Pertti Saariluoma • José J. Cañas • Jaana Leikas
Preface

Technology is only valuable to the extent that it can enhance the quality
of life. When solving complex engineering problems, it is easy to forget
the basic reason why technologies are designed and developed. It is also
easy to forget what actually turns a technical idea into an innovation.
Although designers create technologies, only users can, in the end, decide
what becomes an innovation.
This book was written to expound on the true value of taking human
life as the starting point and focus of technology design. Given the
narrow interpretation of the concept of ‘human’, few established design
practices include the perspective of human life. As information and
communication technologies have proliferated vastly beyond the scope of tools and
become a ubiquitous part of people’s daily lives, the need for proper design
approaches has become obvious. The traditional focus on human-centred
design approaches is increasingly considered insufficient for understand-
ing human–technology interaction (HTI), as it arises from an interest
in technology rather than an interest in humans. It has been recognized
that completely automated technologies do not exist, and designers have
accepted that there will always be a human interacting with technolo-
gies. However, ‘human’ in this sense has meant that a human being, the
user, has been the focus only as a test object for the design and launch of
a desired technology. Accordingly, the most-emphasized aspect of HTI
has been whether people find it easy to use certain products and systems,
as difficult-to-use technology is easily rejected. From this perspective,
the field of technology and innovation design can be seen more as an
engineering discipline in which, for example, psychological theories of
interaction are employed first and foremost as instruments for developing
‘usable’ technology.
There have been attempts to change this practice to make it more
human-centred by introducing the term ‘human-driven design’, where
the starting point in a design should be the human being and his or her
needs, goals, and desires. The focus has been transferred from technology
and ‘quality assurance’ to a more human approach, and the significance
and emphasis of the human sciences in design have increased. Accordingly,
the practical implications of this field (and the resources channelled to it)
have been substantial.
Yet interpretations of the concept of ‘human’ have been narrow, and
perspectives on how to understand people’s needs have been limited—
and have varied depending on the purpose and the interpreter. They have
provided no concepts that would make it possible to study the real needs
of people that could be fulfilled with desirable, sustainable, and ethically
accepted technology.
Why has it been so difficult to design the human dimension of tech-
nology? We argue that this is because the most profound factor of HTI
has been left out of the discussion: people’s everyday lives. Understanding
daily life makes it possible to create a practical methodology that can
be used effectively to design the human dimension of technology, and
to convince business developers of the true value of treating the human
perspective as a cornerstone of the design. Quite simply, this means that
HTI design should be able to perceive, analyse, and design technology
through knowledge of (and for) people’s everyday lives.
Designing technology to improve the quality of human life requires
a multidisciplinary design approach. On one hand, multidisciplinary
teams can give designers with a technical background the opportunity to
better acquaint themselves with human research by working with human
researchers. On the other hand, human researchers should be more aware
of the various roles they can play in the process of designing and develop-
ing new technological solutions for people. Human researchers can be
provided with concepts, facts, methods, and theories that are useful in
many aspects of design.
This book is written for a wide group of professionals who may look at
design through very different types of conceptual lenses. For engineering
designers, it may strengthen their understanding of human research avail-
able for developing HTI design. For human researchers, from psychologists
to physiatrists and from sociologists and anthropologists to artists and
historians, this book should give an idea of how to apply and strengthen
their expertise in the discourses of developing human life by means of
technical design and development. As said, design thinking is a multi-
disciplinary activity that requires different roles to be filled with different
kinds of experts.
After recognizing the multidisciplinary nature of HTI design, the
question is how knowledge about humans can be incorporated into
design processes efficiently. This book introduces a new way of organizing
creative thinking about design, and discusses scientific knowledge about
the human mind as an essential element of design discourses. It reviews
different perspectives of HTI and contributes ideas of life-based design.
It systematizes traditional design discourses by showing that HTI design
thinking must always meet four fundamental design questions:
for offering an inspiring environment for the first author during a visit
there. Without this opportunity, this book would still be unfinished. The
Finnish authors wish to thank the National Technology Agency (Tekes)
for the Theseus I-II and ITEA-2 for Easy Interactions projects, which
facilitated the start of this work. Tekes also supported the BeWell and
Life-Based Design projects, which have been important for develop-
ing the basic conceptions of the book. The work of the Spanish author
has been supported by projects SEJ2007-63850/PSIC and PSI2012-
39246 from the Spanish Ministry of Education and Science, and proj-
ect ProyectoMIO! from the Spanish Ministry of Industry. We thank
the Palgrave Macmillan team, Nicola Jones, Eleanor Christie, Sharla Plant,
Greg Williams, Ramaswamy Parvathy, and Angeline Amrita Stanley for
their invaluable help. We also thank Kelley Friel for correcting the lan-
guage. Finally, we thank Noora Hirvonen for the illustrations.
We dedicate this book to the light nights of Finnish summers and the
gentle Andalusian evenings, when the true Finnish–Spanish cooperation
took place in the heat of a Finnish sauna and in the blue hills of Granada.
Pertti Saariluoma
José J. Cañas
Jaana Leikas
Virrat, Finland
June 2015
Contents
1 Technology in Life 1
Technology in History 5
Technology Emancipates People 7
Risks of Technology 11
Towards a New Interaction Design Culture 14
Human Turn 17
References 18
2 Design Discourses 25
How Researchers and Designers Perceive HTI 26
Anatomy of Paradigms 28
Main Paradigms of HTI Design: An Overview 29
From Paradigms to Research Programmes 34
Organizing HTI 37
The Fundamental Four: The Basic Questions and Discourses 40
References 42
List of Figures
Fig. 1.1 Technology has not only changed the working methods of
housework. It has also changed women’s social position; today
women often work outside the home and men have started to
participate more in housework 9
Fig. 2.1 The basic design discourses and respective research programmes 37
Fig. 3.1 Event paths in an event tree—an example of text processing 54
Fig. 4.1 Using a technical artefact 81
Fig. 4.2 Ponzo illusion 89
Fig. 4.3 The overall structure of human memory 103
Fig. 4.4 The first sentence is easier to remember although the
sentences comprise the same letters 106
Fig. 4.5 Apperception—two persons apperceive and mentally
represent the same visual stimulus in a different way 120
Fig. 5.1 After nuclear accidents people lost their trust in this type of
technology 138
Fig. 6.1 Main components of human action 179
Fig. 6.2 Global explanatory frameworks for the form of life 184
Fig. 6.3 Generation of design ideas from a form of life 189
Fig. 6.4 The palette of human life sciences 192
Fig. 6.5 Life-based design model (Leikas et al. 2013) 195
Fig. 7.1 General explanatory framework 212
List of Tables
1 Technology in Life
tions are becoming more complex, and automatic processes have replaced
many operators. Though creating new artefacts never ceases, it is evident
that the ever-evolving human relationship with technology is becoming
more and more important.
The term ‘technical artefact’ or ‘artefact’ refers to any human-made
object, natural process, or even a modified natural phenomenon that
is used to improve human performance, satisfy some human need, or
improve the quality of life (Simon 1969). An artefact can also be a techni-
cal machine or device that has a human function (Houkes et al. 2011). It
can be a computer program, machine, device, instrument, chemical prod-
uct, or tool. It can also be something that gives subjective enjoyment to
people—a film, a TV programme, or a cigarette. It can even be a natural
phenomenon used by people, like fire or a stone tool. The main requirement is that it helps people pursue their action goals. The term ‘technology’
typically refers to a combination of technical artefacts and their human
uses—that is, what people do with artefacts and how they are organized
around them (Davis et al. 2014; Houkes et al. 2011; Karwowski 2006;
Meier-Oeser 1998; Orlikowski 1991, 2000).
People, and the various roles they play when interacting with technol-
ogy—such as users, consumers, and operators—constitute the human
dimension of technology. HTI in its broadest sense covers all forms and
aspects of interaction between people and technical artefacts; it includes the
roles of designer, business manager, object of action, constructor, and builder.
Although issues related to computational devices dominate today’s
discussion on HTI, traditional mechanical artefacts and their uses are
also relevant for this discussion, since most technical artefacts today have
both computational and mechanical dimensions. The nature of scientific
knowledge in design thinking is changing, and designers and develop-
ers of modern technologies are required to acquire new types of skills.
For centuries, natural science and mathematics have provided the basic
concepts and theories for technology design thinking and innovation,
but they are no longer sufficient in HTI design. The interaction between
humans and new types of technologies must be holistically understood
in all its complexity.
The human dimension is an inevitable part of technology design think-
ing, since technological solutions have direct or indirect links to everyday
life. For example, modern paper machines can dry paper incredibly quickly,
thanks to the invention of the so-called extended nip, a flexible mantle
that sped up the process considerably and improved the quality of paper
(Saariluoma et al. 2006). This invention increased the productivity
of papermaking factories by tens of percentage points, which improved
the quality of life for the users of the paper (e.g., newspaper readers) and
increased investors’ return on investment (ROI).
In engineering, design thinking is conceived as organizing the laws of
nature in a meaningful manner (Pahl et al. 2007). Also, understanding
the human mind and human life is necessary for people working with
technology design. However, the traditional (largely common sense and
user-need based) conceptualization of the human mind and actions is
no longer sufficient. The conceptual structure of design thinking has to
be expanded to also consider other concepts of human research (Davis
et al. 2014; Houkes et al. 2011; Karwowski 2006; Meier-Oeser 1998;
Orlikowski 1991, 2000).
The most important design solutions are based on scientific knowledge
developed in the natural sciences such as mathematics, physics, chemis-
try, and materials science. Since they are able to apply the laws of nature
and to use simulations to study different alternatives, engineers can con-
fidently predict that their design solutions will work in practice. Human
research, by contrast, is not applied as effectively as the laws of natural
science in technology design. Thus, while a design might perform well
technically, users may not be able to (or may not want to) use the artefact,
or simply have no practical use for it.
As the human dimension of technology is becoming more important
in practical design, development, and innovation, it is necessary to turn
one’s attention to human research, that is, the relevant areas of biology,
psychology, and socio-cultural research. Yet natural scientific and human
perspectives on thinking are conceptually different in many respects.
While the natural sciences are dominated by causal explanation, human
research relies on such philosophical and metascientific notions as intention
or understanding (Radnitsky 1968; von Wright 1971).
There has been relatively little communication between the academic
human–computer interaction (HCI) community and industry in the
area of HTI design (Carroll 1997), and researchers in both fields have
Technology in History
The importance of technology in shaping everyday human life has been
understood for a long time (Basalla 1988; Bernal 1969; Marks 1988;
Pfaffenberger 1992). Many historical periods have been named after the
dominant new technologies of their time (Basalla 1988), for example, the
Stone, Bronze, and Iron Ages (Renfrew and Bahn 1991). Similarly, such
terms as the age of steam, the age of electronics, or the age of computing
are also common in characterizing the influence of specific technologies
on society (Basalla 1988; Bernal 1969; Marks 1988; Pfaffenberger 1992;
Williams and Edge 1996).
1 Examples from ancient times are presented in order to illustrate that the nature of many design problems has remained the same over time and in many cultures.
happiness. Many of these restrictions are political and social, addressed
for example by the black Civil Rights movement or the women’s liberation
movement, or by improving nutrition and sanitation. But many can be
solved or improved through
technological advancements. In medicine, for instance, new ways of
treating illnesses require new kinds of technical tools, such as new medi-
cations and medical instruments.
The emancipatory role of technology has been one of the main triggers
that has led many individuals and organizations to focus their efforts on
creating technologies. Decreasing child mortality, illnesses, hunger, and
violence, for example, has been possible with the help of technologies
(Bernal 1969). While child mortality was very high 150 years ago even in
developed countries, it started to rapidly decrease at the end of the nine-
teenth century with improvements in medical understanding, hygiene,
and technology (Wolleswinkel-van den Bosch et al. 1998). Emancipation
in the context of HTI thus refers to the liberation of people by technolog-
ical means from any circumstances that diminish the quality of their lives.
The emancipating role of technology can best be seen in the way it
shapes the world around us. For example, social discourses have been lib-
erated by different kinds of information and communication technolo-
gies (Hansen et al. 2009; Lyytinen and Klein 1985; Ross and Chiasson
2011). In many respects, technologies have made social emancipation
possible (and even necessary) as they have changed people’s everyday lives.
Homes, to give just one example, have undergone a technological revolu-
tion over the last 50–60 years (Cowan 1976, 1985). They have become
‘industrialized’ in the sense that they provide people with different kinds
of technologies that take care of household tasks that were previously
performed by housewives and servants. Along with household technolo-
gies, such fundamental innovations as electric lighting, industrial food
production, medications, radio, phone, and television have all altered the
position of women in society. Thanks to technological and social develop-
ments, women today have the opportunity to educate themselves, work
outside the home and live an economically independent life (Fig. 1.1).
At the same time, the number of childbirths has dramatically decreased
(Subbarao and Raney 1993).
New ICT will play a similar emancipating role in human life to the one
that earlier forms of technologies have played in their time.

Fig. 1.1 Technology has not only changed the working methods of housework. It has also changed women’s social position; today women often work outside the home and men have started to participate more in housework

While we do not know what the exact nature of this technology revolution will
be, it is essential to develop our current scientific and industrial practices
so they provide us with the proper tools for the coming changes. One
sign of change is the political use of new media, which has made possible
certain social changes—both welcome and not—which were impossible
just a few decades ago (Rheingold 2012). Another interesting initiative
is design by social media, for example, an opera composed recently by
an international community.2 It can be expected that more such cultural
and even political changes will emerge due to innovative forms of social
media and new technologies, but to make future emancipation possible,
it is essential to develop our capacity to design improvements.
As can be seen from the opera example, the emancipation enabled by
technology need not only concern issues of primary needs and necessi-
ties or large-scale issues in life. It can also include, for example, issues
related to self-fulfilment and social freedom (Eccles and Wigfield 2002;
Horkheimer 1947; Maslow 1954). One example is finding friends with
similar interests and experiences on social media, that is, emancipation
from loneliness. It can also mean more flexible use of one’s time
thanks to effective tools for distance working. Technology may improve
2 See http://bit.se/3WiRR5
Risks of Technology
Yet technology can emancipate people only when it operates as expected.
However, neither human actions nor technical artefacts are free from
errors or moral problems. The use of technologies may lead to an unfore-
seeable accident or negative side effects, risks, or failures (Dekker 2006;
Perrow 1999; Reason 1990, 1997). They may even have a disastrous
influence on a society or the whole world (Beck 1992, 2008; Giddens
2000). Technologies may also entail moral risks, as they make it possible
for people to harm others. Indeed, the development of technology has
not been automatic and unproblematic; there have been many unwanted
negative consequences and risks. For example, many natural resources,
such as oil and coal, are diminishing due to the increasing need for fos-
sil fuels to run technical inventions, and accessing them has become
increasingly difficult (Aklett et al. 2010; Fantazzini et al. 2011). Another
negative consequence of the growth of technology is environmental pollution;
the accumulation of heavy metals in the earth is becoming increasingly
difficult to deal with (Anderberg et al. 2000), and nuclear accidents
such as Chernobyl and Fukushima provide further examples of the risks
associated with technology (Reason 2000). The prevalence of cancer has
increased in those areas due to these accidents (Pflugbeil et al. 2011).
A social consequence of technology is the digital divide, which is
caused by the failure in technology development to sufficiently consider
democratic accessibility and the adoption of products and services (Stahl
2006, 2010). Indeed, there has been much academic discussion about the
growing digital divide (Attewell 2001; Hilbert 2011; Norris 2003), tech-
nological ‘haves’ and ‘have nots’ (Howland 1998), or ‘digital natives and
digital immigrants’ (Prensky 2001). The digital divide is no longer seen
as merely an issue of access to hardware. There is now a growing concern
that the lack of design foresight is creating social exclusion (Bargh and
McKenna 2004). Now more than ever, the unequal adoption of (and
opportunities to access) ICT excludes many from benefiting from its
advantages in many fields of social life (Mancinelli 2008).
As technologies have evolved and their use has qualitatively changed,
the divide is seen as separating users from non-users, and distinguishing
between different types of users. There are now multiple divides that
3 www.cdc.gov/tobacco
Human Turn
The fast development of ICT, the multifunctionality of devices, and
future emerging technologies such as sensors and robots challenge us
to discuss the future scenario from the point of view of human beings.
New ICT issues, such as embeddedness and mobility and the presence of
technology in all aspects of our life, are changing our relationship to the
environment, to other people, and even to ourselves.
Although technologies have always been targeted at people—that is,
to serve and improve people’s lives in some way—there has been little
systematic use of human research knowledge in designing technologies.
The main interest in design has been how to create functional technical
artefacts, which has been based on knowledge of mathematics, physics,
and chemistry as well as the systematic use of the laws of nature (Pahl
et al. 2007).
Today, as technologies have become increasingly complex, designers
have been forced to focus more and more on ensuring that users really
can use the technical artefacts. Ease of use, usefulness and ease of adop-
tion, together with trust, have been found to be important elements of
user satisfaction and acceptance (Davis 1989; Venkatesh 2000). The
increased attention to these basic design questions is not surprising, given
the countless ICT products that have been unable to interest or satisfy
potential users. Too often, products have failed due to poor design in
terms of usability, risk and error prevention, adoption, and innovation.
Throughout history, technology has been a catalyst for human social
development. To make the most of new technological inventions, design-
ers should turn their attention to how their ideas affect people’s lives.
They should acknowledge that they are not merely designing artefacts but
instead different ways for people to live their everyday lives. The gradually
increasing importance of human research in technology design became
more obvious during the Second World War, when such research areas
as ergonomics and human factors were introduced (Karwowski 2006).
With computers and computing, human research has become even more
important than with mechanical technologies. Since user interfaces have
become more complex, usage issues are now an important challenge for
designers. Today there is widespread acknowledgement that technology
should be integrated into everyday life. The gradual refocusing in design
thinking has involved a new ‘human turn’. It cannot replace classical
technical design, but it can refocus design thinking as a whole. Since
technology is to be used by people, it is essential to consider how the new
products and services should be integrated into their lives.
References
Adams, J. L. (1992). Flying buttresses, entropy, O-rings: The world of an engineer.
Cambridge, MA: Harvard University Press.
Adorno, T. W. (1976). The positivist dispute in German sociology. London:
Heinemann.
Aklett, K., Höök, M., Jacobsson, K., Lardelli, M., Snowden, S., & Söderberg, B.
(2010). The peak oil age—Analyzing the world oil production reference sce-
nario in world energy outlook. Energy Policy, 38, 1398–1414.
Anderberg, S., Prieler, S., Olendrzynski, K., & de Bruyn, S. (2000). Old sins.
Tokyo: United Nations University Press.
Attewell, P. (2001). Comment: The first and second digital divides. Sociology of
Education, 74, 252–259.
Bargh, J. A., & McKenna, K. Y. (2004). The internet and social life. Annual
Review Psychology, 55, 573–590.
Basalla, G. (1988). The evolution of technology. Cambridge: Cambridge University
Press.
Howland, J. S. (1998). The ‘digital divide’: Are we becoming a world of techno-
logical ‘haves’ and ‘have-nots?’. Electronic Library, 16, 287–289.
International Organization for Standardization. (1998a). ISO 9241-11:
Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs):
Part 11: Guidance on Usability.
International Organization for Standardization. (1998b). ISO 14915:
Software Ergonomics for Multimedia User Interfaces.
Jonas, H. (1973). Technology, responsibility: Reflections on the new task ethics.
Social Research, 40, 31–54.
Karwowski, W. (2006). The discipline of ergonomics and human factors. In
G. Salvendy (Ed.), Handbook of human factors and ergonomics (pp. 3–31).
Hoboken, NJ: Wiley.
Kim, K., Jacko, J., & Salvendy, G. (2011). Menu design for computers and cell
phones: Review and reappraisal. International Journal of Human–Computer
Interaction, 2, 383–404.
Landels, J. G. (2000). Engineering in the ancient world. Berkeley, CA: University
of California Press.
Leikas, J. (2008). Ikääntyvät, teknologia ja etiikka—näkökulmia ihmisen ja tekno-
logian vuorovaikutustutkimukseen ja—suunnitteluun [Ageing, technology and
ethics—views on research and design of human-technology interaction]
(VTT Working Papers No. 110). Espoo: VTT.
Leikas, J. (2009). Life-based design—A holistic approach to designing human-
technology interaction. Helsinki: Edita Prima Oy.
Lutz, W., Sanderson, W., & Scherbov, S. (2001). The end of world population
growth. Nature, 412, 543–545.
Lyytinen, K., & Klein, H. K. (1985). The critical theory of Jurgen Habermas as
a basis for a theory of information systems. In E. Mumford, R. Hirschheim,
G. Fitzgerald, & A. T. Woods-Harper (Eds.), Research methods in information
systems (pp. 219–236). New York: North Holland.
Mancinelli, E. (2008). E-inclusion in the information society. In R. Pinter (Ed.),
Information society: From theory to political practice: Course book. Budapest:
Gondolt–Új Mandátum.
Marks, J. (1988). Science in the making of the modern world. London: Heinemann.
Maslow, A. H. (1954). Motivation and personality. Oxford: Harper & Row.
Meier-Oeser, S. (1998). Technologie [Technology]. In J. Ritter & K. Gründer
(Eds.), Historisches Wörterbuch der Philosophie (Vol. 10, pp. 958–961).
Darmstadt: Wissenschaftliche Buchgesellschaft.
Wolleswinkel-van den Bosch, J. H., van Poppel, F. W., Tabeau, E., &
Mackenbach, J. P. (1998). Mortality decline in the Netherlands in the period
1850–1992: A turning point analysis. Social Science and Medicine, 47,
429–436.
Young, A. L., & Quan-Haase, A. (2009). Information revelation and internet
privacy concerns on social network sites: A case study of Facebook. In:
Proceedings of the Fourth International Conference on Communities and
Technologies (pp. 265–274).
2 Design Discourses
1 This type of engineering uses semantic differential-based methods to investigate people’s preferences for various products.
Anatomy of Paradigms
Paradigms are based on scientific achievements that have become ideals
or models of thought for other researchers (Kuhn 1962). They are created
when a community of practitioners (including researchers and design-
ers) defines problems, concepts, credos, procedures, and even acceptable
facts within certain frameworks (Kuhn 1962: viii). Some paradigms are
extensive, such as Galileo’s experimental natural science, while some are
much narrower, such as the working memory paradigm in psychology
(Baddeley 1986). In all cases, they are approaches adopted by a number
of researchers or designers who share an ideal model of thinking and
shaping their work.
Fields of research and design diversify into paradigms in all fields
of learning (Kuhn 1962). One could compare a paradigm
with a line of inquiry in a detective novel. For example, when a detec-
tive focuses only on the blond lady and asks questions concerning this
particular individual, the true criminal (the middle-aged barber) may be
ignored and forgotten. Yet while the lines of inquiry are different—and
different issues become important in them—both lines might be needed
to help the detective solve the crime (Hanson 1958).
In every field of professional work, good conceptual organization is
essential for the work to be rich and productive. In art and architecture,
Scandinavian design, for example, is a different paradigm from older styles
(Alexander 1977; Koivisto 2011; Saarinen and Saarinen 1962; Whitford
and Ter-Sarkissian 1984). In technical design, touchscreen technology,
what you see is what you get (WYSIWYG), or direct manipulation define
their own spheres of thinking (Galitz 2002; Shneiderman and Plaisant
2005; Shneiderman and Maes 1997), while in programming, structural
or object-oriented programming paradigms are commonly used (Budd
1991; Dijkstra 1972).
Paradigms define the topic of research, the concepts researchers use,
and the methods that are considered suitable (Barbour 1980; Kuhn
1962). They also detail the questions that are legitimately asked, the types
of explanations sought, and the types of solutions that are seen as accept-
able (Barbour 1980; Kuhn 1962). Paradigms also define the implicit
assumptions about what the world is like that are shared by their adherents
(Kuhn 1962; Saariluoma 1997). They also entail knowledge about how
to apply the findings in practice. From a sociological point of view, para-
digms provide information about the suitable outlets for research and
the recognized congresses and events. They even contribute to creating
‘gurus’ in the field, whose opinions carry significant weight within the
paradigm.
In sum, a paradigm specifies:
the social role) and how its use should be integrated with what people do.
The instrumentation and methods of these paradigms are close to those
typical to the social sciences.
This brief overview of the main approaches in the field of HTI today
illustrates that it is dominated by a set of more or less well-organized
paradigms. Of these, the paradigms related to HCI are much more orga-
nized than the others illustrated in this chapter. Of course, the notion of
paradigms cannot be considered here in a narrow sense or as a theoretical
paradigm only. The paradigms illustrate how the work in the field should
be done, and how researchers and designers are organized around the
paradigms into more or less rigid groups. Below, other factors that can
help structure the field are examined.
Organizing HTI
Resolution and composition is a Renaissance method of thought (Hobbes
1651/1965) that refers to a process in which a topic is cut into pieces and
then put together again. In modern terms, one could call it reverse engi-
neering (Chikofsky and Cross 1990). The benefit of reverse engineering
is that it helps the engineers understand how a machine or device operates,
including its components (and their relationships with each other)
and their functions.

Fig. 2.1 The basic design discourses and respective research programmes

Similarly, categorizing the field of HTI thinking into
research programmes and further dividing them into different research
questions leads to important insights about how designers’ minds work.
Path-breakers in the field of design research have identified practical
problems and found ways to solve them. Other designers have shared and
modified their problems in their own research, which has led to a set of
research projects with more or less similar goals. The field keeps evolving,
as new pieces of information are collected in practical empirical research
every day.
However, new solutions do not totally revolutionize the problem
structure. For instance, the development of touchscreen technology
shook up both usability and interaction styles in mobile technology, but
designers still had to figure out how to efficiently input information and
present feedback, and continue making easy-to-use devices. The techni-
cal solutions had changed, but the challenges and problems for designers
remained the same.
Stone Age tools were not only practical, they also had additional
design features for pleasure and aesthetic reasons. Likewise, in the ancient
world as well as today jewels were considered objects of pleasure as well
as socially significant symbols. Thus, the issues of organizing life and cre-
ating a positive user experience were already tacitly or explicitly in the
minds of ancient designers. They asked the important design questions
about artefacts concerning ‘liking’ and social organization, and made the
extra effort to solve them. Today these questions remain, although the
answers (in terms of people's preferences) have changed.
The observation that the fundamental design questions stay the same
although the answers vary suggests that it may be possible to structure
the field of HTI according to how the basic questions underlying the
research programmes are related to each other. Indeed, one could imag-
ine that these fundamental questions form a large web underlying the
designers’ thinking.
The questions also have another important property: they are task nec-
essary (Saariluoma 1984). This means that people have to solve the prob-
lems either consciously or tacitly, but they cannot avoid solving them
(Saariluoma 1997). All smartphone designers have to answer numerous
questions concerning the attributes of the final product: the form, size,
and placement of the touchscreen and the appearance and functions of
the icons must all be designed.
Yet is the design of a mobile phone essentially so different from Stone
Age designs? In designing a stone axe, people of the time had to consider
materials and forms, as well as how to fix the components together safely. A
broken axe was less useful than the original when battling with wolves.
Hence, the basic problems of current designs have already been addressed
in designing technical artefacts from the start. Of course, there are also
a huge number of differences, but the basic issue remains the same: the
outcome depends on the questions as much as on the answers.
It is vital to understand that HTI research and design is dominated by
the logic of questions, and that basic questions provide a fundamental
structure to the field. Questions and answers guide the design and devel-
opment irrespective of the prevailing design research programmes. HTI
introduces a complex set of problems, as the major research programmes
differ in what they consider to be the field’s most fundamental questions.
However, the basic questions of HTI research are complementary. Each
of the research programmes adds something essential to the field and thus
opens up a new perspective on the problems of HTI.
The idea of using a stable set of questions and their relationships to
structure the field of HTI follows what is known about human think-
ing. The standard psychological definition of thinking is illustrative with
respect to this point. As Newell and Simon (1972) put it: ‘Thinking is a
process which arises when a person has a goal but does not know how to
reach it.’ Questions, naturally, define these goals, which, together with
answering the questions, form the natural structure of human thinking.
Yet the questions are normally not very explicit in design thinking,
perhaps because for designers, it may be faster to find design solutions
than to address the design questions that should be answered. This kind
of tacit structure is common in human knowledge. Grammar, for exam-
ple, illustrates how people use language and structure sentences, though
non-linguists rarely have a clear idea about the structures they follow in
building sentences to express their ideas (Chomsky 1957).
The problem of tacit knowledge has historical roots. To understand the
hidden structures of our thought, we should explicate tacit knowledge
The research and design activity related to these questions can be seen
as organized into four discourses, which search for solutions to meet the
problems that emerge in practical design and related scientific research.
The basic questions can be investigated and answered on the grounds of
different types of scientific knowledge. The contents of this book mainly
rely on human research (psychology, physiology, activity theory, and soci-
ology), as knowledge of the preconditions and laws of the human mind
and actions provides a firm grounding for HTI design. The answers to
technical interaction issues, however, mostly rely on machine engineering.
References
Abras, C., Maloney-Krichmar, D., & Preece, J. (2004). User-centered design. In
W. S. Bainbridge (Ed.), Encyclopedia of human-computer interaction (pp. 445–
456). Thousand Oaks, CA: Sage.
Alexander, C. (1977). A pattern language: Towns, buildings, construction. Oxford:
Oxford University Press.
Anderson, J. R., Farrell, R., & Sauers, R. (1984). Learning to program Lisp.
Cognitive Science, 8, 87–129.
Annett, J. (2000). Theoretical and pragmatic influences on task analysis meth-
ods. In J. Schraagen, S. Chipman, & V. Shalin (Eds.), Cognitive task analysis
(pp. 25–40). Mahwah, NJ: Erlbaum.
Annett, J. (2004). Hierarchical task analysis. In D. Diaper & N. Stanton (Eds.),
Handbook of cognitive task design (pp. 63–82). Hillsdale, NJ: Erlbaum.
Annett, J., & Duncan, K. D. (1967). Task analysis and training design. Report
resumes. Hull: Hull University.
Aykin, N., Quaet-Faslem, P., & Milewski, A. (2006). Cultural ergonomics. In
G. Salvendy (Ed.), Handbook of human factors and ergonomics (pp. 3–31).
Hoboken, NJ: Wiley.
Baddeley, A. D. (1986). Working memory. Cambridge: Cambridge University
Press.
Barbour, I. (1980). Paradigms in science and religion. In G. Gutting (Ed.),
Paradigms and revolutions: Appraisals and applications of Thomas Kuhn’s phi-
losophy of science (pp. 223–245). Notre Dame, IN: University of Notre Dame
Press.
Behrent, M. C. (2013). Foucault and technology. History and Technology, 29,
54–104.
Booch, G., Rumbaugh, J., & Jacobson, I. (1999). The unified modelling lan-
guage. Reading, MA: Addison-Wesley.
Bouma, H., Fozard, J. L., & van Bronswijk, J. E. M. H. (2009). Gerontechnology
as a field of endeavour. Gerontechnology, 8, 68–75.
Deacon, T. (1997). The symbolic species: The co-evolution of language and the
human brain. London: Penguin Press.
Desmet, P., Overbeeke, K., & Tax, S. (2001). Designing products with added
emotional value: Development and application of an approach for research
through design. The Design Journal, 4, 32–47.
Di Stasi, L. L., Antolí, A., & Cañas, J. J. (2013). Evaluating mental workload
while interacting with computer-generated artificial environments.
Entertainment Computing, 4, 63–69.
Dijkstra, E. (1972). Notes on structured programming. In O. Dahl, E. Dijkstra,
& C. Hoare (Eds.), Structured programming. London: Academic Press.
Dix, A., Finlay, J., Abowd, G., & Beale, R. (1993). Human-computer interac-
tion. New York: Prentice-Hall.
Dym, C. L., & Brown, D. C. (2012). Engineering design: Representation and
reasoning. New York: Cambridge University Press.
Elmasri, R., & Navathe, S. (2011). Database systems. Boston, MA: Pearson
Education.
Feyerabend, P. (1975). Against method. London: Verso.
Forlizzi, J., & Battarbee, K. (2004). Understanding experience in interactive
systems. In Proceedings of the 5th Conference on Designing Interactive Systems:
Process, Practices, Methods, and Techniques (DIS 2004) (pp. 261–268).
Foucault, M. (1972). The archaeology of knowledge and the discourse on language.
New York: Pantheon Books.
Galitz, W. O. (2002). The essential guide to user interface design. New York: Wiley.
Gopher, D., & Donchin, E. (1986). Workload: An examination of the concept.
In K. R. Boff, L. Kaufman, & J. P. Thomas (Eds.), Handbook of perception and
human performance: Cognitive processes and performance (pp. 1–46). Hoboken,
NJ: Wiley.
Habermas, J. (1973). Erkenntnis und Interesse [Knowledge and interests].
Frankfurt am Main: Suhrkamp.
Habermas, J. (1981). Theorie des kommunikativen Handelns [Theory of commu-
nicative behavior] (Vols. 1–2). Frankfurt am Main: Suhrkamp.
Hanson, N. R. (1958). Patterns of discovery. Cambridge: Cambridge University
Press.
Hassenzahl, M. (2011). Experience design. San Rafael, CA: Morgan & Claypool.
Hassenzahl, M., & Tractinsky, N. (2006). User experience—A research agenda.
Behaviour and Information Technology, 25, 91–97.
Helander, M., & Khalid, H. M. (2006). Affective and pleasurable design. In
G. Salvendy (Ed.), Handbook of human factors and ergonomics (pp. 543–572).
Hoboken, NJ: Wiley.
Hobbes, T. (1651/1950). Leviathan. New York: EP Dutton.
International Organization for Standardization. (1998). ISO 9241-11:
Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs):
Part 11: Guidance on Usability.
Jordan, P. W. (2000). Designing pleasurable products: An introduction to the new
human factors. Boca Raton, FL: CRC Press.
Kaptelinin, V. (1996). Activity theory: Implications for human-computer inter-
action. In B. A. Nardi (Ed.), Context and consciousness: Activity theory and
human-computer interaction (pp. 103–116). Cambridge, MA: MIT-Press.
Karwowski, W. (2006). The discipline of ergonomics and human factors. In
G. Salvendy (Ed.), Handbook of human factors and ergonomics (pp. 3–31).
Hoboken, NJ: Wiley.
Kieras, D. E., & Meyer, D. E. (1997). An overview of the EPIC architecture for
cognition and performance with application to human-computer interac-
tion. Human-Computer Interaction, 12, 391–438.
Koivisto, K. (2011). Kaj Frank and the art of glass. In H. Matiskainen (Ed.), The
art of glass—Kaj Frank 100 years (pp. 8–61). Saarijärvi: Design museo.
Kuhn, T. (1962). The structure of scientific revolutions. Chicago: University of
Chicago Press.
Kuniavsky, M. (2003). Observing the user experience: A practitioner’s guide to user
research. San Mateo, CA: Morgan Kaufmann.
Kuutti, K. (1996). Activity theory as a potential framework for human–com-
puter interaction research. In B. A. Nardi (Ed.), Context and consciousness:
Activity theory and human–computer interaction (pp. 17–44). Cambridge,
MA: MIT Press.
Lakatos, I. M. (1970). Falsification and the methodology of research pro-
grammes. In I. Lakatos & A. Musgrave (Eds.), Criticism and the growth of
knowledge. Cambridge: Cambridge University Press.
Laudan, L. (1977). Progress and its problems: Towards a theory of scientific growth.
London: Routledge and Kegan Paul.
Leikas, J. (2009). Life-based design—A holistic approach to designing human-
technology interaction. Helsinki: Edita Prima Oy.
Leonard, D., & Rayport, J. F. (1997). Spark innovation through empathic
design. Harvard Business Review, 75, 102–115.
Lewis, M., & Jacobson, J. (2002). Game engines. Communications of the ACM,
45, 27–31.
Mao, J., Vredenburg, K., Smith, P. W., & Carey, T. (2005). The state of user-
centered design practice. Communications of the ACM, 48, 105–109.
Markopoulos, P., & Bekker, M. (2003). Interaction design and children.
Interacting with Computers, 15, 141–149.
Marti, P., & Bannon, L. J. (2009). Exploring user-centred design in practice:
Some caveats. Knowledge, Technology and Policy, 22, 7–15.
Mattelmäki, T., & Battarbee, K. (2002). Empathy probes. In T. Binder,
J. Gregory, & I. Wagner (Eds.), Proceedings of the 7th Biennial Participatory
Design Conference (pp. 266–271). Malmö, Sweden.
McCarthy, J., & Wright, P. (2004). Technology as experience. Interactions, 11,
42–43.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some lim-
its on our capacity for processing information. Psychological Review, 63,
81–97.
Monk, A. F. (2002). Fun, communication and dependability: Extending the
concept of usability. In X. Faulkner, J. Finlay, & F. Detienne (Eds.),
Proceedings of the 16th British HCI Group Annual Conference/European
Usability Professionals Association Conference, London, September 2002
(pp. 3–14).
Nagamachi, M. (2011). Kansei/affective engineering and history of Kansei/
affective engineering in the world. In M. Nagamachi (Ed.), Kansei/affective
engineering (pp. 1–30). Boca Raton, FL: CRC Press.
Nardi, B. A. (1996). Context and consciousness: Activity theory and human-
computer interaction. Cambridge, MA: MIT Press.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs,
NJ: Prentice-Hall.
Nielsen, J. (1993). Usability engineering. San Diego, CA: Academic Press.
Nielsen, J., & Norman, D. (2014). Definition of user experience. Retrieved
January 24, 2015, from http://www.nngroup.com/articles/definition-user-
experience/
Niezen, G. (2012). Ontologies for interaction. Eindhoven: Eindhoven University
Press.
Norman, D. (2004). Emotional design: Why we love (or hate) everyday things.
New York: Basic Books.
Norman, D. A., & Draper, S. W. (Eds.). (1986). User centered system design: New
perspectives on human-computer interaction. Hillsdale, NJ: Erlbaum.
Norman, D., Miller, J., & Henderson, A. (1995). What you see, some of what’s
in the future, and how we go about doing it: HI at Apple Computer. In
Conference Companion on Human Factors in Computing Systems (p. 155).
ACM.
Popper, K. R. (1959). The logic of scientific discovery. London: Hutchinson.
Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., & Carey, T. (1994).
Human-computer interaction. Harlow: Addison-Wesley.
Preece, J., Rogers, Y., & Sharp, H. (2004). Interaction design. New York: Wiley.
Rauterberg, M. (2004). Positive effects of entertainment technology on human
behaviour. In R. Jacquart (Ed.), Building the information society (pp. 51–58).
Dordrecht: Kluwer Academic Publishers.
Rauterberg, M. (2006). HCI as an engineering discipline: To be or not to be?
African Journal of Information and Communication Technology, 2, 163–183.
Rauterberg, M. (2010). Emotions: The voice of the unconscious. In H. S. Yang,
R. Malaka, J. Hoshino, & J. H. Han (Eds.), Entertainment computing – ICEC
2010 (Lecture Notes in Computer Science, Vol. 6243, pp. 205–215).
Heidelberg: Springer.
Ross, A., & Chiasson, M. (2011). Habermas and information systems research:
New directions. Information and Organization, 21, 123–141.
Saariluoma, P. (1984). Coding problem spaces in chess. In Commentationes
Scientiarum Socialium (Vol. 23). Turku: Societas Scientiarum Fennica.
Saariluoma, P. (1997). Foundational analysis: Presuppositions in experimental psy-
chology. London: Routledge.
Saariluoma, P. (2005). Explanatory frameworks for interaction design. In
A. Pirhonen, H. Isomäki, C. Roast, & P. Saariluoma (Eds.), Future interaction
design (pp. 67–83). London: Springer.
Saariluoma, P., & Jokinen, J. P. (2014). Emotional dimensions of user experi-
ence: A user psychological analysis. International Journal of Human-Computer
Interaction, 30, 303–320.
Saariluoma, P., & Oulasvirta, A. (2010). User psychology: Re-assessing the
boundaries of a discipline. Psychology, 1, 317–328.
Saarinen, E., & Saarinen, A. B. (1962). Eero Saarinen on his work. New Haven,
CT: Yale University Press.
Schmidt, R. A., & Lee, T. D. (2011). Motor control and learning: A behavioral
emphasis. Champaign, IL: Human Kinetics.
Seligman, M. E. P., & Csikszentmihalyi, M. (2000). Positive psychology—An
introduction. American Psychologist, 55, 5–14.
Shneiderman, B., & Maes, P. (1997). Direct manipulation vs. interface agents.
Interactions, 4, 42–61.
Shneiderman, B., & Plaisant, C. (2005). Designing user interfaces. Boston, MA:
Pearson.
Sikka, T. (2011). Technology, communication, and society: From Heidegger
and Habermas to Feenberg. The Review of Communication, 11, 93–106.
Simon, H. A. (1969). The sciences of the artificial. Cambridge, MA: MIT Press.
Stahl, B. C. (2006). Emancipation in cross-cultural IS research: The fine line
between relativism and dictatorship of intellectual. Ethics and Information
Technology, 8, 97–108.
Stahl, B. C. (2010). Social issues in computer ethics. In L. Floridi (Ed.), The
Cambridge handbook of information and computer ethics (pp. 101–115).
Cambridge: Cambridge University Press.
Trist, E. L. (1978). On socio-technical systems. Sociotechnical systems: A sourcebook.
San Diego, CA: University Associates.
Väänänen-Vainio-Mattila, K., Väätäjä, H., & Vainio, T. (2009). Opportunities
and challenges of designing the service user eXperience (SUX) in web 2.0. In
P. Saariluoma & H. Isomäki (Eds.), Future interaction design II (pp. 117–
139). Berlin: Springer.
van Schomberg, R. (2013). A vision of responsible research and innovation. In
R. Owen, M. Heintz, & J. Bessant (Eds.), Responsible innovation (pp. 51–74).
Oxford: Wiley.
Vygotsky, L. S. (1980). Mind in society: The development of higher psychological
processes. Cambridge, MA: Harvard University Press.
Weinberg, G. M. (1971). The psychology of computer programming. New York:
Van Nostrand Reinhold.
Whitford, F., & Ter-Sarkissian, C. (1984). Bauhaus. London: Thames and
Hudson.
Wickens, C., & Hollands, J. G. (2000). Engineering psychology and human per-
formance. Upper Saddle River, NJ: Prentice-Hall.
3 The Logic of User Interface Design
Technical artefacts exist so that people can use them to make something
happen. Their capacity to do so depends on the functions and functionalities
of the technology, which in turn require users. Technologies thus have to give
users the ability to control them, and the designer’s role is to create the
actions and work processes for which the artefacts are intended. This basic
HTI pursuit is called user interface design. It applies technical interaction
concepts to solve design problems. This chapter presents the overall princi-
ples and goals for the user interface design of any technical artefact.
Technical artefacts are physical systems that only follow the laws of
physics; they are not able to set sense-making goals for their operation.
Only people can. All possible goals (end states) are equally ‘valuable’ for
artefacts. Therefore from the point of view of the laws of physics, it is
irrelevant whether an aeroplane lands successfully or crashes. Yet from a
human point of view, a successful landing is desirable and a crash is a catas-
trophe. Sensible behaviours of any technical artefact are thus a subset of
all its possible behaviour alternatives. Since artefacts as physical systems
cannot distinguish between successful and unsuccessful behaviours, it is
essential that people control them.
¹ To be exact, scissors are analogue machines that can have an infinite number of possible input
and output states. However, they only reach a finite number of possible states before they are
destroyed.
that makes it possible for the user to reach her goal. In this sense, the
expected state of a sailing boat can be as much about sailing on the sea as
reaching a destination.
How to control the behaviour of a technical artefact is a fundamental
problem in HTI design. No technical artefact can exist without provid-
ing its users the methods to use it. Despite its apparent simplicity, the
problem of how to control a technical artefact has several notable dimen-
sions. First, the behaviour of the artefact must be logically linked to the
human action in question. In the case of a lift, the technical capacity
to move from one floor to another is the artefact behaviour that makes
it possible to support people’s movement in a block of flats. Second, it is
essential to link the behaviour of the artefact to users’ actions via a user
interface. In the case of a lift, this often means the set of control but-
tons indicating which floor the lift should stop on. However, the latter
presupposes design knowledge of how people use the artefact.
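These two dimensions can be sketched in a few lines of code (an illustration only: the class, the floor numbering, and the method name are assumptions made for the example, not a description of any real lift system). The artefact's behaviour, moving between floors, is linked to a user action, pressing a button, through an interface.

```python
class Lift:
    """Minimal sketch: a lift whose behaviour (moving between floors)
    is exposed to users through a button-press interface."""

    def __init__(self, floors=range(6)):
        self.floors = set(floors)  # the states the artefact can reach
        self.floor = 0             # the current state

    def press_button(self, target):
        # The user interface: a button press is the user's action,
        # which the artefact translates into its own behaviour.
        if target not in self.floors:
            raise ValueError(f"no floor {target}")
        self.floor = target        # artefact behaviour: move to the floor
        return self.floor

lift = Lift()
lift.press_button(3)
assert lift.floor == 3
```

The point of the sketch is the separation it enforces: the user never moves the lift directly; she only acts on the interface, and the designer has decided how that action maps onto the artefact's behaviour.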
Functions and Events
Task analysis, the schematic structure it gives to tasks, and the actions
involved in using a technology form the basis of technology design. The role of a
technical artefact in human actions is normally called its function, and its
respective capacities are known as functionalities (Gero 1990; Gero and
Kannengiesser 2004; Ulrich and Eppinger 2011). One could say that
functionalities detail the uses of the artefact. As a property of a technical
artefact or system, they form the first step in defining technical interac-
tion (Chandrasekaran 1990; Gero 1990; Gero and Kannengiesser 2014).
They express the performance capacities of the designed artefact (Gero
1990). In the case of a window, for example, the notion of functionality
refers to such properties as providing daylight, making ventilation pos-
sible, preventing heat loss, and eliminating noise.
The concept of function in HTI discourse explains what technology
can offer to people. Sometimes researchers also use the notion of affor-
dance, the origins of which can be found in Gibson’s (1979) psychology
of perception (Gaver 1991). Functionality defines the effects that operat-
ing an artefact can have on the environment. The goodness or purpose-
that are linked together in a single site. They aid navigation and make it
possible for users and search engines to find pages with related content.
An event flow is a sense-making concept in designing user interfaces
and respective processes of technical artefacts. In order to slow down the
speed of a car, the driver has to choose a lower gear. Doing this requires
pressing the clutch to shift into another gear. This kind of common
control operation is similar to interacting with GUIs. Events must be
defined so that the user can control the technical artefact in a sense-
making manner.
Functionalities, requirements, events, and event flows are intimately
related to technical concepts. They define the potential processes of a
technical artefact and how users can control machine processes or choose
between alternative event paths, so that the artefact can do what it is sup-
posed to do. Their design is based on technical theory languages. The crit-
ical design question is to define the best structure for event flows from an
initial state to a goal state in realizing a particular function (Rauterberg
1996; Ulich et al. 1991).
Event flows can be presented as graphs. In these graphs, nodes repre-
sent the states of the technical artefact and arrows the input actions of the
user that move the artefact from one state to another. On this level the interaction
process can be understood as a state-action graph; such graphs have often
been used to describe human mental representations (Newell and Simon
1972). This is an equally valid conceptualization for both mechanical and
information systems. Thus, riding a bike can be presented as a graph of
turning left and right and using hand or pedal brakes. These actions are
analogue—that is, they are not really discrete states but can be approxi-
mated by finite state graphs just as, for example, voices can be digitized.
In information systems, finite state structures are natural.
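Such a finite state graph can be sketched as a small program (an illustration only: the state names and actions are invented for the gear-change example above). Nodes are states of the artefact; user actions label the transitions; a valid event flow is a path from an initial state to a goal state.

```python
# A finite state graph: nodes are states of the artefact; user input
# actions label the transitions. (State/action names are illustrative.)
EVENT_GRAPH = {
    "neutral":     {"press_clutch": "clutch_down"},
    "clutch_down": {"shift_down": "lower_gear", "shift_up": "higher_gear"},
    "lower_gear":  {"release_clutch": "driving_slow"},
    "higher_gear": {"release_clutch": "driving_fast"},
}

def run_event_flow(start, actions):
    """Follow a sequence of user actions through the graph;
    raise an error if an action is undefined in the current state."""
    state = start
    for action in actions:
        transitions = EVENT_GRAPH.get(state, {})
        if action not in transitions:
            raise ValueError(f"action {action!r} undefined in state {state!r}")
        state = transitions[action]
    return state

# Slowing a car down: press the clutch, shift down, release the clutch.
final = run_event_flow("neutral", ["press_clutch", "shift_down", "release_clutch"])
assert final == "driving_slow"
```

The design question named above, finding the best event flow from initial state to goal state, then amounts to choosing which paths through such a graph the interface should make easy.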
From the user’s point of view, information systems and other compu-
tational technologies are finite by nature. Moreover, the nodes in an event
tree entail two types of information, corresponding to the two modes of
user input. First, users give control and steering information to the system by
'telling' the system what the next goal is (mostly by using dialogue boxes
or menus). Second, users enter content that is relevant to the actual task.
For example, editing a text presupposes writing the text and using the dif-
ferent control facilities of a text-editing program. No interaction process
² We do not discuss here the differences between task and work analysis, though the difference is
essential in the context of work processes. This section focuses on technical interaction, not on the
way work groups are organized.
It can be divided into different sub-actions that are executed when typing
a sentence—such as insert, replace text, save, delete, shift line, and access
text from a file (Annett 2000; Card et al. 1983; Hollnagel 2006). These
sub-actions can be analysed in terms of the keystrokes used (e.g., push-
ing a line feed or using the delete button). In this way it is possible to
model the process of typing down to the keystroke level, or even down
to the level of the typists’ muscle control. This method can be used to
represent the hierarchical actions and sub-actions in any human task,
from typing to steering a ship (Annett 2004). Yet task analysis can also be
used to conceptualize tasks in many other kinds of conceptual systems,
such as scenarios (Go and Carroll 2004) or computational models (Card
et al. 1983; Kieras 1997). Although their perspectives may be different,
all these models of task analysis work to describe tasks as combinations
of units. They differ in their characterization of the elements and the way
they are unified.
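A minimal sketch of such a decomposition (the task and action labels are invented for illustration, not drawn from any particular task-analysis notation) represents the task as a tree whose leaves are the elementary, keystroke-level actions:

```python
# A task as a tree: each node is an action, its children the sub-actions
# that realize it, down to the keystroke level. (Labels are illustrative.)
typing_task = {
    "edit text": {
        "insert": {"move cursor": {}, "type characters": {}},
        "delete": {"move cursor": {}, "press delete": {}},
        "save":   {"press ctrl+s": {}},
    }
}

def leaf_actions(task):
    """Flatten the hierarchy into its elementary (keystroke-level) actions."""
    leaves = []
    for name, subs in task.items():
        if subs:
            leaves.extend(leaf_actions(subs))
        else:
            leaves.append(name)
    return leaves

assert "press ctrl+s" in leaf_actions(typing_task)
```

Different task-analysis models, as noted above, differ mainly in what they take the elements of such a tree to be and how they combine them.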
When defining the structure of human actions for a specific task,
it is equally important to consider people’s goals and the purposes of
their actions. Human actions are always intentional and goal directed
(Brentano 1874/1955; Dennett 1991; Husserl 1901–1902; Searle 1992).
People have reasons for what they do, which are usually explained by the
goals they wish to reach (von Wright 1971; Taylor 1964). People do not
use TV remote controls to play with or control technology. Instead, they
use them to access news or movies on TV. Each action has its goal, for
example watching a movie, and this action generates sub-actions to reach
this goal, in this case pushing the buttons of a remote control.
Task analysis explicates tasks as a function of sub-tasks: the main action
is realized to reach the goal of the main task. However, reaching a goal
presupposes carrying out all the necessary sub-tasks and sub-actions. The
goals of the sub-tasks are defined by their purpose in helping to achieve
the main task. In this sense, sub-tasks and their related actions become
meaningful only as a part of the main task. If a father wants to buy a book
as a present for his child, he may get in a car and drive to a bookshop, buy
a book and drive back home. The purpose of buying a book may be to
make his child happy or to encourage his mental development. Going to
a bookshop requires taking a car. Buying a book presupposes paying for
it and, quite likely, using a credit card. Thus the sub-tasks and sub-actions
are ways to reach the goal of the main task, and their purpose is defined
as part of the main task.
The core of task analysis is to illustrate the hierarchical action and
sub-action sequences or histories in a detailed task hierarchy. It requires
defining the functions of the actions in using the artefact, and their
mutual relations and sub-actions as a whole, as this knowledge is needed
to outline requirements and solutions for technologies supporting the
actions. The functional structure of a task thus provides designers with
information about design requirements, which can be used to explain
why individual design solutions make sense and what their optimal form
should be.
A proper task analysis helps designers decide which kinds of technical
solutions and user requirements are needed to accomplish the chosen
task: that is, the kind of functional roles the technology could have and
the kind of opportunities the user should be given to control the opera-
tion of the artefact. The performance capabilities needed for technology
depend on the nature of the task and the human intentions in carrying it
out. Task analysis provides this information about what the new technol-
ogy is supposed to do.
In direct manipulation, users are given every means to directly influence the way
technical artefacts operate, and to be aware of how operations proceed.
Users are allowed to use visible or directly sensible objects that are directly
linked to the operations of the artefacts. When direct manipulation is
allowed, computer users can, for example, drag and drop items or change
their sizes at will. Today many technologies have internal and tacit user-
interaction solutions, which are used to help control the behaviour of a
technical artefact. Cars, for example, may have numerous tacit control
systems of which users are barely aware. These systems operate ubiqui-
tously in the background and make driving easier, although the driver
has no direct contact with them (or a need to know what is happening
in the system).
In some devices that resemble a bike, such as self-balancing electric
scooters, it is possible to automate the balance-keeping function with a
smart computer program. This kind of automation represents a com-
mon interaction design technique. For example, when a smartphone is
picked up, the device adapts the display so the user can easily read the
information on the screen. Repositioning the phone thus indirectly acti-
vates automatic operations that present the content on the display in
an optimal manner. This type of machine operation, which directs the
machine towards an anticipated and optimal state, can be called internal
or machine interaction operations.
Another example of tacit functions is related to navigation on a screen.
Main menus usually provide limited information and hide the rest until
selected by the user. The users must thus have an idea about what is hid-
den and what kind of information can be expected behind the selections.
In the design, it is important to determine the correct order for this kind of
invisible information.
Different kinds of memory support systems, such as balloon tips, can
help users navigate such menus. However, current user interface solutions
are often not particularly efficient, due to technical limitations. Limits on
the space for a GUI or the size of a touchscreen, for example, make it
necessary to hide certain functionalities and commands from the primary
display. Therefore finding the commands must
be so intuitive, and the information concerning them so purposeful, that
people can reach their task goals using the given controls. This is possible
only when user tasks, and the structure of their particular actions, have
been properly uncovered by means of task analysis.
Arguably, the most important form of tacit interaction today is ubiq-
uitous computing (Weiser 1993). In such variants as pervasive comput-
ing or computing everywhere, the traditional command- or GUI-based
interfaces are replaced by interfaces that automatically register the behav-
iour of the user and respond accordingly. This kind of interaction can
support human actions without the active involvement of users.
Ubiquitous computing is an important paradigm in HTI, as it enables
people to use technologies without special training or skills. These tech-
nologies aid people in their actions unnoticed and make the users’ goals
easier to reach. Ubiquitous computing can be used, for instance, in
gerontechnology to monitor the health of older people or to adapt the
living environment according to changes in human behaviour in order
to prevent accidents (Bouma et al. 2009; Charness 2009; Czaja and Nair
2006; Leikas 2009).
Intentional and tacit interaction always presupposes the analysis of
tasks. Most of the current application programs have numerous tacit ele-
ments, though their overall use is intentional. The crucial questions in
design are associated with dividing tasks between intentional and tacit
processes, as well as recognizing the relevant characteristics of human fac-
tors that can be used to launch tacit technical processes.
Some standard interface components have been developed for entering information into the artefact (such as text boxes or forms), and some have been developed to directly control the behaviour of the programs. Typical
examples of the latter include dialogue boxes, gizmos (or controls), scroll-
bars, radio buttons, menus, and toolbars (Cooper et al. 2007; McKay
1999). The main advantages of such standardized interface components
are the positive transfer effects created by the consistency of user inter-
faces and the ease of learning how to operate new interfaces (Helfenstein
and Saariluoma 2006; Singley and Anderson 1987). Another benefit can
be found in programming interfaces: standardized parts allow the wide
reuse of codes and make visual programming easier.
Thus user interfaces can be based on different technological solutions,
which can be used to classify them into different types. Some user inter-
faces are mechanical (such as a steering wheel, a gear, or a keyboard),
while others are based on commands as in computer languages and still
others are graphical such as modern web pages. Some interfaces are based
on such ubiquitous interaction solutions as the direct registration of a
human body and its neural states (Weiser 1993).
These kinds of interface categories are useful but not absolute, because
graphical interfaces can have mechanical features, and mechanical inter-
faces may have electromechanical and graphical features. For instance,
touchscreen interfaces have mechanical aspects (the user presses a sur-
face), and power steering systems are hydraulic and electronically real-
ized, whereas steering wheels are mostly mechanical devices. Hence, from
the users’ point of view, the discussion of the technical classification of
user interfaces is of limited importance when constructing theoretical
concepts for user interface design.
mand are the ideal form of interaction, but unfortunately this is increas-
ingly rare in modern information technology.
Technical artefacts and goods have become increasingly complex.
People are required to operate using many kinds of input modes and
hierarchies when performing a task, and users have to control event flows
in different states of operation. These tasks are usually carried out in the
form of a dialogue or a game between the user and the artefact. Designers
must create sense-making dialogues to enable the artefact’s effective use
(Cooper et al. 2007; Moran 1981).
In constructing dialogues, it is essential to provide users with as much
relevant knowledge as possible to help them choose between different
actions. This presupposes goal-relevant decisions during the dialogue.
Designers use task analysis to understand what people intend to achieve
during different sub-tasks in order to define how the given technology
should respond to users’ actions and what kind of feedback should be
developed in the user interface for different inputs.
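The outcome of such a task analysis can be represented as a simple data structure pairing each sub-task with the feedback the interface should give. The e-mail task and its sub-tasks below are invented for illustration only:

```python
# A user goal decomposed into sub-tasks, each paired with the feedback
# the interface should give; the names are illustrative, not from any
# actual task analysis.
task = {
    "goal": "send an e-mail",
    "sub_tasks": [
        {"action": "open composer",   "feedback": "blank message window"},
        {"action": "enter recipient", "feedback": "address auto-completed"},
        {"action": "write message",   "feedback": "text appears as typed"},
        {"action": "press send",      "feedback": "confirmation notice"},
    ],
}

def feedback_for(task: dict, action: str) -> str:
    """Look up what the interface should show after a given user input."""
    for sub in task["sub_tasks"]:
        if sub["action"] == action:
            return sub["feedback"]
    raise KeyError(action)

assert feedback_for(task, "press send") == "confirmation notice"
```

A structure of this kind makes explicit, for every user action, what response the technology owes the user — the core design question raised above.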
User interfaces are sign systems. They may provide information about many different types of issues, and users are expected to comprehend the delivered messages. Semiotics is the study of signs; the sub-field that relates to user interfaces and other HTI and communication issues is called semiotic engineering (de Souza 1993, 2005; Rousi 2012). In the
following, we examine the overall principles of semiotics and semiotic
systems of user interfaces.
The roots of semiotics (Eco 1976) can be found in the classic works of
Peirce (1931–1958) and de Saussure (1916). The key problem of semiot-
ics is using a sign to stand for something else, such as opening a web page
(i.e., a path in an event tree). Signs have the capacity to evoke mental
representations of references in the human mind. They can be words or
linguistic signs; they can be formal signs such as mathematical symbols,
or commands in computer languages. They can also be natural phenom-
ena such as smoke, which signifies fire, or fever, which signifies illnesses
(Eco 1976). They also provide us with a valuable conceptual tool for
investigating and designing interfaces.
Semiosis is the process by which people give meanings to signs. The context of giving meanings can be called a semiosphere or a language game (Lotman 2005; Wittgenstein 1953). These concepts refer to how the meaning of a sign depends on the context in which it is used.
66 Designing for Life
Design Patterns
Many existing components are used in current technology development.
Game programming, for example, relies on previous programs and their
components. In most game development the story and the design of the
figures are more demanding than the actual programming work, which
employs several standard user interface components, technology stan-
dards, and styles (Cooper et al. 2007; McKay 1999). Standardized design
solution patterns are also commonly used in the physical interaction
designs that are typical of mechanical technologies (Alexander 1977).
Standardized solution patterns also make it easier for users to learn and
accept new systems.
Mechanical interfaces have standard input and output components
(Dieter and Schmidt 2009). Some of them are used to control the spa-
tial position of moving objects (such as a steering wheel or a rudder),
while others (such as an accelerator or brake pedal) are used to control
the speed of a process. It is also essential to have information about
the internal and external states of a given artefact in order to use the
technology. This information cannot always be detected by the human
senses. The temperature of combustion processes, for example, is not
definable by human senses. Similarly, it is impossible to be aware of the
thousands of indicators that ensure the proper paper-making processes
in a modern paper machine. Only with the help of adequate user inter-
face components is it possible for users to supervise the processes of
machines.
Many standard control components have been developed in mechani-
cal engineering in recent decades. Such items as hand wheels, knobs,
levers, joysticks, rocker switches, pedals, push buttons, and sensor keys
have been used for a long time in technical interaction solutions as input
controls for machines. Their design parameters are well known and their
usage has frequently been tested.
Likewise, there are also numerous more or less standardized output
instruments, such as speedometers, thermometers, and artificial horizons.
Modern paper machines, for example, can have over 4000 sensors regis-
tering different aspects of this complex process. Successfully integrating
these sensors (and the feedback they give controllers) is crucial for keep-
ing these complex systems effectively operable.
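The integration task can be illustrated with a small sketch: thousands of raw readings are reduced to the few alarms an operator can actually act on. The sensor names and limits below are invented for the example, not drawn from any real paper machine:

```python
def out_of_range(readings: dict, limits: dict) -> list:
    """Sketch: reduce many sensor readings to operator feedback -
    only the channels outside their acceptable limits are reported."""
    alarms = []
    for sensor, value in readings.items():
        low, high = limits[sensor]
        if not (low <= value <= high):
            alarms.append(sensor)
    return alarms

# Hypothetical readings and limits, standing in for thousands of channels.
readings = {"wire_speed": 1710.0, "steam_temp": 188.0, "web_moisture": 9.3}
limits = {"wire_speed": (1600, 1800),
          "steam_temp": (150, 185),
          "web_moisture": (5, 10)}

assert out_of_range(readings, limits) == ["steam_temp"]
```

The design point is the reduction itself: the user interface must present the state of the process, not the state of every sensor.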
Standardized interaction components can also be found in the area of
HCI (Cooper et al. 2007; Griggs 1995; Goodwin 2011). Event handlers
can steer computational processes on the basis of information that users
have directly or indirectly input into the computer. Such standard ele-
ments as text and dialogue boxes, radio and option buttons, toolbars,
icons, pop ups, drop-down menus, or scrollbars are everyday compo-
nents used to build standard GUIs (Cooper et al. 2007; Griggs 1995;
McKay 1999). Feedback is mostly given on a screen, but it can also be
auditory or printed. The main function of feedback is to provide users
with information to help control the process or support the task.
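The event-handler idea can be sketched as a minimal dispatcher that routes user inputs to handlers and returns the feedback owed to the user. The event names and handlers are illustrative assumptions, not any particular GUI toolkit's API:

```python
class EventDispatcher:
    """Minimal event-handler sketch: user inputs (button clicks, slider
    moves) are routed to handlers that steer the computational process."""

    def __init__(self):
        self._handlers = {}

    def bind(self, event: str, handler) -> None:
        # Register a handler for a named event.
        self._handlers.setdefault(event, []).append(handler)

    def dispatch(self, event: str, payload=None):
        # Each registered handler receives the input; its return value
        # stands in for the feedback shown to the user.
        return [h(payload) for h in self._handlers.get(event, [])]

ui = EventDispatcher()
ui.bind("ok_button.click", lambda _: "dialog closed")
ui.bind("slider.move", lambda value: f"volume set to {value}")

assert ui.dispatch("ok_button.click") == ["dialog closed"]
assert ui.dispatch("slider.move", 7) == ["volume set to 7"]
```

Real GUI frameworks follow this same bind-and-dispatch pattern, which is one reason standard components transfer so readily between applications.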
In the design, it is necessary to anticipate the usage situation and the
related requirements in order to develop the best possible means for the
interaction. Depending on the situation, the solution can involve any-
thing from hand and foot controls and visual and auditory displays to
designing suitable illumination conditions and organizing the space for
effective use of the given input tools (Sanders and McCormick 1993). To
approach the questions of user interface design systematically, the ICT
design community has developed a set of recommendations, guidelines,
technology standards, and de facto standards for user interfaces (Cooper
et al. 2007; McKay 1999). These mainly discuss interface components
that have been widely applied in user interface design.
Usability standards and style guidelines also support reusing design
ideas. Standards are intended as norms to help designers unify the tech-
nical culture; they are often worldwide requirements that designers fol-
low globally. The standardization of HTI issues has improved with the recent ISO 9241 standards, which include more specific information about ergonomics, safety, and interaction issues than earlier versions (ISO 1998a, b). There are also laws and directives that regulate
the technical aspects of interaction. These are mostly technical details
about health risks, such as the influence of magnetic fields on pacemakers.
Though the above-mentioned guidelines and directives provide sup-
port information for constructing technical HTI processes, designers
must decide between alternative ways of creating interaction solutions (or
develop their own variations). This leaves room for many style standards
for web pages, for example. Style sheets have standardized many properties of web pages: typical presentation elements such as fonts, boxes, lists, and positioning have been given their own recommendations (Collison et al. 2009; w3schools 2013). There
are also a huge number of templates and other ready-to-modify solution
models for solving concrete design problems.
Standardized solution models simplify the design of technical solu-
tions and interfaces. Programmers and mechanical designers can use the
same solutions again, which saves considerable time and facilitates a more
unified and easy-to-learn design culture. However, in most cases standard
solutions and best practices offer only a framework within which one can
work, rather than a final solution. In design, it is essential to understand why given standards and standard solutions make sense, as well as why they are not optimal in some situations.
The artefact must be transformed from its current initial state to its goal state. Finally, it is essential to define the technical and natural feedback systems that allow users to follow the artefact's behaviour.
All artefacts have this type of conceptual structure. It defines the fun-
damental questions associated with the problems of how-to-control tech-
nologies, which must be solved in any HTI design project.
The main concepts and design problems of how-to-control are:
• The main aim of the action (the user’s goal) must be defined.
• The technology’s effect on the environment (natural, mental, informa-
tion, or socio-cultural) must be determined.
• The user’s tasks and sub-tasks must be defined.
• The technical artefact or system is transformed from its current initial
state to its expected goal state to achieve the effect.
• Users employ tasks when using a technology and each task has multi-
ple sub-tasks.
• All tasks and sub-tasks have a purpose in human action when using the
technical artefact or system.
• Each technical artefact or system has input channels with different
operational states.
• Each element of user input affects the behaviour of the technical arte-
fact or system.
• Each technical artefact or system has a number of output channels
with multiple output states that give the user feedback information
about the prevailing state of the artefact.
• Natural observation (following how the artefact behaves) serves as an
output channel.
• The combination of input and output channels constitutes the human–
artefact interface.
• A user interface must be implemented to direct the actions of the
artefact.
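The conceptual structure listed above can be condensed into a small sketch: input channels transform the artefact's state toward the goal state, and an output channel feeds the prevailing state back to the user. The door artefact and its inputs are invented purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Artefact:
    """Sketch of the how-to-control structure: inputs transform the
    artefact's state; the output channel reports that state back."""
    state: str
    goal_state: str
    inputs: dict = field(default_factory=dict)  # input -> resulting state

    def apply(self, user_input: str) -> str:
        # Each element of user input affects the artefact's behaviour.
        self.state = self.inputs[user_input]
        return self.output()                    # feedback to the user

    def output(self) -> str:
        # Output channel: reports the prevailing state of the artefact.
        return f"state: {self.state}"

door = Artefact(state="closed", goal_state="open",
                inputs={"press_open": "open", "press_close": "closed"})
feedback = door.apply("press_open")
assert feedback == "state: open"
assert door.state == door.goal_state
```

The combination of the input mapping and the output method is, in miniature, the human–artefact interface defined above.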
Technology standards and style guides are valuable and necessary in technical design.
However, they are not sufficient for understanding and designing HTI
processes. Before using them, it is essential to consider which standards
and guidelines are justifiable (and why) by proceeding from technical
discourse to psychological and sociological discourse and examining con-
cepts of human research in the field of technology design. Technological
standards and design solutions must be justified by their positive effects
on human physical and biological environments, mentality, and society.
References
Alexander, C. (1977). A pattern language: Towns, buildings, construction. Oxford:
Oxford University Press.
Annett, J. (2000). Theoretical and pragmatic influences on task analysis meth-
ods. In J. Schraagen, S. Chipman, & V. Shalin (Eds.), Cognitive task analysis (pp. 25–40). Mahwah, NJ: Erlbaum.
Annett, J. (2004). Hierarchical task analysis. In D. Diaper & N. Stanton (Eds.),
Handbook of cognitive task design (pp. 63–82). Hillsdale, NJ: Erlbaum.
Annett, J., & Duncan, K. D. (1967). Task analysis and training design. Report
resumes. Hull: Hull University.
Barach, P., & Small, S. D. (2000). Reporting and preventing medical mishaps:
Lessons from non-medical near miss reporting systems. British Medical
Journal, 320, 759–763.
Blumenthal, D. (2010). Launching HITECH. New England Journal of Medicine,
362, 382–385.
Bouma, H., Fozard, J. L., & van Bronswijk, J. E. M. H. (2009). Gerontechnology
as a field of endeavour. Gerontechnology, 8, 68–75.
Boyle, E., Van Rosmalen, P., & Manea, M. (2013). Cognitive task analysis.
Retrieved April 23, 2015, from http://dspace.learningnetworks.org/bit-
stream/1820/4848/1/CHERMUG-Deliverable%2014-CognitiveTaskAnalysis-
WP2.pdf
Brentano, F. (1874/1955). Psychologie vom empirischen Standpunkt. Hamburg:
Felix Meiner.
Card, S., Moran, T., & Newell, A. (1983). The psychology of human-computer
interaction. Hillsdale, NJ: Erlbaum.
Chandrasekaran, B. (1990). Design problem solving: A task analysis. AI Magazine, 11, 59–71.
Newell, A., & Simon, H. A. (1972). Human problem solving. Englewood Cliffs,
NJ: Prentice-Hall.
Newman, M. W., & Landay, J. A. (2000). Sitemaps, storyboards, and specifica-
tions: A sketch of web site design practice. In Proceedings of the 3rd Conference
on Designing Interactive Systems: Processes, Practices, Methods, and Techniques
(pp. 263–274).
Nickerson, R., & Landauer, T. (1997). Human-computer interaction:
Background and issues. In M. Helander, T. Landauer, & P. Prabhu (Eds.),
Handbook of human-computer interaction (pp. 3–31). Amsterdam: Elsevier.
Nielsen, J. (1993). Usability engineering. San Diego, CA: Academic Press.
Ogden, C., & Richards, I. (1923). The meaning of meaning. London: Routledge
and Kegan Paul.
Pahl, G., Beitz, W., Feldhusen, J., & Grote, K. H. (2007). Engineering design: A
systematic approach. Berlin: Springer.
Peirce, C. S. (1931–1958). In C. Hartshorne, P. Weiss, & A. Burks (Eds.),
Collected papers of Charles Sanders Peirce (Vols. 1–8). Cambridge, MA:
Harvard University Press.
Rasmussen, J., Mark Pejtersen, A., & Goodstein, L. P. (1994). Cognitive systems
engineering. New York: Wiley.
Rauterberg, M. (1996). How to measure the ergonomic quality of user inter-
faces in a task independent way. In A. Mital, H. Krueger, S. Kumar,
M. Menozzi, & J. E. Fernandez (Eds.), Advances in occupational ergonom-
ics and safety I (pp. 154–157). Cincinnati, OH: International Society for
Occupational Ergonomics and Safety.
Rosson, B., & Carroll, J. (2002). Usability engineering: Scenario-based develop-
ment of human-computer interaction. San Francisco, CA: Morgan Kaufmann.
Rousi, R. (2012). From cute to semiotics.
Saariluoma, P., & Rousi, R. (2015). Symbolic interactions: Towards a cognitive
scientific theory of meaning in human technology. Journal of Advances in
Humanities, 3, 310–323.
Salvendy, G. (Ed.). (2006). Handbook of human factors and ergonomics. Hoboken, NJ: John Wiley & Sons.
Sanders, M. S., & McCormick, E. J. (1993). Human factors in engineering and
design (7th ed.). New York: McGraw-Hill.
Searle, J. (1992). The rediscovery of mind. Cambridge, MA: MIT Press.
Shneiderman, B. (1983). Direct manipulation: A step beyond programming
languages. IEEE Computer, 16, 57–69.
Shneiderman, B., & Maes, P. (1997). Direct manipulation vs. interface agents.
Interactions, 4, 42–61.
Singley, M. K., & Anderson, J. R. (1987). A keystroke analysis of learning and
transfer in text editing. Human-Computer Interaction, 3, 223–274.
Smith, B. (2003). The logic of biological classification and the foundations of
biomedical ontology. In Invited Papers from the 10th International Conference
in Logic Methodology and Philosophy of Science (pp. 19–25). Oviedo, Spain.
Stanton, N. A. (2006). Hierarchical task analysis: Developments, applications,
and extensions. Applied Ergonomics, 37, 55–79.
Taylor, F. (1911). Shop management. New York: McGraw-Hill.
Taylor, C. (1964). The explanation of behaviour. London: Routledge and Kegan
Paul.
Turing, A. M. (1936–1937). On computable numbers, with an application to the Entscheidungsproblem. Proceedings of the London Mathematical Society,
42, 230–265.
Ulich, E., Rauterberg, M., Moll, T., Greutmann, T., & Strohm, O. (1991). Task
orientation and user-oriented dialog design. International Journal of Human-
Computer Interaction, 3, 117–144.
Ulrich, K. T., & Eppinger, S. D. (2011). Product design and development.
New York: McGraw-Hill.
Vicente, K. J. (1999). Cognitive work analysis: Toward safe, productive, and
healthy computer-based work. Mahwah, NJ: Erlbaum.
von Wright, G. H. (1971). Explanation and understanding. London: Routledge
and Kegan Paul.
w3Schools. (1999–2016). Retrieved February 12, 2011, from http://www.
w3schools.com/css/
Weiser, M. (1993). Some computer science issues in ubiquitous computing.
Communications of the ACM, 36, 75–84.
Wittgenstein, L. (1953). Philosophical investigations. Oxford: Basil Blackwell.
Woods, D. D., & Roth, E. M. (1988). Cognitive engineering: Human problem
solving with tools. Human Factors: The Journal of the Human Factors and
Ergonomics Society, 30, 415–430.
Yuan, X., Cohen, M. B., & Memon, A. M. (2011). GUI interaction testing:
Incorporating event context. IEEE Transactions on Software Engineering, 37,
559–574.
4
The Psychology of Fluent Use
et al. 1999; Laursen et al. 2008). In principle, people should be able to use
all the facilities and functionalities that technologies offer, but in practice
this is not always the case. This is one of the main reasons that products
fail to find markets (Norman 1999; Shneiderman 2011). Designing rel-
evant and attractive functionalities for technologies is important, but it is
even more necessary to ensure that people can really use them.
User interface solutions are meant to provide people with a realis-
tic possibility of reaching their goals (Card et al. 1983; Nielsen 1993;
Rosson and Carroll 2002; Shneiderman and Plaisant 2005; Wickens and Hollands 2000), regardless of users' capacity limitations (Baddeley 2007, 2012; Broadbent 1958; Cowan 2000; Miller 1956), comprehension problems (Kitajima and Polson 1995), or lack of skills (Green and Petre 1996; Navarro-Prieto and Cañas 2001; Visser and Hoc 1990). Therefore, we now focus the design discourse on how to make technologies easier for people to use (Card et al. 1983; Olson and Olson 2003;
Rosson and Carroll 2002).
The ability to use technology depends on mental processes (Karwowski
2006). The issues of ‘to be able to’ and ‘ease of use’ are typical problems
of usability, accessibility, ergonomics, and human factors (Bridger 2009;
Karwowski 2006; Kivimäki and Lindström 2006; Rosson and Carroll
2002; Nielsen 1993). In addition to these research domains, which are
based mainly on natural scientific laws, understanding the characteristics
of the human mind is essential when designing non-functional human
requirements for technical artefacts.
The next question is thus how to apply the knowledge of psychol-
ogy and other related human sciences in designing HTI solutions. This
knowledge is often analytical, in the sense that it is composed of detailed
pieces of information from different domains of human sciences. This
information, along with many different sub-problems, forms a holistic
approach to the human dimension of technology.
human mind (Baddeley 2007, 2012; Chase and Ericsson 1981; Cowan 1999, 2000; Ericsson 2006; Gopher and Donchin 1986).
Some alternatives to the Ericsson and Kintsch (1995) model have been suggested in the literature. Baddeley (2007) has presented one such view; his model is called episodic long-term memory. Another alternative has been suggested by Gobet (2000). Nevertheless, all of these models imply that large active mental representations must be partly deposited in storage more permanent than interference-prone working memory. This storage component is best known in the terms of Ericsson and Kintsch (1995), who describe it as long-term working memory (LTWM). From a design
point of view, the important thing is that training makes it possible for
people to activate sizable mental representations and store them in mem-
ory for longer periods of time and in larger amounts than predicted from
the traditional working memory models. Thus, LTWM explains why
some actively used materials are beyond standard interference. One can
use this storage to analyse and explain phenomena such as interruptions,
remembering computer program code, and learning to use computers
(Ericsson and Kintsch 1995; Oulasvirta and Saariluoma 2004, 2006).
Using technical artefacts does not only involve perceiving and respond-
ing to the operation of the artefact. Users are also required to learn which
responses are relevant in which situations. This can be called the prob-
lem of perceptual-motor learning (or in a wider sense, the problem of
learning to use technologies). The use of most technical artefacts requires
some learning, and sometimes a learning process can be very complicated
and demanding, for example, programmers must study for years to reach
the required skill level (Anderson et al. 1984; Mayer 1997; Robins et al.
2003).
Acquiring expertise in using advanced technologies often takes a long
time, and even experts must retain what they have learned (Mayer 1997). This brings about the next interaction task, which can
be called long-term remembering; the psychological ability that is respon-
sible for it can be called long-term memory. It is a vital concept in learn-
ing. Everything that has been learned is kept in the long-term memory
(Baddeley 2007, 2012; Brady et al. 2009).
Additional tasks must be studied before one can draw a road map of
the psychological sub-tasks associated with using a technical artefact.
Users of technology are also different. They may have different societal, occupational, educational, and generational backgrounds,
as well as different experiences as technology users (Bouma et al. 2009;
Charness 2009; Czaja and Nair 2006; Leikas 2009). They may be experi-
enced or novices (Mayer 1997). In each case, it is essential to understand
what kinds of characteristics are shared by users of a specific technical
artefact. This knowledge is required when designing technologies for spe-
cific groups of people. The psychology of individual differences and per-
sonality provides information and methodologies for solving these kinds
of questions (Cronbach 1984; Kline 2013).
Finally, people belong to—and co-operate in—groups of different
sizes. These groups may be pairs, families, small work teams or organi-
zations, and even nations and cultures. Social media (services allowing
people to exchange text, speech, pictures, videos, and information over
the Internet) currently plays a decisive role in creating different groups.
Irrespective of whether the group is organized around culture, work, or
social pleasure, there are always common norms that are accepted and
followed (Brown 2000).
Social interaction with the help of technology can involve either
one-dimensional communication (interaction between the human
and the machine) or two-dimensional human–human interaction (the
communication is transferred from one person to another in turn).
Communication can be seen as a multidimensional interaction when
several people interact with each other at the same time—for example,
social media applications that enable a group of persons to share each
other’s contexts, locations, and contextual content. Many solutions for
multi-user questions and group behaviour can be found in social and
organizational as well as cross-cultural psychology (Brown 2000).
A seemingly simple situation can thus include several major psychological aspects.
The above-presented design and research tasks all require solid knowl-
edge of the human mind and its operations in order to improve the user’s
capacity to effectively use technical systems. Good design solutions,
which are in harmony with the principles of the human mind, form pre-
conditions for the effective use of technologies. Obtaining a more concrete view of the nature of these design challenges calls for systematic consideration of different aspects of psychological knowledge
bases. This chapter focuses on cognitive issues.
To Perceive and to Attend
When using a technical artefact, it is necessary to receive information
concerning different states of the technology and the context in which it
is used. If the user is unable to acquire all the relevant information con-
cerning its usage, an artefact will be impossible to control. For example,
when loading a container onto a ship, a crane driver working on a rainy
night may not see a workmate in a dark jacket and hit him with the crane.
This kind of work accident can be a consequence of poor visibility and
difficulty in discriminating relevant information.
Discrimination has two important stages: (1) discriminating between objects in general and (2) discriminating the relevant targets from among all the available objects. In the crane example, the challenge for the crane
driver is to first notice everything that happens in the loading area, and
second to discern the walking person from among other possible objects.
In psychological terms, relevant information about the state of an arte-
fact and the task to be completed is acquired via perceptual information
processes (Goldstein 2010; Rock 1983). The information is most often
visual but can be auditory, tactile, somatic sensory, or even olfactory
(Goldstein 2010; Proctor and Proctor 2006). The main requirement is
that the user is able to find the right information at the right time and in
the right location, and to use it to control the technical artefact or system.
However, the user may have to perform several sub-tasks before being able
to accurately discriminate the crucial information. For example, he or she
has to be able to detect the target and its colour, discriminate between the
target and its background, locate the target in three-dimensional space,
and register its movements.
The first psychological precondition for acquiring information is
thus how to find relevant information in I/O components and action
contexts. The psychology of sensory and visual discrimination provides
much of the key information about human performance related to these
types of issues (Bruce and Tsotsos 2009; Duncan and Humphreys 1989;
Neisser 1967, 1976). For example, if the sound of the given task is too
similar to the background noise, the threshold of discrimination becomes
too high and may lead to dangerous situations (Goldstein 2010; Proctor
and Proctor 2006).
The first level of user psychological issues is information encoding by
a human’s sensory systems. People need to see where I/O components
and contextual items are, how they are located, and how they differ from
other equivalent components. These processes are essential in monitor-
ing and controlling how technology works. One cannot interact with a
technical artefact without a sensory connection to the technology and its
relevant environment. However, this very basic task entails a large num-
ber of important scientific questions.
Before users can operate any I/O component in an interface, they have
to be able to obtain information about the component, such as an icon
on a computer display or a button in a car interface. However, perception
in the scientific sense is much more complicated than simply experienc-
ing the target I/O element. There is a wide gap between intuitive experi-
ence and scientific understanding.
In order to perceive an object, one must be able to discriminate it from other objects in general. First, the physical target must be intense enough to be detected at all; this limit is called the absolute discrimination threshold (Goldstein 2010). Second, the target has to be discriminated from the background noise: the relative discrimination threshold (Goldstein 2010; Hecht 1924). These two aspects are present in every interaction event.
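The two threshold stages can be expressed as a small computational sketch, using Weber's classical law for the relative stage. The numerical values (thresholds, the Weber fraction, the decibel figures) are illustrative assumptions only:

```python
def detectable(stimulus: float, absolute_threshold: float,
               background: float, weber_fraction: float) -> bool:
    """Sketch of the two discrimination stages: the stimulus must exceed
    the absolute threshold, and its difference from the background must
    exceed a relative, Weber-fraction-based just-noticeable difference."""
    if stimulus < absolute_threshold:
        return False                       # stage 1: not detectable at all
    just_noticeable = weber_fraction * background
    return abs(stimulus - background) >= just_noticeable  # stage 2

# A warning tone of 62 units against a background of 60, with an assumed
# Weber fraction of 0.1: the difference of 2 is below the just-noticeable
# difference of 6, so the tone is missed.
assert detectable(62.0, absolute_threshold=20.0,
                  background=60.0, weber_fraction=0.1) is False
assert detectable(70.0, absolute_threshold=20.0,
                  background=60.0, weber_fraction=0.1) is True
```

The sketch makes the design consequence concrete: a signal can be well above the absolute threshold and still be lost against its background.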
Users must also be able to perceive distance and depth; this can be called the problem of three dimensionality (Bruce and Green 1992) and it has two main variants. First, it is essential that users
can see the location of objects in their physical surroundings in real space.
Second, if using a computer screen, the user must be able to perceive a
three-dimensional presentation of the object on the screen.
There are also contextual factors to be considered. The perceiving per-
son can move quickly or be placed above the surface of the earth. Pilots,
for example, must be able to correctly locate the take-off and landing
zone as well as the height of the plane (Gibson 1950, 1979). Similarly,
crane operators must be able to move containers into the correct places
in the narrow cargo hold of a ship. This is possible only if they can see
the objects and their movements in space. The question concerns the
ability to place objects in a spatial arrangement as well as the movement
of the perceiving person, because the perceiver’s movements continuously
change the retinal picture and affect the way a person perceives the envi-
ronment. Three dimensionality and object perception thus form a com-
plex set of problems for the psychology of perception (Bruce and Green
1992; Rock 1983).
The three dimensionality of a perceptual space is partly constructed
on the grounds of asymmetry between retinal pictures, and partly due to
distance cues (i.e., the properties of the environment, which the human
mind can use to infer the distances between different objects). A typical example is the texture gradient: a receding surface with similar structures projects smaller and smaller retinal images (Fig. 4.2).
In this figure, the target circles are of equal size, but the upper circle
seems to be larger. The gradient of the railway track suggests that the
upper circle is further away, and as the retinal size of the targets is the
same, it suggests that the upper circle must be larger (Bruce and Green
1992; Goldstein 2010).
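The geometry behind the illusion can be stated in two short formulas: the retinal (visual) angle subtended by an object, and the size the mind infers back from that angle given a distance cue. The specific sizes and distances below are illustrative:

```python
import math

def visual_angle(size: float, distance: float) -> float:
    """Visual (retinal) angle in degrees subtended by an object."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

def inferred_size(angle_deg: float, inferred_distance: float) -> float:
    """Size the mind infers from a retinal angle plus a distance cue."""
    return 2 * inferred_distance * math.tan(math.radians(angle_deg / 2))

# Two circles subtend the same retinal angle...
angle = visual_angle(size=1.0, distance=10.0)
# ...but the gradient suggests the upper one is twice as far away,
# so the inferred size doubles.
near = inferred_size(angle, inferred_distance=10.0)
far = inferred_size(angle, inferred_distance=20.0)
assert abs(near - 1.0) < 1e-9
assert abs(far - 2.0) < 1e-9
```

This is exactly the logic of the figure: equal retinal size plus a cue for greater distance yields a larger perceived object.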
This type of visual illusion can be used to construct three-dimensional
spaces on two-dimensional surfaces, such as displays or paintings. Three
dimensionality is also often an important problem in constructing games
or other three-dimensional virtual realities. The psychology of depth and
movement perception is vital in solving these design issues (Bruce and
Green 1992).
The perception of movement is also based on perceptual cues. The edges of a moving object, for example, successively cover and uncover the background, which gives rise to the impression of movement (Ullman 1996). As mentioned above, an
additional element in the perception of movement comes from the move-
ments of the observer (Gibson 1950, 1979). Constructed movements are
often important in improving displays. For example, when constructing
virtual reality, it is often important to create body movements to avoid
contradictory sensory information between the eyes and body. Without
logical coordination between different sensory modalities, it is impossible
to create, for example, a genuine flight simulator.
Perceptual processes in interaction include colours as well as three-
dimensional moving objects (Goldstein 2010; Mollon 1982). Colours
can be used for several purposes in organizing HTI processes, for exam-
ple, to discriminate items, camouflage certain objects, or improve emo-
tional usability. In HTI, colours are important in discriminating critical
information. Warning signals and alarm buttons are typically discrimi-
nated by colour, and it is also important in many semiotic presentations
(Tufte 1990, 1997), for example, red is the symbol for the highest tem-
perature and blue for the lowest. Colours are also important in creating
emotional dimensions for texts and other products (Desmet et al. 2001;
Liu et al. 2003).
Thus, colour perception is an additional issue in encoding I/O com-
ponents. While a normal human can distinguish between over a million
different colours (Boynton 1988; Mollon 1982), not all people can discriminate
colour information. Colour blindness can be genetic or acquired,
4 The Psychology of Fluent Use 91
and is much more common in men than in women. There are also cultural
differences in naming perceptual categories; for example, Arctic peoples have
many more words for types of snow than southern nations do.
It is possible to find similar types of problems in other modalities, as
discussed above about vision. Sensory information is not only visual, it
may also be auditory, tactile, or vestibular (Bregman 1994; Goldstein
2010). Senses connect people with their immediate environment and provide
information about the position of their body in space and about its
internal states.
The problem of discriminating relevant information cannot be solved
using concepts of perceptual processing alone. The question does not
concern only the experience of perceptual space; it also requires picking
out the relevant piece of information from everything else that is
available. To use technology effectively, it is essential to discriminate
important, action-relevant information from the less important
background. This process has
its own principles that are studied in the psychology of human attention
(Broadbent 1958; Cowan 1999, 2000; Egeth and Yantis 1997;
Kahneman 1973; Pashler 1998; Styles 1997).
In emergency situations, such as the Three Mile Island nuclear accident
or industrial process breakdowns, it is often vital to extract the critical
information quickly so that users can respond swiftly to the demands of
the situation. This may be an alarm light, a vibration, or a sound. The
psychological laws of human attention help solve the design issues of
discriminating between relevant and irrelevant information (Broadbent
1958, 1975; Kahneman 1973; Chun et al. 2011; Pashler et al. 2001;
Styles 1997). This can also be called the difference between figure and
ground.
The classic duck–rabbit figure shows how perception can offer two
interpretations of the same retinal image: on the one hand a duck is
visible, and on the other a rabbit. When the duck is seen, the rabbit
disappears, and vice versa. These kinds of
ambiguous pictures illustrate how perception and attention are selective
and distinguish between the foreground and the background (Luckiesh
1965; Styles 1997).
Perception is selective in the sense that it is often necessary to find a
crucial piece of information in the environment. For example, to go out
92 Designing for Life
of a room, one must find the door and its handle in order to grasp it
and push the door open. All these actions presuppose selective percep-
tion, and the ability to differentiate the background from the foreground.
Often, background information can be distracting, as when using a
mobile phone while driving (Nasar et al. 2008).
The automotive industry is developing head-up displays (HUDs), which
project safety information onto the driver's view of the road. The driver
could see, for example, an indicator of a danger in his line of vision,
superimposed on the road. The problem with these displays is that the driver can pay
attention either to this indicator or to the road; this continuous adjust-
ment of focus may cause an accident if the display is not well designed.
Several experimental settings have been used to investigate human
attention. Visual search and cocktail party effects are perhaps the most
significant ones (Broadbent 1958; Egeth and Yantis 1997; Kahneman
1973; Neisser 1963, 1967; Pashler 1998; Treisman and Gelade 1980). In
visual search tasks, target objects such as letters or numbers are searched
for amongst other objects. The target (say, an 'x') forms the figure,
whilst the rest of the letters or numbers serve as the background. In this
task, people are asked to report as quickly as possible whether they can
perceive an 'x' on
the display. When a large number of displays are presented, it is possible
to measure the average time that it takes for subjects in different condi-
tions and contexts to find the target (Sanders and Donk 1996).
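The logic of such measurements can be illustrated with a toy simulation. The model below is a deliberately simplified sketch whose timing parameters are invented for illustration: a serial, self-terminating search inspects items one by one, so mean reaction time grows with display size, whereas an effortlessly detected ("pop-out") target is found in roughly constant time.

```python
import random

def simulated_rt(set_size, pop_out, base=0.4, per_item=0.05):
    """Toy reaction time (seconds) for one visual search trial.

    pop_out=True: the target is found in roughly constant time,
    regardless of set size. pop_out=False: attention scans items one
    by one, so expected time grows with the number of items.
    All parameters are illustrative, not fitted to real data.
    """
    if pop_out:
        return base + random.gauss(0, 0.02)
    inspected = random.randint(1, set_size)  # self-terminating scan
    return base + per_item * inspected + random.gauss(0, 0.02)

def mean_rt(set_size, pop_out, trials=10_000):
    """Average simulated reaction time over many trials."""
    return sum(simulated_rt(set_size, pop_out) for _ in range(trials)) / trials

# Serial search slows down as the display grows; pop-out search does not.
serial_slope = mean_rt(20, pop_out=False) - mean_rt(5, pop_out=False)  # clearly positive
popout_slope = mean_rt(20, pop_out=True) - mean_rt(5, pop_out=True)    # near zero
```

Averaging over many simulated trials mirrors how real experiments estimate mean search times per condition; only the qualitative pattern (a set-size effect for serial search, none for pop-out) is meant to carry over.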
Discriminating targets from their background is important in human–
technology design, because targets often entail vital information; thus, a
failure to discriminate them may lead to misinterpreting situations. In
radar, for example, it is absolutely vital to discriminate between an enemy
aircraft and background noise. For this reason, many radars have software
to make discrimination easier.
It is also well known in the theory of human attention that colours
make discrimination easier when there are not too many of them
(Treisman and Gelade 1980). The ability to discriminate a target effortlessly
and quickly is called the 'pop-out' phenomenon: discriminative cues that
are easy to pick up help the target information stand out from the
background. Similarly, forest workers and hunters must be easy to
discriminate from the terrain so that other hunters do not accidentally
shoot them. The pop-out phenomenon works well in this case too:
The visibility of the surface material of forest workers' clothing was compared
in situations with different lighting. These studies discovered that
phosphorized yellow was the easiest to perceive for peripheral vision.
Although white formed the strongest contrast, yellow was the easiest to
discern against the background formed by the green forest (Isler et al.
1997).
Emergency processes, for instance, may have not been tested for years, and then do
not work properly when they are suddenly needed.
If the level of attention lapses due to lower levels of vigilance, the per-
formance level decreases. When this happens, the probability of failure
rises and the number of errors increases. Although these effects are to some
extent individual, problems related to vigilance can be taken into account
in technology design (Koelega 1996). For example, tasks that require con-
tinuous attention should not last long, and should be divided into periods
with breaks. Vigilance begins to weaken after half an hour of continuous
attention and a continuous work performance of two hours can be too
demanding. People should also be trained efficiently in order to minimize
the number of processes one has to keep in mind at the same time.
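The design rule above can be made concrete as a simple scheduling helper. This is a sketch of the heuristic only; the block and break lengths are assumptions for illustration, not prescriptions from the vigilance literature.

```python
def schedule_blocks(total_minutes, max_block=30, break_minutes=5):
    """Split a continuous monitoring task into vigilance-friendly blocks.

    Follows the rule of thumb that vigilance degrades after about
    30 minutes of continuous attention: no work block exceeds
    max_block minutes, and blocks are separated by short breaks.
    Returns a list of ("work", minutes) / ("break", minutes) segments.
    """
    segments = []
    remaining = total_minutes
    while remaining > 0:
        block = min(max_block, remaining)
        segments.append(("work", block))
        remaining -= block
        if remaining > 0:
            segments.append(("break", break_minutes))
    return segments

# A two-hour monitoring shift becomes four 30-minute work blocks
# separated by three short breaks.
plan = schedule_blocks(120)
```

A two-hour continuous task, which the text notes can be too demanding, is thereby divided into periods that each stay within the half-hour vigilance limit.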
Fatigue of night shift workers is a typical vigilance-related problem.
Bonnefond et al. (2001) conducted a one-year study of workers who
were allowed to sleep for an hour during a night shift and found that this
arrangement improved vigilance. This research suggests that this type of
arrangement is useful where the work task requires a person’s full atten-
tion, and if it is possible to arrange a sleeping break at the workplace.
The study itself is a good example of usability design analysing vigilance.
Another important concept of perceptual psychology is called affor-
dance (Gibson 1979), which refers to how a visual object is perceived
from the point of view of action. For example, a glass is perceived dif-
ferently when taking it from a cupboard, filling it, and drinking from
it as opposed to throwing it at a referee in a football match (Gibson
1979; Gielo-Perczak and Karwowski 2003; Rasmussen 1983). Gibson
(1979) and colleagues note that affordance also entails information about
the state of the object that is related to what people do. Because of this
close relationship between affordance and action, it is possible to think of
affordances as attention phenomena.
If there are several alternative courses for actions, affordance allows
individuals to attend to objects from the correct point of view (Gibson
1979). Thus, their eyes, hands, limbs, and body anticipate the next
moment of the action depending on its affordance. When a person takes
up a pen to sign a document, he or she does this in a different way than if
they throw it away, and when they drink from a teacup it is not the same
action as washing the cup.
To Respond and to Act
Senses have a significant role in collecting information about different
states of technology. Some kinds of tools (e.g., hands or feet) are needed
in order to respond to this information and to control the technical arte-
fact. Human psychomotor action plays a central role in using technology.
Action itself is a hierarchical process with several different levels (moving
a finger, moving an arm), but the control level consists of the thoughts
about what a person is doing and why. Each of these levels makes it pos-
sible to understand some aspects of the action in question.
Within a specific context, the ultimate action goal serves as a person’s
highest level of action and as a trigger in using technologies. A ship is
built to transport cargo and to ferry passengers from one port to another.
Having this particular everyday action (moving a ship to a new port) as
the ultimate action goal explains why people use ships in general. In addi-
tion to this overall task, each individual act needed in steering the ship
has its own goal and intentional content (Jeannerod 2007). This thinking
is supported by Searle (1993), who distinguishes between prior inten-
tions and intentions in action (Jeannerod 2007) and effectively illustrates
the distinction between action in the life of users and action in using a
technical artefact or system. This chapter mainly focuses on the action in
using a technology.
Human psychomotor movements are present in every human action
and have a necessary role in interacting with technologies (Bridger 2009).
The most basic motor responses are instinctive reflexes, such as quickly
moving away from a sudden threat. Very close to this type of reflex are
many learnt automatic
responses that operate unconsciously. Typing or driving a car, for exam-
ple, are built on unconscious low-level motor patterns of fingers and
hands (Card et al. 1983; Rumelhart and Norman 1982; Salthouse 1986).
Instinctive and automatic motor functions are important, as they are
used in situations that require rapid reactions. Such functions are vital
since they help avoid danger. Even today, one of their main purposes
is to help humans react to immediate changes in their environment. In
earlier days, these kinds of self-protection mechanisms were important in
fighting off external attacks, but today they are helpful, for example, in
avoiding traffic accidents.
The second type of motor function is formed by a large set of simple
intentional motor functions. When a hand reaches out towards a control
device, the individual is intentionally leading it towards a goal. Whereas
automatic motor functions are subconscious and unintentional, inten-
tional motor functions—although based on evolutionarily developed
patterns—essentially rely on higher cognitive processes. Various standard
functions such as walking, reaching, or grasping are examples of these
action patterns in humans, and therefore they also hold a central position
in usability design. People employ these functions when they use tech-
nologies such as a mouse or keyboard (Lewis et al. 1997).
Input modes for user interfaces are becoming increasingly advanced
with the development of ubiquitous, ambient, and pervasive technol-
ogy. For example, many modern artefacts from TVs to museum guides
are already based on gesture recognition. In addition to recognizing the
intentional motor functions of users, a great deal of effort is being put
into developing smart environments, which respond automatically to the
behaviour of the user, that is, to the user's unintentional motor functions.
Finally, complex schematic motor functions also need to be consid-
ered (Schmidt 1975; Schmidt and Lee 2011). In addition to simple and
basically automatic procedures, human beings also execute complex and
learnt series of motor functions. Such series can be recalled from memory
and applied in an adjustable manner depending on the current situation.
In technology usage, to control complex machines generally requires
motor actions of the highest level, and it usually takes a long time to
master them.
To Learn and to Remember
The technologies that human beings have created are far beyond the
capacity of any other animal. People's ability to create and construct is
mainly a result of the human capacity to keep large associative structures
in the memory and, consequently, to learn large systems of knowledge
such as languages. Memory and learning are central factors in understand-
ing HTI; without them, people would be unable to create technologies
to serve them. These two processes are always present when people use
technologies, from learning user guides to highly educated professional
use of technologies. Computer programmers, for example, must keep in
mind aspects of programs such as reserved words, function names, typical
algorithms, the way they are used to construct well-formed expressions,
and the structure of the program they are working with at that moment.
They have to remember large amounts of symbolic information to be
able to carry out programming tasks. In order to understand this kind of
complex activity, it is essential to understand how people have learnt to
program—especially how their memory (which makes the learning pos-
sible) operates.
The importance of memory and learning has been long understood in
the field of interaction. Memorability and learnability are acknowledged
usability design criteria, and one goal of usability testing has been to
illustrate deficits in these areas (Nielsen 1993). One way to do this is to
measure performance in remembering and learning compared to other
interaction processes. A completely different and more challenging task is to
find out what kind of changes should be made to really improve users’
abilities to remember relevant and required information. The improve-
ments presuppose understanding the underlying memory processes and
how they can be influenced. For this reason, usability design and research
should be intimately connected with the user psychology of the human
memory (Saariluoma and Oulasvirta 2010).
Memory is as basic a cognitive process as perception and attention.
It is needed in motor functions, and it forms the basis of human skills.
The basic memory processes, as seen in memory research, form com-
plex networks of questions that relate to how people encode, store,
and retrieve information in their minds and what kinds of properties
the many important memory systems have (Baddeley 1990, 2007). The
importance of memory has been understood within the usability com-
munity from the very beginning (Nielsen 1993). The present usability
discourse, however, addresses the concepts of memory only on a basic
level, and detailed analysis of the connections between HTI and memory
processes is still at an early stage. There is much work to be done
to illustrate what aspects of the human memory are important, and in
what ways, when developing HTI solutions.
An example of such research is the connection between pictorial
recognition memory and GUIs. The superiority of GUIs compared to
symbolic interfaces has had an enormous impact on interaction studies.
Touchscreens are now found not only in smartphones; many other
instruments have been redesigned around them. Dentists, for example,
today use touchscreens to control their equipment. Depending on the
properties of a particular sub-task, they may have a different value in
solving the particular problem.
Miller (1956) discovered one of the main properties of the working
memory when he examined people’s performance in various tasks that
required memory, such as multidimensional evaluations. Based on his
analysis of the test results, Miller discovered that human immediate
memory is very limited in capacity. Many others after him discovered the
same thing, and the number of empirical observations began to increase.
Based on these results, a division between working memory and long-
term memory was established (e.g., Atkinson and Shiffrin 1968; Miller
1956). It is now known that working memory is temporary in nature, and
the information it contains easily decays when new information enters.
The storage capacity of the working memory has been estimated using
various methods (Broadbent 1975; Cowan 1999, 2000; Crowder 1976;
Gregg 1986; Miller 1956). These estimates have concluded that the
scope of the working memory is around four to seven separate units.
This capacity is so small that it causes problems in all tasks that load the
working memory with unrelated and new symbolic information as well
as new systems of symbols; it also makes learning cognitive skills very
time consuming (Newell and Rosenbloom 1981). Thus, programming is a
good example of the necessity of processing complex symbols (Anderson
1993; McKeithen et al. 1981).
Event sequences in controlling technologies often require accurate
memory for relatively long chains of performance. Even such a sim-
ple task as adjusting the time on an electronic clock may require long
sequences of keystrokes. Learning them may be time consuming, because
the system does not necessarily provide intuitive and logical support for
comprehending what should be the next sub-task to reach the expected
goal state of the artefact.
In scientific experiments, the capacity of the short-term working
memory has proved to be rather small. This creates a dilemma: how
is it possible for a creature with so little capacity to manage large and
complex devices such as aeroplanes and rockets? The answer is in people's
ability to code individual bits into larger entities: chunks and memory
units (Fig. 4.4). Working memory limits the number of units, but not
their size. An individual letter, word, or sentence can each impose a nearly
Fig. 4.4 The first sentence is easier to remember although the sentences
comprise the same letters
equal load for the human memory, although they are unequal in the
sense that a sentence contains numerous letters (Baddeley 1990; Crowder
1976).
Human cognition can circumvent the limits of the working memory
(Miller 1956). Programmers, for example, can keep in mind long pieces
of code by organizing it into sense-making wholes. They can perceive in
the program text familiar ‘chunks’ such as loops, structures, search algo-
rithms, and sorting methods. They also have an idea of the structure of
the whole piece of code they are working with, and can thus effectively
utilize chunking mechanisms. Structure and object-oriented program-
ming paradigms particularly help support information encoding during
programming.
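Chunking of this kind can be mimicked with a toy model. The sketch below is a deliberate simplification, and its lexicon of familiar units is invented: it counts how many chunks a letter string costs when runs of letters that form familiar words collapse into single units.

```python
def chunk_count(letters, known):
    """Greedy toy estimate of working-memory load: any substring that
    is a familiar unit collapses into one chunk, while an unfamiliar
    letter costs one chunk by itself."""
    chunks, i = 0, 0
    while i < len(letters):
        # Try the longest familiar unit starting at position i.
        for size in range(len(letters) - i, 0, -1):
            if letters[i:i + size] in known:
                i += size
                break
        else:
            i += 1  # no familiar unit starts here
        chunks += 1
    return chunks

known_words = {"cat", "dog", "house"}

# The same eleven letters: three chunks when they form familiar words,
# eleven chunks when scrambled into an unfamiliar string.
easy = chunk_count("catdoghouse", known_words)   # 3
hard = chunk_count("tacgodhuoes", known_words)   # 11
```

The point mirrors Fig. 4.4: the load depends on the number of units the material can be organized into, not on the number of letters, which is why familiar structures such as loops and sorting idioms lighten a programmer's memory load.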
The limited capacity of the human working memory is an essential
component in analysing and explaining cognitive workload (Wickens
and Hollands 2000). Researchers have used a series of converging
tests to show that humans store extensive information networks in their
memory. According to the type of stored information, these chunks can
be called motor, visual, or other schemas, scripts, or semantic networks
(Markman 1999). Different elements of these memory structures become
associated with each other and form vast networks, which can operate as
chunks.
Since chunking may be based on semantic associations between items,
it is easier to remember words that can be naturally categorized (Bower
1970; Bower et al. 1969), for example, into categories such as WEAPONS
(rifle, sword, dagger, pike) and FRUITS (apple, pear, orange, etc.) than
those that are totally unrelated to each other. Moreover, items’ visual and
spatial properties may also enable chunking. Thus, unrelated words that
can be chunked into bizarre images may aid remembering. For example,
words like hat, whale, cigarette, spectacles, and overcoat can be more
easily recalled when they are combined into a single bizarre image.
In learning to use new systems, it is typical
that users do not intensively try to learn the whole system, but instead
only learn the most urgent work-related information. After learning this
information, they start an exploratory process of trial and error. This type
of learning has been called ‘exploratory’ and has become one of the most
investigated in cognitive ergonomics (Bridger 2009).
A feature that differentiates exploratory learning from other types of
learning is that in exploratory learning, the trainee performs a search pro-
cess that does not occur in other types of learning. When students learn
physics, they are provided with examples that explain the concepts to be
learned. When they learn through exploration, they look at the possible
interface objects (menus, icons, etc.) that may be related to the task they
want to perform. The search is performed with a task in mind in most
cases, although it is common to find users who explore the interface with
no particular goal in order to find out what they should do. In any case,
the most important aspect to consider if a designer wants to facilitate the
search (and therefore encourage exploratory learning) is to design the
interface so that objects on the screen can be easily related to the task he
or she wants to learn to do.
The most common learning strategy is a search strategy called ‘track-
ing tags’, which involves the user searching the user interface and trying
to find a word, phrase, icon, or any other element that is connected with
the task he wants to perform. For example, if a user of a word processor
wants to know how to type text in bold, he would search on the screen
for the word ‘bold’. If this is not found, he would try other words, such
as ‘font’, ‘format’, and so forth. Novice users typically try to search for
common words that have a meaning in different contexts but not neces-
sarily in word processor usage. Those who are more proficient in using a
word processor can guess that to select a bold font, one should look for
‘Format font’.
Users can also rely on general strategies to solve search problems,
such as opening a menu and fully exploring it before opening another menu
(Rieman et al. 1996). However, the aspect highlighted by research on
exploratory learning is that the search is performed based on the semantic
similarity between the task the user wants to perform and the label on the
user interface. Soto (1999) has shown that the time needed to learn to
perform a task on a new interface depends on the semantic relationship
between the task and the label. The semantic relationship was measured
empirically using Latent Semantic Analysis (LSA) (Landauer and
Dumais 1997).
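The role of semantic similarity in this kind of label search can be sketched in code. Real LSA compares word vectors learned from a large corpus; the stand-in below uses cosine similarity over raw bags of words purely to keep the example self-contained, and the menu labels are hypothetical.

```python
import math
from collections import Counter

def cosine(words_a, words_b):
    """Cosine similarity between two bags of words."""
    va, vb = Counter(words_a), Counter(words_b)
    dot = sum(va[w] * vb[w] for w in va)
    norm_a = math.sqrt(sum(c * c for c in va.values()))
    norm_b = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def best_label(task, labels):
    """Pick the menu label most similar to the user's task description,
    modelling the semantic search users perform over an interface."""
    task_words = task.lower().split()
    return max(labels, key=lambda lab: cosine(task_words, lab.lower().split()))

labels = ["Format font", "Insert table", "View zoom"]
choice = best_label("change the font format to bold", labels)  # "Format font"
```

A label that shares meaning with the task wins the comparison, which is the mechanism Soto's result points to: the closer the label is to the task semantically, the faster the interface is learned.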
Another important learning phenomenon in the context of HTI is
transfer (Helfenstein and Saariluoma 2006). The transfer of learning
occurs when learning a task facilitates the learning of another, similar
task. Two views have been proposed to explain the effect of transfer.
According to the first, originally proposed by Thorndike and Woodworth
(1901), transfer involves similar or identical elements in the two learning
situations. The second view is known as transfer through principles. It
was originally proposed by Judd (1902), who suggested that transfer occurs
not because there are common elements in the learning situations,
but because principles or rules learned in one situation may be applied to a
new situation (Singley and Anderson 1987). In this light, style guidelines
are important because they keep the interaction culture sufficiently simi-
lar and the design solutions somewhat consistent, which makes it much
easier for users to learn and use different applications.
To Comprehend
HTI is ultimately communication by nature. It is based on an activity in
which the user feeds signs to the artefact, to which the artefact responds
in an adequate way. In information technology, the meaning of signs is
obvious, as the artefacts are all symbol-processing machines. Communication
also takes place with traditional electromechanical machines, but their
systems of communicative signs are different, as the information exchange
is based on natural signs. In what follows, this central role of signs in
HTI will be examined through semiotic concepts. The way to organize
I/O actions is to develop reasonable communication models and lan-
guages for information exchange between people and artefacts. Systems
of signs—and the respective distinctions between the expected machine
operations—make it possible for people to use and control technologies,
but without understanding the general principles of semiotic systems and
semiosis, it is hard to design human–artefact communication.
To Decide and to Think
Psychologists consider and investigate thought processes that emerge in
situations that have typical structures. For example, situations in which
thinking produces two or more alternative courses of action are called
decision situations, and the respective mental processes decision processes
(Kahneman 2011; Tversky and Kahneman 1974, 1986). Likewise,
problem-solving situations are those in which people have a goal, but do
not know how to reach it by immediately available means (Newell and
Simon 1972).
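Newell and Simon's notion of a problem space can be sketched as graph search: states are nodes, available actions are edges, and solving the problem means finding a path from the current state to the goal. The device menu below is hypothetical, chosen only to ground the idea in a usage situation.

```python
from collections import deque

def solve(start, goal, moves):
    """Breadth-first search through a problem space. Returns the
    shortest sequence of states from start to goal, or None if the
    goal cannot be reached with the available moves."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in moves.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical device menu: which steps reach the clock-setting mode?
menu = {
    "home": ["settings", "messages"],
    "settings": ["display", "time"],
    "time": ["set-clock"],
}
path = solve("home", "set-clock", menu)  # ['home', 'settings', 'time', 'set-clock']
```

The search metaphor also explains why a poorly structured menu makes tasks like adjusting a clock so laborious: the user must discover this path without a map of the problem space.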
Decision making is straightforward in structure: one has to choose
between two or more alternative courses of action. This activity is
important in HTI, especially because machines cannot set goals. They are
processes, by nature, with no independent goals. Artefacts are unable to
know what is relevant. Designers and users are thus responsible for the
human decision-making element: to organize the action of the techni-
cal artefacts and systems so that their behaviour makes sense from the
human point of view (Kahneman 2011).
In order to select goals and choose between different courses of usage
action—that is to say, to make decisions—users have to collect and inte-
grate information from different sources into a holistic representation of
the situation. Decisions can be made alone or with other people, and they
can be unified or conflict driven (Lehto and Nah 2006). In any case, the
user has to make preferences among the possible alternatives.
On a formal level, decisions are supposed to be rational, which ideally
means that the preferences assigned to the alternatives are consistent
and optimal. In practice this is rare, as people often make poor decisions
(Kahneman 2011; Tversky and Kahneman 1974). For a number of
psychological reasons, they may overestimate the benefit they expect
to receive from the chosen alternative. The challenge for designers is to
ensure that people’s mental representations are correct and that they have
realistic expectations. This means that users’ situation awareness should
include adequate contents, which requires providing them with the nec-
essary information, in the right form.
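Formal rationality of this kind is usually modelled as expected-utility maximization: weight each outcome's utility by its probability and choose the alternative with the highest sum. The alternatives and numbers below are purely illustrative, not an analysis of any real incident.

```python
def expected_utility(outcomes):
    """Expected utility of one course of action: the sum of
    probability * utility over its possible outcomes."""
    return sum(p * u for p, u in outcomes)

def choose(alternatives):
    """Formally 'rational' choice: pick the alternative with the
    highest expected utility. Real users, as the text notes, often
    deviate from this ideal."""
    return max(alternatives, key=lambda name: expected_utility(alternatives[name]))

# Illustrative numbers: a certain small loss versus a gamble with a
# rare but catastrophic outcome.
alternatives = {
    "abort take-off": [(1.0, -10)],
    "continue": [(0.95, 100), (0.05, -10_000)],
}
best = choose(alternatives)  # "abort take-off": EU of -10 beats EU of -405
```

Note how the model depends entirely on the probabilities and utilities the decision maker represents: if the situation awareness feeding those numbers is wrong, the formally rational choice is wrong too.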
A tragic example of decision making is provided by an aeroplane acci-
dent in Madrid in 2008. The maintenance personnel of Spanair decided
to disconnect a computer system that was about to crash. This system
was supposed to be connected to a warning system that alerts the pilots
of problems. During take-off the warning system was switched off, and
the pilots were not aware that the flaps and slats were not well configured.
As a consequence, they made a wrong decision when piloting the aircraft
because they did not have all the relevant and necessary information.
Though decision making is an essential form of human thinking, when
technical artefacts are used even more complex forms of thinking are
needed. Problem solving is the most important of these (Newell and Simon 1972).
Complex problem solving is essential in designing technologies, but it
also has a role in using technologies. In many professions, technology use
includes complex tasks that require decision making, and carrying them
out incorrectly may lead to serious problems.
References
Anderson, J. R. (1976). Language, memory and thought. Hillsdale, NJ: Erlbaum.
Anderson, J. R. (1983). The architecture of cognition. Cambridge, MA: Harvard
University Press.
Anderson, J. R. (1993). Rules of the mind. Hillsdale, NJ: Erlbaum.
Anderson, J. R., Farrell, R., & Sauers, R. (1984). Learning to program Lisp.
Cognitive Science, 8, 87–129.
Atkinson, R., & Shiffrin, R. (1968). Human memory: A proposed system and its control processes. In
K. W. Spence & J. T. Spence (Eds.), The psychology of learning and motivation
(Vol. 2, pp. 89–195). New York: Academic Press.
Baccino, T., & Manunta, Y. (2005). Eye-fixation-related potentials: Insight into
parafoveal processing. Journal of Psychophysiology, 19, 204–215.
Lewis, J. R., Potosnak, K. M., & Magyar, R. L. (1997). Keys and keyboards. In
M. Helander, T. Landauer, & P. Prabhu (Eds.), Handbook of human-computer
interaction (pp. 1285–1315). Amsterdam: Elsevier.
Liu, H., Selker, T., & Lieberman, H. (2003). Visualizing the affective structure
of a text document. CHI’03 Extended Abstracts on Human Factors in
Computing Systems, 740–741.
Logie, R. H. (1995). Visuo-spatial working memory. Hove: Psychology Press.
Luckiesh, M. (1965). Visual illusions. New York: Dover.
Lyons, J. (1977). Semantics (Vols. 1–2). Cambridge: Cambridge University
Press.
Mackworth, N. H., & Morandi, A. J. (1967). The gaze selects informative
details within pictures. Perception and Psychophysics, 2, 547–552.
Markman, A. (1999). Knowledge representation. Mahwah, NJ: Lawrence
Erlbaum.
Mayer, R. (1997). From novice to expert. In M. Helander, T. Landauer, &
P. Prabhu (Eds.), Handbook of human-computer interaction (pp. 781–797).
Amsterdam: North-Holland.
McKeithen, K., Reitman, J., Rueter, H., & Hirtle, S. (1981). Knowledge orga-
nization and skill differences in computer programmers. Cognitive Psychology,
13, 307–325.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some lim-
its on our capacity for processing information. Psychological Review, 63,
81–97.
Minsky, M. L. (1967). Computation: Finite and infinite machines. Englewood
Cliffs, NJ: Prentice-Hall.
Mollon, J. D. (1982). Color vision. Annual Review of Psychology, 33, 41–85.
Moran, T. P. (1981). Guest editor’s introduction: An applied psychology of the
user. ACM Computing Surveys, 13, 1–11.
Murata, A., Uetake, A., Matsumoto, S., & Takasawa, Y. (2003). Evaluation of
shoulder muscular fatigue induced during VDT tasks. International Journal
of Human-Computer Interaction, 15, 407–417.
Nasar, J., Hecht, P., & Wener, R. (2008). Mobile telephones, distracted atten-
tion, and pedestrian safety. Accident Analysis and Prevention, 40, 69–75.
Navarro-Prieto, R., & Canas, J. (2001). Are visual programming languages bet-
ter? The role of imagery in program comprehension. International Journal of
Human-Computer Studies, 54, 799–829.
Neisser, U. (1963). Decision time without reaction time. American Journal of
Psychology, 76, 376–385.
Schneider, W., Dumais, S., & Shiffrin, R. (1984). Automatic and controlled
processing and attention. In R. Parasuraman & D. Davies (Eds.), Varieties of
attention. Orlando, FL: Academic Press.
Searle, J. (1993). Intentionality. Cambridge: Cambridge University Press.
Shepard, R. N. (1967). Recognition memory for words, sentences, and pictures.
Journal of Verbal Learning and Verbal Behavior, 6, 156–163.
Shneiderman, B. (2011). Tragic errors: Usability and electronic health records.
Interactions, 18, 60–63.
Shneiderman, B., & Plaisant, C. (2005). Designing user interfaces. Boston, MA:
Pearson.
Simon, H. A. (1955). A behavioural model of rational choice. In H. Simon
(Ed.), Models of thought (pp. 7–19). New Haven, CT: Yale University Press.
Singley, M. K., & Anderson, J. R. (1987). A keystroke analysis of learning and
transfer in text editing. Human-Computer Interaction, 3, 223–274.
Soto, R. (1999). Learning and performing by exploration: Label quality mea-
sured by latent semantic analysis. In Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems (pp. 418–425).
Standing, L. (1973). Learning 10,000 pictures. Quarterly Journal of Experimental
Psychology, 25, 207–222.
Standing, L., Conezio, J., & Haber, R. N. (1970). Perception and memory for
pictures: Single trial learning of 2560 stimuli. Psychonomic Science, 19, 73–74.
Stout, G. (1896). Analytic psychology. London: Routledge.
Styles, E. (1997). The psychology of attention. Hove: Psychology Press.
Thorndike, E., & Woodworth, R. (1901). The influence of improvement in one
mental function upon the efficiency of other functions. Psychological Review,
8, 247–261.
Treisman, A., & Gelade, G. (1980). A feature integration theory of attention.
Cognitive Psychology, 12, 97–136.
Tufte, E. R. (1990). Envisioning information. Cheshire: Graphics Press.
Tufte, E. R. (1997). Visual explanation. Cheshire: Graphics Press.
Turing, A. M. (1936–1937). On computable numbers, with an application to
the Entscheidungsproblem. Proceedings of the London Mathematical Society,
42, 230–265.
Tversky, A., & Kahneman, D. (1986). Rational choice and the framing of deci-
sions. Journal of Business, 59, 251–278.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics
and biases. Science, 185, 1124–1131.
Ullman, S. (1996). High level vision. Cambridge, MA: MIT Press.
Fig. 5.1 After nuclear accidents people lost their trust in this type of
technology
5 Emotions, Motives, Individuals, and Cultures in Interaction 139
The headlines in the media, and concrete actions such as limiting the
distribution of milk or the recommendation to avoid eating fish, made
the consequences of the accidents concrete in everyday life. This con-
creteness is essential in changing situational assessments and attitudes
(Kahneman 2011; Tversky and Kahneman 1974). In the case of
Fukushima, the concreteness of the effects was further intensified by
social media (Friedman 2011). People no longer found nuclear energy a
good solution and fought against it. This resistance, caused by changing
opinions, finally forced industry and many governments to take a more
critical position towards nuclear energy.
Questions of likes and wants are vital in understanding why people
accept and adopt new technologies. They are intimately related to emo-
tions, motives, and personality, which are the basic psychological con-
cepts that explain the issues of acceptance and adoption (Brave and Nass
2009; Hassenzahl 2011; Nagamachi 2011; Norman 2004). These aspects
of the human mind are considered under the concepts of dynamic psy-
chology to separate them from cognitive issues (Neisser 1967).
Historically, the concept of dynamic psychology or psychodynam-
ics arises from the clinical personality theories of Freud (1917/2000)
and Jung (1999). Since both of these luminaries were psychoanalysts, it
has occasionally been said that the term ‘psychodynamic’ refers to psy-
choanalysis. This usage of the word ‘dynamic’ in psychology neverthe-
less misses the fact that many other psychologists have also worked to
improve understanding of emotions, motives, and personality (Neisser
1967). The most recent paradigm of dynamic psychology is positive psy-
chology. It focuses on such issues as well-being, contentment, satisfaction,
happiness, and flow experience, which have proven to be impor-
tant in many contexts of user experience (Seligman and Csikszentmihalyi
2000).
This chapter uses dynamic psychology to include the psychology of
emotions, motives, personality, and intercultural issues. The concept of
dynamic refers here to the ‘forces’ that control and direct human actions,
but which also give them ‘mental energy’. These make people act in a
defined way and give them the power to meet and overcome different
obstacles.
Emotions and the Mind
Emotions form a central dimension of the human mind (Damasio 2005;
LeDoux 1998; Rolls 2000). A better understanding of emotional pro-
cesses in the analyses of HTI is essential for the development of the field.
The main aspects of human emotional behaviour can offer designers
essential information in their search for good design solutions.
Emotions form a central action control system in the human mind
and therefore have a focal role in explaining people’s behaviour (Ekman
1999; Frijda 1986, 1988, 2007; Oatley et al. 2006). Evolutionarily,
emotions constitute a relatively primitive system, and many characteris-
tics of human emotions can be found in other animals as well (Darwin
1872/1998; Panksepp 1998). Emotional processing is mainly centred in
the subcortical areas of the human brain, which form a primitive (but in
many ways the most fundamental) control system in the mind (MacLean
1990; Luria 1973; Rolls 2000).
Emotions are holistic by nature. In addition to psychological aspects
(Ekman 1999; Frijda 1986, 1988; Oatley et al. 2006; Power and Dalgleish
1997), they include important biological dimensions (Panksepp 1998;
Rolls 2000), as well as a number of important social features (Eisenberg
2000; Frijda 1986, 1988; Lazarus and Lazarus 1994; Niedenthal et al. 2006).
different moods (Power and Dalgleish 1997). Finally, there are emotional
states that belong to one’s personality, which represent attitudes or features
of personality rather than affective emotional responses to the surround-
ing environment (Gross 1998; Niedenthal et al. 2006). A good example
of a long-lasting emotional tendency is temperament, which is formed in
infancy and early childhood and changes only slowly afterwards (Clark
and Watson 1999).
When focusing on emotional states in the design, one should also
investigate the actual emotional content. Emotional states can differ in
their information content. Euphoria, for example, differs in its positive
nature from depression and sadness, which are negative in nature. Consequently,
technical artefacts may include different aspects of emotional content,
for example when considering the feelings aroused by the art design of a
product. In this kind of analysis, two new dimensions of human emotion
become relevant: emotional valence illustrates the negativity or positiv-
ity of the emotion, and emotional theme refers to its general contents
(Lazarus 1991; Niedenthal et al. 2006; Oatley et al. 2006; Saariluoma
and Jokinen 2014).
The concept of valence arises from a pair of opposite emotions—
one positive and one negative (Lazarus and Lazarus 1994; Niedenthal
et al. 2006; Schmitt and Mees 2000; Spinoza 1675/1955). For exam-
ple, joy is a positive emotion and sorrow its corresponding negative
counterpart. In fact, emotional valence is strongly connected to the
pleasantness or unpleasantness of the emotional state, and it deter-
mines the desirability of emotional contacts. The positivity of an
emotional contact is often seen as imperative in HTI (Jordan 2000;
Hassenzahl 2011).
People have many different types of positive and negative emotions,
and valence is not the only dimension along which the contents of emo-
tions can be analysed (Lazarus 1991; Power and Dalgleish 1997). The
concept of emotional theme—or core emotional theme—is also needed
to cover all the aspects of emotional states.
An example may clarify the difference between valence and theme. The
feeling of joy in using technologies usually embodies a sense of positive
emotions such as wellness, commitment, and positivity (Mayring 2000).
Trust is also a positive emotion that, in the context of HTI, refers to
human reliance on a given technology. Thus, these two emotions—joy
and trust—have the same positive valence but different themes. Hence,
the theme defines the accurate contents of emotional states in a more
sophisticated manner than valence (Oatley et al. 2006).
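For design analysis, the two dimensions can be captured in a simple data structure. The following Python sketch is purely illustrative (the class name, the numeric valence coding, and the theme labels are our assumptions, not an established taxonomy): it records emotional states as valence–theme pairs, so that states with the same valence but different themes, such as joy and trust, remain distinguishable.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EmotionalState:
    """An emotional state described by its valence and its theme.

    valence: +1 for a positive state, -1 for a negative one (illustrative coding).
    theme:   the core content of the state, e.g. 'joy' or 'trust'.
    """
    valence: int
    theme: str

# Joy and trust share a positive valence but differ in theme.
joy = EmotionalState(valence=1, theme="joy")
trust = EmotionalState(valence=1, theme="trust")

assert joy.valence == trust.valence   # same valence...
assert joy.theme != trust.theme       # ...but different themes
```

A representation of this shape would let a user-experience study aggregate reports by valence (overall pleasantness) while still analysing themes separately, mirroring the conceptual distinction drawn above.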
Because emotions determine the subjective meaning of a situation to
people, they are closely connected to the tendencies of human action
(Frijda 1986, 1988; Ortony et al. 1990). They indicate the personal
meaning of the target and the possible action that people might carry out
(Frijda 1986, 1988; Lazarus and Lazarus 1994). The feeling of fear, for
example, makes people flee, whereas curiosity often results in approach-
ing a target. Because these emotional characteristics are built into HTI
processes, emotions are a critical element of situational representations.
Emotional responses to things or incidents are not static by nature,
and thus do not stay the same forever. Instead, human emotional reac-
tions continue to develop during a person’s life span (Oatley et al. 2006;
Lazarus 1991; Power and Dalgleish 1997). For instance, people who
reacted hastily or aggressively in certain incidents in their youth may
behave moderately and calmly in the same situations in maturity, as their
emotional make-up has altered. This process of emotional development
is called emotional learning.
Emotional learning processes change the contents of emotions stored
in the memory—so-called emotional schemas—which people use when
they select information during perception and build memory representa-
tions (Beck 1976; Bower 1981; Williams et al. 1997). Emotional learn-
ing is common in HTI. A user who once regarded mobile services as
redundant may later become an advocate of this kind of technology after
having learned to use and understand its practical value. In this case, a
change in an emotional meaning is explained by a change in the contents
of emotional schemas that are used by apperception to construct emo-
tional states.
1. Appraisal can also be seen as an apperceptive process associating emotional values with cognitions.
form a task (Bandura 1977, 1986, 1997). Mistaken beliefs of one’s own
incapacity may even lead to a self-fulfilling misconception.
Repeated failures create a negative atmosphere and lower self-efficacy,
whereas success in using technology improves self-efficacy and generates
positive feelings and pride. This makes people more willing to accept, use,
and train to use new technologies to achieve their goals (Juutinen and
Saariluoma 2007). The example illustrates how appraisal is significant
in indicating personal meanings of technologies to people, how people
differentiate their emotions, as well as what their behavioural responses
are on both the cognitive and physiological levels (Ellsworth and Scherer
2003; Moors et al. 2013). Thus, appraisal connects cognitive evaluation
with emotions and actual human actions. Individual preferences and
selections of actions are constructed based on representations made in
appraisal.
The main question in appraisal-based HTI research is to determine what
kinds of cognitions activate certain kinds of emotional states. When
people interact with a user interface, a mental representation and a
corresponding emotional representation are created. These cognitive and
emotional representations control human actions and define whether
people like and accept a technology. Thus even a small emotionally mis-
placed detail in a user interface can easily decrease the value of the tech-
nology in people’s minds. On the whole, appraisal can also be seen as
an apperception process, as it constructs from its parts the contents of
mental representations.
private information about people (e.g., personal codes and account infor-
mation). Citizens have to be able to trust that this information is well
protected, and that no one can use it maliciously. In the case of smart cards,
for example, users need to be confident that the system will reliably and
correctly identify them and not permit access to any other users.
Negative trust in technology is embodied in such emotional states as
technophobia (Brosnan 2002), the dislike of technology. When confront-
ing new technology, people may feel incapable of using it. Novice and
elderly users often deem poor interaction to be the consequence of their
own inability to use a given technology. At workplaces, poor usability
may even lead to psychosomatic stress among people who feel that they
have been coerced into working with systems they cannot handle (Arnetz
and Wiholm 1997). Technophobia can disappear as a consequence of
direct (Kelley and Charness 1995) or indirect (Rogers and Fisk 2000)
positive experiences and improving skills (Ellis and Allaire 1999).
The concept of trust illustrates the connection between emotions and
cognitions. In the context of appraisal, when people trust each other or a
technology, emotions and cognitions are connected. On a cognitive level,
a person may be aware that the information security system installed on
her computer is powerful enough to prevent all virus attacks. The reasons
for believing this are based on, for example, previous experiences with
online services and information offered by computer support services and
the media. This information constitutes the knowledge base that creates
a feeling of trust. In an immediate usage situation (using a banking sys-
tem, for example), the feeling of trust is realized as determined input
actions when operating the user interface. In a broader context outside
the immediate use of the system, trust defines the overall attitude towards
banking systems, for example at the moment of purchase.
Emotions are present every minute of the day, and their influence is
widespread in HTI as well. For example, an extreme case of the influence
of emotions is Internet dependency—the compulsive and excessive use
of the Internet in the form of computer games or social media
(Block 2008). The addiction develops as a result of the short-term plea-
sure enjoyed when playing games or surfing the net (i.e., when the device
is turned on). Internet dependency is classified as a psychiatric illness,
which requires professional intervention. In South Korea alone, it has
been assessed as one of the most serious public health issues, as it has
been claimed to be connected to, for example, childhood obesity. The
ever-increasing time spent on the Internet or playing computer games
changes eating habits, increasing the number of snacks and the amount of
unhealthy food consumed. Also, when spending many hours a day on the
Internet, young people are constantly tired and unable to concentrate in
school, as a result of which they fail their exams.
On the grounds of knowledge of the nature of people’s relevant emo-
tional states, it would be possible to generate rational design goals for
technologies, for example to assess aesthetic requirements for the product
and to look for emotionally inspiring and satisfactory ways of using tech-
nologies. Recent developments to encourage personal health monitoring
systems are a good example. The field of emotional user psychology opens
up an extensive set of design questions and challenges, of which only a
small percentage has yet been investigated.
2. It is important to understand the distinction between the two concepts: engineering interaction
books routinely entail the concept of ‘user need’, which is an intuitive concept that has only an
indirect connection to psychological needs. Thus it is important to separate ‘user need’ from human
needs. The purpose of technology is to have a function, often related to very complex systems of
human life, and therefore it should not be simplified to such overall concepts as user need.
hunger, and sex is one able to proceed to higher-level needs, such as self-
actualization. The important dimension of Maslow’s work is his concrete
introduction to the connection between needs and motives, although
modern research on motivation has illustrated that it cannot be concep-
tualized solely in terms of static need hierarchies.
The basic structure of needs and motivation has been described by
applying the concept of the depletion–repletion cycle (Toates 1986). In
this description, human physiological needs activate people towards cer-
tain goals, which enable them to satisfy their needs. The state before sat-
isfaction can be called desire, as it has a goal but not necessarily the means
to reach it. When a person finds a way to satisfy a need, the need state
is deactivated or depleted. When the need has been satisfied, the action
can be shifted to another direction. In everyday life, the mechanisms of
satisfying needs can be complicated. People may save money to use later
in life without having any clear plan about how they intend to use it, or
what kinds of needs it would satisfy.
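The depletion–repletion cycle described above can be sketched as a minimal state model. This is an illustrative abstraction only, not part of Toates’s (1986) theory: the class name, the numeric deprivation level, and the threshold are hypothetical devices showing how deprivation accumulates into a desire and how a need-satisfying action resets the cycle.

```python
class Need:
    """Minimal sketch of a depletion-repletion cycle (after Toates 1986).

    The numeric level and threshold are arbitrary, for illustration only.
    """
    def __init__(self, threshold=5):
        self.level = 0              # current deprivation level
        self.threshold = threshold  # point at which the need activates

    def deplete(self, amount=1):
        """Time passes; deprivation grows and may activate a desire."""
        self.level += amount

    @property
    def desire(self):
        """A goal-directed state exists once deprivation crosses the threshold."""
        return self.level >= self.threshold

    def satisfy(self):
        """A need-satisfying action deactivates (repletes) the need."""
        self.level = 0

thirst = Need()
for _ in range(5):
    thirst.deplete()        # e.g. hours without water
assert thirst.desire        # thirst now motivates looking for a drink
thirst.satisfy()
assert not thirst.desire    # need replete; action can shift elsewhere
```

The sketch makes the text’s point concrete: desire is a state with a goal but not necessarily the means to reach it, and once the need is satisfied, action is free to shift in another direction.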
Biological psychology and neuroscience can be illuminating in the analy-
sis of motives (Berridge 2004). Likewise, neural phenomena can offer
the possibility of objectively understanding many aspects of motiva-
tion when people use technologies, although biological concepts and
theories alone cannot explain human motivation in technology usage.
This kind of research can be carried out in the field of neuroergonomics
(Parasuraman and Rizzo 2006), which uses biopsychological techniques
to analyse needs and motives. For example, an individual’s heart rate can
be used to analyse stress during technology use (Anttonen and Surakka
2007).
Emotions also have a central role in understanding motivation, as they
inform people about needs and other motivational states. The concept
of pleasure, for example, is usually seen as a motivational goal towards a
certain state. People mostly pursue pleasurable states, and products that
help them achieve this state are considered favourable and motivating
(Hassenzahl 2011; Jordan 2000). For example, the need for water evokes
thirst, which makes people start to look for a drink. Thus the concept
of pleasure, in addition to being an important goal, is also an emotional
state. Depending on the situation, there can of course be numerous other
emotional states that explain the motives behind certain actions.
The former concept refers to how people assess their capacity to cope
with a task and the latter to how competence, relatedness, and autonomy
form the core of intrinsic motivation.
Self-determination is a valuable motive when introducing new tech-
nologies to people. People must have the right to decide what kind of
technology they want to use. Although this may not always be possible,
forcing consumers to use new service systems, for example, easily leads to
rejection and negative attitudes towards the service provider in question.
Without a right to decide, extrinsic motivation becomes dominant and
may weaken the likelihood of successful technology adoption (Reinders
et al. 2008).
Human motives cannot be explained on emotional grounds alone, for
they also have cognitive dimensions. Perhaps the best-known link between
cognition and motivation is cognitive dissonance, which refers to a moti-
vational state dominated by conflicting cognitions. For example, people
may want to buy a new technical device, but because it is said to have
been produced unethically, they may be afraid of losing face by buying it.
Other people may have purchased a product but feel uncertain about the
wisdom of their decision (Kahneman 2011).
Cognitive aspects of motivation can be associated with the scope that
technology can offer in improving users’ tasks. In organizations, for
example, people are willing to use new technologies if they understand
how these new tools will improve upon their earlier practices (Orlikowski
2000). Yet poor understanding of the outcome of technology use may
cause opposition. Such cognitive phenomena as intention, learning, per-
ceived self-efficacy, and usefulness are factors that explain the cognitive
aspects of motivation when using technologies (Yi and Hwang 2003).
Finally, there is a social aspect of motivation (Dunning 2004; Ryan and
Deci 2000). In fact, numerous motives in human life are social by nature.
Social dimensions of technology use can often explain why people are
motivated, or why they are less motivated when using certain technical
artefacts. These themes are focal today thanks to computer-supported
cooperative work, and especially to social media applications such as
blogs, forums, wikis, and social networks (Tapscott and Williams 2008).
Social media has important functions in the formation of mod-
ern social motives. Social motives become visible in goal setting, goal
contents, and the choices made between different goals (Franken 2002;
Karniol and Ross 1996). This is an important phenomenon, for example,
in creating brands and organizing people around them. Typical examples
are communities around Apple or Linux (Himanen 2010). On this level,
motives can be formulated as goal-related beliefs, and emotions as either
congruent or incongruent with establishing goals (Franken 2002).
Because people use technologies to support their actions, motivation
is a central field in the investigation of human–technology relations. The
motivation to use technologies (and the resulting success rate) relies to a
large extent on human self-image (Santos-Pinto and Sobel 2005). If one
does not believe in one’s competence to use a technology, the positive
learning cycle comes to a stop.
In HTI literature to date, relatively little interest has been paid to
motivation compared to discussions about such matters as usability.
However, as can be seen from the analysis above, when developing HTI
research and design, it is essential to put more effort into connecting
general knowledge about human motivation with specific and important
HTI problems.
A negative image swiftly becomes expensive even for the most powerful
companies (Teo 2002).
Change in attitudes is one of the core questions of user psychology.
New functions in the ICT field are often adopted relatively slowly. For
example, information networks existed long before they became widely
used in the 1990s, and mobile phones were also quite slowly adopted for
everyday use. Today one of the field’s greatest challenges is getting users
interested in new mobile services. These problems are not only technical; the
accompanying attitudes also set important challenges for designers.
of behaving that are typical of that culture; people of the same culture
often share mental representations and underlying patterns of thought.
Characteristics that are significant in one culture are often less important
in others, which creates challenges for HTI design.
For example, cultures often have different approaches to communica-
tion and semiotics such as cultural symbols, terms, and even the meaning
of colours. Three principles are particularly useful when designing for
cultural diversity (Matsumoto 2000). First, it is important to understand
the main principles and findings of cross-cultural psychology. Second,
one has to understand other cultures and their internal logic. Finally, it is
essential to make realistic case-specific conclusions and avoid stereotypi-
cal thinking.
The notion of culture will eventually change as people become increas-
ingly networked, creating more or less global cultures (Castells 1997) in
which an endless number of people share an interest and form associ-
ated social groups. ICT has already changed cultures with its capacity
to connect people in new ways (Castells 1997). Social media has cre-
ated numerous sub-cultures and joined people globally with a similar
mindset. Geographical location, though not meaningless, is no longer
as important as it used to be (Castells 1997). Anticipating cultural and
lifestyle changes is essential when designing for the future.
knowledge of what people expect of a product (and why they would use
it) presupposes, in addition to motives, clarifying possible dissonances.
Finally, personality research can introduce a new set of questions that
allow an inspection of the relationship between technology and indi-
viduals with different emotional, cognitive, and motivational patterns.
Understanding what kinds of people with what kinds of motives will be
interested in different designs makes it possible to segment people and
target products to these segments. Thus, the issue of user personality inte-
grates all elements of cognitive and dynamic interaction research.
As the human mind and human action can be seen as a combination
of cognitive, emotional, and dynamic (or conative) processes (Hilgard
1980), a part of the discussion has been targeted at three main sub-fields:
emotions, motives, and personality issues. It is also vital to consider
the joint issues between the fields—that is, how cognitions influence
motives—and to assess to what extent personality is a factor in forming
cognitions, emotions, and motives.
In all user research, it is possible to begin with an analysis of cogni-
tions relevant to the usage situation, for example by using such methods
as thinking aloud, interviews, and surveys (Ericsson and Simon 1984).
Second, it is essential to investigate different emotions that are involved
in the interaction with the technology in question. If the interaction is
long, or includes complicated multistage processes, different usage situ-
ations may require separate analysis. Basic psychology provides infor-
mation about interpreting the results, in terms of what kinds of visual
features or which technical details should be altered in searching for solu-
tions that enable people to reach positive emotional states.
The research on immediate interaction situations raises questions such
as why a user interface is experienced in a negative way, why users
have difficulties in discriminating between the target and the background,
or why the user interface is non-intuitive. It may also be that the
user interface does not address cross-cultural demands. These examples
highlight the importance of understanding the immediate dynamic inter-
action between people and artefacts. Since human mental processes are
integrated into a whole, it would be impossible to eliminate the sense of
frustration in operating a user interface without understanding its psy-
chological source. The frustration might be caused by a number of factors,
such as the location of the user interface components on the screen, a lack
of cultural knowledge, or a lack of computational skills.
References
Adler, A. (1929/1997). Neurosen: Fallgeschichten. Frankfurt am Main: Fischer.
Aiken, L. R. (2002). Attitudes and related psychosocial constructs: Theories, assess-
ment, and research. Thousand Oaks, CA: Sage.
Allport, D. A. (1980). Patterns and actions: Cognitive mechanisms are content
specific. In G. Claxton (Ed.), Cognitive psychology: New directions (pp. 26–64).
London: Routledge and Kegan Paul.
Ambrose, M. L., & Kulik, C. T. (1999). Old friends, new faces: Motivation
research in the 1990s. Journal of Management, 25, 231–292.
Anttonen, J., & Surakka, V. (2007). Music, heart rate, and emotions in the
context of stimulating technologies. In Affective Computing and Intelligent
Interaction: Proceedings of the Second International Conference, ACII, 12–14
September, Lisbon (pp. 290–301). Berlin: Springer.
Arnetz, B. B., & Wiholm, C. (1997). Technological stress: Psychophysiological
symptoms in modern offices. Journal of Psychosomatic Research, 43, 35–42.
Atkinson, J. (1964). An introduction to motivation. Princeton, NJ: Van
Nostrand.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral
change. Psychological Review, 84, 191–215.
Bandura, A. (1986). Social foundations of thought and action: A social cognitive
theory. Englewood Cliffs, NJ: Prentice-Hall.
Bandura, A. (1997). Self-efficacy: The exercise of self-control. New York: Freeman.
Beck, A. (1976). Cognitive therapy of emotional disorders. Harmondsworth:
Penguin Books.
Berridge, K. C. (2004). Motivation concepts in behavioral neuroscience.
Physiology and Behavior, 81, 179–209.
Berry, D. C., & Dienes, Z. (1993). Implicit learning: Theoretical and empirical
issues. Hillsdale, NJ: Erlbaum.
Block, J. (2008). Issues for DSM-V: Internet addiction. American Journal of
Psychiatry, 165, 306–307.
Bono, J. E., & Vey, M. A. (2007). Personality and emotional performance:
Extraversion, neuroticism, and self-monitoring. Journal of Occupational
Health Psychology, 12, 177–192.
Bouma, H., Fozard, J. L., & van Bronswijk, J. E. M. H. (2009). Gerontechnology
as a field of endeavour. Gerontechnology, 8, 68–75.
Bower, G. H. (1981). Mood and memory. American Psychologist, 36, 129–148.
Brave, S., & Nass, C. (2009). Emotion in HCI. In A. Sears & J. A. Jacko (Eds.),
Human-computer interaction: Fundamentals (pp. 53–68). Boca Raton, FL:
CRC Press.
Brosnan, M. J. (2002). Technophobia: The psychological impact of information
technology. London: Routledge.
Caprara, G. V., Barbaranelli, C., & Guido, G. (2001). Brand personality: How
to make the metaphor fit? Journal of Economic Psychology, 22, 377–395.
Carlson, N. R., Buskist, W., & Martin, G. N. (2000). Psychology: The science of
behaviour. Harlow: Allyn and Bacon.
Castells, M. (1997). The network society. Oxford: Blackwell.
Clark, L. A., & Watson, D. (1999). Temperament: A new paradigm for trait
psychology. In L. A. Pervin & O. P. John (Eds.), Handbook of personality:
Theory and research (2nd ed., pp. 399–423). New York: Guilford Press.
Cofer, C. N., & Appley, M. H. (1968). Motivation: Theory and practice.
New York: Wiley.
Damasio, A. (2005). Descartes’ error: Emotion, reason, and the human brain.
Harmondsworth: Penguin Books.
Darwin, C. (1872/1998). The expression of the emotions in man and animals.
London: Fontana Press.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user accep-
tance of information technology. MIS Quarterly, 13, 319–340.
Deci, E. L., Koestner, R., & Ryan, R. M. (1999). A meta-analytic review of
experiments examining the effects of extrinsic rewards on intrinsic motiva-
tion. Psychological Bulletin, 125, 627–668.
Dickey, M. D. (2005). Engaging by design: How engagement strategies in pop-
ular computer and video games can inform instructional design. Educational
Technology Research and Development, 53, 67–83.
Dienes, Z., & Scott, R. (2005). Measuring unconscious knowledge:
Distinguishing structural knowledge and judgment knowledge. Psychological
Research, 69, 338–351.
Docampo Rama, M. (2001). Technology generations—Handling complex user
interfaces. Eindhoven: University of Eindhoven.
Dunning, D. (2004). On motives underlying social cognition. In M. B. Brewer
& M. Hewstone (Eds.), Emotion and motivation (pp. 137–164). Malden,
MA: Blackwell.
Hassenzahl, M. (2011). Experience design. San Rafael, CA: Morgan & Claypool.
Hassenzahl, M., & Tractinsky, N. (2006). User experience—A research agenda.
Behaviour and Information Technology, 25, 91–97.
Helander, M., & Khalid, H. M. (2006). Affective and pleasurable design. In
G. Salvendy (Ed.), Handbook of human factors and ergonomics (pp. 543–572).
Hoboken, NJ: Wiley.
Hilgard, E. R. (1980). Consciousness in contemporary psychology. Annual Review
of Psychology, 31, 1–26.
Himanen, P. (2010). The hacker ethic. New York: Random House.
Hofstede, G. (1980). Culture and organizations. International Studies of
Management and Organization, 10, 15–41.
Hogan, R., Harkness, A., & Lubinski, D. (2000). Personality and individual
differences. In K. Pawlik & M. R. Rosenzweig (Eds.), International handbook
of psychology (pp. 283–304). London: Sage.
Jacoby, J., Johar, G. V., & Morrin, M. (1998). Consumer behavior: A quadren-
nium. Annual Review of Psychology, 49, 319–344.
James, W. (1890). The principles of psychology. New York: Dover.
Jordan, P. W. (2000). Designing pleasurable products: An introduction to the new
human factors. Boca Raton, FL: CRC Press.
Jung, C. G. (1999). Essential Jung: Selected writings. Princeton, NJ: Princeton
University Press.
Juutinen, S., & Saariluoma, P. (2007). Usability and emotional obstacles in adopt-
ing e-learning: A case study. Paper presented at the IRMA International
Conference, Vancouver, Canada.
Kahnemann, D. (1973). Attention and effort. Englewood Cliffs, NJ:
Prentice-Hall.
Kahnemann, D. (2011). Thinking, fast and slow. London: Penguin Books.
Karniol, R., & Ross, M. (1996). The motivational impact of temporal focus:
Thinking about the future and the past. Annual Review of Psychology, 47,
593–620.
Kelley, C. L., & Charness, N. (1995). Issues in training older adults to use com-
puters. Behaviour and Information Technology, 14, 107–120.
Kim, C. K., Han, D., & Park, S.-B. (2001). The effect of brand personality and
brand identification on brand loyalty: Applying the theory of social identifi-
cation. Japanese Psychological Research, 43, 195–206.
Kuniavsky, M. (2003). Observing the user experience: A practitioner’s guide to user
research. San Mateo, CA: Morgan Kaufmann.
5 Emotions, Motives, Individuals, and Cultures in Interaction 167
Monk, A., Hassenzahl, M., Blythe, M., & Reed, D. (2002). Funology: Designing
enjoyment. CHI’02 Extended Abstracts on Human Factors in Computing
Systems, 924–925.
Moody, G. (2002). Rebel code: Linux and the open source revolution. New York:
Basic Books.
Moors, A., Ellsworth, P. C., Scherer, K. R., & Frijda, N. H. (2013). Appraisal
theories of emotion: State of the art and future development. Emotion Review,
5, 119–124.
Myhill, C. (2003). Get your product used in anger! (Before assuming you
understand its requirements). Interactions, 10, 12–17.
Nagamashi, M. (2011). Kansei/affective engineering and history of Kansei/
affective engineering in the world. In M. Nagamashi (Ed.), Kansei/affective
engineering (pp. 1–30). Boca Raton, FL: CRC Press.
Neisser, U. (1967). Cognitive psychology. New York: Appleton-Century-Crofts.
Newell, A., & Simon, H. A. (1972). Human problem solving. Engelwood Cliffs,
NJ: Prentice-Hall.
Niedenthal, P. M., Krauth-Gruber, S., & Ric, F. (2006). Psychology of emotion:
Interpersonal, experiential, and cognitive approaches. New York: Psychology
Press.
Nonaka, I., & Takeuchi, H. (1994). The knowledge creating company. Oxford:
Oxford University Press.
Norman, D. A. (1999). Affordance, conventions, and design. Interactions, 6,
38–43.
Norman, D. (2004). Emotional design: Why we love (or hate) everyday things.
New York: Basic Books.
Oatley, K., Keltner, D., & Jenkins, J. M. (2006). Understanding emotions.
Malden, MA: Blackwell.
Orlikowski, W. J. (2000). Using technology and constituting structures: A prac-
tical lens for studying technology in organizations. Organization Science, 11,
404–428.
Ortony, A., Clore, G. L., & Collins, A. (1990). The cognitive structure of emo-
tions. Cambridge: Cambridge University Press.
Osborne, J., Simon, S., & Collins, S. (2003). Attitudes towards science: A
review of the literature and its implications. International Journal of Science
Education, 25, 1049–1079.
Panksepp, J. (1998). Affective neuroscience: The foundations of human and animal
emotions. Oxford: Oxford University Press.
5 Emotions, Motives, Individuals, and Cultures in Interaction 169
Shelton, B. E., Turns, J., & Wagner, T. S. (2002). Technology adoption as pro-
cess: A case of integrating an information-intensive website into a patient
education helpline. Behaviour and Information Technology, 21, 209–222.
Spinoza, B. (1675/1955). Ethics. New York: Dover.
Stenros, A. (2005). Design revolution. Jyväskylä: Gummerus.
Switzer, F. S., III, & Sniezek, J. A. (1991). Judgement processes in motivation:
Anchoring and adjustment effects on judgment and behavior. Organizational
Behavior and Human Decision Processes, 49, 208–229.
Tapscott, D., & Williams, A. D. (2008). Wikinomics: How mass collaboration
changes everything. Harmondsworth: Penguin.
Teo, T. S. (2002). Attitudes toward online shopping and the internet. Behaviour
and Information Technology, 21, 259–271.
Tibballs, G. (1999). Business blunders. London: Robinson.
Toates, F. M. (1986). Motivational systems. Cambridge: Cambridge University
Press.
Tversky, A., & Kahnemann, D. (1974). Judgement under uncertainty: Heuristics
and biases. Science, 185, 1124–1131.
Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating con-
trol, intrinsic motivation, and emotion into the technology acceptance
model. Information Systems Research, 11, 342–365.
Weiner, B. (1985). An attributional theory of achievement motivation and emo-
tion. Psychological Review, 92, 548–573.
Wells, A., & Mathews, G. (1994). Attention and emotion. Hove: Erlbaum.
Wiggins, J. S. (1996). The five-factor model of personality: Theoretical perspectives.
New York: Guilford Press.
Williams, J. M. G., Watts, F. N., McLeod, C., & Mathews, A. (1997). Cognitive
psychology and emotional disorders. New York: Wiley.
Yi, M. Y., & Hwang, Y. (2003). Predicting the use of web-based information
systems: Self-efficacy, enjoyment, learning goal orientation, and the technol-
ogy acceptance model. International Journal of Human-Computer Studies, 59,
431–449.
Zimbardo, P. G., & Leippe, M. R. (1991). The psychology of attitude change and
social influence. New York: McGraw-Hill.
6
Life-Based Design
Technical artefacts should exist to bring added value and quality to peo-
ple’s lives. Human–Technology Interaction (HTI) design should, therefore, be considered in a much broader context than merely the usage of
technology. It should be based on an understanding of people’s lives and
well-grounded design methods and tools, which can investigate life and
apply this knowledge to the design work. The conceptual model of life-based design (LBD) is based on distinguishing unified systems of actions
called forms of life. Investigating the structure of actions and related facts
relevant to particular forms of life, in addition to the values that people
follow, is the core tool of LBD. The knowledge produced constitutes a
template for human requirements, which serves as a basis for design ideas
and technological solutions.
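As an illustration only, the conceptual model described above can be sketched as a simple data structure: a form of life bundles rule-following actions with the explanatory facts and values behind them, and the analysis yields candidate human requirements. The class, field names, and example values below are hypothetical, not notation from the LBD literature.

```python
from dataclasses import dataclass, field

@dataclass
class FormOfLife:
    """Hypothetical record of a form-of-life analysis (illustration only)."""
    name: str
    rule_following_actions: list[str] = field(default_factory=list)
    # Facts grouped by kind: biological, psychological, socio-cultural.
    facts: dict[str, list[str]] = field(default_factory=dict)
    values: list[str] = field(default_factory=list)

    def human_requirements(self) -> list[str]:
        # The analysis output: each rule-following action, qualified by the
        # facts and values of this form of life, becomes a candidate
        # human requirement for design.
        return [f"support '{a}' given {self.name}'s facts and values"
                for a in self.rule_following_actions]

senior_traveller = FormOfLife(
    name="retired recreational traveller",
    rule_following_actions=["plan trips", "keep in touch with family"],
    facts={"biological": ["declining vision"],
           "socio-cultural": ["free time after retirement"]},
    values=["comfort", "safety"],
)
print(senior_traveller.human_requirements())
```

The point of the sketch is only that the template for human requirements is derived from the analysed actions, facts, and values, not invented independently of them.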
The motive for using any technology must be sought by investigating
the role and influence of particular technical artefacts in people’s lives.
Regardless of whether the technology is produced for daily activities,
work, education, or leisure, the justification for technology is always—
directly or indirectly—in its capability to improve the quality of life.
Technologies are intended to help people realize their action goals in life,
and thus make life easier or richer. Depending on the case, this means that
of life. Form of life is a concept that separates the design contexts (Leikas
2009; Leikas and Saariluoma 2008; Wittgenstein 1953).1
The main function of the concept ‘form of life’ is to define and sepa-
rate the section of life for which the new technology is designed, and to
help understand the contents of this particular area of life. Form of life
is, thus, a general concept used to abstract any system of actions in human
life under scrutiny and design. This concept has been used as a part of
sociological discourse (Giddens 1984, p. 74, 2000), but should not only
be studied from a sociological perspective. In addition to social elements,
human forms of life are determined and shaped by many biological and
psychological factors. Even though these three form the basic human life
sciences, they all have internal and combined fields of research that are
relevant for the analyses and theoretical discussions in LBD.
Form of life is thus a general conceptual abstraction, but it enables
designers to define the topic and organize their research. Such concep-
tual abstractions are a common procedure in all sciences. For example,
a chemist may be interested in the properties of molecules. Before it is
possible to proceed with the research, it is necessary to define whether the
research topic concerns hydrogen, sodium chloride, or something else.
Similarly, before a form of life can be studied, it first has to be made distinct.
Whereas chemists can be interested in pentane molecules rather than
other molecules, a life-based designer may be targeting solutions at, for
example, recreational travellers as opposed to businessmen. Ultimately,
researchers in both fields have to decide and express their topics in a
similar manner.
Form of life is a highly flexible concept, yet at the same time it is also
exact. Using it, one can refer equally well to a medical doctor’s ways of
working as to the hunting habits of a little-known tribe. People’s lives, to
a great extent, are characterized by the different kinds of regularities they
¹ The original German term for ‘form of life’ is ‘Lebensform’ (e.g., Wittgenstein 1953: §19). The
concept form of life originates from Wittgenstein’s (1953, 1964) late philosophy. By this term,
Wittgenstein, one of the most important philosophers of the last century, refers to any circle or
context of linguistic actions. In his original proposal, form of life was a theoretical concept and
conceptual abstraction for analysing human linguistic behaviour and use of language. It is possible
to extend the use of this concept to analyse, for example, human social life (Giddens 1990) and any
other aspects of human life (Leikas 2009; Saariluoma and Leikas 2010).
follow. For some people, these may include hunting and preparing the
catch, for others driving to work every weekday or having dinner with
friends once a month.
People can participate in an unlimited number of possible forms of
life. A form of life can be a hobby or activity, profession, family status,
or a situation. Students, weight-watchers, voluntary workers, teenagers,
golfers, criminals, policemen, bankers, designers, Finns, Spaniards, slow-
food lovers, grandmothers, and alcoholics all have different forms of life,
and they may find new forms of life or develop old ones. Thus, everyone
participates in several different forms of life, and their regularities give
meaning to people’s actions and aims.
It is important to note that form of life is not identical to the concept
of culture. Cultures explain many differences in forms of life, from, for
example, designing clothes to participating in political or religious cere-
monies, but not all characteristics of forms of life are culturally motivated.
For example, it is not a cultural but a biological phenomenon that people
age and start to experience age-related decline in physical functional abil-
ities. This change in physical abilities, however, may change people’s daily
life and consequently create new forms of life, such as utilizing rehab
services or moving to a senior home, which entail culture-dependent trig-
gers and modifiers (Leikas 2009). To give another example, a compass
in a mobile phone that indicates the direction of Mecca is based on a
cultural form of life. Similarly, toys for children playing in a sandbox
are based on a special form of life. Thus, everything that people do takes
place in a definable form of life. For this reason, the first step in LBD is
analysing the form of life (Leikas 2009; Saariluoma and Leikas 2010).
Participating in a form of life is not necessarily a voluntary choice.
Instead of choosing it, an individual may be ‘thrown’ into it. For exam-
ple, a woman born in Scandinavia might have a different conception of
family roles than a woman from Africa. The official and public, as well
as private, individual, and tacit regularities in the actions of human life form
logical wholes that constitute forms of life.
Understanding a particular form of life helps develop technical solu-
tions for people participating in it. For example, designing for young
and sporty people is very different from designing for retired but active
seniors. Although these groups have the common denominator of ‘active
life’, their forms of life differ in many ways. A form of life is, thus, a tool
for exposing relevant differences between different life settings. It pro-
vides a precise yet elastic enough concept for an LBD process to define
its target.
By analysing forms of life and their structures of rule-following actions, it is possible to innovate technology that helps people in their actions and practices. Rule-following actions have their meaning in a form of life, and these actions and their goals give meaning to technological ideas and, consequently, to the design of technology.
The form of life concept considers the activities of a group of people rather than individuals, and this premise makes it possible to ground the development of technology on the notion. Naturally, from an individual perspective, people
can have a certain way of life, but examining a form of life always binds
the point of view to a larger group of people. In this respect, forms of life
can be examined, for example, through certain generations with a specific
nationality and background.
When carrying out rule-following actions, people do not necessar-
ily follow the rules any more consciously than when they encode letters
when reading a text. The rules may equally well be subconscious patterns
of behaviour, such as rules of conduct and etiquette that people follow
when visiting each other. Some rules are implicit, such as the tacit rules of a workplace; some may even be formally regulated or juridically determined, such as traffic codes; but most often rules are simply ways of acting in life. People may also take different actions to
follow the same rule. One of Wittgenstein’s (1953) key remarks in using
the notion of rule-following actions is that these actions are regular but
not mechanistic. People can violate, reject, or even neglect rule-following
actions and still participate in a form of life. Soccer fans do not need to
watch every match their favourite team plays, children do not always
have homework, and one need not always visit a grocery shop on the way
to the summer cottage. It is also possible to reach these goals in different
ways. So, although they are regularly followed, rules do not have to be
absolute.
Merely identifying the rule-following actions of a form of life does not provide sufficient information. For early-phase design purposes, it is
essential to understand the internal structure of rule-following actions.
One has to see how actions are integrated in order to define the rela-
tionships between them, extract their similarities, and explicate the logic
behind them (Leikas 2009; Saariluoma and Leikas 2010).
superficial level, these groups of people seem to follow the same system
of rules—the rules of travelling—but on a deeper level, the divergent
frameworks composed of facts in their life make their travelling events
quite distinct from each other.
The first group of facts in life that signifies, for example, the difference
between young and older people is the biological facts of life, which in the
case of a retired person can include a decline in physical functional abili-
ties, such as vision and hearing. This may narrow their opportunities to
travel. The second group is socio-cultural facts. In the case of older people
these may include an increased amount of free time after retirement, and
thus the freedom to travel at any time of the year. Another sociological
fact is that as people get older, the extent of their social networks—and
thus the number of potential travelling companions—usually decreases.
Older travellers in Europe usually have more money than younger travellers, and can thus organize their trips in a way that is more comfortable from their own point of view. In other words, their social conditions are different from those of young travellers. Young travellers often
have a minimal backpacking budget and are keen to find new friends,
perhaps even new partners. To save money, they are prepared to use sleep-
ing bags, stay in campsites and youth hostels, and eat cheap food. Many
older travellers could not, for health reasons, travel in this way.
The third group of facts is the psychological facts of life. A psychological
fact can be, for example, that older people have more experience in life
but often also stronger resistance to change than young people. Another
psychological difference between these two groups is skills or expertise
(Ericsson 2006). For example, the ICT skills of young and older people
may differ. Young people are usually technically more skilled and able
to reap more benefits from ICT during a trip. They may use
technology to find, for example, information about travelling conditions
and tourist attractions. They can also take advantage of online maps and
other available services, while older people may mainly focus on sending
SMS messages and talking on the phone.
The different facts in life of these two groups of people influence their
travelling styles and the form that the life of a traveller takes. As can
be seen from the above examples, facts in life consist of biological, psy-
chological, and socio-cultural aspects that influence the constitution of a
form of life (Fig. 6.2). As already pointed out, biological facts are essen-
tial in explaining the very basic elements of a form of life. Psychological
and socio-cultural facts, in turn, arise from the corresponding elements
of people’s lives. They become visible by modifying the rule-following
actions in different forms of life accordingly.
Forms of life are composed of the biophysical conditions of an individual as well as the customs, habits, rules, and language games of the person
in question and many other people. In this sense, people are placed in
pre-existing forms of life and have little chance of changing them. Some
of the facts in life are more or less ‘chosen’ by the individual and may
be dynamic in nature, whereas others are ‘given’—that is, inherited or
determined by living conditions—and thus constitute stable factors of
the holistic form of life. An example of such a stable factor determined by
living conditions is the lives of war-affected children. Thus, form of life
is a holistic notion that includes predetermined factors in life as well as a
complex variety of different elements in everyday life.
In addition to facts, values provide necessary information for analysing
and understanding forms of life. They explicate individual or group goals,
demands, obligations, and conceptions of beauty and goodness in life,
and direct and affect people’s behaviour, goal settings, choices, and atti-
tudes. Logically, technologies must be designed to support these actions
(things that are valuable to people).
Many elements of a form of life culminate in, and are reflected in, values.
Similarly, values can be shaped by different elements of a form of life. For
example, mobile technology has changed the way people look at social
relationships. Before mobile phones, social relationships were to a large
extent based on certain locations and largely depended on a person’s spe-
cific environment (Hulme and Peters 2001; Hulme and Truch 2006).
With mobile phones, however, it is not immediately obvious to the caller
where the individual is calling from. At the same time that the boundar-
ies of environments have become more fluid, privacy conceptions have
become more flexible. Along with the phone’s value conception of ‘anytime, anyplace’, people have come to appreciate, and take for granted, that they can be available whenever the phone rings. Thus, mobile technology has changed the concept of reachability: ‘always being available’ has become a value for most people. An exception is the growing group of people who are obliged to be available because of work demands and who now seek opportunities for downshifting in life.
Values are either moral or practical, and they may include personal,
social, cultural, religious, philosophical, ethical, and aesthetic dimen-
sions. For example, Catholic people in many countries carry out deco-
rated and carefully organized processions of hundreds of people during
the Holy Week at Easter. The religion-based personal and community
values of participants of this ancient tradition explain why these people
are ready and willing to spend long hours preparing for these festivities
over the years, decades, and centuries.
Values as attributes of forms of life are different from facts, in that they
represent voluntary choices of the people in question. They influence the
selection processes that people carry out between different forms of life,
and make people’s future plans visible. They also give meaning to actions
within a specific form of life. For instance, praying towards Mecca is not
meaningful for all people who pray. Instead, knowledge of the direction
of Mecca is very valuable to a devout Muslim due to his or her form of
life. Based on their values, people give different weightings to different
goals, and values can make it understandable why people act as they do.
Understanding values helps identify the kind of ‘worth’ that technol-
ogy can bring people (Cockton 2004, 2006). The added value can be
attributed to the outcome of the use of a product. Moreover, it can be seen
Technology-Supported Actions
The properties of the concept ‘form of life’ were analysed above, along
with its relationship to the human actions, values, and facts of life that
define people’s goals. Different forms of life are characterized by different
types of rule-following actions typical to a particular form of life. The
forms of life of older people include, for example, performing daily errands, attending cultural events, visiting friends and relatives, taking care of their own well-being and health, getting medical advice, spending time at a holiday home, and travelling (Leikas and Saariluoma 2008). People
participate in forms of life by undertaking actions that enable them to
follow related rules.
Analysing the form of life is thus the first step in the design process.
Based on the analysis of rule-following actions and facts and values in life,
it is possible to consider the role of the designed technology in the chosen
form of life of the target group. The next question is how this informa-
tion can be used to design technical artefacts, and help people fully par-
ticipate in their chosen form of life. The idea of what a technology can be
used for provides the focal problem for design, and answering the ques-
tion ‘what for?’ thus constitutes a focal design idea. The next crucial step is
to elaborate the focal design idea and transform the acquired information
into technical interactions and concept descriptions. This process relies
on the information collected in the analysis of the form of life. Its task is
to transform the rule-following actions into technology-supported actions
(TSAs) (Leikas 2009).
As introduced earlier, rule-following actions, and the forms and rea-
sons behind them, together with information on facts and values, form
design-relevant attributes in a particular context (Fig. 6.3). After examining the design-relevant attributes, it becomes clear which actions can be supported by technology. These target actions constitute the TSAs of the
particular form of life. They are actions that are realized with the help of
technical artefacts. Walking is a rule-following action, and walking with
a stick when a person is injured is a simple example of a TSA. Walking
sticks are hence technologies that support and modify the original action
of walking.
TSAs should consist of elements relevant to the generation of design
ideas. These elements are the action and its goal, the agent, the context,
and the possible technology. This categorization is characteristic of all
human actions, as explained in the previous chapter.
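The four elements listed above (the action and its goal, the agent, the context, and the possible technology) lend themselves to a record-like sketch. The following is a hypothetical illustration; the class and field names are assumptions, not notation from the LBD literature.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TechnologySupportedAction:
    """One technology-supported action (TSA); field names are illustrative."""
    action: str        # the rule-following action being supported
    goal: str          # what the action is for
    agent: str         # who performs the action
    context: str       # the form of life or situation it belongs to
    technology: str    # the artefact that supports or modifies the action

# The walking-stick example from the text, encoded as a TSA:
tsa = TechnologySupportedAction(
    action="walking",
    goal="move from place to place",
    agent="a person recovering from an injury",
    context="everyday mobility",
    technology="walking stick",
)
print(f"{tsa.technology} supports '{tsa.action}' for {tsa.agent}")
```

Making each element an explicit field mirrors the text's claim that a design idea is incomplete until action, goal, agent, context, and technology have all been identified.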
Scientific Foundations
In design, one cannot speak about facts unless they are scientifically
grounded. LBD relies on human research, and uses the knowledge and
methodological bases of such human sciences and research traditions that
are directly connected to research on human life. These can be called
human life sciences (Fig. 6.4). They can have two substantial roles in the
design of technical artefacts and systems. First, the concept of human
life sciences can be used to explore design issues when clarifying problems and solutions. Second, it can be used to advocate one
design solution over another for a particular design purpose. The core
question is how these sciences should be organized with respect to design
tasks in order to get the best possible support for design solutions. This
calls for a holistic and multidisciplinary examination of the design prob-
lem at hand. For instance, when people get older, some social decisions,
such as retirement, affect older people’s social lives, and many sociological
changes, such as freedom from work and increasing amounts of free time,
take place. However, getting old is not merely a social phenomenon. It
is also an essential biological phenomenon and has many psychological
aspects as well. Therefore, sociological concepts alone cannot give suffi-
cient information about what is relevant in designing modern technolo-
gies for older people. It is naturally important to be able to describe the
changes in everyday contexts when people age, but it is also necessary
to understand the multidisciplinary process of ageing and its different
effects. Many of the changes in old age are not caused by social structures,
social habits, or social communities, but can be examined, for example,
with the help of biological and psychological concepts related to the age-
ing process.
The relevant research areas of human life sciences are discussed below,
starting with biology and biological life. These terms refer to all the para-
digms of biological and related research that focus on people. Biology
introduces numerous perspectives that are important in technology
design. People have different biological preconditions for interacting with
technologies. Their environment has its biological properties, and human
bodies have their biological characteristics, which change over the course
of time. Biology has a number of related fields of research, each of which
has many potential contact points with modern HTI design. Examples of
such research areas can be found in many forms in ecology (e.g., Berkes
and Turner 2006; Young 1974), human ethology (Eibl-Eibesfeldt 1989),
physiology, anatomy, and neuroscience (Corr 2006), and even in many
medicine and health care issues (Coiera 2009).
Without a biological life there would be no life at all. However, bio-
logical concepts are not sufficient for understanding all human life and
action. Modern brain research has shown that learning, for example,
modifies the brain and provides it with ‘programs’, which cannot be
understood merely in terms of biological concepts. Brains are dependent
on the properties of learning environments and their contents, and not
solely on their own properties (Saariluoma 1999). For example, if a child
is born in Germany she will learn German as a native language, and if she
is born in South America, her language will most probably be Portuguese
or Spanish.
The second dimension of human life sciences is based on the psychol-
ogy and philosophy of mind, that is, research on people as individuals
and the general laws governing their behaviour. This refers not only to
the cognitive aspects of the mind—such as limited capacity or informa-
tion discrimination—but to all of psychology, including emotional issues
such as valence, and personality and social issues such as group behaviour
(Saariluoma and Oulasvirta 2010). Again, there are many related disci-
plines, such as education, ergonomics, and social psychology (e.g., Brown
2000; Karwowski 2006; Nolan 2003).
Sociology, cultural research, and related disciplines form their own
cluster of human life sciences (e.g., Argyle 1990; Geertz 1973; Giddens
1984, 1987), which have relevance for design. This cluster includes such
research disciplines as social and cultural anthropology, ethnography, ger-
ontology, linguistics, and semiotics, as well as many issues that are usually
associated with history, art, literature, and film research.
Sociological concepts often describe the ways people act in society,
and thus they fruitfully elaborate the role of human life in design. Social
scientific analysis can help understand how and why people participate
in different forms of life and share the rules and regularities they involve.
This kind of research provides important facts about people and their
lives, and it is evident that psychological research, for example, would
not provide a similar perspective on life. Instead, knowledge of cultural
history or national habits, for example, can be useful in this sense. Many
ethical and value questions that are necessary to consider in design also
emerge in the context of social and cultural matters (Albrechtslund 2007;
Bowen 2009; Bynum 2010; Leikas 2009; Stahl 2010). In addition, there
are many interdisciplinary fields of learning that are relevant in HTI
design, such as action theory, management, and organizational research
(Bannon and Bødker 1991; Kuutti 1996; Nardi 1996). The core chal-
lenge is to find a framework that connects the design problems with the
respective research.
All the research fields presented above have something to say about
human life (Fig. 6.4). They provide methods, concepts, paradigms, mod-
els, and theories that may prove relevant in setting, asking, and solving
design questions. Indeed, all of these disciplines can play their part in
specific types of HTI design processes. In design it is necessary to under-
stand the types of scientific knowledge that are needed to solve different
design questions. In order to do this, it is useful to provide an overview of
the typical problems concerning the target user group and the knowledge
that can be used to design solutions to these problems. After this over-
view, it is possible to identify the kinds of multidisciplinary concepts that
are needed in designing HTI for particular users.
LBD Process
LBD, like all design paradigms, is a methodological process (Leikas et al.
2013). It defines major questions in the field and helps answer them.
LBD conceptualizes human life using the concepts of the human life sciences and connects this knowledge with design thinking.
The LBD model consists of four main phases to guide the designer’s
thinking during the design process: (1) form-of-life analysis, (2) concept
design and design requirements, (3) fit-for-life design, and (4) innovation
design (Fig. 6.5). Each of the phases can have numerous sub-questions,
which are presuppositions for solving the main design questions. The
four phases do not have to be sequential; they can be parallel in the itera-
tive design process.
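As a loose sketch, the four phases and their possible iteration can be written out as follows. Only the phase names come from the text; the control flow, function name, and iteration count are assumptions made for illustration.

```python
# Hypothetical sketch of the iterative LBD process. The four phase names
# follow the text; running them in a loop stands in for the point that the
# phases need not be strictly sequential in an iterative design process.
PHASES = [
    "form-of-life analysis",
    "concept design and design requirements",
    "fit-for-life design",
    "innovation design",
]

def run_lbd(iterations: int = 2) -> list[str]:
    """Visit the phases repeatedly, as in an iterative design process."""
    log = []
    for i in range(1, iterations + 1):
        for phase in PHASES:
            log.append(f"iteration {i}: {phase}")
    return log

log = run_lbd()
print(len(log))  # 8 entries: four phases over two iterations
```

In practice the phases would also run partly in parallel and feed results back into one another; a flat loop is only the simplest way to show that the sequence repeats.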
Form-of-life analysis includes defining and analysing the particular
form of life the designers are interested in. To produce design-relevant
information about the selected form of life, regular actions—the kinds
of actions people normally follow—defining the form of life in question
have to be extracted. In addition, it is essential to define major explana-
tory facts and values of the form of life and investigate several issues, as
detailed below.
What do people need in their life, and how could life be improved with the
help of technology?
The analytical work should generate the design theme and the human
requirements of a technical artefact. This information explains the whys
and what-fors that should guide the entire design process. Human
requirements define how people’s life in a specific form of life should be
improved. They are based on the methods and results of human life sci-
ences, and form the basis of the next phase in the design by introducing
the design theme and the human requirements behind it. However, they
do not define the requirements for technological concepts that could be
used to address the defined design goals of the specific form of life; these
are called design (user) requirements, and they are discussed below.
Concept design and design (user) requirements is the second phase of the
LBD model, in which designers define the role of technology in achiev-
ing the defined design goals and produce a definition of TSAs in a prod-
uct or service concept. This means defining what the technology is to be
used for (the technical requirements).
An essential part of the concept design phase consists of ideation and
outlining what the supposed technological solutions could be like, as well
as reflection on (and elaboration of ) the selected solution. The outcome
of this phase is a definition of TSAs in a product or service concept. It
generates prototypes of the relevant new technology. It also explains how,
by defining the role of technology in the form of life, this technology can
be associated with people’s needs, and how technological solutions can be
implemented.
of life and experiences, users also modify and alter how they use technol-
ogy, and as a result may create totally new meanings for different tech-
nologies. For example, mobile technology has changed people’s attitudes
towards devices. A smartphone can be a highly personal device, perhaps
more personal than any other technical artefact. The owner of the phone
has a personal number, a personal physical device with personal contents,
and personalized ringtones. Hulme and Peters (2001) even argue that
users consider their mobile phone an extension of the self. Thus, the loss
of a mobile phone is felt not just as the loss of a device, but it is also
sensed on the level of one’s physical self. Indeed, when leaving home
without a mobile phone, many people feel that something is missing. So,
as users of technology people have come a long way from being consid-
ered extensions of machines. The artefact now, at least in the case of the
smartphone, is considered an extension of the human.
Although technology has a largely beneficial effect on forms of life,
ICT does have some downsides. First, the development of
ICT has led to a divided society, creating a digital divide between tech-
nological haves and have-nots. The socio-cultural reasons for not using
new technological products or services include ignorance of the services
offered; inability to use the services because of lack of knowledge, educa-
tion, and training; and reluctance to acquire and try new technologies.
For example, many commonly used types of interactions are beyond the
competence (and thus out of reach) of some groups of people. For some
people, the operation logics of new technological solutions are incompre-
hensible. Moreover, people may not be able to access (or are uninformed
about) the services offered through technology. In some cases they are
also reluctant to invest their time and effort in trying to learn to use
new solutions, especially if they have had bad experiences in using other
products or services.
To reap the benefits of technology and lessen the risks, good
design practices are needed. LBD takes the concepts, methods, theories,
and empirical knowledge provided by human life sciences as its starting
point to serve as grounds for design and research actions as well as tools
of thought. It has created a shift in the focus of design thinking: the ques-
tion is no longer only about technical artefacts and natural sciences; it is
also about how people live their lives and how they can improve them.
Technologies—when understood as combinations of human action and
References
Albrechtslund, A. (2007). Ethics and technology design. Ethics and Information
Technology, 9, 63–72.
Argyle, M. (1990). The psychology of interpersonal behaviour. Harmondsworth:
Penguin books.
Stahl, B. C. (2010). Social issues in computer ethics. In L. Floridi (Ed.), The
Cambridge handbook of information and computer ethics (pp. 101–115).
Cambridge: Cambridge University Press.
Ulrich, K. T., & Eppinger, S. D. (2011). Product design and development.
New York: McGraw-Hill.
Wittgenstein, L. (1953). Philosophical investigations. Oxford: Basil Blackwell.
Young, G. L. (1974). Human ecology as an interdisciplinary concept: A critical
inquiry. New York: Academic Press.
7
Research and Innovation
ple of the mining industry, before Savery and Newcomen, water in the
mines created problems. However, the construction of primitive steam
engines for pumping water out made the problem much more manage-
able (Derry and Williams 1960). In the beginning of the innovation pro-
cess, there was little understanding of how to solve the problem with
miners’ working conditions. Cosimo de Medici failed in his attempts to
build a suction pump to raise water from a depth of 50 ft, but with the
help of the basic research experiments by Torricelli and Guericke on the
power of atmospheric pressure it became evident that vacuums might
have a role in pumps. Finally, the solution of creating a vacuum by steam
enabled Savery, Newcomen, and others to develop ‘miners’ friends’, that
is, water pumps (Derry and Williams 1960), which dramatically changed
the methods of working in mines. Not only had designers changed their
way of thinking; the community had also adopted new ideas, and an
innovation was born (Schumpeter 1939). This was possible with the
help of human thinking and its capacity to create new representations. A
web of basic scientific principles made it possible to create ideas, which
ended in practical devices for mines. The scientific findings made it pos-
sible to construct new design solutions and further develop them into
innovations.
Two separate forms of human thinking, design and research, are
important in the development of new HTI products. On the one hand,
the designer’s thinking creates new products, while on the other hand
science and human research enable designers to solve design problems.
As with the natural sciences, human research can be relevant in develop-
ing practical solutions. For example, social research on different aspects
of ageing in society has an essential role in developing technological
solutions to improve the everyday life of ageing people (Sixsmith and
Gutman 2013). A critical question is how to organize human research
and human life science in order to optimally support HTI processes to
improve the quality of human life. In order to develop practical tools
for this purpose, it is essential to consider the role of human research in
innovative thought processes.
Similarities and Differences
Although the foundations of science and human research,1 on the one
hand, and of science and design, on the other, differ, their approaches have
much in common. They have the common goal of improving the quality
of human life by means of thinking, although they pursue this goal in
different ways and thus often seem to have little to share with each other
(Carroll 1997). Human scientists may have difficulty understanding the
reasoning of industrial designers, and in the same way many industrial
designers have difficulty exploiting the results of human research. One
reason for this can be found in their different modes of thinking, that is,
their scientific stance and design stance. In order to understand how human
and social research can be effectively linked with design, it is necessary to
consider the relationships of these two stances systematically.
The difference between thinking in design and science has been
known for a long time in design science (Cross 1982; Cross et al. 1981;
Rauterberg 2006). Herbert Simon (1969), in his classic work ‘The
Sciences of the Artificial’, argues that design represents a new kind of
science. It is a science of the artificial, and this is typical of schools of
medicine, architecture, law, business, and engineering (March and Smith
1995; March and Storey 2008). Since the early 1960s, research into the
differences between science and design has been considered in different
design contexts (Eder 1998; Iivari 2007; March and Smith 1995; March
and Storey 2008; Rauterberg 2006).
Science and design both explicate and describe phenomena, use special
terminology to discuss them, rely on methods, and construct new
ideas and test them (March and Smith 1995; Simon 1969). Thus they
have many interaction points. The properties of research and design com-
pared here are their relationships to the nature of theory and practice, the
1 There is an annoying difference in semantic fields and meanings between the English word
‘science’ and the respective words in other European languages, such as ‘Wissenschaft’, ‘vetenskap’,
‘ciencia’, ‘tiede’, etc. The latter refer to any form of reason applied in research. For example, such
fields of learning as ‘literature’ and ‘history’ are forms of Wissenschaft, or vetenskap. However, they
are not ‘sciences’. When discussing human research, this difference often causes difficulties.
Therefore, in this context, ‘research’ is used as an equivalent term for the European words that
describe all forms of investigative activities.
Explanatory Design
A solid ground for assessing what kinds of technological solutions are
required for a specific user-interaction design problem can be found by
applying human research in interaction design in an explanatory man-
ner. A detailed analysis of user interaction can best be exploited when it
is combined with relevant scientific knowledge. It should be possible to
explain why one solution is more suitable than another—for example,
why a specific solution would be more usable or more robust than other
competing interaction solutions.
Explaining is not uncommon in technical thinking. For example,
breakage of a car radiator in sub-zero temperatures has a simple technical
explanation: in freezing temperatures, the water in the radiator turns to
ice, and the force of this expansion is too strong for the radiator to with-
stand (Hempel 1965). Similarly in medicine, for example, the reasons
why different medical methods work are investigated, and this knowl-
edge is used to develop new types of treatments. For example, mechanical
forces caused by human bodily movements provide important knowledge
about how to treat spinal injuries (Roaf 1960).
In technical design, issues concerning the usage of technologies and
the respective design problems can be analysed on the basis of the gen-
eral laws of nature. Hence, design thinking not only involves explaining
something, it also concerns solving problems and predicting different
phenomena (Hempel 1965). If ice is the reason why a radiator breaks
down, it is necessary to ask how to prevent ice formation. This is a typi-
cal question in convergent engineering design. In this case, the problem
can be solved by lowering the freezing point by using glycol (Hempel 1965).
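This deductive move, predicting a design outcome from a general law, can be made concrete with a small worked example (not from the book). It applies the ideal dilute-solution law of freezing-point depression to a water–glycol coolant; the constants are standard textbook values, but the ideal-solution assumption is only an approximation at real antifreeze concentrations.

```python
# Freezing-point depression as an example of Hempel-style prediction in
# convergent design: dTf = Kf * molality (ideal dilute-solution law).

KF_WATER = 1.86   # cryoscopic constant of water, K*kg/mol
M_GLYCOL = 62.07  # molar mass of ethylene glycol, g/mol

def freezing_point(glycol_grams: float, water_kg: float) -> float:
    """Predicted freezing point (deg C) of a water-glycol mixture."""
    molality = (glycol_grams / M_GLYCOL) / water_kg  # mol solute per kg water
    return 0.0 - KF_WATER * molality                 # pure water freezes at 0 deg C

# Adding 500 g of glycol per kg of water lowers the predicted freezing
# point by roughly 15 K, explaining why the radiator survives a cold night.
print(round(freezing_point(500, 1.0), 1))
```

The same law that explains why the radiator broke (water freezing and expanding) also predicts how much glycol prevents it, which is exactly the explanation-to-prediction step the text describes.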
These entities are more or less independent of each other and are
based on different theoretical discourses and multiple theory languages.
Table 7.1 The fundamental questions of HTI design ontology and the respective
design frameworks

Fundamental design question | Design task | Design framework
What is the technology used for? | The role of technology in life | Life-based design
How is the technology intended to behave and to be controlled? | Control of the behaviour and performance of the technology | Functionality and user interface design
How does the technology fit the users’ skills and capabilities? | Being able to use | Usability design
How does the technology produce a motivating emotional experience? | Dynamic interaction | User experience design
text boxes. Their function is to let users issue commands to the artefacts
and to inform the users about the state of the artefact. Thus interaction
elements are signs in dialogues between machines and people (de Souza
2005) (Table 7.3).
values that people follow. This knowledge can be gathered from the litera-
ture and, for example, interviews with different stakeholders. To under-
stand how new design elements would improve the quality of the form
of life in question, common creativity procedures such as brainstorming
or synectics are used to generate the first ideas (Nolan 2003; Long 2014).
Quite often, the source of ideas can also be found in the organization’s
existing designs, design knowledge, and design traditions. A common
example of this kind of ideation is reverse engineering, in which designers
inspect products (either their own or those of a competitor) by ‘cutting’
them into pieces to understand their logic and to find ways to improve
them. The same approach can be used in designing for life, only the
focus of the analysis is on how people use the product and how rede-
sign could help the product better serve the users’ purposes. Following
this approach, the initial idea for developing a good product does not
always have to originate from the analysis of life. It can also be technology
driven, as long as the designers have an idea that can be trusted, which
is discussed with (and accepted by) stakeholders in co-design sessions. In
this sense, when designing products, it is not important where the idea
came from, but how the designer proceeds (and where they end up): the
design must include all versatile phases of the LBD process, enabling
designers to elaborate their ideas. It would be impossible to launch a
successful product on the market without a proper understanding of its
implementation in and impact on everyday life, its tasks and function-
alities, and the harmonized interplay between the user and the artefact.
The outcome of the form-of-life analysis is brought into concept
design. In this phase, a product or service concept of a technical artefact
or service is created. The main functionalities of the artefact, and the
main models for its usage in practice, should be clarified at this stage.
Personas and scenarios have been found useful in this phase (Cooper
et al. 2007; Rosson and Carroll 2002). Personas are models of potential
or imagined users that provide descriptions of possible users and their
goals and intentions. Scenarios can be seen as concrete and work-driven
explications of action contexts (Carroll 1995; Rosson and Carroll 2002).
In these two methods, the chosen paradigms generate much of the con-
ceptual structure and functional understanding of the product or service
concept.
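To illustrate how personas and scenarios can be recorded as structured concept-design artefacts, here is a minimal sketch; the field names and the example persona are hypothetical illustrations, not a schema from the LBD literature.

```python
# Hypothetical data structures for personas and scenarios in concept design.
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A model of a potential user: who they are and what they pursue."""
    name: str
    form_of_life: str                 # e.g. "retired, living alone in a rural area"
    goals: list[str]
    capabilities: list[str] = field(default_factory=list)

@dataclass
class Scenario:
    """A concrete, work-driven explication of an action context."""
    persona: Persona
    context: str
    steps: list[str]                  # the interaction, step by step

anna = Persona(
    name="Anna",
    form_of_life="retired, living alone in a rural area",
    goals=["keep in touch with family", "order groceries without travelling"],
    capabilities=["basic tablet use"],
)

weekly_shop = Scenario(
    persona=anna,
    context="Sunday evening at home, planning the week's meals",
    steps=["open grocery app", "reuse last week's list", "confirm delivery slot"],
)

print(weekly_shop.persona.name, len(weekly_shop.steps))
```

Capturing personas and scenarios in this explicit form makes the conceptual structure of the product concept inspectable and shareable within a design team.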
types and then into high-level embodied prototypes and, finally, products
(Goldschmidt 2003).
With the help of co-design activities and teamwork, development takes
an iterative form. Designers may return time and again to the same basic
solutions, but every time in more concrete and advanced forms. This
kind of ‘circulus fructuosus’ or hermeneutic circle is common in human
thinking. It is a natural consequence of constructive activity. Piece by
piece, a whole gets its final form.
Sketches, mock-ups, prototypes, models, and simulations are impor-
tant in communicating ideas with end users and within the design team
(Gero 1990; Goldschmidt 2003; Rosson and Carroll 2002). They are
also vital in supporting designers’ memory in their thinking. The role of
sketches and prototypes is to present a synthetic conception. The ontol-
ogy of design thinking provides tools to analyse all aspects of the idea, to
find things that have not yet been considered and provide possible solu-
tion models for different issues.
As human information processing capacity is limited, it is good to
use different types of sketches and prototypes to support external mem-
ory. Prototypes are an excellent means of collecting design ideas, since
they allow designers to consider how different interaction elements fit
together. Different levels of presentations are used to carry out iterative
discussions of the product’s functional and qualitative aspects, and enable
developers to embody their ideas. This includes defining the forms and
role of technology in everyday life, and the way new technologies can
be implemented: what will the particular artefact do, and how will people
use it.
Human interactive behaviour is a complex whole, and is therefore
difficult to analyse. Laboratory experiments in which only one particu-
lar aspect of interaction is analysed are insufficient to cover all relevant
problems of interaction design. An additional tool for analysing interaction
processes is to conduct a computer simulation to evaluate proposals
related to any aspect of the design (hypotheses). The simulation is done
first by constructing a computer programme that simulates the cognitive
mechanisms responsible for behaviour. Then an environmental situation
is created in which the user is hypothesized to respond in a certain way.
If the simulated user responds as expected, there is evidence to support
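The simulate-and-compare loop just described can be sketched in a few lines. Purely for illustration, the simulated cognitive mechanism here is Hick’s law for choice reaction time; a real simulation would use a fuller cognitive architecture such as ACT-R, and the parameter values below are assumptions.

```python
# A toy simulated user for evaluating a design hypothesis about menu depth.
import math

def hicks_law_rt(n_choices: int, a: float = 0.2, b: float = 0.15) -> float:
    """Predicted decision time (s) for choosing among n equally likely options."""
    return a + b * math.log2(n_choices + 1)

def simulate_menu_task(menu_levels: list[int]) -> float:
    """Total predicted selection time across a sequence of menu screens."""
    return sum(hicks_law_rt(n) for n in menu_levels)

# Design hypothesis: one broad menu of 8 items is faster than three
# nested 2-item menus. The simulated user's responses support it here.
flat = simulate_menu_task([8])
nested = simulate_menu_task([2, 2, 2])
assert flat < nested
print(round(flat, 3), round(nested, 3))
```

If the simulated responses match the hypothesized behaviour, the design proposal gains support; if not, the proposal (or the simulated mechanism) must be revised, which is the evidential logic the text outlines.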
around the future service. For example, an online grocery delivery ser-
vice may work perfectly from a technological point of view, and the user
interface may be highly usable. However, if the designers have not con-
sidered how the service will fit into the practicalities of the daily life of the
client, it will be at risk of failure. For instance:
• What kind of influence does the technology have on the users’ quality
of life?
• Does the technology enhance the quality of life of the users better than
any other artefact or solution?
• What needs (and whose expectations) should the technology fulfil?
• Who benefits from the technology? Would other stakeholders benefit
from it?
• What are the possible alternatives for solving the problem?
• How should the users (direct and indirect) be seen, interpreted, and
understood in the design?
• How are the users involved in the design theoretically and
empirically?
• Is the main basis for the design answering the users’ needs and
expectations?
• What are the multiplicative effects of the solution?
References
Anderson, J. R., Matessa, M., & Lebiere, C. (1997). ACT-R: A theory of higher-
level cognition and its relation to visual attention. Human-Computer
Interaction, 12, 439–462.
Anscombe, G. E. M. (1957). Intention. Cambridge, MA: Harvard University
Press.
Aristotle. (1984). Categories (1.1–15.32). In J. Barnes (Ed.), Complete works of
Aristotle (W. Ross & J. Urmson, Trans.). Princeton, NJ: Princeton University
Press.
Bahder, T. B. (2003). Relativity of GPS measurement. Physical Review D, 68,
1–18.
Bernal, J. D. (1969). Science in history. Harmondsworth: Penguin.
Dym, C. L., Agogino, A. M., Eris, O., Frey, D. D., & Leifer, L. J. (2005).
Engineering design thinking, teaching, and learning. Journal of Engineering
Education, 94, 103–120.
Dym, C. L., & Brown, D. C. (2012). Engineering design: Representation and
reasoning. New York: Cambridge University Press.
Eder, W. (1998). Design modelling—A design science approach (and why does
industry not use it?). Journal of Engineering Design, 9, 355–371.
Eder, W., & Hosnedl, S. (2008). Design engineering. A manual for enhanced cre-
ativity. Boca Raton, FL: CRC Press.
Eimer, M., Nattkemper, D., Schröger, E., & Prinz, W. (1996). Involuntary
attention. In O. Neumann & A. F. Sanders (Eds.), Handbook of perception and
action 3. Attention (pp. 155–184). London: Academic Press.
Franken, R. (2002). Human motivation. Belmont, CA: Wadsworth.
Frijda, N. H. (1986). The emotions. Cambridge: Cambridge University Press.
Frijda, N. H. (1988). The laws of emotion. American Psychologist, 43,
349–358.
Galitz, W. O. (2002). The essential guide to user interface design. New York: Wiley.
Gero, J. S. (1990). Design prototypes: A knowledge representation schema for
design. AI Magazine, 11, 26–36.
Gibson, J. J. (1979). The ecological approach to visual perception. Boston, MA:
Houghton Mifflin.
Goldschmidt, G. (2003). The backtalk of self-generated sketches. Design Issues,
19, 72–88.
Griggs, L. (1995). The windows interface guidelines for software design. Redmond,
WA: Microsoft Press.
Gruber, T. R. (1993). A translation approach to portable ontology specifica-
tions. Knowledge Acquisition, 5, 199–220.
Gruber, T. R. (1995). Toward principles for the design of ontologies used for
knowledge sharing. International Journal of Human Computer Studies, 43,
907–928.
Hall, A. S., Holowenko, A. R., & Laughlin, H. G. (1961). Theory and problems
of machine design. New York: McGraw-Hill.
Hempel, C. (1965). Aspects of scientific explanation. New York: Free Press.
Hu, J., Chen, W., Bartneck, C., & Rauterberg, M. (2010). Transferring design
knowledge: Challenges and opportunities. In X. Zhang, S. Zhong, Z. Pan,
K. Wong, & R. Yun (Eds.), Entertainment for education, digital techniques and
systems (pp. 165–172). Berlin: Springer.
Sixsmith, A., & Gutman, G. M. (2013). Technologies for active aging. New York:
Springer.
Skolimowski, H. (1966). The structure of thinking in technology. Technology
and Culture, 7, 371–383.
Sowa, J. F. (2000). Ontology, metadata, and semiotics. In B. Ganter & G. W.
Mineau (Eds.), Conceptual structures: Logical, linguistic, and computational
issues (pp. 55–81). Berlin: Springer.
Standing, L., Conezio, J., & Haber, R. N. (1970). Perception and memory for
pictures: Single trial learning of 2560 stimuli. Psychonomic Science, 19, 73–74.
Treisman, A., & Gelade, G. (1980). A feature integration theory of attention.
Cognitive Psychology, 12, 97–136.
Ulrich, K. T., & Eppinger, S. D. (2011). Product design and development. New
York: McGraw-Hill.
van der Heijden, A. H. C. (1992). Selective attention in vision. London:
Routledge.
van der Heijden, A. H. C. (1996). Visual attention. In O. Neumann & A. F.
Sanders (Eds.), Handbook of perception and action 3. Attention. London:
Academic Press.
Venable, J. (2006). The role of theory and theorising in design science research.
In Proceedings of the 1st International Conference on Design Science in
Information Systems and Technology (DESRIST 2006) (pp. 1–18).
8
Epilogue: Designing for Life
The main criterion for HTI design is that it should not only concern
the development of a technical artefact and the design of the immediate
usage situation, but also help illustrate how technologies can advance
the quality of human life. People should be motivated to adopt and use
technology by the added value it can bring to everyday life to help them
accomplish their goals. The question of how much a technology can
improve the quality of human life defines the worth of the particular
technology.
To be able to improve the quality of the target users’ lives, the true value
of any technology should be measured with the concepts of life. Human life,
in all its complexity, should be the starting point for design. Somewhat
oversimplified concepts such as ‘user needs’ cannot adequately examine
the true essence of human life actions. In addition to asking what people
need, it is also necessary to consider how they can best use technologies,
how they can be motivated to use them and for what purposes. For exam-
ple, as numerous socio-psychological and anthropological approaches
have demonstrated, expectations, values, goals, and cultural factors influ-
ence people’s motives. Technology becomes meaningful through personal
and individual symbolic values as well as through social relationships in
a person’s everyday contexts. People follow the rules of the forms of life
that they have adopted (or have been thrown into) to reach their goals
and the technologies they use help them reach these goals. Considering
different forms of life, technology may enhance, for example, the feeling
of belonging to a certain group or to a particular geographical or virtual
space. It may also enhance people’s feelings of competence, security and
self-efficacy, and promote coping in life. Further, it can facilitate people’s
opportunities to influence decision-making processes through participa-
tion and creativity.
As discussed above, LBD thinking—which investigates the HTI
design process in concepts pertaining to research into human life—
offers the ultimate framework and grounds for designing for life, and
thus underlies all HTI design. LBD concepts should belong to the HTI
designer’s toolbox, from the front-end concept design to evaluating the
impact of designed solutions in people’s lives. The power of LBD is its
holistic nature. Designing for life cannot be based on the natural sciences
and mathematics alone. It calls for applying ‘human life sciences’ (par-
ticularly sociology, psychology, and biology of human life, supplemented
with other human life sciences) to design processes. Depending on the
research target, the corresponding sciences can be ethnography, organi-
zational research and management, philosophy of the mind, education,
ergonomics, medicine, neuroscience, or physiology and anatomy. To this
list can be added multidisciplinary areas of research such as cognitive
science, gerontechnology, occupational therapy, design science, and art
design. The key unifying argument is that the problems of HTI design
should be conceptualized (and, arguably, supported) in concepts and the-
ories developed for analysing human life.
Technology-driven processes in the design of products and services
seldom truly manage to support people’s goals because they lack the
ability to holistically consider people’s lives. Therefore the resulting new
technologies may be too complex to use (Ramsay and Nielsen 2000),
or be anaesthetic, stigmatizing, or somehow ethically problematic. Or
they may simply not match the values of the users or fail to bring them
any added value. Often, technology-oriented design begins with the
creation of new technical artefacts, and only after that is it asked how
they could be used (Rosson and Carroll 2002). While there is nothing
wrong with this process, it is also possible to turn the design process
around and begin with the role of technology in improving the qual-
ity of life. Such an approach gives human researchers a better chance of
incorporating research-based understanding of human life into designing
new technology concepts at an early enough stage, provided that these
concepts are given a clear role in technology design processes. This issue
has been brought out in such human-driven and human-centred design
approaches as goal-directed design (Cooper et al. 2007), contextual
design (Beyer and Holtzblatt 1997), and scenario-based design (Rosson
and Carroll 2002), which strive to offer a truthful picture of the position
of technology in the everyday life of the target population. Such widely
accepted and brilliant conceptualizations as personas and scenarios, for
example, can be seen as suitable tools to be used in different phases of
LBD, because their contents have the power to express the true position
of a new technology in life in a realistic and truthful manner. The truth
and validity of personas and scenarios depend on how well these descrip-
tions can express different attributes of the form of life of the target popu-
lation. Superficial and illusory descriptions may lead to problems rather
than be of real use.
Many human-centred design approaches include, for example, observ-
ing individuals’ daily routines in order to understand users’ actual needs.
This is a good way to start understanding the form of life of the target
group. However, merely observing users’ needs does not make the design
process itself worth-conscious (Cockton 2006). As explained earlier, val-
ues, for instance, cover much broader and personal contents than the
moral values of human welfare and justice, and include non-perceivable
values that are based on both personal and cultural concepts. Thus, when
designing for the quality of life, it is important to understand what is
holistically relevant for people and to try to adapt the design processes
to acquire this information early enough and to exploit it effectively in
the design. The design may, for example, concern biological changes and
the health of ageing people, but it should at the same time focus on cog-
nitive capacity as well as on the values and goals of this target group in
order to produce successful design solutions. Therefore, when designing
user interfaces and interaction, for example, one should not only focus
on the parameters of a screen or input devices from the viewpoint of
This book has argued that there are four major questions that have to
be answered in all HTI design: What is the technology used for? How is
the technology intended to behave and to be controlled? How does the
technology fit the users’ skills and capabilities? How does the technology
produce a motivating emotional experience?
The questions are necessary, and they must always be solved, implicitly
or explicitly, knowingly or tacitly. It would be impossible to create an
artefact without providing it with an appearance and functionalities, and
organizing these attributes in some way. It would make even less sense to
make a technology without defining a role for it in everyday life.
These fundamental questions are always present in design; they define
the parameters of the main design discourses. Seeing the scientific and
design process as a system of discourses gives designers the freedom to
apply the research ideas that they see as important and helpful in solving
their design problems. Understanding the discursive character of the
paradigmatic structure also makes it understandable why it is possible
to have partially overlapping—and at the same time partially different—
ways of examining things. Research and design in technology
development are like all human social development activities. The next ‘final
solution’ is to unify different perspectives (Behrent 2013; Foucault 1972;
Habermas 1973, 1981, 1990; Sikka 2011).
References
Behrent, M. C. (2013). Foucault and technology. History and Technology, 29,
54–104.
Bernal, J. D. (1969). Science in history. Harmondsworth: Penguin.
Beyer, H., & Holtzblatt, K. (1997). Contextual design: Defining customer-centered
systems. Amsterdam: Elsevier.
Cockton, G. (2006). Designing worth is worth designing. In Proceedings of the
4th Nordic Conference on Human-Computer Interaction: Changing Roles
(pp. 165–174).
Cooper, A., Reimann, R., & Cronin, D. (2007). About Face 3: The essentials of
interaction design. Indianapolis, IN: Wiley.
Cronbach, L. J. (1984). Essentials of psychological testing. New York:
Harper-Collins.
Foucault, M. (1972). The archaeology of knowledge and the discourse on language.
New York: Pantheon Books.
Galitz, W. O. (2002). The essential guide to user interface design. New York: Wiley.
Griggs, L. (1995). The windows interface guidelines for software design. Redmond,
WA: Microsoft Press.
Habermas, J. (1973). Erkenntnis und Interesse [Knowledge and interests].
Frankfurt am Main: Suhrkamp.
Kline, P. (1994). An easy guide to factor analysis. New York, NY: Routledge.
Dienes, Z., 159, 161
Dieter, G.E., 67, 215
Dijk, T.A., 84
Dijkstra, E., 28
Dillon, A., 79
Dix, A., 31, 32
Docampo Rama, M., 155
Donchin, E., 32, 83
Donk, M., 92
Draper, S., 216
Draper, S.W., 33, 36
Dumais, S., 94, 110, 213
Dumais, S.T., 57, 110
Dumas, J.S., 4, 16
Duncan, J., 87
Duncker, K., 215
Dunning, D., 152
Dvash, A., 149
Dym, C.L., 14, 25, 69, 207

E
Earnshaw, R., 113
Eccles, J., 9
Eco, U., 65, 111
Eder, W., 69, 207, 209
Edge, D., 5
Egeth, H.E., 91, 92
Eibl-Eibesfeldt, I., 193
Eimer, M., 94, 213
Eisenberg, N., 140
Ekman, P., 140
Ellenberg, H., 159
Ellis, R.D., 147
Ellsworth, P.C., 145
Elmasri, R., 35
Endsley, M., 84, 95, 118
Endsley, M.R., 84, 118
Englund, M., 146
Eppinger, S.D., 52, 69, 172, 207
Epstein, C., 100
Erdos, G., 117–19
Ericsson, K.A., 83, 104, 107, 162, 183
Eris, O., 207
Euler, H., 142
Euler, H.A., 142

F
Fajardo, I., 84
Fantazzini, D., 11
Farrell, R., 32, 64, 83, 84, 104, 105, 117, 232
Feldhusen, J., 3, 17, 69, 172, 207, 215
Feltovich, P.J., 83, 84, 95, 118, 183
Fernandez, J.E., 55
Feyerabend, P., 29
Fichman, R.G., 200
Findlay, J., 31, 32
Fisk, A.D., 147
Fitzgerald, G., 8
Florini, L., 198
Ford, H., 14
Forrester, M., 97
Fotowat, H., 98
Foucault, M., 41, 248
Fournier, S., 154
Fozard, J.L., 27, 36, 61, 85, 155
Frambach, R.T., 152
Franken, R., 148, 149, 153, 158, 159, 161, 227
Frey, D.D., 207
Friedman, S.M., 139
Frijda, N.H., 140, 143, 144, 216, 227
Funke, J., 84
Furukawa, K., 98
Kintsch, W., 83, 84, 104, 107
Kirk, P., 93
Kitajima, M., 79, 80
Kivimäki, M., 80
Klein, G., 56, 118
Klein, H.K., 8
Kline, P., 85, 247
Koelega, H.S., 95, 96
Koestner, R., 151, 158, 159
Koivisto, K., 28
Konkle, T., 83
Krauth-Gruber, S., 140, 142
Kroes, P., 2, 3
Kuhn, T., 26–8, 29, 34, 214
Kulik, C.T., 158
Kuniavsky, M., 36, 146
Kuutti, K., 33, 36, 172, 194

L
Laakso, M., 79
Lakatos, I.M., 26, 36
Lakoff, G., 98
Lamble, D., 79
Laming, D., 148
Land, M.F., 100
Landauer, T., 57, 110, 114
Landauer, T.K., 110
Landels, J.G., 7
Lardelli, M., 11
Larkin, J.H., 113
Laudan, L., 26, 42
Laughlin, H.G., 207
Laursen, L.H., 80
Lazarus, B.N., 140, 142, 143, 160, 227
Lazarus, R.S., 142, 143, 160, 227
Lebiere, C., 232
LeDoux, J., 140
Lee, T.D., 30, 98–100
Lehto, M.R., 84, 121
Leifer, L.J., 207
Leikas, J., 10, 35, 36, 61, 85, 155, 172, 173, 175, 176, 177, 186, 188, 189, 194, 195, 197, 198, 202, 217, 228, 234
Leippe, M.R., 155
Leonard, D., 36
Leppänen, M., 217
Lesgold, A.M., 106
Lewis, J.R., 99
Lewis, M., 30, 35
Lindström, K., 80
Logie, R.H., 104
Long, H., 229
Lotman, Y., 65
Lubinski, D., 153
Luckiesh, M., 91
Luria, A., 140
Lutz, W., 13
Lyytinen, K., 8

M
Mackenbach, J.P., 8
Mackworth, N.H., 93
MacLean, P., 140
Maes, P., 28, 59
Magyar, R.L., 99
Maloney-Krichmar, D., 33
Mancinelli, E., 11
Mannheim, B., 149
Manunta, Y., 93
Mao, J., 33, 36
Marcel, A.J., 159
March, S.T., 209, 210, 215
Mark Pejtersen, A., 56, 57
Markman, A., 106, 114, 117, 158
P
Pahl, G., 3, 17, 69, 172, 207, 215
Panksepp, J., 140
Parasuraman, R., 81, 150
Park, S.-B., 5, 151, 154
Parker, R.J., 93
Parsons, T., 179, 180
Pärttö, M., 207
Pashler, H., 82, 91–3
Paulitz, H., 11
Pausch, R., 53

Q
Quaet-Faslem, P., 30
Quesada, J.F., 115, 116

R
Radnitsky, G., 3, 16
Raney, L., 8
Rasmussen, J., 56, 57, 96, 115
Rauterberg, M., 27, 33, 36, 55, 209, 232
Rayner, K., 98
W
Wagner, T.S., 149
Walker, D., 209, 210
Watson, D., 142
Watts, F.N., 143
Weil, D., 7
Weinberg, G.M., 31
Weiser, M., 61, 64, 82
Wells, A., 160
Wener, R., 92
Whitford, F., 28
Wickens, C., 30, 80, 106
Wigfield, A., 9

Y
Yantis, S., 91, 92
Yarbus, A.L., 94
Yi, M.Y., 152
Ylikauppila, M., 155
Young, G.L., 193
Young, R.M., 109
Yuan, X., 54

Z
Zimbardo, P.G., 155
Subject Index
culture, 14–17, 68, 85, 110, 112, 156, 157, 176, 198, 199, 235, 247

D
decision making, 84, 116, 120, 121, 124, 200, 244
design, 3–6, 9, 13–17, 25–42, 49–72, 82, 86, 94, 96, 99, 101, 104, 110, 123, 140, 143, 149, 153, 156, 158, 171–202, 207–23, 225, 227, 230, 243–9
design-relevant attributes, 189
design stance, 209
digital divide, 11, 201
direct manipulation, 28, 59, 60
discrimination, 82, 86–8, 92–5, 178, 213
display, 60, 68, 87, 88, 90, 92, 181, 213, 226
dynamic interaction, 140, 158, 162, 218

E
ecosystem, 15
emancipation, 7–10
emotions, 32, 112, 137–63, 213, 216, 226, 227, 246
engineering, 3, 14, 26n1, 35, 37, 41, 57, 65, 69, 111, 172, 209, 211, 212, 216, 229, 246
entertainment computing, 27, 33, 36
ergonomic, 26, 27, 30, 31, 33, 36, 68, 80, 98, 101, 109, 149, 193, 216, 244
ethics, 33, 186, 198, 222, 236
ethnography, 193, 244
evaluation, 4, 31, 105, 145, 198
event (event flow), 54–6, 62, 223
expertise, 14, 83, 100, 183, 226
explain, 16, 59, 62, 83, 103, 104, 110, 139, 152, 176, 182, 187, 211, 227
explanandum (explanans), 16, 213

F
fact, 10, 31, 100, 139, 142, 152, 183, 198, 219
fit for life, 195, 197, 222, 232–6
form of life, 175–8, 175n1, 182, 184, 185, 188, 189, 195, 200, 221, 222, 228, 233, 245
functionality, 52, 53, 69, 146, 210
fundamental questions, 38, 39, 41, 71, 219, 220, 248
funology, 27, 33, 36, 146

G
gerontechnology, 27, 61, 244
graphical user interfaces (GUI), 35, 53, 61, 63

H
hermeneutic circle, 231
human-driven, 173, 245
human factors, 18, 30, 31, 36, 61, 80, 101
human life sciences, 175, 191–3, 196, 201, 244
human–technology interaction (HTI), 1, 5, 15, 17, 25–7, 29–34, 36–41, 51, 59, 65, 69, 80, 84, 90, 101, 107, 110, 123, 140, 143, 145–8, 153,
K
Kansei engineering, 27, 32, 33, 36, 146

L
learning, 28, 36, 64, 83, 95, 101, 102, 105, 108–10, 124, 143, 151, 161, 193, 194, 209n1
life-based design (LBD), 171–202, 221
linguistics, 194, 246

O
ontology, 217–20, 231

P
paradigm, 27–9, 31, 34–7, 61, 66, 108, 139, 202, 217
perception, 52, 82, 88–91, 93, 98, 113, 118, 143, 225
perceptual-motor, 83, 94, 98, 124
personality, 36, 84, 138, 139, 142, 153–7, 162, 193
personas, 173, 229, 245
pervasive computing, 61
problem solving, 84, 120, 121, 123, 124, 215
product, 2, 12, 33, 39, 53, 118, 146, 149, 151, 154, 161, 185, 186, 196, 198, 199, 207, 216, 220, 228–30, 235
prototype, 4, 196, 197, 230–4
psychology, 4, 31, 33, 36, 79–124, 138, 139, 144, 150, 153, 155, 158, 162, 193, 215, 225, 244, 246

Q
quality of life, 2, 7, 10, 182, 197, 198, 221, 228, 233, 245, 249

R
remembering, 82, 83, 102, 106, 124
requirements, 53, 55, 59, 68, 113, 148, 173, 195, 196, 220, 228, 233
research program, 26, 34–7, 39–42, 218
risk, 11–14, 68, 84, 101, 138, 187, 201, 234, 236
rule-following actions, 177–82, 188–90, 199, 222

S
Scandinavian design, 28, 146
scenarios, 59, 229, 245
science, 2, 14, 29, 42, 85, 157, 174, 207–10, 214, 215, 244, 247
scientific stance, 209
semiotics, 64–6, 111, 124, 157, 226
senses, 67, 88, 91, 97, 159, 226
sign, 9, 65, 66, 223
simulation, 4, 231, 232
situation awareness, 84, 118, 121
sketch, 4, 230–2, 234
skill, 2, 61, 83, 84, 105, 107, 115, 147, 148, 163, 180, 183, 202, 217
socio-cultural research, 3, 4, 66

T
task analysis, 33, 52, 56–9, 65
technology, 1–18, 27, 33, 51, 52, 56, 66–9, 72, 79, 81, 85, 97, 99, 103, 111, 124, 137, 138, 145–7, 155, 171–4, 181, 190, 195, 215, 218, 226–7, 232, 243–5, 248
technology acceptance (TAM), 149
technology-supported actions, 188–91
technophobia, 147
thinking, 1, 4, 5, 14, 18, 25, 26, 35, 38, 41, 81, 87, 116, 117, 123, 124, 140, 162, 172, 174, 195, 207–10, 214, 226, 228–30, 234, 246, 249
transfer, 50, 64, 110, 113

U
ubiquitous computing, 61, 82, 112
unconscious, 94, 99, 159, 161
unification, 214
usability (usable), 16, 18, 26, 34, 36, 38, 68, 80, 82, 96, 99, 102, 122, 146, 160, 197, 210, 216, 218, 225, 230, 233, 247