
Systems Research and Behavioral Science

Syst. Res. 17, 543–559 (2000)

Research Paper

On a Wing and a Prayer?


Exploring the Human Components
of Technological Failure
Denis Smith*
Centre for Risk and Crisis Management, Sheffield University Management School, Sheffield, UK

This paper attempts to explore the human factors and systems dynamics of human–machine interaction by reference to the Kegworth aircraft accident. The paper seeks to move beyond the more traditional human factors literature to include research findings from both systems research and crisis management in an attempt to examine the relationships between active and latent failures. The role of human error within complex technical systems is explored and particular emphasis is placed upon the dynamics of latent errors within the management of such systems. The main thesis developed here is that, while there are important cognitive processes at work within accident causation, attention needs to be moved away from the level of the operator to the wider managerial and social frameworks within which individuals work. Copyright © 2000 John Wiley & Sons, Ltd.

Keywords Kegworth air crash; organizational resilience; sense making; crisis management

INTRODUCTION

The study of human reliability is to a large extent motivated by the need to prevent unwanted consequences from erroneous actions – in other words to prevent accidents from happening. (Hollnagel, 1993, p. 12)

The Chapter of Accidents is the Longest Chapter in the Book. (John Wilkes)

The academic literature has long recognized both the importance of the interaction between the human operator and machines and the subsequent impact of organizational factors on this relationship. In more recent years, a series of major disasters have provided a powerful illustration of the complex dynamics of the failure process and have pointed to the dominant role of latent error within the chain of causality for such events. As technical systems have become more complicated, it appears that they may have begun to exceed the abilities of their human operators to control them effectively. This is particularly apparent when elements of the system are working in a degraded mode. Under these conditions, managers and operators tend to engage in trade-offs, thus eroding organizational defences to the point of vulnerability (see Turner, 1978).

* Correspondence to: Denis Smith, Centre for Risk and Crisis Management, Sheffield University Management School, 9 Mappin Street, Sheffield S1 4DT, UK.

Received 15 December 1997; Accepted 2 June 1999
Because of the importance of human–machine interactions within the failure of systems, designers have sought to factor multiple safeguards into their designs in an attempt to provide more defences against the risks of negative human intervention. However, in so doing, they may also increase the complexity and opacity of the system. This in turn will further erode the capacity of operators to recover the process at the point when it is close to collapse, especially in those cases where the automated controls can no longer cope with the problem. This has, in effect, relegated the role of operator to that of a monitor under steady state conditions, but requires a rapid role change when the system moves into a degraded mode. Under these conditions, operators may become isolated from the control of the process and denied effective feedback from the computer until the point of failure is reached (see Wiener, 1985). At this point they are then required to make sense of the situation and develop appropriate intervention strategies to prevent the problem escalating. The result has been the creation of a complex working environment, for operators and managers, within which both the probability of human error may have been increased and the consequences associated with such error heightened. The study of error within systems failure has, therefore, widened from an initial concern with the role of human operators to include a broader understanding of both managers and systems designers in precipitating systems and organizational failures (see Turner, 1976, 1978; Reason, 1990a, 1990b, 1997).

Our aim in this paper is to examine the relationships between systems vulnerability and human factors issues by reference to the Kegworth air crash. Of particular interest here is the manner in which human operators make sense of the problems that face them under conditions where the system has become vulnerable. However, before examining this accident in more detail it is necessary to provide some clarity on the use of the term 'system' and to outline some of the current research on vulnerability and sense making which has relevance to our discussions.

There is considerable ambiguity in the use of the term 'system' (see Checkland, 1981). Much of this ambiguity stems from the common usage of the term, compared to its meaning within academic research, and this has prompted some to argue for the use of other terms in order to provide greater clarity. The key characteristics of a system can be considered as:

the abstract idea of a whole having emergent properties, a layered structure and processes of communication and control which in principle enable it to survive in a changing environment. (Checkland and Scholes, 1990, p. 22)

Irrespective of the semantics used, the central notion of system is that it is an abstract means of describing a complex phenomenon in its entirety, rather than taking a narrower, reductionist approach. It is the interactions between the parts of the system, and the emergent properties that arise as a result, which give the system characteristics that are often overlooked when we aggregate smaller elements together (see Bignall and Fortune, 1984; O'Connor and McDermott, 1997). Almost by definition, it is the various aspects of emergence that create problems for the operators of high-technology systems. These emergent properties are invariably not covered by operating protocols and, as such, require the operators to make sense of what is happening at the point of failure and then to develop strategies to cope with those problems.

Within systems thinking, there are a number of common elements which have a particular importance here. For example, Checkland (1981) highlights the notions of communication, control, emergence and hierarchy as central concepts within systems thinking, which clearly have implications for an understanding of systems failure. If we add the broad category of human factors to these issues then a complex mosaic for analysis can be seen to develop. For the purposes of this paper, there are two main issues that emerge from this mosaic which are worthy of further discussion, namely vulnerability and sense making (as an element of Weltanschauung). It is to a discussion of these concepts that our attention must now turn.

SYSTEMS, VULNERABILITY AND SENSE MAKING

Fortune and Peters (1995) describe the notion of vulnerability in terms of a system's susceptibility to the adverse consequences of a trigger event. As such, it relates closely to Turner's (1976, 1978) notion of incubation and Reason's (1990a, 1997) 'resident pathogen' metaphor. The resilience of a system can be seen to exist across its human, organizational and technological (HOT) (Shrivastava et al., 1988) components. Consequently, it is important to think in terms of a matrix of vulnerability incorporating these HOT factors, along with the important notions of space and time (Giddens, 1990), as these also have implications for sense making as well as vulnerability. The resultant 'matrix' can be expressed in such a way that we can conceptualize pathways of vulnerability within a system, which occur because of the impact of the emergent properties upon the interaction of the spatial and temporal dynamics of these HOT factors (see Figure 1).

Figure 1. Pathways of vulnerability

The capability of managers and operators to cope with emergent properties will obviously be a function of their ability to make sense of the issues that they face. This leads us into the second issue, which is relevant to a discussion of failure, namely the concept of 'Weltanschauung'. This notion of world-view is a central component of systems thinking because, as Fortune and Peters argue,

mismatches in expectations may arise from difference in Weltanschauung but because each party takes their own assumptions as 'given' and assumes they are shared by others the differences may not emerge in advance of the failure being perceived. Indeed, without specific attempts to apply the concept of Weltanschauung they may never be revealed. (Fortune and Peters, 1995, pp. 240–241)

It is clear then that different expectations about the likely performance of a system may create problems of interpretation, especially when that system is operating in a degraded mode. Because of the nature of complex technologies, organizations have devised protocols to be used when the system moves outside of its normal operating envelope. While the use of these protocols is often an important training tool for operators and may help condition them to respond to certain events, they are weaker when dealing with emergent properties. Some have argued that the reliance on protocols can impair the problem-solving capabilities of operators (see Degani and Wiener, 1993), particularly as the emergent properties of the system will invariably not be covered by such protocols. This raises an important issue for the training of operators in safety critical systems which, if not adequately addressed, may lead to difficulties for the operators when the system begins to fail during normal operations (see Diehl, 1991; Edwards, 1990; Green, 1990; Hurst and Hurst, 1982; Jensen, 1995; Nagel, 1988). It is this challenge to the dominant world-view concerning systems behaviour that is an important dynamic of the sense-making process, particularly when dealing with conditions of failure.

Within the context of systems failure, sense making can also be linked to the concept of resilience (Weick, 1993) and therefore vulnerability. For Weick, sense making as a process

is tested to the extreme when people encounter an event whose occurrence is so implausible that they hesitate to report it for fear they will not be believed. In essence these people

think to themselves, it can't be, therefore, it isn't. (Weick, 1995, p. 1)

Under conditions of failure, our ability (or otherwise) to make sense of the emergent properties associated with that failure will be important in dealing with systems vulnerability. Given Weick's (1990, 1993) assertion that sense making is concerned with

contextual and negotiated agreements that attempt to reduce confusion (Weick, 1993, p. 636)

and that

The basic idea of sense-making is that reality is an ongoing accomplishment that emerges from efforts to create order and make retrospective sense of what occurs. (Weick, 1993, p. 635)

then it is clear that there is considerable scope for error within this process, particularly for complex and uncertain problems. Of particular importance here is the role of the organization as a controlling construct. This should allow for the imposition of some sort of order on the various conflicting environmental and task demands that impinge upon those who are making sense of systems problems (Weick, 1993). The social dynamic of sense making is also an important factor in explaining the manner in which cues are interpreted, communication is shaped and behaviours are affected (Weick, 1995). Weick (1993) has also observed that both role structures and interacting routines are important factors in framing organizations and their behaviour. When these routines and role structures break down they erode organizational resilience, which may, in turn, impact upon the likelihood and significance of human error. When the system operates in a degraded mode it is likely that routines and role structures will break down or, alternatively, they may compound the problem by exaggerating the impact of emergence.

Resilience in the face of any technological failure will invariably be a function of organizational and human factors, as well as those fail-safe mechanisms which are built into the technology. While these may all mitigate the negative impacts of the failure, there remains a requirement that the human component of the system is adept at making sense of these problems. In addition, there is a need to ensure that role disintegration does not take place. This is especially important when the system is operating in degraded mode (Weick, 1988, 1993, 1995), as the human operator may prove to be the last line of defence, with the formal systems defences proving to be an "armour of complacency".

HUMAN INTERVENTION: ROOT CAUSE OR LAST LINE OF DEFENCE?

What holds organisations in place may be more tenuous than we realise. (Weick, 1993, p. 638)

Considerable research has focused on the generation of 'human error' amongst operators, although there is an increasing recognition that the notion of a single root failure is an inadequate construct in explaining failure generation (see Reason, 1997). The incidence of failure attributed to human causes has risen from an estimated 20% in the 1960s to some 80% in the 1990s (Hollnagel, 1993; Meshkati, 1991). This change is, to some extent, a reflection of the increasing complexity of technical and organizational systems and the resultant inability of humans at all stages in the design, build and operating process to exercise control over that system. In virtually every case, the generation of failure events cannot simply be attributed to a single source but rather results from the complex mosaic of interactions that occur between elements. This complexity necessitates that operators are able to deal with a range of 'non-design emergencies' (Reason, 1990a) which lie outside of the known failure envelope designated by systems designers. While contingency plans may be available for many events, there will always be the unforeseen cascade through the system which was either not considered, or possibly even dismissed, by designers as being so improbable as to be impossible (see Perrow, 1984). It is these events that operators and

Figure 2. Points of intervention for the management of human error. Source: Reason (1990a, p. 202).

managers have to make sense of, and it is here where the greatest potential for error lies.

In developing a framework for analysing accidents, Reason (1990a, 1990b, 1997) identifies a number of points of intervention for the management of human error and these are shown in Figure 2. Reason argues that too little attention is focused on line management deficiencies and the psychological precursors of unsafe acts within accident analysis. Too much weight is given in many post-accident analyses to the point at which the accident occurred – that is, the unsafe act – and this gives a distorted view as to the nature of human error. The notion that errors arise from the actions of operators ignores a whole series of decisions taken by managers and systems designers in the construction and maintenance of the system within which the operators work. A series of faulty assumptions on the part of managers may actually result in the incubation of disaster potential, which only becomes realized when the disaster or accident occurs. In many cases, there is evidence to suggest that the operators made mistakes because their operational environment encouraged them to do so. Similarly, the role of the organization (and its associated culture) in shaping the pattern of individual behaviour within a crisis event is often ignored by managers.

In discussing the sense-making process involved in the Mann Gulch fire, Weick elaborates on the role of 'organization', arguing that we can see a 'recipe for disorganization' which

reads, Thrust people into unfamiliar roles, leave some key roles unfilled, make the task more ambiguous, discredit the role system, and make all of these changes in a context in which small events can combine into something monstrous. (Weick, 1993, p. 638)

This notion of disorganization is an important concept within any discussion of systems failure. For non-design emergencies, operators will be placed into unfamiliar role situations and the associated stress of the event may lead to cognitive narrowing taking place, thus weakening the abilities of the operators to manage the problem still further. In particular, the effect of cognitive narrowing would inhibit the problem-solving capabilities of those operators and managers

who are interacting with the technology at the point of failure, thereby increasing the problems of system emergence. Their interpretation of events will create a set of extracted cues and these will then shape subsequent behaviour (Weick, 1988, 1995). When the technical system is also tightly coupled and interactively complex (Perrow, 1984), then any failure will quickly escalate from an incident to a major event, thus creating severe time constraints for operators. If this is combined with role ambiguity and communication breakdown within the organization, then the recipe for catastrophe is clear. It is this downward spiral of failure that operators and managers need to make sense of. These aspects of the 'recipe' for a downward spiral would fall within Reason's (1990a) stages of fallible decisions, line management deficiencies and psychological precursors. They would have a significant impact upon the cognitive processes that underpin the unsafe act itself and would also go some way towards undermining the defences that the organization has in place.

Miller (1988) argues that an oversimplification of blame within accident investigation results in the oft-quoted cause of human error and neglects further investigation of the wider issues identified here. Miller also argues that accidents invariably result from a sequence of events in which a causal chain is present and that, in all cases, the principle of 'known precedent'(a) applies (Miller, 1988). This raises an interesting question for those who manage complex technologies, namely: how do we ensure that the 'known precedent' is dealt with in the sense-making process so that systems vulnerability can be reduced? It is now necessary to examine some of these issues in more detail by reference to the Kegworth air crash.

(a) Miller defines this notion as meaning that 'it is rare, if ever, that new accident causal factors appear' (Miller, 1988, p. 60).

SAFETY CRITICAL SYSTEMS AND THE MANAGEMENT OF ERROR: THE CASE OF THE KEGWORTH AIR CRASH

As aircraft become more sophisticated, they create problems of interpretation for pilots when non-routine events occur. Modern aircraft have automated many of the tasks previously undertaken by flight engineers and pilots and this has, in turn, resulted in a reduction of flight deck staff. Such reductions have served to erode slack in the system, which would have been a valuable resource if the aircraft experienced a major failure. Similar 'economies' have been made by reducing the number of engines needed for normal flights to two. One of these modern twin-jet aircraft involved in a fatal accident was the British Midland Boeing 737-400 which crashed on the M1 motorway in January 1989 with the loss of 47 lives (Air Accidents Investigation Branch, 1990; Carter, 1994). This accident occurred because the pilots, who allegedly became confused by their instruments and the nature of the event, shut down the wrong engine following an engine fire and the aircraft crashed on its approach to the airport. While the pilots carried much of the blame for the accident, and subsequently had their employment terminated,(b) the accident was much more complex than a simple case of pilot error (Smith, 1992). Although there is little doubt that human intervention was a principal cause of the disaster, the question of whether the pilots were as culpable in causing the accident as many accounts suggest is open to debate. It can be argued that there are a range of factors that precipitated the event and these were subsequently compounded by the actions of the pilots as they tried to make sense of the problem.

(b) There seems little doubt that this decision to terminate their employment was a result of the accident.

The accident took place when the aircraft was en route from London Heathrow to Belfast.(c) The aircraft was relatively new, having logged only 521 airframe hours, and had come into service with British Midland in 1988 (Air Accidents Investigation Branch, 1990). This aircraft differed from the 300 Series in a number of ways. The most noticeable changes related to the new engines, which were an upgraded version of engines fitted to earlier models, and the provision of a 'glass cockpit' in the cabin. Both of these factors were major contributors to the accident. The Captain and First Officer could be

(c) For a full account of the accident see Air Accidents Investigation Branch (1990), Carter (1994) and Smith (1992).

considered as experienced pilots and had logged a total of 13,176 and 3290 flying hours respectively (Air Accidents Investigation Branch, 1990). However, of these hours, the Captain had only logged 763 hours on B-737 aircraft, of which 23 hours had been on the 400 Series, and the First Officer had 192 hours on the B-737, with 53 on the 400 Series (Air Accidents Investigation Branch, 1990). Both the Captain and the First Officer completed their conversion course for the 400 Series aircraft on 17 October 1988 (Air Accidents Investigation Branch, 1990). The pilots claim that this conversion course for the 400 Series took the form of a lecture and tape–slide presentation, which did not give them a complete understanding of the aircraft's characteristics. Unfortunately, there was apparently no simulator training for the 400 Series aircraft available at that time in the company; such training would have allowed the pilots to gain valuable experience with the new instrumentation prior to making commercial flights. The event which faced them that evening, namely an engine failure and compressor stalls, would have been the first time that they had been confronted with that particular set of instrument conditions. In addition, the pilots have argued that the vibration meter, which was in a secondary position, did not provide them with a sufficiently clear warning that a problem existed. Again, simulator training may have given them more familiarity with the instrumentation's characteristics.(d)

(d) British Midland introduced their B737-3/4/500 simulator in 1990 (Aviation Information Resources, 1995).

The aircraft engine initially failed some 20 minutes after take-off. As the aircraft climbed through 28,300 feet, part of a fan blade detached itself in the port (left-hand) engine and caused some damage to the engine (Air Accidents Investigation Branch, 1990). The pilots became confused by the combination of cues – namely, heavy vibration and associated noise, shuddering and a smell of burning – and they believed that the right-hand engine was on fire. After throttling the engine back, the Captain decided to shut it down. The pilots took the decision to divert to the airline's home base at Castle Donington near Nottingham (East Midlands Airport) rather than return to Heathrow. Upon attempting to land at East Midlands Airport, the port engine lost power and the pilots received a fire warning. Immediate attempts were made to restart the right-hand engine but these were unsuccessful and the plane crashed on the M1 motorway.

The first indication that the pilots had of a problem with the engine was when the plane vibrated heavily and, according to the Captain, smoke came through the air conditioning system. The First Officer informed the Captain that there was a fire and, after some hesitation, stated that it was in the right (no. 2) engine. The Captain then ordered the throttling back of the supposedly damaged engine which, he held later, was common procedure, before closing it down. This action gave the indication that their actions in identifying the failure had been correct, as the vibration ceased. However, by shutting down an engine, the pilots had overridden the fuel management system and this allowed more fuel to be provided to the damaged engine, thereby reducing the juddering to a negligible level (Air Accidents Investigation Branch, 1990). Had the pilots checked the vibration indicator, which was in a secondary position amongst the engine instrumentation, they might have been more readily able to identify the source of the problem. However, the position and size of this instrument resulted in it being neglected by the First Officer when he made his initial diagnosis of the problem. This initial error was not corrected later when the pilots tried to evaluate their decision making. Nothing seemed to alert them to the real cause of the problem or to the mistake that they had made, and there were no apparent warnings provided by those who had observed the fire.

In this case, the error did not arise because there were no contingencies available to the pilots but because a relatively bad rule, that of throttling back the supposedly damaged engine, compounded the pilots' initial judgemental error and reinforced that false judgement. Had the pilots been able to visually confirm the status of the engine, either through the provision of external cameras or by a physical check through the passenger cabin windows,

then their initial mistake would have been recognized. The problem was compounded further by the fact that the cabin crew did not alert the pilots to the fact that several passengers had witnessed smoke and flames coming out of the left-hand (no. 1) engine.

The AAIB report stated that the cause of the crash was that

the operating crew shut down the No 2 engine after a fan blade had fractured in the No 1 engine. This engine subsequently suffered a major thrust loss due to secondary fan damage after power had been increased during the final approach. (Air Accidents Investigation Branch, 1990, p. 2)

In addition, the report listed five factors that compounded this erroneous decision by the pilots. These were: that the combination of vibration, noise, smell and shuddering was outside of the pilots' experience; that, contrary to their training, they reacted too quickly to the event; that the pilots did not fully assimilate the information provided by their instruments before throttling back the no. 2 engine; that, in throttling back the no. 2 engine, the vibration, noise and shuddering ceased and this reinforced their view that they had made a correct decision; and, finally, that the pilots were not informed that many passengers and some crew had seen flames coming out of the no. 1 engine (Air Accidents Investigation Branch, 1990). In addition, the report made 31 safety recommendations associated with the accident, many of which related to technical and design issues.

DISCUSSION

The accident at Kegworth illustrates many of the problems surrounding the process of error within an organizational setting. There is a general recognition within the literature that failures in technical systems are all too often attributed to human causes (see, amongst others, Norman, 1990; Hollnagel, 1993). However, there is also a view that such an attribution of blame is somewhat misplaced, as there are a multitude of factors which interact to generate failure (see Turner, 1976, 1978; Perrow, 1984; Wagenaar et al., 1990; Smith, 1995; Reason, 1997). In dealing with this issue, Connors et al. (1994) suggest a number of reasons why failure is disproportionately attributed to operators. These reasons are considered to be as follows: a reluctance by managers to consider the various design-related issues (because of the irradiation effect that this would have on other systems of the same type); that blaming the operator may deflect some financial liability for the event away from the organization; that the disproportionate power relationships within organizations, and the general helplessness of operators within the legitimation process, tend to result in a culture of blame; and that there is a tendency amongst operators to blame themselves when dealing with complex technologies (Connors et al., 1994).

Such arguments have relevance for the accident at Kegworth. Had the root cause of the crash been given as a faulty engine, rather than the focus on operator error that ensued, then this may have resulted in all Boeing 737-400 aircraft being grounded until the fault was rectified. Given that two other aircraft experienced a similar engine failure after the Kegworth crash, although with no loss of life (Air Accidents Investigation Branch, 1990), such concerns over a possible flaw in the engine may have been considered to be well founded. Indeed, there are a number of other technical factors that may have had a causal bearing on this accident or may have impacted upon the casualties that ensued. These include: the failure to fit external cameras, following the engine fire in a Boeing 737 aircraft at Manchester Airport; the apparent reluctance on the part of manufacturers to strengthen cabin floors and seat provision; the failure on the part of operators and manufacturers to fit rear-facing seats in aircraft; and the nature of overhead storage bins and their impact upon upper body injuries during a crash-landing. What is clear is that operator error, while an important active failure within this accident chain, served to expose a whole series of latent failures within the system, and these made a major contribution to the event (Smith, 1992) both in terms of causality and consequence. These errors can be categorized according to the various points of

intervention outlined by Reason and are shown in Table 1.

What is clear from Table 1 is that the unsafe act of switching off the wrong engine was preceded by a whole series of latent failures, which compounded its impact. In addition, there were a number of systems-related problems and inadequate defences which failed either to prevent this unsafe act from occurring in the first place or to mitigate the consequences of its impact once the error was made. Within safety-critical systems, it is important that visual and auditory warnings are given to operators in order that they may clearly identify the source of any problem. It would appear from this accident that such clarity was not afforded to the pilots by their instrumentation. Herein lies the danger for automated systems. In this context, Wiener has observed that

    automated devices, while preventing many errors, seem to invite other errors. In fact, as a generalisation, it appears that automation tunes out small errors and creates opportunities for large ones. (Wiener, 1985, p. 83)

Indeed, in discussing the monitoring role of pilots on the modern flight deck, Wiener goes on to argue that

    we are still making the same seemingly contradictory statement: a human being is a poor monitor, but that is what he or she should be doing. (Wiener, 1985, p. 87)

In the case of the Kegworth accident, it is apparent that the pilots were not good monitors of the system, although one can also point to the failure of the flight services crew to communicate eyewitness accounts of the engine fire to the pilots as a major compounding factor in the accident. Similarly, the ability to make a visual check on the engines from the cockpit via external cameras would have been an invaluable feedback mechanism. The accident also raises issues surrounding the role and effectiveness of checklists in fault diagnosis, an issue that has again attracted the attention of researchers in human factors (see Degani and Wiener, 1993). This event suggests that the use of checklists proved to be an inadequate method of focusing the pilots' attention on the likely source of the problem.

MAKING SENSE OF THE EVENT

If we try to model the dynamics of a cockpit and the interactions between pilots and the machine, then a series of potential error sources emerge. These are shown in Figure 3, which is based on the accident at Kegworth. There are a number of important issues surrounding the interaction between human operators and the technology that they are working with, as well as issues relating to the crew resource management process (see, for example, Weick and Roberts, 1993). The distraction caused by external communications from Air Traffic Control, the inability of the checklist to help the pilots successfully identify the root cause of the problem and correct the error, the systemic barriers to communication within the aircraft, and the role of instrumentation in providing valuable feedback to the pilots are all considered to be important characteristics of the accident chain for this event.

A key aspect of this accident was the role of the aircraft's instrumentation in both confusing the pilots and failing to provide them with a clear indication of their initial error. In this case the error did not arise because there were no available contingencies, but because a relatively bad rule – that is, throttling back on the supposedly damaged engine – compounded the initial error and reinforced the pilots' flawed judgement. A number of questions remain unanswered by the findings of the official report and, while some blame does inevitably rest with both the pilots and other crew members, there are further issues that proved to be important in compounding the problems faced on the flight deck.

Firstly, one has to question the effectiveness of the team that was operating the aircraft. This obviously includes the two pilots and their relative abilities in that task, but also the interaction with the flight services crew, who had knowledge about the event but failed to pass it on to the pilots. It also raises questions concerning the reasons why two pilots with less

Table 1. Points of intervention for error generation applied to the Kegworth air crash

Fallible decisions
- Manufacturer's decision not to test the engine in a flying test bed
- No out-of-range markings provided on the vibration meter (attention-getting device)
- The cabin services crew did not pass on vital information concerning direct observations of events to the pilots
- The decision to divert to East Midlands Airport may have increased the workload for the pilots due to the time factor and additional load. This may have prevented a full assessment of events
- Co-pilot's determination of the source of the engine fire was inaccurate (the left engine was the faulty one and not the right engine)
- The pilots were deemed to have reacted too quickly to the situation and did not assimilate information from their instruments prior to making a decision

Line management deficiencies
- Lack of effective communication between the cabin crew and the pilots prevented passenger accounts of the fire being passed up to the flight deck
- Awareness raising needed for the improved accuracy of vibration meters
- The nature of the training provided to cabin staff for emergency situations may be insufficient to cope with such an event
- Need for greater simulator training for new aircraft types
- The presence of a flight engineer might have led to an effective diagnosis of the fault (aircraft designed for a two-person crew) by adding slack to the decision team

Psychological precursors of unsafe acts
- The combination of smoke/smell of burning created a confused perception of incident causality for the pilots due to their knowledge of the systems design
- The engine was throttled back and this gave the pilots the indication that the problem was solved
- Possible effects of an authority gradient within the aircraft may have impaired the quality of communication, decision making and problem solving (hurried decision and diagnosis)
- Failure event was outside the experience of the pilots
- Competing radio transmission traffic caused some measure of distraction for the pilots as they were trying to work through their checklist
- Problems of information processing due to the configuration of the engine instruments (vibration meter in a secondary position)

Unsafe acts
- Pilots shut off the wrong engine having initially throttled it back

Inadequate defences and systems deficiencies
- Engine failed on the final approach to land: pilots unable to correct the problem by restarting the second engine due to time constraints
- Instrumentation inadequate to provide a sufficiently clear warning of the problem and its root cause
- Warning mechanisms inadequate for the pilots to take effective action
- No external cameras fitted to provide pilots with a visual check on engine status
- Aircraft could have been further strengthened to exceed current design requirements for crash testing, which might have reduced the extent of the fatalities and casualties (cabin floor, seating and overhead stowage bins)
- The speed of interaction between the fan blade shattering and poor engine performance was rapid
- Emergency procedures for engine failure and shutdown inadequate to cope with the demands of the event (pilot overload)
- The emergency checklist procedure appears to have been inadequate to allow the pilots to diagnose the fault

Data drawn from Air Accidents Investigation Branch (1990); Carter (1994).
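Reason's distinction between active and latent failures, as summarized for Kegworth in Table 1, can be illustrated with a short sketch. The descriptions below are condensed paraphrases of the table entries, and the structure is an illustrative construction of ours rather than anything taken from the accident report:

```python
from dataclasses import dataclass

# Reason's points of intervention (labels condensed from Table 1).
LATENT_POINTS = {"fallible decisions", "line management deficiencies",
                 "psychological precursors", "inadequate defences"}

@dataclass
class Failure:
    description: str
    point: str  # the point of intervention at which it arose

    @property
    def latent(self) -> bool:
        return self.point in LATENT_POINTS

# A condensed, illustrative selection of the Table 1 entries.
chain = [
    Failure("engine not tested in a flying test bed", "fallible decisions"),
    Failure("no simulator training for the 400 Series", "line management deficiencies"),
    Failure("smoke and smell suggested the wrong engine", "psychological precursors"),
    Failure("pilots shut down the wrong engine", "unsafe act"),
    Failure("no external cameras to verify engine status", "inadequate defences"),
]

latent = [f for f in chain if f.latent]
active = [f for f in chain if not f.latent]

# The single active failure sits inside a much longer chain of latent ones.
print(len(latent), len(active))  # -> 4 1
```

Even this toy selection makes the paper's point visible: the one unsafe act is outnumbered by the latent conditions that preceded and compounded it.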

Figure 3. Possible cockpit-related interactions on board the Kegworth aircraft

than 76 hours on the 400 Series were rostered on the same flight. However, given the absence of simulator training for this type, it was probably inevitable that most crews would have limited experience of the aircraft at the time of this accident. Secondly, the training that the pilots had been provided with to allow them to transfer to the new model of 737 can also be questioned. At the time of the accident the conversion course consisted of a tape–slide presentation and associated question and answer sessions. There was no simulator available for the pilots at that time. Indeed, it appears that British Midland may only have introduced the first available simulators for this model of aircraft into the UK as late as 1990e (Aviation Information Resources, 1995). Thirdly, the ergonomics of the glass cockpit and the switch to LED instruments appear to have played a major role in precipitating the disaster. The vibration indicator, which was deemed to be of importance within the accident sequence, was both small and placed in a secondary position within the bank of instruments. Fourthly, the lack of external cameras, which had been a recommendation from the earlier Manchester Airport fire involving a 737, prevented the pilots from visually checking the status of the engines from the cockpit. Similarly, the lack of a flight engineer on this type of aircraft also compounded the problem, as that individual would have been in a position both to check visually upon the status of the engine and to assess the instrument readings. These comments, however, do not preclude the fact that the pilots could have checked with the flight services crew concerning their observation of the event, or that the latter should have been active in communicating with the pilots. This cultural dynamic is an important contributor to many accidents which are deemed to have human error at their core. Fifthly, there was considerable `interference' and `distractions' from Air Traffic Control, and this combined with the difficulties that the pilots appeared to have with the

e Aer Lingus introduced its B737-3/4/500 simulator in 1989 and British Airways in 1991 (Aviation Information Resources, 1995).

on-board navigational computer in diverting to East Midlands Airport to inhibit their effective evaluation of the decision. Finally, the convention of throttling back a supposedly damaged engine seems a somewhat difficult rule to justify in the light of this particular event.

MAKING SENSE OF CONFUSION

    Those who forget that sensemaking is a social process miss a constant substrate that shapes interpretations and interpreting. Conduct is contingent on the conduct of others, whether those others are imagined or physically present. (Weick, 1995, p. 39)

Figure 4. Weick's seven properties of sense making

Weick (1995) sees sense making as a process with seven main elements (see Figure 4). Firstly, he argues that sense making is invariably grounded in the notion of the self – the individual is an integral part of the process. Consequently, our determination of the problem and our ability to define what is `out there' will ultimately be a function of our notions of self. In this context, sense making becomes

    an ongoing puzzle undergoing continued redefinition, coincident with presenting some self to others and trying to decide which self is appropriate. (Weick, 1995, p. 20)

Weick's second element concerns the retrospective nature of sense making. Here the actions of those trying to make sense of the phenomena will invariably become an integral part of the problem being investigated: `An action can become an object of attention only after it has occurred' (Weick, 1995, p. 26). Under the conditions of emergence associated with systems failure, it is clear that this creates considerable problems for those trying to manage complex technologies. Given the nature of coupling and complexity and the role of human intervention in shifting the pattern of the event, this process of retrospective sense making is both complex and often poorly understood by those caught up in a disaster. This relates, in turn, to Weick's third element, which sees the process as being `enactive of sensible environments' (Weick, 1995, p. 30). Put another way, the environment is a social construction and this then becomes an integrated part of the problem being defined. In dealing specifically with crises, Weick observes that

    from the perspective of enactment, what is striking is that crises can have small, volitional beginnings in human action. Small events are carried forward, cumulate with other events, and over time systematically construct an environment that is a rare combination of unexpected simultaneous failures. (Weick, 1988, p. 309)

Weick (1988) also argues that the process of enactment will be dependent upon three major factors. These are: the commitment that those involved in the sense making process have towards their actions; the capacity that the actors have to deal with the problem in terms of their skills and perceptions; and the assumptions and expectations held by decision makers (Weick, 1988). One can argue that Rasmussen's (1983, 1992) distinction between skills, rules and knowledge will be of importance in shaping this process of enactmentf.

Within this whole process, discourse and conversation are central components of sense making, and they comprise Weick's fourth group. Sense making is a social process and it is inevitably dependent upon the discourse that exists between those who are trying to come to terms with the problem facing them. This

f By segmenting training programmes to deal with each of these three aspects of an operator's task requirements, it should be possible to take a more holistic approach to the identification and management of error and sensemaking issues.

element of sense making is of particular importance within aviation, as flight crews attempt to solve problems which fall outside of their protocols (see Johnson, 1997; Linde, 1988; Cushing, 1994). The fifth element sees sense making as an ongoing process. As sense makers are presented with more stimuli and information, they will continue to re-evaluate the nature of the problem or, perhaps, reject data that does not fit into their current world-view. The processes of insight and reflection are important here, and the whole process is deemed to be tied into those emotional factors and relationships that exist between the various sense makers (Weick, 1995). This is closely related to Weick's sixth element, which concerns the extraction of cues and the breaking up of the flow of events in order to provide for an interpretation of the event. This process runs the risk of becoming reductionist, particularly when it becomes contextualized within organizational settings. In the aftermath of a crisis event, this extraction of cues will be important in allowing for the reconstruction of events, with the possible elevation in significance of certain acts and stimuli over others. The final element in the process is the notion that sense making is driven by plausibility rather than accuracy, for as Weick observes:

    sensemaking is about plausibility, coherence, and reasonableness. Sensemaking is about accounts that are socially acceptable and credible . . . It would be nice if these acceptable accounts were also accurate. (Weick, 1995, p. 61)

If the interpretation of the event is plausible enough, then people will act upon the cues that emerge and base their subsequent actions on that interpretation. Under conditions of crisis, both the time and the information required to ensure greater accuracy are often not available to decision makers. As a result, they tend to base their interpretation on the credible nature of events. Under such conditions, this issue of plausibility may lead to cognitive narrowing as decision makers reject interpretations that do not meet their world-view. Indeed, if the event results from the emergent properties of the system then it is likely that the decision makers may not recognize the pattern of events and will go with the `solution' that fits within their dominant paradigm. In drawing these issues together, Weick details the overall sense-making process as follows:

    Once people begin to act (enactment), they generate tangible outcomes (cues) in some context (social), and this helps them discover (retrospect) what is occurring (ongoing), what needs to be explained (plausibility), and what should be done next (identity enhancement). Managers keep forgetting that it is what they do, not what they plan, that explains success. (Weick, 1995, p. 55)

If we apply these seven elements to the crew actions on the Kegworth plane then we can gain some insights into the manner in which they sought to make sense of the engine failure and the resultant emergency.

The main patterns of sense making that were undertaken by the Kegworth crew are shown in Figure 5. The initial cues that were presented to the pilots came from the noise, smoke, fumes and smell associated with the engine fire. They received no clear or effective warning from their instruments to alert them to the source of the problem. From this point on, the event assumes characteristics that are influenced by the First Officer's determination that the problem was with the right-hand (no. 2) engine. This error, combined with a failure of the aircraft's warning systems to alert the pilots to their false assumption, set in train a sequence of events that would result in the accident.

The action of throttling back on the no. 2 engine and the disengaging of the auto-throttle resulted in a reduction of the vibration and smoke, which gave the pilots an illusion of normality. After some time, the Captain took the decision to shut down the no. 2 engine, as it seemed plausible that they had correctly diagnosed the nature of the problem. This would later create problems on the approach to East Midlands Airport, as the crew could not restart the healthy engine when the no. 1 engine began to fail under the additional power requirements of landing. Despite the severity of the situation, the pilots were apparently not made aware of the
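Weick's cycle of enactment, cue extraction and plausibility-driven interpretation can be caricatured as a toy filter over incoming cues. The cue descriptions, weights and threshold below are invented for illustration and are not data from the inquiry:

```python
# Toy model of plausibility-driven sense making: cues consistent with the
# current account are accepted, while disconfirming cues are discarded
# unless they are overwhelming. Weights and threshold are invented.
def update_account(account: str,
                   cues: list[tuple[str, bool, float]],
                   threshold: float = 0.8) -> str:
    """Keep the account unless a disconfirming cue exceeds the threshold."""
    for cue, confirms, weight in cues:
        if not confirms and weight > threshold:
            return f"revised: not ({account})"
    return account  # cognitive narrowing: the plausible account survives

cues = [
    ("vibration fell after throttling back no. 2", True, 0.9),   # illusory confirmation
    ("smoke cleared when autothrottle disengaged", True, 0.7),
    ("cabin saw flames from no. 1 engine", False, 0.95),         # never reached the pilots
]

# The strongest disconfirming cue existed but was filtered out of the
# flight deck's discourse, so the plausible account is never revised.
flight_deck_cues = [c for c in cues if c[0] != "cabin saw flames from no. 1 engine"]
print(update_account("no. 2 engine is failing", flight_deck_cues))
# -> no. 2 engine is failing
```

Run with the full cue set, including the cabin crew's observation, the same function revises the account; the sketch thus restates the paper's argument that the failure was as much one of communication as of individual cognition.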

Figure 5. Patterns of sense making in the Kegworth aircraft

fact that a number of passengers and some cabin staff had seen flames coming out of the no. 1 engine. The passengers made no comment when the Captain announced that they had a problem with the no. 2 engine, making the assumption that the pilots were fully aware of the problem. This may, in part, reflect on the social roles that exist within aircraft and the possible authority gradients (see Alkov et al., 1992) that can exist within such social settings. For their part, some of the cabin staff claimed that they had not heard the Captain's reference to the right-hand engine, although a number of the passengers clearly heard the announcement. The communications process was also central to the prevention of effective review for the pilots. When they tried to review the cues that they had received, they were interrupted by legitimate communications from Air Traffic Control.

There are a number of organizational issues that also had an impact upon the event. The most notable was the decision to roster pilots together with little collective time on the 400 Series (accepting the difficulties caused by a lack of simulator provision) and the usefulness of the conversion training that was given to the pilots in moving them to a glass cockpit aircraft. Given the nature of the 400 Series cockpit layout, it might have been expected that this conversion course would have been more extensive to take account of these changes. It can be suggested that a greater familiarity with the instrumentation and its significance might have helped the Kegworth pilots in their sense making for the event. Certainly the Captain believed that the vibration meter was unreliable and, given his experience on other aircraft, did not give it the attention that it demanded. While the First Officer made the initial determination of the source of the problem, he could not remember what prompted him to make that decision. Clearly, then, the instrumentation and warning systems did little, if anything, to aid the pilots in their decision making, although this may have been as much an issue with the pilots themselves as with the layout. This was compounded further by the fact that the checklists did not alert the pilots to their error in decision making and so

their initial determination of the problem remained plausible and, therefore, dominant. It remains an issue for further research as to whether the extensive use of checklists can impair the effective sense-making process, especially given the nature of emergence. While the process of sense making continued for the short duration of the flight to East Midlands Airport, the constant distractions and the time constraints imposed upon the pilots prevented their initial mistake from being identified. It is difficult to assess the importance of role identity within this case, although there would seem to be important issues around the relative status of aircraft staff and the possible effects of this upon communication. This issue would require further research in a simulated environment before its impact on the sense-making process could be evaluated.

CONCLUSIONS

    Our actions are always a little further along than is our understanding of those actions, which means that we can intensify crises literally before we know what we are doing. Unwitting escalation of crises is especially likely when technologies are complex, highly interactive, non-routine and poorly understood. The very action which enables people to gain some understanding of these complex technologies can also cause those technologies to escalate and kill. (Weick, 1988, p. 308)

The accident at Kegworth illustrates the complex interactions that precipitate crisis events. While the root cause of the crash is held to be a result of human error, there are clearly a number of mitigating factors that need to be taken into account when assessing causality. This paper has sought to outline and review some of the main issues surrounding the role of human error in such events. However, the constraints of space prevent a full discussion of the human factors elements inherent in aircraft operations. It is clear that further research in this area is required; in particular, work is needed to assess the role of modern glass cockpits in creating problems of interpretation and sense making for pilots. This is likely to become more of an issue as a new generation of glass cockpit and fly-by-wire aircraft comes into service. As these technologies become more advanced, it is likely that human operators will be faced with further sense-making problems, especially as these technologies become increasingly controlled by computers. The Kegworth accident pointed to the need for clear diagnostics within the instrumentation, and for systems that provide pilots with unambiguously clear information concerning the root cause of problems with the aircraft. The design and layout of instrumentation have, once again, proved to be a significant factor in affecting the accident chain, as previous evidence had already suggested (see Mann and Schnetzler, 1986). On this occasion, the position of the instruments, and the pilots' perception of their accuracy and reliability, combined with the available circumstantial evidence to generate a false assessment of the problem. Again, this is a well-researched area, although the new generation of cockpits will create new and more complex scenarios for researchers to deal with, and further work on the sense-making process is required here. What is also clear is that further attention needs to be given to the processes by which organizations learn from crisis events of this nature. Clearly, sense making is an effective precursor to this learning process and its core principles need to be incorporated into accident investigation.

While a study of single-case accidents is important, it is necessary to validate the theoretical frameworks outlined in this paper across a range of accidents. In particular, it is important to combine post-accident analysis of this nature with other forms of data capture, in an attempt to ensure that these frameworks are robust. Of importance in this respect is the study of error generation across aircraft types, or within the context of a single organization where a safety culture may not be in evidence. Further work is clearly required in these areas, particularly those involving the new generations of aircraft in which the computer assumes greater control of the operation than before. Such technological developments have significant implications for both the selection and training of pilots and the

management of the interface between the software engineers and the end users of the technology. The core assumptions of each group of `experts' may not be the same, and this may increase the potential for error. It is in this area that the broader concepts of crisis management will be of importance in providing frameworks for the assessment of error potential across the organization, rather than simply at the operational level.

It is clear that human factors research has a considerable role to play in preventing accidents involving complex technologies. What is also apparent is the need for such research to move beyond the traditional cockpit-based assessment of the problems and further into the wider organizational context. It is here that a broader understanding of cognitive processes within latent error incubation would provide considerable scope to improve the reliability of organizations and the systems that they operate. This would occur not only by improving the quality of the systems themselves, but also by changing the current paradigms that are prevalent within management theory.

ACKNOWLEDGEMENTS

The author is grateful to a number of colleagues who have made comments on earlier versions of this paper. These include Dominic Elliott, Ray Hudson, Rebecca Lawton, Jo McCloskey and James Reason. In addition, the author is grateful for the comments made by two anonymous reviewers. Inevitably, the author is responsible for all errors of omission and commission within the paper.

REFERENCES

Air Accidents Investigation Branch (1990). Report on the accident to Boeing 737-400 G-OBME near Kegworth, Leicestershire on January 8, 1989. Department of Transport, Aircraft Accident Report 4/90, HMSO, London (Internet copy at http://roof.ccta.gov.uk/aaib/gobme/gobmerep.htm).
Alkov, R. A., Borowsky, M. S., Williamson, D. W., and Yacavone, D. W. (1992). The effect of trans-cockpit authority gradient on Navy/Marine helicopter mishaps. Aviation, Space, and Environmental Medicine 63(8), 659–661.
Aviation Information Resources (1995). Civil simulators directory. Flight International 147(4468), 33–34, 36–38, 42–44, 46–50, 52–56.
Bignell, V., and Fortune, J. (1984). Understanding Systems Failures, Manchester University Press, Manchester, UK.
Carter, R. (1994). A major disaster – the M1 plane crash – how it occurred. In Wallace, W. A., Rowles, J. M., and Colton, C. L. (eds), Management of Disasters and their Aftermath, BMJ, London, pp. 10–28.
Checkland, P. (1981). Systems Thinking, Systems Practice, Wiley, Chichester.
Checkland, P., and Scholes, J. (1990). Soft Systems Methodology in Action, Wiley, Chichester.
Connors, M. M., Harrison, A. A., and Summit, J. (1994). Crew systems: integrating human and technical subsystems for the exploration of space. Behavioral Science 39, 183–212.
Cushing, S. (1994). Fatal Words: Communication Clashes and Aircraft Crashes, University of Chicago Press, Chicago.
Degani, A., and Wiener, E. L. (1993). Cockpit checklists: concepts, design and use. Human Factors 35(2), 345–359.
Diehl, A. E. (1991). Human performance and systems safety considerations in aviation mishaps. International Journal of Aviation Psychology 1(2), 97–106.
Edwards, D. C. (1990). Pilot Mental and Physical Performance, Iowa State University Press, Ames.
Fortune, J., and Peters, G. (1995). Learning from Failure: The Systems Approach, Wiley, Chichester.
Giddens, A. (1990). The Consequences of Modernity, Polity Press, Cambridge, UK.
Green, R. (1990). Human error on the flight deck. Philosophical Transactions of the Royal Society of London B 327, 503–512.
Hollnagel, E. (1993). Human Reliability Analysis: Context and Control, Academic Press, London.
Hurst, R., and Hurst, L. R. (eds) (1982). Pilot Error: The Human Factor, Jason Aronson, New York.
Jensen, R. S. (1995). Pilot Judgment and Crew Resource Management, Avebury Aviation, Aldershot, UK.
Johnson, C. W. (1997). The epistemics of accidents. International Journal of Human–Computer Studies 47(5), 659–688.
Linde, C. (1988). The quantitative study of communicative success: politeness and accidents in aviation discourse. Language in Society 17(3), 375–399.
Mann, T. L., and Schnetzler, L. A. (1986). Evaluation of formats for aircraft control/display units. Applied Ergonomics 17(4), 265–270.
Meshkati, N. (1991). Human factors in large-scale technological systems' accidents: Three Mile Island, Bhopal, Chernobyl. Industrial Crisis Quarterly 5(2), 133–154.


Miller, C. O. (1988). System safety. In Wiener, E. L., and Nagel, D. C. (eds), Human Factors in Aviation, Academic Press, San Diego, pp. 53–80.
Nagel, D. C. (1988). Human error in aviation operations. In Wiener, E. L., and Nagel, D. C. (eds), Human Factors in Aviation, Academic Press, San Diego, pp. 263–303.
Norman, D. A. (1990). The problem with automation: inappropriate feedback and interaction not over-automation. Philosophical Transactions of the Royal Society of London B 327, 585–593.
O'Connor, J., and McDermott, I. (1997). The Art of Systems Thinking: Essential Skills for Creativity and Problem Solving, Thorsons, London.
Perrow, C. (1984). Normal Accidents: Living with High-Risk Technologies, Basic Books, New York.
Rasmussen, J. (1982). Human errors: a taxonomy for describing human malfunction in industrial installations. Journal of Occupational Accidents 4, 311–335.
Rasmussen, J. (1983). Skills, rules, knowledge: signals, signs and symbols and other distinctions in human performance models. IEEE Transactions: Systems, Man and Cybernetics 13, 257–267.
Reason, J. (1990a). Human Error, Cambridge University Press, Cambridge, UK.
Reason, J. (1990b). The contribution of latent human failures to the breakdown of complex systems. Philosophical Transactions of the Royal Society of London B 327, 475–484.
Reason, J. (1997). Managing the Risks of Organisational Accidents, Ashgate, London.
Shrivastava, P., Mitroff, I., Miller, D., and Miglani, M. (1988). Understanding industrial crises. Journal of Management Studies 25(2), 283–303.
Smith, D. (1992). The Kegworth aircrash: a crisis in three phases? Disaster Management 4(2), 63–72.
Smith, D. (1995). The dark side of excellence: managing strategic failures. In Thompson, J. (ed.), Handbook of Strategic Management, Butterworth-Heinemann, London.
Turner, B. A. (1976). The organizational and interorganizational development of disasters. Administrative Science Quarterly 21, 378–397.
Turner, B. A. (1978). Man-made Disasters, Wykeham Press, London.
Wagenaar, W. A., Hudson, P. T. W., and Reason, J. T. (1990). Cognitive failures and accidents. Applied Cognitive Psychology 4, 273–294.
Weick, K. E. (1988). Enacted sense-making in crisis situations. Journal of Management Studies 25, 305–317.
Weick, K. E. (1990). The vulnerable system: an analysis of the Tenerife air disaster. Journal of Management 16, 571–593.
Weick, K. E. (1993). The collapse of sense-making in organisations: the Mann Gulch disaster. Administrative Science Quarterly 38, 628–652.
Weick, K. E. (1995). Sense-Making in Organisations, Sage, Thousand Oaks, CA.
Weick, K. E., and Roberts, K. H. (1993). Collective mind in organisations: heedful interrelating on flight decks. Administrative Science Quarterly 38, 357–381.
Wiener, E. L. (1985). Beyond the sterile cockpit. Human Factors 27(1), 75–90.
