
Applying the Cognitive Streaming Theory to Air Traffic Management: A preliminary study

November 2001

Dylan Jones, Department of Psychology, Cardiff University
Eric Farmer, Centre for Human Sciences, QinetiQ


Table of Contents
Glossary
Executive Summary
1. Introduction
   1.1 Objectives of the preliminary study
   1.2 Scope
2. Cognitive Streaming: An Outline
   2.1 Introduction
   2.2 Auditory attention
   2.3 The nature of short-term memory
   2.4 The nature of dual-task processing
3. Applying the Streaming Model to Aviation
   3.1 Introduction
   3.2 Workload
   3.3 Direct voice input and output
   3.4 Human error
   3.5 Situational awareness
   3.6 CRM training
   3.7 Data link and the party line
4. Experimental Study: Effects of Irrelevant Speech in a Memory Task
   4.1 Introduction
   4.2 The irrelevant sound effect
   4.3 Extending the ISE to applied settings
   4.4 Method
   4.5 Results
   4.6 Discussion
5. Optimising Information Presentation
   5.1 Conveying information by eye and ear
6. Conclusions
7. Proposal for Future Work
   7.1 The position to date
   7.2 Future work
   7.3 Work-Package 1
   7.4 Work-Package 2
   7.5 Work-Package 3
   7.6 Work-Package 4
Annex A: Results of Statistical Analysis

Glossary

ANOVA: Analysis of variance. A standard statistical technique in which the variation in a set of data is partitioned into that due to particular factors
DCP: Dimension cue point. In this study, the time at which experimental subjects were informed which dimension of the stimuli they were required to report
ISE: The irrelevant speech effect: disruption of performance by speech that is irrelevant to the individual's task
ms: Millisecond. Response times are typically recorded to the nearest ms
p: Probability. In statistical tests such as ANOVA, p refers to the probability that the effect in question occurred by chance. If the p value is small (typically less than 0.05), it is concluded that the effect is genuine
Post-categorical processing: Mental processing in which stimuli are categorised on the basis of the person's established linguistic repertoire
Pre-categorical processing: Early mental processing that does not involve language and is not dependent on categorising the stimulus
RT: Response time. The interval between presentation of a stimulus and execution of a response
Seriation: The process of keeping track of the order of information
Stimulus: Experimental material that participants are required to process
STM: Short-term memory. The duration of STM is generally considered to be about 10-15 seconds, and its capacity 7±2 'chunks' of information
TBR: 'To-be-remembered'. Experimental material that the individual is asked to memorise
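The ANOVA and p entries can be made concrete with a short sketch (illustrative only: the function and the recall scores below are invented for this note and are not taken from the study; only the F ratio is computed, not the p value that would be looked up from it).

```python
# Toy one-way ANOVA: partition total variation into between-group and
# within-group components and form their ratio (the F statistic).
# A large F suggests the group difference is unlikely to be chance.
def one_way_anova_f(groups):
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total observations
    grand = sum(sum(g) for g in groups) / n       # grand mean
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n - k
    return (ss_between / df_between) / (ss_within / df_within)

quiet  = [8, 7, 9, 8]   # hypothetical items recalled in quiet
speech = [5, 6, 4, 5]   # hypothetical items recalled with irrelevant speech
print(one_way_anova_f([quiet, speech]))  # 27.0
```

With these invented scores the between-group variation dwarfs the within-group variation, giving F = 27.0; in a real analysis this F would be referred to the F distribution to obtain p.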


Executive Summary
This preliminary study was conducted as part of the Eurocontrol CARE programme. Its objective is to demonstrate how a new theoretical approach to understanding human information processing (cognitive streaming) can be applied to problems facing aircrew and air traffic controllers as traffic density increases. Cognitive streaming is a theoretical framework for human information processing when auditory and visual information is combined. It was initially developed to understand the effects of irrelevant background sound on cognitive processing. Adverse effects are often observed, particularly in tasks that involve the retention of order in short-term memory, even though the individuals are instructed to ignore the sound. The streaming notions that explain these effects differ from theories based on specialised mental resources: they predict that interference between tasks occurs when they draw upon the same mental process (such as seriation, or keeping track of order) rather than when their content is similar (e.g., both are spatial tasks).

Relevance to aviation
The possible application of the streaming theory to aviation topics is discussed. Particular attention is given to the introduction of data link and the resultant partial loss of party line information. Since speech disrupts aspects of human performance, removal of the party line may have beneficial effects. However, some studies have suggested that party line information may be a source of situational awareness (SA), and that SA will therefore be compromised after the introduction of data link. The available evidence on loss of the party line is reviewed. Although there is some evidence that the party line is useful in maintaining SA, it is largely based on subjective aircrew opinion. To study further the possible benefits of removing party line information, an experiment was conducted in which participants performed a short-term memory task in the presence or absence of irrelevant speech. The results indicated that order memory for visually presented information was disrupted by extraneous speech, regardless of whether this information was coded spatially or verbally. These findings are entirely consistent with the predictions of the streaming model, but are problematic for the resource-based models upon which many aviation human factors studies are based. Their practical implications are discussed.

Optimising information presentation

It is likely that future data link systems will retain some voice-based information. Rather than postulate a strict distinction between visual (data link) and auditory (R/T) information, it is more important to consider how information presentation can be optimised by judicious combinations of these sensory modalities. The streaming model is ideally suited to guiding such optimisation, and our initial thoughts on this topic are described.

Next steps
The report concludes with a description of possible further work. Initial laboratory-based investigation is recommended, followed by validation in realistic aviation environments.

1. Introduction
1.1 Objectives of the preliminary study

The objective of this short project is to demonstrate how a new theoretical approach to understanding human information processing (cognitive streaming) can be applied to problems facing aircrew and air traffic controllers as traffic density increases. This preliminary work considers topics such as the introduction of data link, and includes an experiment to demonstrate that irrelevant speech disrupts human performance.



1.2 Scope

The study encompasses:
- A description of the streaming model
- Relevance of the model to aviation
- Data link issues
- Methods of optimising the presentation of information
- A demonstration experiment: effects of irrelevant speech on verbal and spatial processing in a memory task
- Ideas for further research

2. Cognitive Streaming: An Outline

2.1 Introduction
Cognitive streaming is the name given to a theoretical framework for human information processing when auditory and visual information is combined. This approach has a number of characteristics that distinguish it from conventional approaches to understanding the nature of auditory attention and the effects of workload. The framework was initially developed to understand the effects of irrelevant background sound on cognitive processing. Adverse effects of background sound are often observed, particularly in tasks that involve the retention of order in short-term memory, even though the individuals are instructed to ignore the sound. Effects on short-term memory have implications for many practical tasks. Short-term memory is particularly important when individuals have to deal with novel information, or when they have incompletely developed capacities or skills (during training, or when dealing with information overload).


2.2 Auditory attention

Processing of sound has a number of important characteristics:

Processing of sound is obligatory. The mere fact that unattended sound has an impact on processing demonstrates the obligatory nature of auditory processing. Sound is registered even when attention is directed elsewhere, and even when the primary (short-term memory) task is visual. The interference is therefore not an effect at the sensory periphery; rather, its location is more central.

Unattended sound is organised much like attended sound. The perceptual organisation of unattended sound seems identical to that found when the sound is attended. The auditory system organises sounds into temporally extended entities or streams. The principles of this organisation are well understood and have been exploited for centuries by composers. Consider the case of streaming by location as an example of the perceptual organisation of unattended sound. It is well established that a changing sequence of irrelevant sound is more disruptive than a repeated sequence: a sequence in which different letter sounds are repeated (say, p, j, x) will be much more disruptive than one in which a single sound is repeated (p, p, p). If three different successive sounds in the cycle also come from different locations (left, centre, right), they form three streams. The degree of disruption is then much lower than when the sounds come from a single location. This implies that the sounds must have been organised in a way comparable to that observed when they are attended.

The ability to organise information without attention facilitates switching between one source and another. If there were no such organisation, it would require more effort to switch attention from vision to audition, or from one auditory stream to another. Having all possible sources of information organised in advance, on the basis of their physical characteristics, makes attention flexible and adaptive.

Unattended sound does not habituate. It is often assumed that the involuntary orienting response to unattended sound will gradually weaken as a result of repetition (i.e., the individual will get used to the sound and it will have less effect). The cognitive streaming framework suggests otherwise: it proposes that the nature and strength of the representation do not change over time. The utility of failing to habituate to sound is that the flexibility and speed of changes in attention are preserved. The experimental evidence supporting this position comes from two sources. One shows no attenuation of the effect over successive trials in an experiment, or even over successive days of testing. The other relates to the number of different items (say, words or letters) in the irrelevant sequence. The habituation position holds that the amount of habituation is a product of the number of times a stimulus is repeated: the more items, the fewer the repetitions of any item. In fact, the function is non-linear, with disruption increasing as the number of items goes from one to two and rising only modestly beyond that. This result supports the conclusion that habituation does not play a significant role.

Disruption is not affected by the similarity of irrelevant and relevant material. An intuitively attractive assumption is that the effect of irrelevant sound is based on its similarity to the contents of the primary task. It seems plausible that hearing irrelevant digits will be particularly disruptive if a person is trying to remember sequences of digits. However, this assumption is incorrect: several experiments have shown that the similarity of the irrelevant sound to the material held in memory is of no importance.


2.3 The nature of short-term memory

Memory as skill

The pattern of interference by irrelevant sound has led to a reappraisal of the notion of short-term memory (STM). Classically, memory has been thought of as a faculty distinct from others such as perception. An analogy with computer storage was drawn upon extensively in the 1950s: it was argued that memory was based upon stores, and some writers suggested that there were several such stores. Of course, this is not the only way in which memory can be conceptualised. The cognitive streaming approach suggests that memory phenomena represent one region of a continuum of behaviour. Rather than suppose that short-term memory is a store, the streaming approach suggests that it is one of a range of skilled behaviours.

Short-term memory is characterised by sequences with low transitional probabilities to which habits of language processing are applied. For example, in a sequence of random numbers, each digit gives no inherent information about the next in the sequence; however, the individual may apply a number of strategies, from mere repetition to complex association with previously stored knowledge, to assist in the retention of the sequence. Thus, transitional probabilities are low because the materials that typically have to be committed to memory are in an unfamiliar order, with weak or non-existent linkages between them. The transitional information normally present in spoken linguistic sequences ('the cat sat on the…' leads to a high transitional probability for the word 'mat') is largely absent. The person's task is to overcome this shortfall by imposing order where none existed. This type of behaviour, deliberate and effortful, belongs to one end of a continuum, the other end of which is represented by effortless and fluent skilled behaviour. Notions related to skill rather than to storage are therefore appropriate.
The dominant idea is that of a match between established procedures for action and perceptual affordances: events in the world (or indeed the products of thought) either do or do not fit these procedures. Short-term memory is a skill much like playing tennis; no special stores need to be invoked. Autonomy of action, and low workload, occur when stimuli and skill are matched perfectly. Short-term memory phenomena occur when stimuli are imperfectly related to skill; effort has to be expended in applying a range of linguistic skills to support the retention of order information. Understanding of memory will be achieved not by searching for stores but by understanding how habits (linguistic and other types) and the statistical regularities of the material dictate the ease with which it is retrieved. For example, there will be differences in the ease with which sequences are set up automatically, depending on whether the information is presented visually or auditorily. The key construct is that of transitional probabilities: performance is good when transitional probabilities are high, which may be achieved by a match between stored procedures and familiar stimuli. The stored procedures may be perceptual affordances (such as the obligatory processing of sound, and of some visual stimuli) or well-learned cognitive skills (such as the motor skill of driving or flying). When they are perfectly matched, automaticity of performance can be achieved. This has the advantage of releasing resources to deal with other streams of information.
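The notion of a transitional probability can be made concrete with a small sketch (purely illustrative: the function and the word sequences below are our own, not part of the study). A familiar word sequence yields high conditional probabilities from one item to the next, whereas a transition absent from experience receives no such support, which is precisely the shortfall the rememberer must overcome by strategy.

```python
from collections import Counter

def transition_probs(sequence):
    """Estimate first-order transitional probabilities P(next | current)
    from an observed sequence of tokens."""
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

# Transitions estimated from a familiar phrase (a stand-in for linguistic habit)
probs = transition_probs("the cat sat on the mat and the dog sat on the rug".split())
print(probs[("the", "cat")])           # a well-supported transition
print(probs.get(("cat", "the"), 0.0))  # never observed: 0.0, no support from habit
```

A random digit string presented in a memory experiment is like the second case throughout: no transition is supported by prior experience, so order must be imposed deliberately.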


2.4 The nature of dual-task processing

Pre-categorical versus post-categorical processes

A distinction is often made within cognitive psychology between:
- pre-categorical processes (those not involving language, or not dependent on categorising the stimulus)
- post-categorical processes (those resulting from contact with the person's established linguistic repertoire).

In the typical irrelevant sound experiment involving short-term memory, adding meaning to the sound does not increase the degree of disruption. This suggests that pre- and post-categorical processes are not distinct: the primitives involved in the streaming of sound are almost certainly pre-categorical, whereas short-term memory for strings of verbal items is most certainly post-categorical, yet there is marked interference. This could be the case only if perception disrupts memory. Other experiments in our Cardiff laboratory using quite different techniques support this conclusion. For instance, an auditory memory task undertaken whilst the person is being exposed to visual motion reduces the duration of the motion after-effect (whereby motion is induced in a stationary object as a result of prior exposure to a moving object). This is another example in which the primary task is pre-categorical or perceptual and a concurrent task is post-categorical.

The distinction between pre- and post-categorical processes has implications for the interference between concurrent activities (typified by multiple-task job performance). We may expect perceptual events to interfere with storage (or cognitive) activities, and vice versa. It is very difficult for conventional models, such as Wickens's (e.g., 1992) multiple resource theory, to account for such findings. Such models of workload suggest a functional distinction between vision and audition. There are by now several examples (two of which have been provided here) in which auditory processing interferes with visual performance (of a pre-categorical or post-categorical nature).
From the standpoint of cognitive streaming, the pattern of interference between modalities can be understood in terms of the functional characteristics of each. That is, vision and audition are not intrinsically different in the types of representation they use, but they are structurally constrained in ways that make certain types of processing more habitual than others. We discuss the functional differences between modalities later. Suffice it to say that, to partition functions in terms of their suitability for auditory and visual processing, the functional characteristics of each have to be considered.

Spatial and verbal processing. A distinction is commonly made between spatial and verbal processing. Some theorists have even argued that spatial and verbal events are retained in separate stores. The cognitive streaming perspective supposes otherwise. Consider a simple case from the laboratory. For some time it was supposed that spatial and verbal short-term memories were different because the patterns of serial position error (the error pattern over successive items in the sequence to be remembered) were different. However, researchers in our laboratory noticed that all the cited instances of spatial memory involved tests of recognition, whereas those of verbal memory were tests of serial order. When both tests are of the same type, the pattern of error turns out to be the same. That is, process (such as recognition) has been confounded with modality. Modalities constrain processes: for example, in dual-task studies spatial tasks are sometimes continuous (e.g., involving tracking) whereas verbal tasks tend to be discrete (e.g., involving responses to words). In these cases, interference is more a product of having to co-ordinate continuous and discrete tasks than of a conflict between verbal and spatial codes.

Processing of secondary task information depends on the processing of primary task information.
This is the least fully developed part of the framework, and the following discussion should be regarded as speculative. A potentially important trend emerging from

the research on irrelevant sound is that the character of interference from sound depends on the type of processing being undertaken in the primary task. If that task involves the processing of order, then it seems that the order information embodied in the irrelevant sound is the important factor. If, however, the key component of the primary task involves the manipulation or extraction of meaning, then the meaning of the sound becomes an important determinant of disruption. Hence the prevailing level of processing seems to be applied to both attended and unattended information. At its minimum, this means that irrelevant sound is processed to different degrees. In some settings, only the barest information about the physical characteristics of the sound is available; whereas, in others, semantic analysis is possible.


3. Applying the Streaming Model to Aviation


3.1 Introduction

Since the streaming model accounts for multiple-task performance and the simultaneous processing of visual and auditory information, it is extremely relevant to aviation operations. For an overview of future challenges facing aviation psychology, see Farmer (2000b). Below, the contribution of the streaming model to particular aviation topics is briefly considered.



3.2 Workload

Workload has traditionally been one of the most important topics in aviation psychology (e.g., Farmer, 1993). Early studies were concerned primarily with aircrew workload. In recent years, however, the workload of the air traffic controller has been of considerable concern, owing to the rapid growth in air traffic and the introduction of new technology and procedures. Farmer et al (1991) reported several consequences of increased workload in UK air traffic controllers, including changes in performance (sensitivity to signals), mood state, and biochemical indices. For example, the excretion of cortisol (a hormone related to stress states) increased when controllers worked on busy shifts.

It is possible to measure workload in several ways (primary or secondary task performance, subjective ratings, or physiological indices), but the high cost of testing future systems in man-in-the-loop studies has led to increased dependence on predictive modelling. Most approaches are based on the notion of multiple resources (Wickens, e.g., 1992), represented in Figure 1. This approach supposes the existence of distinct functional processing resources, each with its own capacity. Interference between tasks is assumed to occur when their content is similar (e.g., both are spatial tasks), since the tasks are assumed to be drawing upon the same resource. However, as discussed earlier, it can be demonstrated that ostensibly different tasks will interfere if they both require the same process (such as keeping track of order), and, conversely, that apparently similar tasks will not interfere if they require different types of process. Although several studies have shown assumptions of the multiple resources model to be incorrect (e.g., Wickens and Liu, 1988), we believe that the streaming approach is the first to offer a well-founded explanation of its weaknesses.

[Figure 1. Wickens's multiple resource theory: a diagram crossing processing stages (central processing, responding) with modalities (visual, auditory), codes (spatial, verbal), and responses (manual, vocal).]
One of the most widely used predictive techniques, developed by McCracken and Aldrich (1984), is based on a loose interpretation of the Wickens model. This VACP model (visual, auditory, cognitive, psychomotor) estimates the demand produced by tasks on each of four resources, and the likely extent of interference between concurrent tasks drawing upon the same resource. In response to perceived shortcomings of the VACP approach, QinetiQ produced the Prediction of Operator Performance (POP) model. This fairly sophisticated model (Farmer, 2000a) requests subject-matter expert ratings of the demands imposed by single tasks, and computes the demands experienced when tasks must be performed concurrently. The POP model uses mathematical algorithms based on the notion of resources. It predicts workload quite accurately, and also makes reasonably good predictions of performance decrement under multiple-task conditions. However, further improvements in the model will probably require a more advanced theoretical basis, and QinetiQ is considering whether notions of cognitive streaming can be incorporated.
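The resource-summation logic behind VACP-style prediction can be sketched in a few lines (a toy illustration only: the demand ratings, the overload threshold of 7, and the simple additive rule are all invented for this sketch, and do not reproduce the McCracken and Aldrich scales or the QinetiQ POP algorithm).

```python
# Toy VACP-style workload estimate: sum per-resource demand ratings
# across concurrent tasks and flag resources whose summed demand
# exceeds an assumed scale maximum of 7 (hypothetical threshold).
RESOURCES = ("visual", "auditory", "cognitive", "psychomotor")

def concurrent_demand(tasks):
    """Return per-resource demand totals and the list of overloaded resources."""
    totals = {r: 0.0 for r in RESOURCES}
    for task in tasks:
        for resource, demand in task.items():
            totals[resource] += demand
    overloads = [r for r, t in totals.items() if t > 7.0]
    return totals, overloads

# Invented single-task ratings for two concurrent flight-deck tasks
fly_manually = {"visual": 5.0, "cognitive": 3.0, "psychomotor": 4.6}
data_entry   = {"visual": 4.0, "cognitive": 5.3, "psychomotor": 2.2}

totals, overloaded = concurrent_demand([fly_manually, data_entry])
print(overloaded)  # visual and cognitive sums exceed the assumed maximum
```

The streaming critique discussed in the text is precisely that such content-based summation misses interference between tasks that share a process (such as seriation) while predicting interference that does not occur between tasks that merely share content.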


3.3 Direct voice input and output

Implicitly or explicitly, ideas for the design of future aviation systems are often based upon the notion of multiple resources. Conventional aviation tasks typically place heavy demands upon visual information acquisition, spatial processing, and manual output (e.g., attempting to fly straight and level, using the attitude indicator and controlling the aircraft using the column). Concurrent tasks such as manual data entry might be expected to interfere with this activity. It is sometimes claimed, therefore, that the relatively under-utilised resources of auditory processing and vocal output should be exploited more fully. Hence, provided that the reliability of the technology can be improved, direct voice input and output (DVI/O) have appeared to be attractive options (e.g., Hawkins, 1987). The streaming model suggests that the mental processes required are more important than the task content in determining interference between concurrent activities. Thus, careful task design may be more useful than striving for new technologies.


3.4 Human error

Most aviation accidents continue to be attributed, at least in part, to human error, and factors such as workload and distraction emerge as causal factors (e.g., Chappelow, 1999). In the multi-tasking aviation environment, the streaming model may help to explain the sources of interference that underlie some types of error. Importantly, this model's predictions are to some extent counter-intuitive, and so powerful interference effects may in the past have been overlooked.


3.5 Situational awareness

The notion of situational awareness (SA) has a number of weaknesses. For example, ongoing research in the QinetiQ Centre for Human Sciences (Croft, Banbury, Crick, & Berry, 2000; Croft, Banbury, & Thompson, 2001) is demonstrating that explicit knowledge of the situation, addressed by current SA measures, is not the only determinant of performance: it is important also to measure implicit knowledge (knowledge that cannot be verbalised but influences performance). Since much expert knowledge is implicit (for example, the skilled pianist may have difficulty in describing the process of sight-reading music), no comprehensive model of SA can be based upon explicit knowledge only. The importance of the streaming model in the context of SA is that it indicates that material that appears to be ignored may in fact be processed quite comprehensively. Speech signals may therefore be an appropriate medium for the transmission of important information.


3.6 CRM training

Crew Resource Management (CRM) training is now widely mandated in aviation, and is being adapted for ATC (see Farmer and McIntyre, 1999, for an overview). The purpose of CRM is to develop non-technical (teamwork) skills, since poor team co-ordination has been the source of several major aviation accidents. One of the major components of CRM training is effective communication, particularly vocal communication. Since the streaming model has important implications for the processing of speech-based signals, its general principles might usefully be introduced during CRM training courses.


3.7 Data link and the party line

Introduction

Consideration of the effects of partial loss of the party line after the introduction of data link touches upon a number of key constructs within the cognitive streaming framework:
- the way in which the irrelevant party line sound is processed
- the costs associated with undertaking auditory and visual processing concurrently
- the functional features of processing in the auditory and visual modalities

Using current voice communication systems, aircrew overhear a considerable amount of information exchange between other aircrew and air traffic control. At least some of this party line information will be lost when data link systems are introduced. There is experimental evidence that irrelevant speech disrupts aspects of human performance, suggesting that removal of the party line will have beneficial effects. Moreover, reliance on voice links carries with it a number of disadvantages, including the role of voice communication in bringing about critical incidents and accidents. Although the error rate of voiced communication is low (e.g., Cardosi, 1993), these errors tend to have a disproportionately large impact on the accident and incident rate. For example, one survey covering a five-year period found that problems related to transfer of information were implicated in some 70% of accidents and incidents (Prinzo & Britton, 1993).

On the other hand, some studies have suggested that party line information may be a source of aircrew situational awareness (SA), and that SA will therefore be compromised after the introduction of data link. The concept of SA, although currently widely invoked in aviation psychology studies, remains rather ill-defined and difficult to measure. In the present context, SA will be used to refer to information acquired by operators that has a demonstrable effect upon their performance.
Data link: a review

One reason for the disproportionate impact of the modest degree of unreliability of the voice channel is that critical communication is likely to take place during periods of very high workload for both pilot and controller. The party line information may not be processed to good effect under these conditions. The volume of information exchanged by controllers and aircraft (and between aircraft) is likely to increase if advances in the management of air traffic now being contemplated are realised. In particular, the concept of free flight will increasingly be applied to air traffic management. Essentially, free flight allows increasing discretion to be given to pilots (as well as to dispatchers in commercial airlines) in the choice of route and timing, in an effort to increase system capacity and flexibility. Correspondingly, the role of the air traffic controller will change, tactical direction being replaced by strategic decision making. Free flight potentially provides flexibility and safety, but also a means of increasing the overall volume of traffic. Wickens et al (1998) point out that the controller's awareness of the big picture may be degraded under free flight for several reasons:
- when individuals do not actively direct changes but simply observe them passively, they are less likely to remember them

Note: this discussion does not assume that data link would remove the party line completely, since some combination of voice and data is likely to be adopted in the future; however, it can be assumed that some loss of voice information would occur.


- airspace that functions under free flight rules will by definition lose the structure and order that enable the controller to grasp the bigger picture
- shifts in traffic density will be unpredictable; dynamic and idiosyncratic states of the system will occur, making it more difficult to fit them into an established mental model
- free flight separation algorithms are likely to be time-based rather than space-based, and it could be argued that space is better envisaged than time, thereby again threatening situational awareness.

Free flight systems will increasingly use data link as a means of communication, either to augment or to supplant the use of voice. In addition to reducing errors in transmission, the data link will provide information in a form that was not available hitherto: weather, scheduling and traffic information can be presented either graphically or as text. Moreover, these data may be entered directly into aircraft flight management systems. Data link systems will be able to support new methods of air traffic management, particularly those that involve greater information exchange between aircraft and air traffic control, such as reconciliation of 4-D waypoints between aircraft and dynamic flight routing.

Among the advantages claimed or demonstrated for introducing data link communication into busy airspace are:
- reduction of congestion on radio frequencies
- decrease in communication errors
- increase in the complexity (and length) of transmitted information without a corresponding decrease in reliability
- decrease in flight-deck workload (particularly if data entry to flight management systems is direct).

One key disadvantage is associated with the discrete addressing of data link communications: pilots would have reduced access to party line information. This would be true not only of aircraft making full use of data link, but also of aircraft not using data link, since they would no longer be privy to the data being sent to aircraft using data link.

Previous work on data link and loss of party line

Evidence from flight simulation studies about the usefulness of party line information is inconclusive. These studies suggest that speech is not the best method of enhancing awareness, particularly during high-workload phases of flight. The degree of awareness of ATC clearances was tested in a range of scenarios by recording events such as queries of ATC instructions.
In one scenario, pilots were given take-off clearance soon after another aircraft had been cleared to cross the runway designated for their take-off. Here, the degree of awareness was inversely related to the level of workload (Midkiff & Hansman, 1992; see also Infield, Logan, Palen, Hofer, Smith, Corker, Lozito & Possolo, 1994). A survey of 1500 pilots (Midkiff & Hansman, 1993) showed that party line information was particularly useful during terminal operations and on final approach. A more comprehensive study (Pritchett & Hansman, 1995) corroborated the notion that party line information, particularly that related to traffic and weather, had high utility for aircrew. In a large-scale survey drawing on over 700 responses, Pritchett and Hansman (1997) asked pilots to rate simultaneously the importance, availability and accuracy of each of a range of types of party line information. The party line information given the highest ratings of importance related to traffic (collision avoidance) and weather situations (windshear, and abnormal weather conditions) in which flight safety was a critical factor. A range of other elements was rated as moderately important, with a good degree of consensus. Generally, these related to traffic information useful for anticipating flight routings and delays; weather, again, but related to specific phases of flight; and relaying of the communications frequency. Some elements had ratings of high importance but low availability and accuracy. Ratings varied with the phase of flight, the highest importance being attached to the phases of flight nearest to airports, especially final approach and the terminal area (and the lowest during cruise).


One important outcome of the analysis was the finding that many elements of party line information were given very high ratings of importance but much lower ratings of availability and accuracy. This may reflect the fact that monitoring of party line information is reduced or abandoned under conditions of high workload. Alternatively, pilots may be reluctant to act on the evidence of the party line alone, requiring corroborative evidence. Differences in ratings were also evident between different types of pilot. Regional/commuter and major airline pilots generally gave greater weight in their importance ratings to the terminal and final approach phases, with very much lower ratings for cruise. General aviation pilots, in contrast, also emphasised importance during cruise, possibly reflecting the relatively short duration of this phase in their typical flight, or the lack of weather radar on such aircraft. In addition to the rating information, the study by Pritchett and Hansman (1997) elicited free responses to the question: 'There is concern that without Party Line information, pilots may lose a sense of the Big Picture. What does the Big Picture mean to you?' The most frequent response was 'traffic situation', followed by 'weather situation' and then by 'being able to predict and plan ahead'. Pritchett and Hansman (1997) suggested that data link implementation 'can be considered an opportunity to present the important [party line information] elements in a more reliable, available, and intuitive manner to the pilot than current party line communications can provide' (pp 48-49). Studies by the FAA (e.g., 1995, 1996) have used full-mission simulations to collect performance and subjective data on the usefulness of data link. Improvements include increased traffic flow and reduced delays. However, qualifications have been expressed about the suitability of data link in some circumstances.
Also, Wickens et al (1998) note that in these studies the simulations of data link conditions were based on, and compared with, live radiotelephony-only scenarios. The comparison was confounded in several ways: 'in comparing data link conditions with radiotelephone-only conditions, there were differences not only in the interface, but also the traffic (simulated versus live), the identity of controllers, and operating conditions (on-the-job controllers versus those participating in an experiment)' (Wickens et al, 1998; p. 103). In a study for the UK Civil Aviation Authority, Carpenter and Goodson (1999) conducted a literature survey, an analysis of incident databases, and a survey of pilots. They found considerable evidence of work on the party line, mainly from the US, but little attention to safety. Summarising previous research, they concluded that party line information is most important near airports, and that information on the position of other aircraft and on weather is particularly significant. However, aircrew often missed party line information. Incident database analysis was hampered by the difficulty of identifying party line information during searches; most of the evidence concerned negative effects, such as call-sign confusion, although some reports of positive effects were found. A poor response rate was obtained in the pilot survey. Those who did respond considered party line information to have been beneficial, particularly in holding patterns.


4. Experimental Study: Effects of Irrelevant Speech in a Memory Task

4.1 Introduction

When human operators are exposed to many streams of information, it is often assumed that they process these streams independently. It has been argued, for example, that information presented to the eye and the ear is processed relatively independently, particularly if the individual has to ignore one stream of information and concentrate on the other (e.g., Wickens, 1992). Hence, if the individual is concentrating on a visual task, any extraneous sound should have no impact except when the information to the ear is very loud and unexpected (Broadbent, 1979). Certainly, this was the received wisdom some ten years ago (see Smith & Jones, 1992, for a review). Since that time, however, a substantial body of evidence has accumulated showing that, even at low intensities, extraneous sound impairs performance on certain classes of visual task (Jones, 1999). A similar argument can be made in relation to streams of information that call on different processing codes (e.g., verbal versus spatial). Thus, it might be expected that a primary task involving the processing of spatial information would not be susceptible to interference from irrelevant verbal information. Again, this view has recently been undermined by findings showing that spatial memory is disrupted by irrelevant speech (Jones, Farrand, Stuart & Morris, 1995). This section examines both cross-modal and cross-code interference effects in a memory task, to illustrate both the nature and extent of the effects and to highlight their practical implications. The task used here, although laboratory-based, requires mental processes similar to those involved in aviation-related tasks.


4.2 The irrelevant sound effect

The argument that streams of information entering the eye and ear do not remain independent beyond sensory encoding stages is based largely on the so-called irrelevant sound effect (ISE). This effect refers to the finding that background speech, which the participant is explicitly instructed to ignore, significantly disrupts performance on memory tasks, particularly those involving serial recall (Colle & Welsh, 1976; Salamé & Baddeley, 1982, 1989; Jones, Madden & Miles, 1992). The serial recall task typically involves recalling a sequence of 7-9 visually presented items (usually letters, digits or words) in the order in which they were presented. Background speech may be presented a) during the presentation of the to-be-remembered (TBR) items, b) during a retention interval when participants are expected to keep rehearsing the sequence, or c) during both these phases of the task. The accuracy of serial recall drops by as much as 30-50% relative to a quiet control condition (Ellermeier & Zimmer, 1997). What property of speech gives it this disruptive power? An early account proposed that the important factor was the phonological content of the speech (Salamé & Baddeley, 1982), or more accurately the phonological similarity between to-be-ignored speech and to-be-remembered verbal items. Based on Baddeley's (1990, 2000) modular working memory model, it was argued that irrelevant speech gains privileged access to a phonological memory store, and that as a result these unwanted phonological codes become confused with similar codes generated by the sub-vocal rehearsal of the TBR items. However, more recent findings show that non-phonological sounds such as tones also produce disruption (Jones & Macken, 1993), and that non-phonological recall tasks, such as those involving the recall of visuo-spatial stimuli, are also disrupted (Jones et al, 1995).
In this latter study, the serial recall of the locations of a sequence of dots was significantly disrupted by the presence of extraneous speech. Such findings are consistent


with an alternative explanation of the effect, known as the changing-state account (Jones, 1993; Jones, Madden & Miles, 1992). In this account, the necessary and sufficient characteristic for disruption to occur is that the sound must show appreciable acoustic change over its time course, to the extent that the perceptual system segregates the sound into discrete events. As a by-product of this segregation process, the order of the sound's constituent events is pre-attentively registered and corrupts the process of deliberately maintaining (i.e., rehearsing) the order of the TBR items. That is, two representations of order information (one pre-attentively registered, the other attentively registered) enter the same STM workspace and clash (see Jones, Beaman, & Macken, 1996 for an extensive discussion). Maintenance of order information is referred to as 'seriation'. In contrast to modular conceptions of STM (e.g., Baddeley, 1990, 2000; Wickens, 1992), the changing-state framework sees STM as a unitary workspace where the processing of relevant and irrelevant streams can clash, irrespective of their modality of origin (e.g., visual, auditory) and representational code (e.g., verbal, spatial), so long as they involve similar processes. The notion that the ISE is mediated by a clash of seriation processes is further supported by the finding that tasks that rely primarily on serial processing (Beaman & Jones, 1997; Jones & Macken, 1993; Salamé & Baddeley, 1990), or in which serial processing is the most efficient strategy (Beaman & Jones, 1998; LeCompte, 1994; Richardson, 1984), are the most susceptible to disruption by changing-state sound.


4.3 Extending the ISE to applied settings

The changing-state account of the ISE has been derived almost entirely from extensive examination of the impact of sound on simple laboratory-based tasks. Recently, however, this line of research has been extended to more complex work-related tasks to highlight its practical implications (Banbury, Tremblay, Macken & Jones, in press; see also Jones, 1995). For example, Banbury and Berry (1998) found that office-related tasks such as mental arithmetic and memory for prose were adversely affected by the presence of extraneous speech and other non-speech office noise. Moreover, Banbury and Jones (2000) extended the ISE to simulated aviation-related tasks. They found that extraneous speech presented during a rehearsal phase following stimulus presentation significantly disrupted the ability to recall auditorily presented navigation information (longitude and latitude co-ordinates, e.g., Longitude: 4825; Latitude: 3719). In a second experiment they found that extraneous speech also detrimentally affected participants' ability to recall a radar target's track history on a visual tactical display. The fact that speech disrupted both auditory-verbal and visuo-spatial memory again undermines the phonological store hypothesis based on the phonological similarity between relevant and irrelevant material (Salamé & Baddeley, 1982). It favours instead the changing-state account based on a clash of concurrent seriation processes (Jones & Tremblay, 2000; Macken, Tremblay, Alford, & Jones, 1999). Extraneous sound is present both in the ATC control room and on the flight deck. Sources of speech for the controller include messages sent directly through the headset, the speech of colleagues in the same environment, and telephone messages. A common practice is to wear the headset over only one ear, so that the controller can attend either to the headset messages or to fellow controllers' speech or other messages being relayed over loudspeakers from other positions (Hopkin, 1995).
Aircrew using radio communications have access to a party line, often involving speech messages that are not directly relevant to them. Data link systems will remove the party line, and hence the irrelevant speech. Performance might therefore be expected to improve, although perhaps at the cost of reduced situational awareness (since the party line might impart useful information concerning factors such as traffic and weather conditions). Clearly, it is important to know whether performance is susceptible to disruption by speech messages that are not relevant to the individual. In the present experiment, we designed a task that assessed both the accuracy and the speed of short-term recall of verbal and spatial information, in the presence or absence of irrelevant radio messages.


More specifically, the task involved recalling either the serial order of seven sequentially presented letters ('call signs') or the sequence of locations in which the letters were presented ('aircraft positions'). The task was designed to tap some of the key cognitive processes involved in aviation tasks, namely keeping track of the temporal order of verbal and spatial information. We also manipulated the point at which participants were informed which dimension of the stimuli (verbal or spatial) they were to recall. They were cued with this information either before or after the stimuli were presented (a factor referred to from now on as the dimension cue point, or DCP). Although this distinction has not been systematically examined in the literature on memory for sequences, there is evidence that prior knowledge of the task requirement in serial recall tasks involving two-dimensional stimuli can modulate the effect of interfering material. Maybery et al (2001) observed an effect of changing-state irrelevant speech on serial recall of the verbal identity of consonants presented in different spatial locations, but not on serial recall of these locations, when participants knew before stimulus presentation what information to recall. However, when participants were informed only at the time of recall, the effect of changing-state irrelevant speech was found for both types of material. Across all variants of the task (verbal versus spatial, and cueing before versus after presentation), participants were tested under two background conditions: radio speech signals and quiet. All sounds were irrelevant to the task. Based on the non-modular and non-modal conception of STM embodied within the changing-state account, the extraneous speech was expected to disrupt both spatial and verbal recall. Predictions with regard to the effect of the DCP (before/after) manipulation are less clear-cut, given the scarcity of published research on this effect.
On the basis of the results of Maybery et al (2001), one may predict that extraneous radio speech signals will adversely affect the serial recall of the call signs, but not that of the locations, when the nature of the recall task is known before stimulus presentation. When this information is provided at recall only, the effect of speech signals should be present in both the call signs and locations recall tasks.



4.4 Method

Participants
Thirty-six undergraduate psychology students from Cardiff University participated for a small honorarium. All reported normal or corrected-to-normal vision and normal hearing.

Apparatus/Materials
The visual stimuli were generated using Visual Basic 6 and presented on an IBM-compatible PC. For each trial the TBR visual sequence consisted of seven consonants randomly selected from a set of 19 (all consonants except W and Y), each placed in a square (40 x 40 pixels; screen set to 800 x 600 pixels). The seven consonants (black; Arial bold font; 30 pixels) were presented sequentially at the rate of 1 per second with no inter-stimulus interval (i.e., in immediate succession). They could appear at any of a fixed set of 19 locations, chosen randomly but with the constraint that no two stimuli would spatially overlap. For the irrelevant speech trials, radio speech signals were recorded and edited from live ATC communications from New York's JFK tower over the internet ( scripts/). Ten 18-second samples were produced, each edited to remove segments that were unintelligible and any silent periods extending beyond 500 ms. The samples were constructed so that conversations made sense, but no extract from one sample was inserted into a different sample. The irrelevant messages were presented stereophonically over headphones at approximately 65 dB(A).
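The stimulus-generation constraints described above (seven consonants drawn from a pool of 19, each placed at one of 19 fixed locations with no spatial overlap) can be sketched as follows. This is a hypothetical reconstruction in Python; the original software was written in Visual Basic 6, and all names here are illustrative.

```python
import random

CONSONANTS = list("BCDFGHJKLMNPQRSTVXZ")  # 19 consonants (W and Y excluded)
N_LOCATIONS = 19                          # fixed set of candidate screen locations
SEQ_LENGTH = 7                            # items per to-be-remembered sequence

def make_trial():
    """Generate one TBR sequence: 7 distinct consonants in 7 distinct locations.

    Sampling locations without replacement guarantees that no two
    stimuli spatially overlap, as required by the design.
    """
    letters = random.sample(CONSONANTS, SEQ_LENGTH)
    locations = random.sample(range(N_LOCATIONS), SEQ_LENGTH)
    return list(zip(letters, locations))
```

Each call returns an ordered list of (letter, location) pairs defining one presentation sequence.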


Design
A four-factor design was used:
serial position of the items (1-7)
dimension (verbal or spatial)
background condition (speech or quiet)
dimension cue point (before or after).
For both DCP versions of the task, there were 40 trials in the experimental block, consisting of 10 trials in each of the four dimension x background combinations. Other than this constraint, both the dimension to be recalled and the background condition were randomly selected for each trial. Each of the 10 speech samples was played twice over the experimental block. The second presentation of a given sample was precluded until all other samples had been played in the interval between its first presentation and its second.

Procedure
The sample of 36 participants was randomly split into two groups, so that 18 participated in the 'DCP before' condition and 18 in the 'DCP after' condition. Each participant was tested individually in a sound-proofed testing booth. They were given instructions as to what the task involved, told to ignore any speech they heard, and informed that they would not be tested on any aspect of it. Each trial was structured as follows:
Participants first clicked on a START button to initiate the trial
Following a delay of 500 ms, the word CALLSIGNS or LOCATIONS appeared at the bottom of the screen for 2 seconds, to indicate which dimension of the ensuing stimuli was to be recalled, i.e., the identity of the consonants or the locations in which they appeared. For those in the 'DCP after' condition, these words did not appear at this point but were replaced with a ####### pattern
After a delay of 1 second, the first visual stimulus was presented
Following the last item there was a retention interval of 10 seconds, during which participants were expected to continue sub-vocally rehearsing the to-be-remembered information
After the retention interval, the stimuli reappeared simultaneously.
Also, in the 'DCP after' condition, the word CALLSIGNS or LOCATIONS now appeared. In all conditions, the consonants were re-arranged amongst the locations. Participants responded by using the mouse to click on the appropriate items in the order in which they believed they had originally been presented. The colour of an item changed once chosen, and responses could not be repeated or altered. Participants were instructed to recall each sequence as quickly and as accurately as possible. Four practice trials were given before the experiment proper, one for each of the dimension x background combinations. The experiment took approximately 25 minutes per participant.
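The speech-sample scheduling rule in the design (each of the 10 samples played twice, with every other sample intervening between a given sample's first and second play) can be met by drawing one random permutation of the samples and playing it through twice; with 10 samples in 20 speech presentations, a counting argument over the gaps shows this is in fact the only schedule that satisfies the constraint. A sketch with illustrative names, not the original software:

```python
import random

def speech_playlist(n_samples=10, rng=random):
    """Order 2 * n_samples speech-trial presentations so that each sample
    plays twice and every other sample intervenes between its two plays.

    Playing one random permutation through twice satisfies this: between
    sample k's two plays lie exactly the n_samples - 1 other samples.
    """
    order = list(range(n_samples))
    rng.shuffle(order)
    return order + order
```

In the experiment these 20 speech presentations were interleaved with the 20 quiet trials, which were themselves randomly positioned within the block.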



4.5 Results

The data were analysed with regard to two types of measure: accuracy and response time (RT). Accuracy is the typical measure used in the study of memory for sequences of events (e.g., Baddeley, 1966). RT measures are used relatively rarely (e.g., Anderson, Bothell, Lebiere, & Matessa, 1998; Anderson & Matessa, 1997), even though recent evidence demonstrates that they can prove useful for testing models of serial memory and elucidating the mechanisms at play in serial recall tasks (e.g., Maybery, Parmentier, & Jones, 2001). RT is arguably even more important in applied research, particularly that on aviation, since both accuracy and speed are clearly crucial in this setting.

Accuracy
For a measure of recall accuracy, the strict serial recall criterion was adopted: recalled items were scored as correct only if they corresponded exactly to the position in which they had


been presented. The average percentage of correct responses was first analysed in a 2 (DCP: before, after) x 2 (task: spatial, verbal) x 2 (background condition: speech, quiet) x 7 (serial position) analysis of variance (ANOVA). However, since the DCP factor neither affected performance significantly nor interacted with other variables, it was removed from the analysis. Details of the statistical effects appear in Annex A. Figure 2 shows the mean percentage of correct recall in the verbal and spatial variants of the task under the two background conditions. Performance was superior in the verbal task to that in the spatial task. Irrelevant speech led to a significant decrease in recall performance, which was similar for both tasks. The effect of serial position was significant, with similar shapes of serial position function for the verbal and spatial tasks. The effect of speech was stronger at middle-list serial positions than at early or late positions. This effect was accentuated in the spatial task, and complicated by the relatively small recency effect (better recall for the last few items) in the spatial quiet condition compared to the other conditions, and by the somewhat more irregular pattern in the spatial conditions (see Figure 2). Further analysis confirmed that the effect of speech was stronger at middle-list positions by comparing the speech and quiet conditions in terms of the quadratic trend of the serial position curve: a stronger curvature was observed in the speech condition than in the quiet condition for both the verbal and the spatial task.
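The two measures just described can be sketched as follows; the function names are our own, and the quadratic contrast uses standard orthogonal polynomial weights for seven levels rather than anything specified in the original analysis.

```python
# Standard orthogonal quadratic contrast weights for 7 levels
# (x**2 - mean(x**2) for x = -3..3; sums to zero, orthogonal to linear trend)
QUADRATIC_WEIGHTS = [5, 0, -3, -4, -3, 0, 5]

def strict_serial_score(presented, recalled):
    """Strict serial recall criterion: an item counts as correct only if it
    is recalled in exactly the position in which it was presented.
    Returns per-position correctness and the percentage correct."""
    correct = [p == r for p, r in zip(presented, recalled)]
    return correct, 100.0 * sum(correct) / len(presented)

def quadratic_trend(position_means):
    """Contrast score indexing the curvature of a 7-point serial position
    curve; a larger positive value indicates stronger bowing
    (relatively poorer mid-list recall)."""
    return sum(w * m for w, m in zip(QUADRATIC_WEIGHTS, position_means))
```

For example, a participant who transposes the third and fourth items scores 5/7 under the strict criterion, and a bowed accuracy curve such as [80, 60, 40, 30, 40, 60, 80] yields a positive quadratic contrast, whereas a flat curve yields zero.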


Figure 2. Percentage of correct serial recall for the verbal and spatial tasks in the extraneous speech and quiet conditions



Response times (RTs)
Median RTs for correctly recalled items were measured from the presentation of the response screen to the first response (initiation time), and from response to response thereafter. Generally speaking, the pattern of RTs can be described as follows. RTs were longer for the first item than for all subsequent items, among which little variation was observed apart from a slight downward trend from serial position 2 to 7. As is evident in Figure 3, which shows the median RT for correct responses across positions 1-7, differences between conditions occurred mainly at serial position 1. We therefore confine the analysis to these initiation times.
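The two RT measures can be sketched as follows (a hypothetical helper, not the original analysis code; click times are in milliseconds from the onset of the response screen):

```python
import statistics

def response_time_measures(click_times):
    """Split one trial's response timestamps into an initiation time
    (time to the first click) and inter-response times (successive
    differences between clicks thereafter)."""
    initiation = click_times[0]
    inter = [b - a for a, b in zip(click_times, click_times[1:])]
    return initiation, inter

def median_initiation(trials):
    """Median initiation time over a set of trials, the measure analysed
    in the text (trials = list of per-trial click-time lists)."""
    return statistics.median(t[0] for t in trials)
```

For instance, clicks at 1200, 1800 and 2300 ms give an initiation time of 1200 ms and inter-response times of 600 and 500 ms.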

Figure 3. Average median response times for correctly recalled items in the before and after conditions with speech and quiet

Initiation times
An ANOVA was carried out on the average median RT for correct responses at serial position 1, as a measure of the cost of initiating the response sequence. Initiation time was significantly longer in the 'after' condition than in the 'before'; longer in the verbal task than in the spatial; and marginally longer in the speech condition than in the quiet condition. All other effects were non-significant.




4.6 Discussion

Overall, the results of this experiment indicate that order memory for visually presented information is disrupted by the presentation of extraneous speech, whether this information is coded spatially or verbally. Prior knowledge of the relevant aspects of the stimuli did not affect accuracy; withholding that knowledge until recall increased initiation time, probably owing to the time then required to read and process the task cue. The effect of extraneous speech on serial verbal memory is consistent with a substantial body of evidence on the effect of irrelevant speech (e.g., Colle & Welsh, 1976) or sound (e.g., Jones & Macken, 1993). The results also support the few studies to have reported an effect of extraneous speech on a spatial task (Jones et al, 1995; Banbury & Jones, 2000), findings that do not fit equally well with all views of STM. Interference effects that cross the verbal-spatial boundary highlight the interplay between verbal and spatial representations, and are consistent with the hypothesis that serial memory processes are of central importance (Jones et al, 1996). Such findings challenge the view that the negative impact of irrelevant speech is the result of interference in the phonological loop, the sub-system of working memory dealing with temporary storage of verbal information only (Baddeley, 1990). More broadly, they also undermine modular views of STM that assume memory stores defined by the type of representation they handle, i.e., verbal versus spatial (e.g., Baddeley, 1990, 2000; Wickens, 1992). The results reported above differ somewhat from those of Maybery et al (2001). We found adverse effects of extraneous speech on both verbal and spatial tasks regardless of the DCP manipulation, whereas Maybery et al reported an effect on the spatial task only when participants were told after stimulus presentation what information was relevant.
The divergence in results between the two studies may relate to differences in the characteristics of the extraneous speech as well as methodological differences in the primary task. Although the extraneous speech used by Maybery et al consisted of digits presented at a regular rate of two per second, our speech signals were comparatively more continuous, with irregular silent gaps of at most 500 ms between speech signals. This probably increased the 'dose' of interference (number of speech units per unit time) as well as the rhythmic irregularity of the speech, two factors shown to increase the interference produced by irrelevant speech (Bridges & Jones, 1996; Jones & Macken, 1995). An additional methodological difference between the two studies is that Maybery et al used a presentation method in which all locations were visible from the beginning of the trial, with consonants appearing sequentially in these locations. It is possible that the availability of the location information throughout the trial facilitated the encoding and maintenance of the spatial information. In the present study, locations were presented sequentially. Finally, in their 'before' condition, Maybery et al presented the verbal and the spatial tasks in blocks, whereas in the 'after' condition trials were presented in random order. Perhaps the randomisation of the trials for the two tasks in our 'before' condition made it more difficult for participants to focus on spatial information and ignore verbal information in the locations task. The absence of any effects involving the DCP manipulation argues against this conclusion, however. From the present results, those of Jones et al (1995), and the finding that verbal information can be disrupted by non-verbal stimuli (e.g., Jones & Macken, 1993), it could be argued that memory modules are not necessarily always the most appropriate way of examining cognition.
Rather, it would appear that the processes applied to memory representations are in some cases more relevant for understanding and predicting patterns of interference in memory, in particular the process of seriation (Jones, 1993; Jones, Alford, Bridges, Tremblay, & Macken, 1999). Importantly, our results suggest that extraneous speech causes interference even when participants can prepare to attend to a particular attribute of the stimuli, a factor hitherto not examined systematically in this experimental context. This is compatible with the view that interference in serial memory takes place at a pre-attentive level or, in other words, that interference emerges from involuntary or obligatory processing (see Jones et al, 1996).


As noted earlier, a remarkable aspect of our data is that prior knowledge of the information to attend to and recall did not improve memory accuracy, whereas initiation RTs were slower when that information was unavailable until the time of recall. Compared to the effect of extraneous speech, for which effects were observed on both measures, the absence of an accuracy deficit suggests that participants dealt relatively well with different amounts of information. When the type of information to attend to is known before presentation (spatial or verbal), the order of either seven call signs or seven locations must be retained. When the type of information is known only immediately prior to recall, however, participants must maintain the order of seven call signs and seven locations. There is evidence indicating that human memory takes advantage of object or group representations, thereby reducing the amount of information to keep under the focus of attention for later recall. For example, the amount of information (e.g., letters) one can store is well known to increase when this information is organised into higher structures (e.g., words), patterns, or according to other schema-driven rules, relative to unstructured sequences (e.g., Miller, 1956). A possible explanation for the failure of the DCP manipulation to affect accuracy is as follows. When participants did not know in advance which aspect of the stimuli to concentrate on, they were able to store both the identities and the locations of these stimuli as a sequence of objects combining verbal and spatial features. This would allow the storage of a relatively large amount of information while avoiding memory overload. In this case, one might expect some sort of filtering or decomposition process at the time of recall to extract the relevant information from the object representation (particularly since the spatial configuration of the call signs differed between presentation and recall).
Put more simply, the composite representation must be decomposed into its constituent elements before one of these elements can be retrieved and output. One may hypothesise that such a process would slow recall, either at each serial position (a repeated decomposition process) or as a one-off cost before the first response is produced (a one-off decomposition). Our design does not allow us to examine this suggestion in detail, however, since differences in initiation times in the 'after' conditions are likely to be largely due to the need for the participant to read the cue presented below the presentation matrix. Nevertheless, if a reconfiguration process were required in addition to the reading and processing of the task cue, our data would favour a one-off decomposition rather than a process repeated across serial positions. (See Allport, Styles, & Hsieh, 1994; Goschke, 2000; or Meiran, 1996, for evidence of time costs following the reconfiguration of the cognitive system for an upcoming task.) The practical implications of our results are clear: the presence of extraneous speech, even if irrelevant and to be ignored, reduces cognitive performance in a task measuring the ability to recall verbal or spatial information in order. Such effects have been established relatively recently and contradict the modular, resource-based view that has dominated research in aviation psychology; they are also counter-intuitive. It is relatively easy to understand that one can rarely perform the same activity twice simultaneously. Articulating two words simultaneously is an extreme example, in which there is clearly a physical constraint as well as a mental one. Speaking while listening, or reading while listening, are examples in which physical constraints do not necessarily figure, but that are nevertheless extremely difficult. One can, introspectively, sense that these activities both involve the processing of language.
Since many active verbal activities can be described as relatively effortful and require attentional control, one can easily predict some conflict of resources in such examples. However, concurrent performance of verbal and spatial activities (e.g., listening to the radio while driving) might appear more feasible, since spatial tasks are perhaps more likely than verbal tasks to be mediated by effortless, automatised behaviours. Since some verbal and spatial tasks can be performed simultaneously without interference, it may be less intuitively obvious that mental conflict between verbal and spatial tasks often emerges, as indicated by the effect of irrelevant speech on the recall of locations in our data. In the context of ATC, such counter-intuitive effects may include impairment in the ability to re-organise flight paper strips (which involves prospective memory for the new order and sequential motor activity) as a result of extraneous verbal information. Guidelines for air traffic controllers may therefore need to specify precautions such as avoiding radio communication while re-organising paper strips or processing visual information from the radar display (e.g., the spatial arrangement of aircraft).

Because controlling air traffic is a dynamic, complex task in which continuously changing information must be processed, the temporal sequencing of cognitive activities required in a particular situation is of crucial importance. Recent theoretical developments clearly show that sequences are dealt with in memory by processes that act indiscriminately on visual and auditory sources of information, and that also operate across verbal and spatial domains. The present study reinforces this view, extends it to multi-dimensional stimuli within a complex work-related task, and suggests that extraneous speech causes interference even when participants' attention is directed towards a particular dimension of these stimuli on the basis of prior knowledge.


5. Optimising Information Presentation

Data link and radiotelephony should not be considered alternatives. It is more productive to regard them as complementary. Indeed, Roca (personal communication, 2001) has indicated that Eurocontrol's current operational concepts are based on combined data link and voice, using each medium for what it is best adapted to do. For example, the ODIAC (Operational Development of Integrated surveillance and Air/ground Data Communications) task force has given detailed consideration to these issues. The streaming model can assist in optimising such a combined system, since it deals precisely with how combined visual and auditory information is processed. Duplication of transmissions on both media has advantages: combining voice with data link leaves flight efficiency unchanged while reducing the number of voice communications, mainly by reducing the need for repetition of voice messages (Talotta et al., 1992a, b). However, this redundancy gain does not overcome the difficulties of auditory distraction, or the growth in the amount of available information as the volume of traffic increases. An alternative approach is to make sound complement the visual information available from data link. To do so efficiently, some general principles need to be applied to the partitioning of information between sound and vision.
Moreover, it may be desirable to transform the information so that it yields maximum situational awareness, through:

- changes to the timing of presentation (delayed to periods of low workload)
- changes to whether the full information is made available (as a minimum, its presence signalled by a warning tone, or a truncated form of the message provided that can be recalled subsequently)
- changes to the form in which the message arrives (as natural, digitised or synthesised speech)

The selection of methods depends on two things: the significance of the information for the current operational requirements, and matching these requirements to the functional characteristics of the medium. For example, urgency and identity are best conveyed by speech, whereas complex information that needs to be consulted a number of times is best presented visually.
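These selection principles can be expressed as a simple decision rule. The sketch below is illustrative only: the function name, message attributes, and the use of a boolean workload flag are our own assumptions, not part of any proposed system.

```python
def choose_medium(urgent: bool, complex_msg: bool, reread_likely: bool,
                  workload_high: bool) -> dict:
    """Illustrative allocation of a message to auditory/visual media.

    Encodes the rules of thumb in the text: urgency is best conveyed by
    speech, immediately; complex information likely to be consulted more
    than once belongs on a visual display; full presentation of
    non-urgent material may be deferred during high workload, with its
    arrival signalled by a warning tone.
    """
    if urgent:
        # Urgency and identity are best conveyed by speech, at once.
        return {"medium": "speech", "timing": "immediate"}
    medium = "visual" if (complex_msg or reread_likely) else "speech"
    if workload_high:
        # Defer full presentation; signal arrival with a warning tone.
        return {"medium": medium, "timing": "deferred", "alert": "tone"}
    return {"medium": medium, "timing": "immediate"}
```

On this rule, a routine but complex clearance arriving during a high-workload phase would be signalled by a tone and displayed visually once workload permits.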


Conveying information by eye and ear

We consider the key characteristics of auditory/vocal and visual/manual methods of relaying information within aviation settings (Tables 1-3). Most studies acknowledge that auditory information (speech and non-speech) can play a role in data link and that careful consideration of its use will improve situational awareness (manifested in, among other things, reduced workload, reduced head-down time, and improved flight management). The development of a multi-sensory interface should proceed systematically, drawing upon our general understanding of the capabilities of the auditory and speech production systems. Based on previous work, it seems likely that auditory information can be used:

- via speech, using a range of methods including synthesis and digitisation
- via non-speech sounds, using a variety of methods to convey urgency
- to provide an alerting function for crews, informing them of the arrival of data link information
- to promote sharing of information between all present on the flight deck
- to provide some degree of information about the message: its source (origin), class (type) and content
- to be part of a dialogue that includes feedback and acknowledgement as well as the transmission of information
- in a manner sensitive to context, especially with respect to workload, urgency, and other auditory alarms/messages


Auditory information
Conveys urgency. Pitch and loudness help convey urgency unambiguously and quickly. This is less easy to accomplish in the visual modality, although factors such as physical brightness, rate of onset, and repetition can be exploited.

Conveys emphasis. Emphasis can be used to disambiguate messages. Messages with identical lexical order can have quite different meanings imposed by emphasising different elements. In vision, emphasis can be accomplished, for example, by punctuation, but the range that can be conveyed is lower, and the variety also constrained.

Conveys identity. Even quite subtle differences in the acoustic content of the message (spectral content, pitch range and prosodic character) can convey differences in the identity of the voice. This is important in establishing the continuity of source (the person with whom a dialogue is being conducted), which may be important in maintaining the coherence of the communication and providing cues to legitimacy.

Is susceptible to disruption during transmission. Noise coupled with limited bandwidth can make speech less intelligible. Generally, the less predictable the form of the message (speaker, syntax and lexical content), the more difficult it is to hear the message in adverse conditions.

Is transient. Spoken communication usually leaves no permanent record. Thus it must be acted upon immediately or committed to memory. The load on memory will be of particular significance during periods of high workload. Load from the spoken message will be increased if it is unpredictable in its content or syntax, or delivered in an unfamiliar voice.

Is temporally extended. Messages may be read faster than they can be heard; a whole visual message can be presented simultaneously, and later parts can be read before earlier ones. Auditory presentation is wholly serial and typically slow. The capacity of the brain to cope with rates of speech far faster than those occurring naturally, using Rapid Serial Auditory Presentation, has not been exploited to any great degree.

Is processed pre-attentively. There is ample evidence that sound processing is obligatory and automatic, beyond the individual's control. This does not mean, however, that it will make an impact on the person. It seems that sound may be processed only in physical terms, that is, not to its semantic level. The evidence for obligatory visual processing is less compelling. However, if the individual is looking at a stimulus, the processing of words does seem beyond conscious control. For example, in the so-called Stroop phenomenon, involuntary reading of a colour word interferes with the task of naming the ink colour in which it appears.

Is omni-directional. Sound is always within the person's sensory reach, unlike visual information, for which the eyes have to be directed toward the source of the stimulus. A message or signal can be heard when the pilot is head-up, looking outside the cockpit. Sound is also more public than information typically found on a visual display: its processing is more likely to be shared by those close by (which may enhance the team's mutual awareness and coherence of action).

Disrupts visual processing. Sound disrupts visual processing. Examples range from the interpretation of lip movements, through short-term memory, to the perception of visual motion.

Has a sentinel property. Hearing has been called the sentinel of the senses, because it is omni-directional and because auditory sensory information is fed into parts of the brain responsible for alerting the individual. This makes auditory signals particularly suitable for alarms. Conversely, tasks of high priority but susceptible to interruption (for example, checklist procedures) will be vulnerable to disruption by auditory messages. By contrast, data link information can be read more or less at leisure, allowing pilots to carry on with other tasks between transmitting and receiving messages (Lozito et al., 1993).

Conveys spatial configurations poorly. Magnitudes and spatial locations are not easily conveyed using speech. Maps and diagrams more readily convey the relations between features and objects. However, sound can be used to signal the updating of information and to draw attention to dangerous configurations or conditions. For example, Hahn and Hansman (1992) found that graphic representation of data link routing embedded in an electronic map imposes lower workload than either text or spoken versions of the same information.

Table 1. Characteristics of auditory information transmission


Auditory information

Speech recognition. Despite much research and even greater commercial promotion, speech recognition systems have not achieved a reliability approaching that of manual/visual methods. Dialogue design (particularly that associated with error recovery) is particularly complex and demanding (see Murray, Frankish & Jones for a discussion).

Speech synthesis. Great advances have been made in text-to-speech synthesis, but the speech generated by this means lacks the appropriate prosody of natural speech and fails to imitate its spectral characteristics; generally, synthetic speech is more difficult to understand than natural speech. One great advantage of speech synthesis is that the message exists in both spoken and written form. Transmission may be encrypted and, in the decoding, the message may be generated in the language of choice. The bandwidth of the messages in transmission is also small. The potential range of messages is very large: depending on the particular type of synthesiser, it is possible to type in any message. Additionally, the identity of the speaker can be manipulated at will. This may be used to associate a particular voice with a certain class of information, thereby improving the partitioning of information and in turn the level of situational awareness.

Digitised speech. This consists of stored samples of speech that can be retrieved when appropriate. Separate samples can be concatenated to make up a unique message (but each of the elements must necessarily be recorded in advance). Digitised speech technology has been used with great success in telephone-number enquiry systems. Care has to be taken in the way that elements are concatenated: for example, strings of digits are often concatenated in regular tempo on the basis of their onset times, which in fact leads to the perception of irregular timing. However, there are methods of overcoming this difficulty (the P-Centre approach). The key advantage of digitised speech is its clarity; its key shortcoming is lack of flexibility: unless the component parts are already recorded, the message cannot be sent. Like synthesised speech, the timing of presentation can be regulated, for example to avoid periods of high workload. It is also possible for digitised speech to be presented in the pilot's language of choice and, like synthetic speech, for the message to be replayed at leisure, allowing checks to be made on its contents.

Table 2. Characteristics of speech systems
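The P-Centre difficulty with concatenated digits can be illustrated numerically. The sketch below is our own construction, not part of any cited system: given each sample's estimated P-centre offset from its acoustic onset, playback onsets are shifted so that the perceptual centres, rather than the onsets, fall at a regular tempo.

```python
def p_centre_onsets(p_centre_offsets, interval):
    """Compute playback onset times (one per speech sample) so that the
    perceptual centres (P-centres) are equally spaced by `interval`.

    p_centre_offsets: each sample's P-centre position measured from its
    own acoustic onset (seconds).  Returns onset times relative to the
    first sample's onset.
    """
    first = p_centre_offsets[0]
    onsets = []
    for i, off in enumerate(p_centre_offsets):
        # Place sample i so its P-centre lands at first + i * interval.
        onsets.append(first + i * interval - off)
    return onsets
```

With (assumed) offsets of 0.08, 0.12 and 0.05 s and a 0.5 s tempo, the second and third samples start slightly earlier or later than a regular onset grid would dictate, so that the P-centres are exactly 0.5 s apart.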


Manual entry and visual display

Encoding can be slow. Entry of free text can be slow and demanding, a notable source of complaint from pilots (van Gent, 1995). Pre-defined sets of key strokes (macros) can be used to send a complex message with a single keystroke, and a list of macros can be drawn up for a particular flight plan. However, macros may pose a burden on memory if their meaning is not intuitive. Moreover, the simpler the response, the greater the tendency for automaticity; this may lead to a lack of awareness on the part of the person entering the macro.

Encoding can be error-prone. Poor keystroke skills may lead to errors in data entry. However, information that is keyed can be held and scrutinised by error-trapping mechanisms; the content of speech is much more difficult to check in this way.

Encoding is attention-demanding. For vision, encoding is effortful: the individual has to orient the eyes toward the display and read off the information (Groce & Boucek, 1987). In dual-seat cockpits the burden of data link communications falls on one of the two crew; the pilot in control may glance at the display to cross-check the information (van Gent, 1995).

Data link may result in slow response times. Delays in responding to pilot requests may be about 3 seconds longer when controllers respond to visual as opposed to voice requests (Wickens et al., 1996), although other studies have failed to show a difference (e.g. Kerns, 1994). Estimates of the total cycle time for a visual/manual system (the time for encoding, transmission, receipt and acknowledgement) vary, but it takes 10-20 seconds longer than for a radio-based system (FAA, 1995, 1996; Kerns, 1994; Waller & Lohr, 1989; Talotta et al., 1990). This increased cycle time may lead controllers to abandon data link at times of great urgency and workload (Wickens et al., 1998).

Visual displays have a larger footprint than auditory displays. Auditory information can be relayed via loudspeaker or headphone, occupying no space on a display panel. The amount of space for displaying messages is problematic: there will be competition for visual resources on both the air side and the ground side. The primary ground-based display is a radar display or plan view. Data link messages can be presented as windows on the margin of the display, incorporated into flight data blocks, or embodied in the depiction of flight trajectories on the radar display (Kerns, 1994; Wickens et al., 1996). In all these cases, the space for displaying the history of previous messages is very limited. Methods of retrieving and reviewing displays have received little attention, particularly in the auditory modality.

Table 3. Characteristics of manual/visual information transmission


6. Conclusions
What do these various generalisations from cognitive streaming mean for the aviation environment? Most of the cognitive streaming principles are derived from the study of irrelevant sound. This research indicates that unattended sound is processed in an obligatory fashion and that it has repercussions for tasks undertaken concurrently. This disruption will occur in both spatial and verbal tasks. Tasks typically performed in aviation settings will be disrupted by the mere presence of irrelevant sound, as our experiment has shown.

The research on streaming suggests that, when attention is directed elsewhere, sound is registered and analysed to a degree that depends largely on the nature of the primary task. Although this conclusion is speculative, the important point is that the sound does have an impact on behaviour, resulting from analysis of either its physical nature or its semantic content. Perhaps most significantly for the present study, there is no indication that the processing of unattended sound permits the integration and assemblage of information so as to give a strategic picture of the type usually associated with situational awareness.

A corollary to this conclusion is that, for situational awareness to be enhanced by auditory information, the enhancement must be achieved deliberately, not through some unconscious registration of the sound when it is unattended. For the data link environment, this means either that information should be wholly visual, or that a hybrid system is desirable: one that exploits the undoubted advantages of audition and combines them with the complementary advantages of visual displays in such a way as to maximise situational awareness. Cognitive streaming provides a framework for making an informed allocation of function to either the auditory or the visual modality.
The framework has implications for auditory attention, the characteristics of memory and skill, and understanding of the effects of multiple task performance. It is therefore possible in broad terms to make some judgement about the allocation of ATM information in systems exploiting the benefits of processing in several modalities. We have described the key advantages and disadvantages of text and auditory processing.


7. Proposal for Future Work

7.1 The position to date
The experiment already undertaken has established that (a) even when unattended, sound is registered by the brain, and (b) this has a damaging effect on performance if the person is undertaking a demanding task involving short-term memory.


7.2 Future work

Follow-up work may take several forms; the key questions that relate to the processing of unattended party-line information are as follows (these are not in order of priority):


7.3 Work-Package 1

Research issue: The sound is registered and has a damaging effect on task performance, but is what is registered useful? This issue centres on whether there is anything more than mere registration of the sound. Can the material that is registered be informative? In other words, what types of information are available to the person as a result of the sound being registered? At one extreme, one may conceive that the sound is present only in its raw form and lacks any information that the pilot could usefully apply to the task of flying (improve situational awareness, if you like). At the other, the sound may be informative:

- sequences of sound (such as co-ordinates or call signs) may be more readily heard at a later stage (when they become relevant to the task at hand)
- the heard material may contribute to the building up of some kind of plan or spatial representation that includes the position of other aircraft within the airspace
- a more abstract awareness may develop, encompassing knowledge of the number, type, operational attributes, and spatial location of the aircraft

The studies to judge these possibilities will involve implicit methods. Individuals will be exposed to the sound during the execution of a task. At some subsequent test they will be shown a variety of materials, and some measure taken of how familiar they are with the material. The interval between the initial exposure and the subsequent test will be a variable of interest. There are a variety of methods by which this can be done, but a simple example is the presentation, in noise, of the information to which the person was previously exposed. The test here is to see whether the previously heard sound is easier to detect in noise than some sound that the person has not heard in the context of the earlier task. This would be a way of judging the person's familiarity with the speech; to test awareness of space, the person might instead be presented with two or more alternative maps and asked to judge which is most familiar. Again, the test is one that evaluates the contribution of the unattended auditory message to the person's awareness of the situation.
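The detection-in-noise comparison lends itself to a simple implicit-familiarity index. The sketch below is our own illustration of the scoring, not a specification of the proposed study; the function name and the hit-rate difference measure are assumptions.

```python
def familiarity_index(detected_old, detected_new):
    """Implicit familiarity score: the proportion of previously exposed
    ('old') items detected in noise minus the proportion of matched
    novel ('new') items detected.

    Each argument is a list of booleans, one per detection trial.
    A score reliably above zero would suggest the unattended sound left
    a usable trace; a score near zero would suggest mere registration
    without any benefit to later detection.
    """
    p_old = sum(detected_old) / len(detected_old)
    p_new = sum(detected_new) / len(detected_new)
    return p_old - p_new
```

In practice such a difference score would be computed per participant and tested against zero across the sample, with the exposure-test interval as a factor.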


7.4 Work-Package 2

Research issue: Do the effects found in the pilot study generalise to other tasks? There is some evidence that the nature of the processing of irrelevant sound depends on the nature of the primary (e.g. short-term memory) task. The task used in the pilot study was simple (but demanding), and subsequent work could examine a range of tasks reflecting the many different tasks undertaken in aviation. This would indicate those tasks that are vulnerable to irrelevant sound (and perhaps in turn suggest, for example, that R/T traffic should be suppressed during these tasks). The approach adopted here would be like the one taken in the pilot study: simple laboratory tasks representing the range of mental functions used in aviation. This would necessarily require a preliminary analysis of flying or ATC tasks (several such analyses already exist and could be exploited).


7.5 Work-Package 3

Research issue: How can the disruptive effects of irrelevant sound be avoided? This work focuses on the negative effects of irrelevant sound and recognises that the magnitude of disruption is sufficiently great for it to pose a threat to safety. The proposed study would examine methods of presenting party line information that would minimise the disruptive effect of auditory presentation. One approach might be adaptive storage of R/T; that is, if the pilot is engaged in a part of the flying task that poses a particularly high workload, the presentation of party line information could be postponed. Another possibility is to present the information visually with the appropriate auditory or visual alerting cues. Essentially this work-package is not experimental, but is concerned, rather, with interface design and with the assignment of tasks to auditory and visual modalities that is suggested by the cognitive streaming framework. The output of this work would be guidelines for the optimal combination of visual and auditory information.
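The adaptive-storage idea can be sketched as a small scheduler. This is a design illustration only, not a proposed implementation; the class name, the numeric workload scale, and its threshold are our own assumptions.

```python
from collections import deque

class PartyLineBuffer:
    """Sketch of adaptive storage of party-line R/T: non-urgent messages
    are held while pilot workload is high and released when it drops."""

    def __init__(self, workload_threshold=0.7):
        self.threshold = workload_threshold
        self.queue = deque()

    def on_message(self, message, urgent, workload):
        """Return the list of messages to present now (possibly none)."""
        if urgent:
            # Safety-critical traffic is never deferred.
            return [message]
        if workload > self.threshold:
            # Postpone presentation until workload falls.
            self.queue.append(message)
            return []
        return self.flush() + [message]

    def flush(self):
        """Release all stored messages, oldest first."""
        released = list(self.queue)
        self.queue.clear()
        return released
```

A fuller design would also signal the arrival of deferred messages (for example with a tone, or a visual cue), as discussed in Section 5.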


7.6 Work-Package 4

Research issue: Are these effects present in real-world aviation tasks? This work-package would examine the extent to which real aviation tasks are disrupted by irrelevant sound. Necessarily, this would involve studies with qualified controllers and pilots using simulators. Care would need to be taken over the selection of the tasks and the timing of the irrelevant sound. Indices of efficiency could be taken either from task performance (and also from subjective reports, although these tend not to be reliable) or from implicit measures (see above). Work-Package 4 would be undertaken in collaboration with other groups, such as the ATC group at QinetiQ Malvern, human factors specialists at NLR Amsterdam, or Eurocontrol staff at Bretigny.


8. References
Allport, D. A., Styles, E. A., & Hsieh, S. (1994). Switching intentional set: Exploring the dynamic control of tasks. In C. Umilta & M. Moscovitch (Eds.), Attention and Performance XV (pp. 421-452). Hillsdale, NJ: Erlbaum. Anderson, J. R., Bothell, D., Lebiere, C., & Matessa, M. (1998). An integrated theory of list memory. Journal of Memory & Language, 38, 341-380. Anderson, J. R., & Matessa, M. (1997). A production system theory of serial memory. Psychological Review, 104, 728-748. Baddeley, A.D. (1966). The influence of acoustic and semantic similarity on long term memory for word sequences. Quarterly Journal of Experimental Psychology, 18, 302-309. Baddeley, A.D. (1990). Human Memory. Hove, East Sussex: Lawrence Erlbaum Associates Ltd. Baddeley, A.D. (2000). The episodic buffer: a new component of working memory? Trends in Cognitive Sciences, 4, 417-423. Banbury, S.P., Tremblay, S., Macken, W.J., & Jones, D.M. (in press). Auditory distraction and short-term memory: Phenomena and practical implications. Human Factors. Banbury, S.P., & Berry, D.C. (1998). Disruption of office related tasks by speech and office noise. British Journal of Psychology, 89, 499-517. Banbury, S.P., & Jones, D.M. (2000). Driven to distraction. Ergonomics (April), 37-39. Beaman, C.P., & Jones, D.M. (1997). The role of serial order in the irrelevant speech effect: Tests of the changing state hypothesis. Journal of Experimental Psychology: Learning, Memory and Cognition, 23, 459-471. Beaman, C.P., & Jones, D.M. (1998). Irrelevant sound disrupts order information in free as in serial recall. Quarterly Journal of Experimental Psychology, 51A, 615-636. Bridges, A.M., & Jones, D. M. (1996). Word-dose in the disruption of serial recall by irrelevant speech: Phonological confusions or changing state? Quarterly Journal of Experimental Psychology, 49A, 919-939. Broadbent, D.E. (1979). Human performance and noise. In C.M. Harris (Ed.), Handbook of noise control (pp. 17.1-17.20). New York: McGraw Hill.
Cardosi, K (1993). An analysis of en route controller-pilot voice communications. DOT/FAA/RD-93/11, Federal Aviation Administration, Volpe Center, Cambridge, MA. Carpenter, J. and Goodson, J. (1999). Loss of Party Line Information Safety Study. UK: Admiral Management Services Ltd (PowerPoint Presentation; file EuroV1.ppt). Chappelow, J. W. (1999). Error and accidents. In J. Ernsting, A. N. Nicholson & D. J. Rainford (eds), Aviation Medicine (third edition). Oxford: Butterworth Heinemann. Colle, H.A., & Welsh, A. (1976). Acoustic masking in primary memory. Journal of Verbal Learning and Verbal Behavior, 15, 17-32.


Croft, D. G., Banbury, S. P., Crick, J. L., & Berry, D. C. (2000). Implicit memory and situational awareness: Probing attended and unattended auditory information. Proceedings of the Human Factors and Ergonomics Society 44th Annual Meeting (San Diego, USA). Croft, D. G., Banbury, S. P., & Thompson, D. J. (2001). The development of an implicit situation awareness toolkit. Proceedings of the Human Factors and Ergonomics Society 45th Annual Meeting (Minneapolis, USA). Ellermeier, W., & Zimmer, K. (1997). Individual differences in the susceptibility to the irrelevant speech effect. Journal of the Acoustical Society of America, 102, 2191-2199.

Endsley, M. R., & Rogers, M. D. (1998). Distribution of attention, situation awareness, and workload in a passive air traffic control task: Implications for operational errors and automation. Air Traffic Control Quarterly, 6, 21-44. Endsley, M. R., Hansman, R. J., & Farley, T. C. (1999). Shared situation awareness in the flight deck-ATC system. IEEE AES Systems Magazine, August. Farmer, E. W. (1993). Conceptual issues in workload. Proceedings of Conference on Workload Assessment and Aviation Safety. London: Royal Aeronautical Society. Farmer, E. (2000a). Predicting operator workload and performance in future systems. Journal of Defence Science, 5(1), pp F4-F6. Farmer, E. W. (2000b). Challenges for Aviation Psychology. In D. de Waard, C. Weikert, J. Hoonhout & J. Ramaekers (Eds.), Human-System Interaction: Education, Research and Application in the 21st Century. Maastricht: Shaker Publishing. Farmer, E. W., Belyavin, A. J., Tattersall, A. J., Berry, A., & Hockey, G. R. J. (1991). Stress in Air Traffic Control II: Effects of Increased Workload. RAF Institute of Aviation Medicine Report No. 701. Farmer, E.W., & McIntyre, H. M. (1999). Crew resource management. In J. Ernsting, A. N. Nicholson & D. J. Rainford (Eds.), Aviation Medicine (Third Edition). Oxford: Butterworth Heinemann. Federal Aviation Administration (1995). Benefits of two-way data link ATC communications: Aircraft delay and efficiency in congested en route airspace. DOT/FAA/CT-95/4. Data link benefits study team report. Washington, DC: US Department of Transportation. Goschke, T. (2000). Intentional reconfiguration and involuntary persistence in task-set switching. In S. Monsell & J. S. Driver (Eds.), Control of Cognitive Processes: Attention and Performance XVIII. Cambridge: MIT Press. Groce, J. L., & Boucek, G. P. (1987). Air transport crew tracking in an ATC data link environment. SAE Technical Paper 871764. Warrendale, PA: SAE International. Hawkins, F. H. (1987). Human factors in flight.
Aldershot, UK: Gower Technical Press. Hopkin, V.D. (1995). Human Factors in Air Traffic Control. London: Taylor & Francis. Infield, S., Logan, A., Palen, L., Hofer, E., Smith, D., Corker, K., Lozito, S., & Possolo, A. (1994). The effects of reduced partyline information in a data link environment. Human Factors in Aviation Operations: Proceedings of the 21st Conference of the European Association for Aviation Psychology, Vol. 3, 51-56, Dublin, Ireland.


Jones, D.M. (1993). Objects, streams and threads of auditory attention. In A.D. Baddeley & L. Weiskrantz (Eds), Attention: Selection, awareness and control, pp. 167-198. Oxford: Clarendon Press. Jones, D.M. (1995). The fate of the unattended stimulus: Irrelevant speech and cognition. Applied Cognitive Psychology, 9, 23-38. Jones, D.M. (1999). The cognitive psychology of auditory distraction: The 1997 BPS Broadbent Lecture. British Journal of Psychology, 90, 167-187. Jones, D. M., Alford, D., Bridges, A., Tremblay, S., & Macken, B. (1999). Organizational factors in selective attention: The interplay of acoustic distinctiveness and auditory streaming in the irrelevant sound effect. Journal of Experimental Psychology: Learning, Memory, & Cognition, 2, 464-473. Jones, D.M., Beaman, C.P., & Macken, W.J. (1996). The object-oriented episodic record model. In S. Gathercole (Ed.), Models of short-term memory (pp. 209-238). London: Lawrence Erlbaum Associates. Jones, D.M., Farrand, P., Stuart, G., & Morris, N. (1995). The functional equivalence of verbal and spatial information in serial short-term memory. Journal of Experimental Psychology: Learning, Memory and Cognition, 21, 1008-1018. Jones, D.M., & Macken, W.J. (1993). Irrelevant tones produce an irrelevant speech effect: Implications for phonological coding in working memory. Journal of Experimental Psychology: Learning, Memory and Cognition, 19, 369-381. Jones, D.M., & Macken, W.J. (1995). Organizational factors in the effect of irrelevant speech: The role of spatial location and timing. Memory & Cognition, 23, 192-200. Jones, D.M., Madden, C., & Miles, C. (1992). Privileged access by irrelevant speech to shortterm memory: The role of changing state. Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 44A, 645-669. Jones, D.M., & Tremblay, S. (2000). Interference in memory by process or content? A reply to Neath (2000). Psychonomic Bulletin & Review, 7, 550-558. Kerns, K. (1991). 
Data link communication between controllers and pilots: A review and synthesis of the simulation literature. International Journal of Aviation Psychology, 1, 181-204. Kerns, K. (1994). Human factors in ATC/Flightdeck integration: Implications for data link simulation research. MP 940000098. McLean, VA: MITRE Corporation. Knox, C. E., & Scanlon, C. H. (1990). Flight tests using data link for air traffic control and weather information exchange. SAE Journal of Aerospace, 99, 1683. LeCompte, D.C. (1994). Extending the irrelevant speech effect beyond serial recall. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 1396-1408. Lozito, S., McGann, S., & Corker, K. (1993). Data link air traffic control and flight deck environments: Experiment in flight crew performance. In (R. Jensen & D. Neumeister, eds.) Proceedings of the Seventh International Symposium on Aviation Psychology. Columbus, OH: Ohio State University. Macken, W.J., Tremblay, S., Alford, D., & Jones, D.M. (1999). Attentional selectivity in shortterm memory: Similarity of process, not similarity of content, determines disruption. International Journal of Psychology, 34, 322-327.


Maybery, M. T., Ford, K., Huitson, M., Parmentier, F., & Jones, D. M. (2001). Concurrent presentation of verbal and spatial sequences and interference from changing-state irrelevant speech. Manuscript in preparation.

Maybery, M., Parmentier, F. B. R., & Jones, D. M. (2001). Grouping of list items reflected in the timing of recall: Implications for models of serial verbal memory. Manuscript submitted for publication.

McCracken, J. H., & Aldrich, T. B. (1984). Analysis of Selected LHX Mission Functions: Implications for Operator Workload and System Automation Goals. Report No. TNA ASI479-24-84. Fort Rucker, AL: Anacapa Sciences Inc.

Meiran, N. (1996). Reconfiguration of processing mode prior to task performance. Journal of Experimental Psychology: Learning, Memory & Cognition, 22, 1423-1442.

Midkiff, A. H., & Hansman, R. J. (1992). Identification of important party line information elements and implications for situational awareness in the data link environment. Air Traffic Control Quarterly, 1, 5-29.

Miller, G. A. (1956). The magical number seven plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.

Pritchett, A., & Hansman, R. J. (1997a). Mismatches between human and automation decision making algorithms and their effect on the human's task and system.

Pritchett, A., & Hansman, R. J. (1997b). Experimental studies of pilot performance at collision avoidance during closely spaced parallel approaches. Presentation at the Ninth International Symposium on Aviation Psychology, April.

Pritchett, A. R., & Hansman, R. J. (1997). Variations among pilots from different flight operations in party line information requirements for situation awareness. Air Traffic Control Quarterly, 4, 29-50.

Richardson, J.T. (1984). Developing the theory of working memory. Memory and Cognition, 12, 71-83.

Rogers, R. D., & Monsell, S. (1995). The cost of a predictable switch between simple cognitive tasks. Journal of Experimental Psychology: General, 124, 207-231.

Salamé, P., & Baddeley, A.D. (1982). Disruption of short-term memory by unattended speech: Implications for the structure of working memory. Journal of Verbal Learning and Verbal Behavior, 21, 150-164.

Salamé, P., & Baddeley, A.D. (1989). Effects of background music on phonological short-term memory. Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 41A, 107-122.

Salamé, P., & Baddeley, A.D. (1990). The effects of irrelevant speech on immediate free recall. Bulletin of the Psychonomic Society, 28, 540-542.

Smith, A.P., & Jones, D.M. (1992). Noise and performance. In A.P. Smith & D.M. Jones (Eds.), Handbook of Human Performance (Vol. 1, pp. 1-27). London: Academic Press.

Talotta, N. J., et al. (1990). Operational evaluation of initial data link en route services, Volume 1. DOT/FAA/CT-90/1 I. Federal Aviation Administration. Washington, DC: US Department of Transportation.

Van Gent, R. N. (1995). Human factors issues with airborne data link. NLR Technical Publications 956666L. National Aerospace Laboratory, Amsterdam, Netherlands.


Waller, M. C., & Lohr, G. W. (1989). A piloted simulation study of data link ATC message exchange. NASA Technical Paper 2859. Hampton, VA: NASA Langley Research Center.

Waller, M. C. (1992). Flight deck benefits of integrated data link communication. NASA Technical Paper 3219. Hampton, VA: NASA Langley Research Center.

Wickens, C. D. (1992). Engineering Psychology and Human Performance. New York: HarperCollins.

Wickens, C. D., et al. (1989). The future of air traffic control: Human operators and automation. Washington, DC: National Academy Press.

Wickens, C. D., & Liu, Y. (1988). Codes and modalities in multiple resources: A success and a qualification. Human Factors, 30, 599-616.

Wickens, C. D., Miller, S., & Tham, M. (1996). The implications of data-link for representing pilot request information on 2D and 3D air traffic control displays. International Journal of Industrial Ergonomics, 18, 283-293.


Annex A: Results of Statistical Analysis

Performance was superior in the verbal task compared to the spatial task, F(1, 35) = 30.60, MSE = 1485.68, p < .001. Irrelevant speech led to a significant decrease in recall performance, F(1, 35) = 41.21, MSE = 575.59, p < .001. The effect of serial position was also significant, F(6, 210) = 48.24, MSE = 260.19, p < .001.

The overall effect of extraneous speech was similar for both tasks, as indicated by the absence of an interaction between task and background condition, F(1, 35) = 2.89, MSE = 453.84, p > .05. The serial position functions had similar shapes in the verbal and spatial tasks, as indicated by the absence of a task by serial position interaction, F(6, 210) = 1.69, MSE = 184.02, p > .1. The effect of speech was stronger at middle list serial positions than at early or late serial positions, as revealed by a significant background condition by serial position interaction, F(6, 210) = 6.33, MSE = 161.72, p < .001. This effect was accentuated in the spatial task compared to the verbal, and was complicated by the relatively small recency effect observed in the spatial quiet condition compared to the other conditions and by the somewhat more irregular pattern in the spatial conditions, as indicated by a significant triple interaction, F(6, 210) = 3.26, MSE = 161.10, p < .005.

Further analysis, comparing the speech and quiet conditions in terms of the quadratic trend of the serial position curve, confirmed that the effect of speech was stronger at the middle list positions. Stronger curvature was observed in the speech condition than in the quiet condition for both tasks:

Verbal task: F(1, 35) = 13.46, MSE = 147.90, p < .001
Spatial task: F(1, 35) = 6.73, MSE = 149.87, p < .05
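The p values quoted above follow directly from each F ratio and its degrees of freedom. As an illustrative sketch only (not part of the original analysis), the following Python helper estimates P(F ≥ F_obs) under the null hypothesis by Monte Carlo simulation, using the fact that an F statistic is the ratio of two independent chi-square variables, each divided by its degrees of freedom; the function name and simulation count are arbitrary choices.

```python
import random

def f_p_value_mc(f_obs, dfn, dfd, n_sim=50_000, seed=1):
    """Monte Carlo estimate of the p value P(F >= f_obs) for an
    F(dfn, dfd) null distribution. Each simulated F is the ratio of
    two scaled chi-square variables built from standard normal draws."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_sim):
        num = sum(rng.gauss(0, 1) ** 2 for _ in range(dfn)) / dfn
        den = sum(rng.gauss(0, 1) ** 2 for _ in range(dfd)) / dfd
        if num / den >= f_obs:
            exceed += 1
    return exceed / n_sim

# Main effect of task, F(1, 35) = 30.60: p falls well below .001
print(f_p_value_mc(30.60, 1, 35) < 0.001)  # True
# Task by background interaction, F(1, 35) = 2.89: p exceeds .05
print(f_p_value_mc(2.89, 1, 35) > 0.05)    # True
```

In practice an exact tail probability (e.g. from statistical tables or a library F distribution) would be used; the simulation merely makes the logic of the p < .05 criterion concrete.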

Initiation times
- significantly longer in the after condition than in the before condition, F(1, 34) = 9.37, MSE = 1666613, p < .005
- longer in the verbal task than in the spatial task, F(1, 34) = 13.13, MSE = 271834, p < .001
- marginally longer in the speech condition than in the quiet condition, F(1, 34) = 3.82, MSE = 185365, p = .06
- all other effects and interactions non-significant (p > .1)