
Gale M. Sinatra
University of Massachusetts

Convergence of listening and reading processing

THE RELATION between listening and reading is important for theory as well as practice. Once a word has been recognized, is the comprehension process for reading the same as for listening? This study tested the point of convergence of linguistic information from auditory and visual channels. Forty college students were asked to indicate whether two visual stimuli presented on a computer screen were the same or different; before each pair was presented, the student heard an auditory stimulus, which either matched or did not match the first visual stimulus. Four types of stimuli were chosen to reflect different levels of processing: sentences, syntactic but meaningless word strings, random word strings, and pronounceable nonwords. As measured by reaction times, the visual comparison was significantly faster when subjects first heard a matching auditory stimulus for sentences, syntactic nonsense strings, and random words, but not for nonwords. The results suggest that listening and reading processing converge at the word level, and that words processed aurally and visually share the same lexicon.


READING RESEARCH QUARTERLY * Spring 1990 * XXV/2




Much of the research in reading comprehension and listening comprehension makes the assumption that, after word identification, the cognitive processes and the mental representations elicited by the two modes of input are the same (e.g., Fries, 1963; Goodman, 1970; Kavanagh & Mattingly, 1972; Sticht, Beck, Hauke, Kleiman, & James, 1974; Perfetti, 1985). In other words, a unitary (or single) comprehension process is activated regardless of the mode of input (Danks, 1980). According to this unitary process view, reading consists of listening comprehension plus decoding. Thus, once decoding is mastered, reading should not require any separate skills distinct from general language skills. Sticht et al. (1974) have claimed, for example, that reading uses the same language ability and cognitive resources as listening, plus the ability to search a visual display and decode print into speech.

The assumption that, following word identification, the processes of comprehending speech and print do not differ leads to a number of hypotheses concerning the relation between reading performance and listening performance. For example, Sticht et al. (1974) suggest that performance in listening will exceed performance in reading until reading skill is mastered. However, once decoding skills are mastered, measures of listening comprehension performance should be predictive of performance on measures of reading comprehension, and gains from instruction in a listening skill (e.g., listening for the main idea) should transfer to performance on the same skill in reading. Sticht et al. (1974) present a voluminous review of studies that provide evidence concerning these hypotheses, but the studies are based largely on correlational research and are thus not conclusive with respect to the relation between the cognitive processes of listening and reading.

Although most researchers seem to have adopted the unitary process view, several have suggested that the differences between the linguistic stimuli of the two modalities are sufficient to postulate separate processes for listening and reading. The dual process theory maintains that, although reading and listening share some common elements, the differences are sufficient to assume that reading comprehension and listening comprehension are different processes. The most obvious distinction between the two modalities is that reading requires the decoding of printed symbols in order to recognize words, whereas listening does not (Townsend, Carrithers, & Bever, 1987). This distinction suggests the tasks are intrinsically different and thus require independent sets of processes. For example, Rubin (1980) noted that, because the speaker and the listener share the same context, the listener can take advantage of such nonlinguistic cues as gestures and facial expressions, which contribute to communication. Carroll and Slowiaczek (1985) suggest a number of other differences in the nature of the stimuli processed in spoken and in written language. In spoken language, the signal decays rapidly; in written language, information is relatively permanent. The rate of information is controlled by the producer in spoken language, but by the perceiver in written language. In spoken language, sentences are often fragments, whereas in written language, sentences are usually complete and grammatical. In spoken language, there is a great deal of prosodic information; in written language, there is no prosodic information except for the minimal cues provided by punctuation.

In addition to differences in the linguistic stimuli, researchers have pointed to developmental differences suggesting that there are separate processes for listening and reading comprehension. Mattingly (1972) has noted that every normal child develops the ability to understand his or her native spoken language. In contrast, children must be deliberately taught to read, and many fail to learn to do so, in spite of having adequate listening skills. Miller (1972) points out that written language is historically a more recent development than spoken language. Furthermore, he notes that writing did not originate as a record of speech; rather, it evolved from pictographs as an alternate form of communication. Danks (1980) argues that although differences in the historical development of spoken language and written


language do not necessarily indicate differential processing, they do suggest that the processing of spoken and written language may not be identical.

Perfetti (1987) has emphasized that these two views on the nature of listening and reading processing have significant implications for reading instruction. The unitary process view suggests that reading should be taught only until the process of decoding is mastered, and that instruction in other reading skills is not necessary. The dual process view suggests that it is necessary to teach not only decoding but also skills necessary for the specific task demands of reading comprehension. Danks (1980), for example, suggests that, if there are separate processes for listening and reading comprehension, then reading instruction should continue even after children have become skilled decoders. He suggests that more advanced reading instruction could emphasize skills he sees as specific to reading, such as outlining, analyzing the structure of paragraphs, and learning how to follow styles of argument development. Danks points out that if curriculum designers knew exactly how listening comprehension and reading comprehension differ, they could design specialized reading curricula to address the demands of both types of processing.

Convergence of listening and reading processing

Most unitary- and dual-process theorists agree that listening and reading share common processing at some point; however, little research has been aimed at discovering at what point listening and reading processes converge. As noted above, much of the research regarding the relation between listening and reading has been limited to studies of the correlation between listening performance and reading performance.

Some empirical evidence about the relation of listening processes to reading processes has come from cross-modality priming studies. These studies have examined how information processing in one modality (e.g., listening) influences processing in the second modality (e.g., reading). For example, a number of researchers (e.g., Seidenberg, Tanenhaus, Leiman, & Bienkowski, 1982; Swinney, 1979) have used cross-modality priming to study the processing of ambiguous words (words with multiple meanings). They first presented subjects aurally with an ambiguous word within a semantic context that would bias the interpretation of that word (i.e., lead the listener to choose one meaning out of the multiple meanings possible). The subjects were then presented visually with a letter string and asked to make a lexical decision (i.e., to decide whether the letter string was a word or not). The researchers found that less time was required to make a lexical decision for words related to one of the possible meanings of the ambiguous word when both words were presented simultaneously (or within 200 msec). When the visual string was presented after several more syllables of aural text, the lexical decision was facilitated only for words that were related to the biasing context, but not for words that were related to one of the other meanings of the ambiguous word. These studies show that when a listener hears an ambiguous word, initially both meanings of the word are accessed, but after 200 msec only the contextually related meaning remains activated. Thus, the processing of auditory information can affect the processing of visual information. These studies also suggest that there is a common lexicon for aurally and visually presented words.

Kirsner and Smith (1974) investigated whether there is a single lexicon or separate visual and auditory lexicons by examining both cross-modality and within-modality effects on word recognition. A lexical decision task was presented to subjects either visually or aurally. Each item was then repeated a second time, in either the same or the opposite modality. Kirsner and Smith's results showed that lexical decision time was less for the second presentation of a word or a nonword when both presentations were in the same modality. Also, in the cross-modality condition, the lexical decision was facilitated somewhat for words on the second presentation, but was not facilitated for nonwords. In addition, the accuracy data showed


that there were fewer errors on the second presentation for both words and nonwords (averaged over conditions). Accuracy at the second presentation of words was greater when both presentations were in the same modality than when the second presentation was in the opposite modality. The accuracy data for nonwords did not show this advantage. These results support the notion of a common lexicon for words that are read and heard.

More recently, Hanson (1981) investigated whether written and spoken words share common processing systems. She presented words simultaneously in both modalities, but asked subjects to attend to only one modality. Subjects were asked to make decisions regarding the semantic, phonological, or physical properties of the attended word. In the semantic task, subjects were asked to decide whether the word in the attended modality was a member of a particular semantic category. The words presented in the unattended modality were (a) redundant with the attended word, (b) a member of the same semantic category as the attended word (e.g., chair/lamp), or (c) a member of a different semantic category. In the phonological task, subjects were asked to decide whether the attended word contained a target phoneme. The unattended word was (a) redundant with the attended word, (b) a different word that contained the target phoneme, or (c) a different word that did not contain the target phoneme. The physical task required subjects to make decisions regarding nonlinguistic properties of the stimulus. When attending to the visual modality, the subject was to decide whether the word was in upper or lower case. When attending to the auditory modality, the subject was to decide whether the stimulus was presented by a male or a female voice. In both conditions, the unattended word was either redundant with or different from the attended word.

By manipulating the level of stimulus analysis required for response decisions, Hanson was able to test for the influence of a common code for written and spoken words at different levels of processing. Hanson argued that if there is a common representation of words presented in the two modalities at any of these levels of processing, then decisions involving that level of analysis should be influenced by the properties of the unattended word. She found response facilitation in the semantic and phonological tasks, but not in the physical task. Hanson concluded that written and spoken words share semantic and phonological processing, but that information is coded separately for the two modalities prior to the point of convergence of the two inputs.

In his model of reading based on a biological metaphor, Royer (1985) proposed a convergence of listening and reading processing at a point in the model called the syntactic/conceptual level. According to this model, when a person hears a sentence, activation passes up through an auditory processing hierarchy composed of auditory feature detectors, an auditory spelling pattern echelon, an auditory word echelon, a syntactic/conceptual echelon, an episodic echelon, and a scriptal echelon. When the person reads the same sentence, activation passes up a similar visual pathway that converges with the auditory pathway at the syntactic/conceptual level. A sentence that is read immediately after it is heard would be processed more quickly because nodes at the syntactic/conceptual, episodic, and scriptal echelons would have a lowered threshold, due to the activation caused by the auditory sentence. Royer's proposal that the point of convergence of the two modalities is at the syntactic/conceptual level is based on a developmental view: In beginning readers, processing at higher levels is already well developed from their experience with spoken language. Processing at lower levels, on the other hand, is modality-specific: Learning to read requires the development of a visual analysis pathway for words.

Current study

Most previous studies of the convergence of listening and reading processes have been limited to examining the processing of single words, rather than complete sentences. In the current study, the convergence of listening and reading processes was examined using full sentences as well as other linguistic stimuli. These various types of linguistic stimuli were used in order to activate processing at the phonemic,


lexical, syntactic, and semantic levels. Subjects listened to an auditory stimulus and then were asked to decide whether two visual stimuli were the same or different. The auditory stimulus was either identical to or completely different from the first visual stimulus. The two visual stimuli were either identical to each other or differed by one word. The subject's task was to decide whether the two visual stimuli were identical to each other or different from each other. Forster (1979) has noted that response times in a comparison task of this kind consist of the following components: (a) the time needed to establish mental representations of the two stimuli, (b) the time needed to compare the representations, and (c) the time needed to evaluate the outcome of the comparison in terms of the task decision. Forster notes that the comparison task has been used successfully in the area of word recognition, and that, by varying the nature of the stimuli, it can also be used to investigate sentence processing.

Four types of stimuli were included in the present study to activate various levels of processing. First, nonword strings were used to evoke processing at the phonemic level. These were strings of pronounceable nonsense words, which could be processed up to the phonemic level but no further. Second, for processing up to the lexical or word level, I used random word strings, which were groups of real words that together had no semantic interpretation and were not syntactically correct. Such a list of random words could have a lexical representation for each word (in addition to phonemic representations), but could not be represented in terms of the syntax of the group of words or the sentence-level meaning. Third, syntactic nonsense strings were used to evoke a representation at the syntactic level of processing. These syntactically correct strings could be represented at a level where the syntax and the meanings of individual words were preserved. However, because they had no possible semantic interpretation, they could not be represented at the level in the processing hierarchy where the meaning of sentences is preserved. Finally, full good sentences were used to evoke processing at all levels up to and including the semantic, or meaning, level. These sentences were semantically and syntactically correct.

The logic behind the study was that the processing of an auditory stimulus will have an effect on the processing of a similar visual stimulus only if the auditory stimulus can be represented at or beyond the point of convergence of visual and auditory processing. For example, if there is no common representation of words (i.e., if there are separate auditory and visual lexicons), then hearing a string of words such as tire book month would have no effect on the processing of the visual string tire book month; different representations would be activated. If, however, these words were represented in a lexicon common to both modes of presentation, then hearing the string tire book month would facilitate the processing of the visual string tire book month, because both strings would activate the same representation in the lexicon.

There are several possible sources of a cross-modality effect of the processing of an auditory stimulus on the processing of two visual stimuli. First, as described above, hearing an auditory stimulus could facilitate the process of encoding the same stimulus presented visually. Reading the first visual stimulus might also facilitate the encoding of the second visual stimulus when the two visual stimuli are the same. There may also be some carry-over of the facilitation from the auditory stimulus to the second visual stimulus. In addition, there may be a facilitative or inhibitory effect of the auditory stimulus on the decision component (Forster, 1979) of the comparison task, due to expectations set up by the task structure. For example, a match between the auditory stimulus and the first visual stimulus may set up an expectation of a match between the two visual stimuli; this expectation would facilitate the decision time.

Whether the effect of the processing of the auditory stimulus is on the encoding of the visual stimuli, the decision component of the visual comparison task, or both the encoding and the decision processes, there is likely to be an effect on the time necessary to make a response. In the present experiment, an auditory stimulus is expected to have an effect on the


process of encoding an identical visual stimulus when the auditory and visual stimuli share a common representation. This assumption leads to the following set of predictions:

1. If the auditory and visual pathways do not converge until the semantic level (the point in the processing system where the meaning of sentences is represented), then an encoding effect would be expected for the full good sentences only.

2. If the two pathways converge at the syntactic level (the point in the processing system where syntactic information is represented), then an encoding effect would be expected for the full good sentences and syntactic nonsense strings, but not for the random word strings or nonword strings.

3. If the auditory and visual pathways converge at the word or lexical level, then an encoding effect would be expected for the random word strings as well as the full good sentences and syntactic nonsense strings, but not for the nonword strings.

4. If the two pathways converge at the phonemic level, then an encoding effect would be expected for all four stimulus sets.

5. If the auditory and visual pathways do not converge, then there should be no encoding effect for any of the stimulus types.
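The five predictions all follow from a single rule: a stimulus type should show an auditory encoding effect only if it can be represented at or beyond the hypothesized point of convergence. The following sketch makes that rule explicit; it is illustrative only (the function and variable names are mine, not the article's):

```python
# Illustrative sketch (not from the article): deriving the five predictions.
# A stimulus type is predicted to show an auditory encoding effect only if it
# can be represented at or beyond the hypothesized convergence point.

LEVELS = ["phonemic", "lexical", "syntactic", "semantic"]  # ordered hierarchy

# Highest level at which each stimulus type can be represented
MAX_LEVEL = {
    "nonword strings": "phonemic",
    "random word strings": "lexical",
    "syntactic nonsense strings": "syntactic",
    "full good sentences": "semantic",
}

def predicted_effects(convergence_level):
    """Return the stimulus types expected to show facilitation if the
    auditory and visual pathways converge at the given level."""
    if convergence_level is None:  # prediction 5: pathways never converge
        return []
    cutoff = LEVELS.index(convergence_level)
    return [s for s, lvl in MAX_LEVEL.items() if LEVELS.index(lvl) >= cutoff]

# Prediction 3 (convergence at the lexical level): facilitation for every
# stimulus type except the nonword strings.
print(sorted(predicted_effects("lexical")))
```

Running the rule at each candidate level reproduces predictions 1 through 5 above, which is what makes the observed pattern of results diagnostic of the convergence point.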

Method

Subjects

Subjects were 40 undergraduate students recruited from psychology classes at the University of Massachusetts. The students received class credit for their participation in the experiment.

Apparatus

A Godbout CompuPro microcomputer controlled the presentation of both the auditory and the visual stimuli. Subjects sat facing a CRT terminal and a three-button console that were connected to the computer. All written stimuli were presented on the computer screen. The auditory stimuli were presented through headphones. Subjects used a button on the left, marked START, to begin a set of trials, and used the two buttons on the right, marked SAME and DIFFERENT, to register their decisions about the stimuli.

Materials

The 192 sentences from which stimuli were developed were simple 5- to 9-word sentences. They were taken from examples presented in three style manuals: The Practical Stylist (Baker, 1973), Modern English: A Practical Reference Guide (Frank, 1972), and The Structure of English Clauses (Young, 1980). These sentences were randomly divided into four sets of 48 sentences each. A different set of sentences was used to generate each type of stimulus in order to avoid excessive repetition of the same words across trials. Table 1 shows examples of each stimulus type. (See the Appendix for a complete list of stimuli.)

Full good sentences. The first set of sentences was used in their original form. All sentences were semantically and syntactically correct.

Table 1
Examples of the four stimulus types

Type                         Examples
Full good sentences          1. Sue wants to go for a walk.
                             2. The church stands in the square.
Syntactic nonsense strings   1. They can spend the car.
                             2. It had looked the hours of the street.
Random word strings          1. owe riot month course
                             2. sending very happened heard
Nonword strings              1. trings sland sork
                             2. rameteru mest


Syntactic nonsense strings. The syntactic nonsense strings were generated from the second set of 48 sentences by replacing each noun, verb, adjective, and adverb with a word of the same part of speech that was randomly selected from another sentence in the set. The verb was changed to agree in number with the noun in the string, if necessary. If the resulting sentence could be considered meaningful, some words were re-selected until a syntactically correct but meaningless sentence resulted.

Random word strings. These stimuli were generated by scrambling the words in the third set of 48 sentences. Articles, prepositions, conjunctions, quantifiers, and auxiliary verbs were omitted to minimize differences in reading time between stimulus types, as people generally read sentences more quickly than they read lists of random words.

Nonword strings. The nouns, verbs, adjectives, and adverbs of the final 48-sentence set were manipulated to produce strings of pronounceable nonsense words. One or two letters of each content word were replaced to obtain the pronounceable nonwords. A pilot study was conducted to normalize the spelling of each nonword. Fifteen subjects each listened to a tape recording of the spoken nonwords and wrote down their best guess as to the spelling of each nonword. For each nonword, the spelling that was produced most frequently was used for its visual presentation in the main study.

Length of all stimuli. A pilot study was conducted to determine the length for each type of stimulus. Ten subjects listened to 52 stimuli of varying length that were selected randomly from the four stimulus types and presented in random order. The stimuli included sentences and syntactic strings of 5-9 words in length, random word strings of 4-6 words, and nonword strings of 3-5 nonwords. After each stimulus had been presented, subjects were asked first to do a mental arithmetic problem and then to recall what they had heard. The length that showed the highest average recall across subjects was 3 items for nonword strings and 4 words for random word strings. There was no difference in recall due to length for the syntactic nonsense strings or full good sentences. Therefore, each nonword string was limited to exactly 3 nonwords and each random word string contained 3 or 4 words; the length of the syntactic nonsense strings and full good sentences was unaltered.

Foils for all stimuli. For the trials in which the auditory stimulus differed from the first visual stimulus, 16 of the 48 stimuli of each type were randomly selected to be used as auditory foils. Stimuli used as auditory foils were not presented visually at any time. The remaining 32 stimuli of each type were used as the first visual stimulus in each trial (and as the auditory stimulus in the auditory match condition). For half of the trials, in which the two visual stimuli were to match, the same sentence or string appeared twice on the computer screen. For the other half of the trials, the second visual stimulus differed from the first visual stimulus by only one word or nonword. For nonword strings, the replacement nonword was of the same length as the nonword it replaced. For all other stimuli, the replacement word was similar in length and meaning to the word it replaced. For example, I can't find my car keys was changed to I won't find my car keys. Preserving length and meaning was important to ensure consistent task demands across trials; otherwise, subjects could have made judgments based on differences in meaning or length of the stimulus, rather than on differences in wording.

Order of presentation. Three stimuli were presented in each trial: one auditory stimulus and two visual stimuli. The auditory stimulus either matched or did not match the first visual stimulus, and the first visual stimulus either matched or did not match the second visual stimulus; these conditions were crossed. Thus, there were four experimental conditions for each type of stimulus, or 16 total experimental conditions.
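The factorial structure just described can be enumerated directly. A small sketch (illustrative only; the variable names are mine, and the 8-trials-per-cell figure follows from the 32 trials per stimulus type reported in the Method):

```python
# Illustrative sketch of the experimental design (not code from the article).
# Crossing 4 stimulus types x 2 auditory conditions x 2 visual conditions
# yields the 16 experimental conditions described above.
from itertools import product

stimulus_types = ["full good sentence", "syntactic nonsense string",
                  "random word string", "nonword string"]
auditory_condition = ["auditory match", "auditory mismatch"]
visual_condition = ["visual match", "visual mismatch"]

conditions = list(product(stimulus_types, auditory_condition, visual_condition))
print(len(conditions))  # 16 experimental conditions

# With 32 trials per stimulus type, each cell of the design receives
# 32 / 4 = 8 trials, for 128 trials per subject in all.
trials_per_cell = 8
print(len(conditions) * trials_per_cell)  # 128 trials
```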
The three stimuli presented in each trial were always of the same type (three full good sentences, three syntactic nonsense strings, three random word strings, or three


nonwordstrings). There were 32 trials for each type of stimulus, resulting in a total of 128 trials. Each subject completed 128 trials. Experimental conditions and stimulustypes were presented in one of four randomorders, in orderto minimize the effects of order of presentation. Each stimulus appearedin a differentcondition in each of the four orders. The auditory stimuli were recorded on audiotapein each of these four orders. An auditory foil was substitutedwheneverthe auditory stimuluswas to appearin an auditorymismatch condition. Four orders of the visual stimuli were developedin correspondencewith the four tape recordings, such that subjectswho listened to a particularaudiotapewould see the appropriate visual stimuli presentedon the computer screen. Subjectswere randomlyassigned one of the four presentationorders. Each subject saw every type of stimuluspresentedin every condition (but saw each individual stimulus in only one condition). Procedure

Subjects received instructions explaining the nature of the task and asking them to compare the two visual stimuli only. Each subject then participated in 8 practice trials. The experimental trials began once the subject appeared to understand the task demands. Each trial began with the words Press left button for next trial displayed on the screen. As soon as the subject pressed the button, the message disappeared from the screen and the subject heard the auditory stimulus through the headphones. When the auditory stimulus ended, the first visual stimulus immediately appeared on the computer screen. All the words or letters of the visual stimulus appeared on the screen simultaneously. After a half-second pause (the intent of which was to encourage the subject to read the entire first stimulus), the second visual stimulus appeared on the screen. All letters again appeared simultaneously, and were aligned directly beneath the letters of the first visual stimulus. The two visual stimuli remained on the screen until the subject responded by pressing the button for either SAME or DIFFERENT.

The computer measured the time from the onset of the second visual stimulus until the subject pressed the button in response. Immediately after the subject had responded, either the word CORRECT or the word ERROR appeared on the screen. This feedback was given in order to remind subjects to pay attention to the visual stimuli rather than the auditory stimulus in making their comparison, and to remind subjects to monitor their accuracy (rather than just responding quickly).

Results
The results of this experiment will be presented in two sections. General results will be presented first, although they are not critical to the purpose of the experiment. The results that test the point of convergence of listening and reading processing directly will be presented in the second section.

General findings
A preliminary analysis of variance (ANOVA) showed no significant difference due to order of presentation of the trials; therefore, the data for all four presentation orders were collapsed for subsequent analyses.¹ The average accuracy rate across all conditions was 95 percent correct. One subject was replaced because of equipment problems during the subject's trials; three other subjects were replaced because they each took more than 5 seconds to respond on two or more trials. Table 2 shows the means and standard deviations of all subjects on all conditions.

Subjects' reaction times were analyzed using a 4 × 2 × 2 within-subjects ANOVA, with stimulus type (full good sentence, syntactic nonsense string, random word string, or nonword string), auditory condition (match/mismatch), and visual condition (match/mismatch) as independent variables. Separate ANOVAs were conducted for responses by stimulus item and by subject; results for both analyses are reported together.
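The Min F' statistics reported below combine the by-subjects (F1) and by-items (F2) F ratios. Assuming the standard minimum-bound definition, minF' = F1·F2/(F1 + F2), a minimal sketch (the example values are illustrative, not taken from the paper):

```python
def min_f_prime(f1: float, f2: float) -> float:
    """Minimum bound on F' combining by-subjects (F1) and by-items (F2)
    F ratios: minF' = (F1 * F2) / (F1 + F2)."""
    return (f1 * f2) / (f1 + f2)

# Illustrative values only (not the paper's):
print(min_f_prime(10.0, 15.0))  # 6.0
```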

124    READING RESEARCH QUARTERLY    Spring 1990    XXV/2

Table 2
Means (and standard deviations) for subjects' reaction times by stimulus type and experimental condition

                                     Auditory match                   Auditory mismatch
Stimulus type                 Visual match    Visual mismatch   Visual match    Visual mismatch
Full good sentences           1,714 (489)     1,422 (304)       1,933 (489)     1,595 (335)
Syntactic nonsense strings    1,964 (469)     1,545 (327)       2,276 (601)     1,669 (450)
Random word strings           1,597 (351)     1,393 (285)       1,887 (417)     1,469 (344)
Nonword strings               1,700 (448)     1,340 (330)       1,891 (536)     1,285 (316)

Note. All figures are rounded to the nearest msec.

A significant main effect was found for stimulus type in analyses both by subject and by item, Min F'(3, 161) = 8.44, p < .01. The mean reaction time was 1,681 msec for full good sentences, 1,863 msec for syntactic nonsense strings, 1,586 msec for random word strings, and 1,554 msec for nonword strings. These means are misleading when compared because stimulus type is confounded with stimulus length; for example, the nonword strings, which show the fastest mean reaction time, were the shortest in length. The mean reaction time per unit (word or nonword) was 210 msec for full good sentences, 232 msec for syntactic nonsense strings, 453 msec for random word strings, and 518 msec for nonword strings.

The ANOVAs by both subjects and items also revealed significant main effects of the auditory condition, Min F'(1, 81) = 38.08, p < .01, and of the visual condition, Min F'(1, 80) = 106.38, p < .01. These analyses, and the means, indicate that subjects responded more quickly overall when the auditory stimulus matched the first visual stimulus (M = 1,584) than when it did not (M = 1,758), and that subjects responded more quickly when the two visual stimuli were different (M = 1,465) than when they were the same (M = 1,878).
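Dividing each mean reaction time above by its per-unit time recovers the average number of units per stimulus, which agrees with the materials (nonword strings had exactly 3 nonwords; random word strings had 3 or 4 words). A quick consistency check:

```python
# (mean RT in msec, RT per unit in msec), as reported in the text above.
reported = {
    "full good sentences": (1681, 210),
    "syntactic nonsense strings": (1863, 232),
    "random word strings": (1586, 453),
    "nonword strings": (1554, 518),
}

# Mean RT divided by per-unit RT gives the average stimulus length in units.
for stimulus, (mean_rt, per_unit) in reported.items():
    print(f"{stimulus}: about {mean_rt / per_unit:.1f} units per stimulus")
```

The recovered lengths (about 8.0 units for both sentence types, 3.5 for random word strings, 3.0 for nonword strings) line up with the design constraints stated in the Materials section.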

Significant interaction effects were found in the analyses by both subjects and items for Auditory Condition × Visual Condition, Min F'(1, 145) = 17.25, p < .01, and Visual Condition × Stimulus Type, Min F'(3, 22) = 3.53, p < .05. However, these interactions are not related to the question of interest in the current study. The three-way interaction was not significant (p > .05).

Findings for the convergence of listening and reading processing
The analysis of the main effect of auditory condition showed that there was an effect of processing an auditory stimulus on the processing of a visual stimulus, suggesting some overlap between the representations of auditory and visual linguistic stimuli. To locate the point of convergence of the two types of processing, we must look at the interaction between auditory condition and stimulus type. The ANOVAs for both subjects and items showed a significant effect of this interaction, Min F'(3, 239) = 2.90, p < .05. Figure 1 displays this interaction for the analysis by subjects.

Figure 1. Mean reaction time (in msec) for both auditory conditions by stimulus type.

Bonferroni t tests on the analysis by subjects were used to identify the source of the interaction. (The family-wise error rate for all Bonferroni tests was controlled at .05 by evaluating each contrast involving two means at the .0125 level and each contrast involving four means at the .01 level.) For three of the four stimulus types, reaction time was significantly shorter when the auditory stimulus was the same as the first visual stimulus: The difference was significant for full good sentences, t(39) = 5.60, p < .0125; syntactic nonsense strings, t(39) = 5.79, p < .0125; and random word strings, t(39) = 6.37, p < .0125. There was no significant difference between the two auditory conditions in reaction time for the nonword strings, t(39) = 2.21, p > .0125. Bonferroni comparisons of the magnitude of the three differences were not significant. That is, the differences between the auditory match and auditory mismatch conditions for the full good sentences, syntactic nonsense strings, and random word strings were comparable.

Although not significant, there was a 68 msec difference between the auditory match and auditory mismatch conditions for the nonword strings. To examine this difference, I plotted separately the interaction of auditory condition and stimulus type for the visual match condition (Figure 2) and the visual mismatch condition (Figure 3). As shown in these figures, there is a significant difference between the auditory match and auditory mismatch conditions for the nonword strings when the visual stimuli match. (This difference was significant for all four stimulus types at the .0125 level.) However, there is no significant difference between the auditory match and auditory mismatch conditions for the nonword strings when the visual stimuli do not match, t(39) = 1.25, p > .05. The effect of auditory condition when the visual stimuli do not match was significant for the full good sentences, t(39) = 3.49, p < .0125, and syntactic nonsense strings, t(39) = 3.09, p < .0125, and was nearly significant for the random word strings, t(39) = 1.92, p = .062.

Figure 2. Mean reaction time (in msec) for both auditory conditions by stimulus type when visual stimuli were the same.

Comparisons of the above differences between the auditory match and auditory mismatch conditions for each stimulus type revealed that the magnitudes of these differences were comparable for the match condition (i.e., no significant difference at the .05 level). For the mismatch condition, the magnitudes of the differences were comparable for the full good sentences, syntactic nonsense strings, and random word strings. The magnitude of the difference for the nonword strings, however, was significantly different from that for each of the other three stimulus types (at the .01 level).
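The family-wise control described above is the usual Bonferroni division of the family-wise rate by the number of contrasts: .05/4 gives the .0125 level used for the two-mean contrasts. (On the same logic, the .01 level would correspond to five four-mean contrasts; that count is my inference, not stated in the paper.) A minimal sketch:

```python
def bonferroni_alpha(family_alpha: float, n_contrasts: int) -> float:
    """Per-contrast significance level that bounds the family-wise error rate."""
    return family_alpha / n_contrasts

print(bonferroni_alpha(0.05, 4))  # 0.0125, the level used for two-mean contrasts
```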


Figure 3. Mean reaction time (in msec) for both auditory conditions by stimulus type when visual stimuli were different.

Discussion
The results show that listening to a prior auditory stimulus affects the time required to decide whether two visual stimuli are identical. This effect was found for stimuli that have a linguistic representation at the semantic, syntactic, and lexical levels, namely, the full good sentences, the syntactic nonsense strings, and the random word strings. For nonword strings, a significant effect of a prior auditory stimulus was found only in the visual match condition. This effect may have been due to facilitation (or inhibition) of the decision process rather than of the encoding of the visual nonword string. That is, if the auditory stimulus matched the first visual stimulus, the subject might have expected that the two visual stimuli would also match. Such an expectation could accelerate the decision that the two visual stimuli were the same. However, when the auditory stimulus did not match the first visual stimulus, subjects may have expected a visual mismatch, and may have responded more slowly to a visual match. The difference observed for the nonword strings in the visual match condition thus could be explained by either facilitation or inhibition of the visual match decision, but it appears to be an effect on the decision process rather than the encoding process.

Whereas the results for the nonword strings appear to indicate an effect on the decision process only, the results for the full good sentences, the syntactic nonsense strings, and the random word strings appear to show an effect of the auditory stimulus on the encoding of the visual stimulus. The results of this study thus suggest that listening processes and reading processes converge at the word level. This finding is inconsistent with the assumption that reading and listening processes are completely separate. On the other hand, it also refutes the notion that listening and reading processes are the same except for initial perceptual differences. In addition, the finding is inconsistent with the point of convergence of listening and reading suggested by some other models. For example, Royer (1985) suggests that the two processes converge at the syntactic/conceptual level; to be consistent with this model, an effect of the auditory stimulus would have to be found for the full good sentences and syntactic nonsense strings only, and not for the random word strings. Hanson's (1981) suggestion that written and spoken words share phonological processing is also inconsistent with the current results. On the other hand, the present study does support Kirsner and Smith's (1974) conclusion that written and spoken words share a common lexicon.

Some important questions were not addressed by the present study. First, although the results suggest that the processes of listening and reading converge at the word level, the two processes may diverge at some later point in the processing continuum. For example, the processing of lengthy, connected text may require processing strategies that are qualitatively different from the strategies used in the processing of aural discourse. This possibility demands further study.
Second, the current study does not address interactions between various levels of processing. Rather, each condition was designed to examine how processing an auditory stimulus at a particular level would affect the processing of a visual stimulus at the same level (e.g., the effects of auditory word processing on visual word processing). Although interaction effects were not examined in the present study, a complete understanding of how processing auditory linguistic information affects the processing of visual linguistic information would have to include an examination of the influence of the processing of higher-level auditory information on the processing of lower-level visual information.

The differences in unit processing time between the stimulus types warrant further investigation. The stimulus type at the highest level of the processing hierarchy (the full good sentence) also required the shortest processing time per word. The processing time per unit increased as the level at which the stimulus could be represented in the processing hierarchy decreased; the nonword strings required the most time to process per unit. The ease of comparison for linguistic stimuli may depend upon the size of the unit that must be processed. In other words, it is possible that readers can compare full good sentences as single units of meaning, but that they must compare random word strings word-by-word, and nonwords phoneme-by-phoneme. (Syntactic nonsense strings would have to be compared in some sort of syntactic units.) Thus, there may actually be more units to be compared in the nonword stimulus, despite its shorter length, than in the full good sentence.

Another important question is whether the effect of an auditory stimulus on the processing of visual stimuli is a facilitation effect or an inhibition effect. For example, in the current study the effect of the auditory stimulus on the processing of the visual stimuli may have been a facilitation effect (in the auditory match condition) or an inhibition effect (in the mismatch condition). Also, because the current findings for nonwords were complex, the effect of auditory nonwords on the processing of visual nonwords deserves further investigation.
The results of the present study may have implications for instruction in both listening and reading. The findings suggest that there is some commonality of processing between the two modalities. If reading comprehension depends on some of the same processes as listening comprehension, then the ratio between a student's listening skills and reading skills may be a useful indicator of a student's "reading potential" (e.g., Durrell, 1969; Carroll, 1977; Sticht & James, 1984). A student whose skills in listening comprehension and reading comprehension are comparable may be reading as well as can be expected, and may be able to improve his or her reading ability only by building a larger vocabulary or a larger knowledge base. On the other hand, a student whose reading skills are far below his or her listening skills should benefit from further instruction in decoding.

Although the current study shows commonality between listening processes and reading processes, the relation between the two is complex. In a recent article, Perfetti (1987) proposes that there is an asymmetric relation between the two skills, which changes as the child develops. He argues that the two skills are quite different in the beginning reader, but that the process of reading becomes similar to the process of listening once a child masters decoding. In the adult fluent reader, this relation may change again, and aural processing may become more similar to the process of reading. Further research is needed to understand fully the relation between these two processes. If the relation between listening comprehension and reading comprehension evolves over time, then a more complete understanding of this developmental process may have different implications for instruction in the two modalities for the beginning reader, the experienced reader, and the fluent reader.
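The listening-reading comparison sketched above can be expressed as a simple decision rule. Everything in this sketch (the 0-1 score scale, the gap threshold, and the function name) is hypothetical, intended only to make the instructional logic concrete, not a procedure from the cited work:

```python
def instructional_focus(listening_score: float, reading_score: float,
                        gap_threshold: float = 0.15) -> str:
    """Hypothetical heuristic on 0-1 comprehension scores: a large
    listening-over-reading gap points to decoding instruction; otherwise
    vocabulary and knowledge building are the likelier levers."""
    if listening_score - reading_score > gap_threshold:
        return "decoding instruction"
    return "vocabulary and knowledge building"

print(instructional_focus(listening_score=0.80, reading_score=0.50))
# decoding instruction
```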

REFERENCES
BAKER, S. (1973). The practical stylist. New York: Thomas Crowell.
CARROLL, J.B. (1977). Developmental parameters of reading comprehension. In J.T. Guthrie (Ed.), Cognition, curriculum, and comprehension (pp. 1-15). Newark, DE: International Reading Association.
CARROLL, P.J., & SLOWIACZEK, M.L. (1985, June). Modes and modules: Multiple pathways to the language processor. Paper presented at the Conference on Modularity, Amherst, MA.
DANKS, J.H. (1980). Comprehension in listening and reading: Same or different? In J.H. Danks & K. Pezdek (Eds.), Reading and understanding (pp. 1-39). Newark, DE: International Reading Association.
DURRELL, D.D. (1969). Listening comprehension versus reading comprehension. Journal of Reading, 12, 455-460.
FORSTER, K.I. (1979). Levels of processing and the structure of the language processor. In W.E. Cooper & E. Walker (Eds.), Sentence processing (pp. 27-85). Hillsdale, NJ: Erlbaum.
FRANK, M. (1972). Modern English: A practical reference guide. Englewood Cliffs, NJ: Prentice-Hall.
FRIES, C.C. (1963). Linguistics and reading. New York: Holt, Rinehart & Winston.
GOODMAN, K.S. (1970). Reading: A psycholinguistic guessing game. In H. Singer & R.B. Ruddell (Eds.), Theoretical models and processes of reading (pp. 497-508). Newark, DE: International Reading Association.
HANSON, V.L. (1981). Processing of written and spoken words: Evidence for common coding. Memory & Cognition, 9(1), 93-100.
KAVANAGH, J.F., & MATTINGLY, I.G. (1972). Language by ear and by eye. Cambridge, MA: MIT Press.
KIRSNER, K., & SMITH, M.C. (1974). Modality effects in word identification. Memory & Cognition, 2(4), 637-640.
MATTINGLY, I.G. (1972). Reading, the linguistic process, and linguistic awareness. In J.F. Kavanagh & I.G. Mattingly (Eds.), Language by ear and by eye (pp. 133-148). Cambridge, MA: MIT Press.
MILLER, G.A. (1972). Reflections of the conference. In J.F. Kavanagh & I.G. Mattingly (Eds.), Language by ear and by eye (pp. 373-381). Cambridge, MA: MIT Press.
PERFETTI, C.A. (1985). Reading ability. New York: Oxford University Press.
PERFETTI, C.A. (1987). Language, speech, and print: Some asymmetries in the acquisition of literacy. In R. Horowitz & S.J. Samuels (Eds.), Comprehending oral and written language (pp. 355-369). New York: Academic Press.
ROYER, J.M. (1985). Reading from the perspective of a biological metaphor. Contemporary Educational Psychology, 10, 150-200.
RUBIN, A. (1980). A theoretical taxonomy of the difference between oral and written language. In R.J. Spiro, B.C. Bruce, & W.F. Brewer (Eds.), Theoretical issues in reading comprehension (pp. 239-252). Hillsdale, NJ: Erlbaum.
SEIDENBERG, M.S., TANENHAUS, M.K., LEIMAN, J.M., & BIENKOWSKI, M. (1982). Automatic access of the meanings of ambiguous words in context: Some limitations of knowledge-based processing. Cognitive Psychology, 14, 489-537.
STICHT, T.G., BECK, L.J., HAUKE, R.N., KLEIMAN, G.M., & JAMES, J.H. (1974). Auding and reading: A developmental model. Alexandria, VA: Human Resources Research Organization.
STICHT, T.G., & JAMES, J.H. (1984). Listening and reading. In P.D. Pearson (Ed.), Handbook of reading research (pp. 293-318). New York: Longman.
SWINNEY, D.A. (1979). Lexical access during sentence comprehension: (Re)Consideration of context effects. Journal of Verbal Learning and Verbal Behavior, 18, 645-659.
TOWNSEND, D.J., CARRITHERS, C., & BEVER, T.G. (1987). Listening and reading processes in college- and middle school-age readers. In R. Horowitz & S.J. Samuels (Eds.), Comprehending oral and written language (pp. 217-242). New York: Academic Press.
YOUNG, D.J. (1980). The structure of English clauses. New York: St. Martin's Press.

Footnotes
The study reported here was submitted as part of the requirements for a Master of Science degree at the University of Massachusetts, Amherst. I would like to gratefully acknowledge the contributions to the research of Charles Clifton, James M. Royer, and Arnold Well. Special thanks to James M. Royer for help with the revisions. I would also like to thank Kara Kritis for her assistance with data collection.
¹An additional ANOVA on the data by subjects was conducted in which order was included along with all other variables. There was no significant main effect of order (p > .05), and there were no significant two-way interactions with order. There was one significant three-way interaction involving order; however, it was not interpretable.

Received July 29, 1988
Revision received July 7, 1989
Accepted August 31, 1989


APPENDIX
Stimuli used in the study
(Note. Words in boldface were used for the visual mismatch condition.)

Full good sentences
1. Sue wants to go for a walk/ride.
2. David kept his savings/dollar in an old sock.
3. I will come to see/get you on Thursday.
4. The project was wholly/mostly ineffectual.
5. He would not think of letting us/me help.
6. The bottle/carafe fell off the table.
7. She worked in the garden/fields yesterday.
8. The/His data are inconclusive.
9. By late afternoon, William was exhausted/deficient.
10. The church stands in the square/common.
11. The young man was elected class president/treasurer.
12. Ellen is the one who will/must succeed.
13. He worked hard/a lot because he needed the grade.
14. The policeman arrested the burglar/robber.
15. He is living/acting like a millionaire.
16. Sailing a boat/yawl is fun.
17. Most members are in favor of the motion/action.
18. I move/hold that the nominations be closed.
19. They will consent/concede to any arrangements.
20. He taught/helped me to play the piano.
21. The room was full of sunlight/daylight.
22. The school offers three separate/distinct curricula.
23. The letter was signed by the author/writer.
24. Cathy wanted/needed a singing career.
25. He objected to the suggestion/statements.
26. The students are organizing social/sports activities.
27. You seem/look uninterested in the problem.
28. Nobody realized that the train was late/slow.
29. We suggest/request that you take warm clothes with you.
30. The old house was empty/quiet.
31. I can't/won't find my car keys.
32. He blamed the management for the dispute/quarrel.

Syntactic nonsense strings
1. It had looked the hours of the street/routes.
2. He was awakened reading/looking early for miles.
3. The book had reached steadily/normally.
4. Mary must be for the novel/books.
5. The subway/trains reduces away fair.
6. I ran/jog he must not have now.
7. He has been raining/pouring a library.
8. A crowd left under/below two lakes.
9. The play can more than this picture/etching abroad.
10. Most for the heat were bright/golden.
11. She won/got that he snapped pay.
12. He drove the indifferent taxes late/long.
13. She amused a hundred novels all her week/days.
14. Summer books went home and are mending/helping late.
15. They were at his/her Sarah last.
16. I walked in the last February/December.
17. His wide shop/mill heard his agreement.
18. The match/games arrived the end easily.
19. He and I perfectly began/arose the money.
20. I have proved for this story/tales since June.
21. The apartment plan/idea is in good suit.
22. Genius managers had a workable judge/chief.
23. They closed/sealed the dollars this imitation.
24. We signed each other by beginning/producing month.
25. Now and then she saw a flu hope/wish.
26. Since they were loud, it showed twigs/stick.
27. You are mending in a second/double book.
28. He was she who came the noise/sound.
29. We said as unfinished/incomplete as Bill.
30. It has been her this door/gate.
31. He was by his tire/tube and could tonight.
32. They can spend/waste the car.

Random word strings
owe riot month course/routes window work/jobs today playing bought reached/grabbed cotton listening becoming book matter/affairs proportions entering repaired/adjusted student piano cutting/slicing place girls open/ajar tricky saw looks/seems line telegrams grass/lawns demonstration setting o'clock this lobby serious/sincere next continuous/constantly near post excellent sending very happened/occurred heard seeks/looks three saw dress ago him flowers/blossom ten step dollars/payment never office they room/area married voices must private tomorrow crisis apples/fruits shoes/boots children this need caught/seized considered these soon forward/leading economy them normal done/over calm whole wake remained/survived employment left five near cheating getting/gaining one caught problem/dilemma bankrupt reached lake/pond fail sun keeps/saves sale station like transportation kinds/types clouds tomorrow voting return/arrive gardener accident those during place/point three beautiful/wonderful population her asked/urged public part questions/proposals days going radio

Nonword strings
1. mar/fot sorth pelink
2. tinner thepe/varps esterway
3. trote/lurts nostfard tuss
4. cotridering/tropormian nass empering
5. nost/dilt touse ponet
6. storp mape/foon hassist
7. saff/pake fopt zact
8. turls othen povies/vearns
9. remected tropidles/siffement tobs
10. feld/relt menera pliffapent
11. tust dake/jeat romether
12. sitoved/deturns reasing poom
13. swoe blamnetz bep/cak
14. renake/emeged pilk romithy
15. goint fess/brop fote
16. itherant/levetals feasy nime
17. torst povempter/leparates elethion
18. romissions mithed tiport/nublic
19. neturned/cetordly dalimornia togust
20. trings sland/masps sork
21. rameteru mest/rork
22. dack asarant/rastard comiterid
23. anways/borked vived heary
24. dinnow terpect/beathed droken
25. awfost/kenner linished nork
26. offite sint/poat nocuments
27. peam neag/fant neckord
28. sustec/rimart plame filse
29. noy sardly rotain/mesent
30. tald srope/lounts stoff
31. bonis med/tor dubosity
32. veth/lext themed rinstithoon

Auditory foils used in the study

Full good sentences
1. He left home an hour ago.
2. They stopped when they reached the lake.
3. I'm going to put the books away.
4. Peter's taking them on a tour.
5. They failed to report the crime.
6. She went to the grocery store for milk.
7. He looked as if he were confident.
8. We can save fuel by using less electricity.
9. We expect to go there next week.
10. The ship broke loose from its moorings.
11. Amy is the one in the raincoat.
12. Jim gave every game his all.
13. The jurors disagreed among themselves.
14. Mr. Jones has ignored the evidence.
15. Mark persuaded him to buy some shares.
16. There will be some tickets available.

Syntactic nonsense strings
1. You are liked walking of the students.
2. The bridge turned the afternoon.
3. I was old haunted to the party.
4. None in the grass borrowed the audience.
5. She wandered the remote students.
6. Tom was you to give everyone a prospect.
7. Chris had the attention out.
8. The right tape novel was tired.
9. He is invited of them.
10. They noticed indeed empty picnics.
11. The mystery recorder must want at the class.
12. I read a house on the game.
13. The present by the newspaper passed.
14. Everybody caught that you would find.
15. That examination was whole only.
16. If you seemed me, we were excited quickly.

Random word strings
trip fruit classroom glad know eaten results serves door job course new harder found Susan would going Jane members song telephone key lawn last education occurs fashion plan night objected play strongly restaurant over students long were skirts unlocks again studied rang sing late here passed club cancelled agency wine tennis tending secretary pie gardening rest spoiled liked rains special

Nonword strings
broupsorkedpell renandsnost donether trobletdurely nonitical san smainslang stayetrome peneral brontnaid mell goith houte mearched nent nolimays neabons mot wathflace teft harrall domnorippennetrafients roon sanersbonorrow prereprothwhike prenerrud wibe soat daces dape clane flet dountains
