
The psychological validity of phonotactics has been demonstrated experimentally in studies of word recognition (Massaro & Cohen, 1983; Onishi, Chambers, & Fisher, 2002; Pitt & McQueen, 1998). Church (1987) argues that phonotactics provide ‘rich contextual constraints’ that can allow for a more efficient search procedure for matching the input to entries in the lexicon. For these reasons, phonotactics has become the focus of much research in speech perception.

Phonotactics can also provide information on less categorical constraints, such as probabilities between sounds within a language. Words consist of concatenations of sounds, and the correlations between co-occurring sounds within a word are greater than the correlations between co-occurring sounds that span a word boundary (Hayes & Clark, 1970; Saffran, Newport, & Aslin, 1996). It is therefore plausible that a listener might discover word boundaries based on the probabilities of these co-occurring sounds.

Saffran et al. (1996) briefly exposed listeners (in three 7-minute sessions) to an artificial language made up of six trisyllabic words (created by concatenating twelve English-like CV syllables). The synthetically produced speech stream contained random configurations of these ‘words’ with no pauses between the items. They found that listeners were able to learn the ‘words’ contained in the speech stream using only the transitional probabilities (i.e., the probabilities of co-occurrence) between syllables.
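The transitional-probability computation behind this result can be sketched in a few lines. The following is a minimal illustration, not Saffran et al.’s actual procedure: the two toy ‘words’ (bidaku, padoti), the stream, and the boundary threshold are invented for the example.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate P(next syllable | current syllable) from adjacent-pair counts."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, tps, threshold=1.0):
    """Posit a word boundary wherever the transitional probability dips below threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A toy stream: the 'words' bidaku and padoti concatenated in varying order, no pauses.
stream = "bi da ku pa do ti pa do ti bi da ku bi da ku pa do ti".split()
tps = transitional_probabilities(stream)
print(segment(stream, tps))
# -> ['bidaku', 'padoti', 'padoti', 'bidaku', 'bidaku', 'padoti']
```

In this stream, transitional probabilities within a ‘word’ are 1.0 while those across word boundaries are lower (e.g., P(pa | ku) = 2/3), so positing boundaries at the dips recovers every word token.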
Intuitively, the idea of phonotactic information as a cue to speech segmentation is intriguing. Because phonotactic constraints govern how phonemes combine, this information is an inherent property of naturally occurring speech.

For successful communication, listeners must be able to segment between words, not within words. The aim of the present experiments was to show listeners’ use of phonotactic information to locate boundaries between two words in a context that approximates connected speech. It was predicted that the presence of both phonological and lexical information would guide segmentation; phonotactic information appeared to be more useful when lexical information was ambiguous.

The goal of this paper was to better understand the contribution of phonotactics to word segmentation. Previous research on phonotactics (McQueen, 1998; Norris et al., 1997; van der Lugt, 2001) suggests that it is a reliable source of information for locating syllable boundaries within isolated words.
.1 Phonotactics and the syllable
Native speakers of a language are usually able to count the syllables of words without any
difficulty. Consider the word incidental: it can be broken up into four syllables, ‘in.ci.den.tal’.
We have clear intuitions about where to put the syllable boundaries - we know, for example,
that the segments ‘nt’ cannot belong together at the left edge of a syllable, and we must break
them up so that each segment belongs to a different syllable: ‘den’ and ‘tal’. (We say that
syllables that end in a consonant are closed.) There is no problem, however,
when the consonant sequence ‘nt’ appears at the right edge of a syllable, as in the word
incident, ‘in.ci.dent’.
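The ‘nt’ intuition can be captured by the maximal onset principle: a consonant sequence between two vowels is split so that the following syllable receives the longest sequence that is a legal onset. Below is a toy sketch of that split; the onset inventory is a tiny hypothetical subset, not a real list of English onsets.

```python
def split_cluster(cluster, legal_onsets):
    """Maximal onset: give the next syllable the longest legal onset,
    leaving the remainder as the coda of the preceding syllable."""
    for i in range(len(cluster) + 1):
        if cluster[i:] in legal_onsets:
            return cluster[:i], cluster[i:]
    return cluster, ""  # nothing legal: the whole cluster closes the first syllable

# Tiny, hypothetical subset of legal English onsets (note that 'nt' is absent).
LEGAL_ONSETS = {"", "t", "d", "n", "s", "st", "tr"}

print(split_cluster("nt", LEGAL_ONSETS))  # ('n', 't'): den.tal, not *de.ntal
print(split_cluster("st", LEGAL_ONSETS))  # ('', 'st'): 'st' stays together as an onset
```

Because ‘nt’ is not a legal onset, the split assigns ‘n’ to the coda of the first syllable and ‘t’ to the onset of the second, matching the intuition den.tal.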
Why do we have this tacit knowledge? There are two reasons: first, the phonotactic rules or
constraints of a language and second, the internal structure of the syllable itself. The basic
principles of phonotactics and syllable structure are universal among all languages, but they
are language-specific in terms of the particular rules or settings they allow. Thus, native
speakers of English, for example, can use phonotactic constraints to recognise the possible
words of their language (even if they do not know their meanings); equally, we know which
words are impossible or non-words - and we sometimes anglicise words we can’t
pronounce (like ‘Tokyo’ or ‘Thabo Mbeki’) to suit English phonotactics.

The sonority hierarchy


Why do we know that words like sprint, split, strut, plinth, shrimps, squeak are possible
words of English? Native speakers have this knowledge because, firstly, English allows
syllables with complex onsets and codas and, secondly, sound sequences within a syllable
generally obey the sonority sequencing principle. Sonority is related to the openness of the
vocal tract during the pronunciation of a sound, and this is reflected in the basic shape of the
syllable: vowels are most sonorous, and for this reason occupy the syllable peak, or nucleus;
consonant sounds are less sonorous and occupy the syllable margins. The sound sequence
vowels > approximants and glides > nasals > fricatives > stops is known as the sonority
hierarchy and describes falling sonority, going from the most sonorous to the least sonorous
sounds.
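The sonority sequencing principle lends itself to a simple check: sonority should rise to the vowel peak and fall after it. The ranks and letter-to-class mapping below are a rough illustration invented for the example (real analyses work over phonemes and finer classes); note that the classic exception, English /s/+stop onsets as in sprint, fails the strict check, which is why the text says clusters ‘generally’ obey the principle.

```python
# Rough sonority ranks; real analyses distinguish finer classes.
SONORITY = {"vowel": 5, "glide": 4, "liquid": 4, "nasal": 3, "fricative": 2, "stop": 1}

# Toy letter-to-class map covering the examples below (a phoneme map in practice).
CLASS = {
    "a": "vowel", "e": "vowel", "i": "vowel", "o": "vowel", "u": "vowel",
    "w": "glide", "l": "liquid", "r": "liquid", "m": "nasal", "n": "nasal",
    "s": "fricative", "f": "fricative", "p": "stop", "t": "stop", "k": "stop",
}

def obeys_ssp(word):
    """True if sonority rises strictly to the peak and falls strictly after it."""
    son = [SONORITY[CLASS[ch]] for ch in word]
    peak = son.index(max(son))
    rising = all(son[i] < son[i + 1] for i in range(peak))
    falling = all(son[i] > son[i + 1] for i in range(peak, len(son) - 1))
    return rising and falling

print(obeys_ssp("print"))   # True: 1 < 4 < 5 > 3 > 1
print(obeys_ssp("sprint"))  # False: the initial fricative-stop sequence falls, then rises
```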
