
Assessing cochlear-implant users' perception of speech

Overview of speech and hearing deficits in cochlear implant users


Hearing loss and deafness contribute to various sensory,
perceptual and cognitive deficits. Hearing impaired individuals exhibit
poor frequency sensitivity and selectivity and have difficulty tracking
frequency-sweep direction. Impaired frequency perception is related to
poor pitch perception. For example, deaf cochlear-implanted children
cannot accurately recall or mimic pitch patterns (melodies) and their
vocal pitch range is considerably smaller than that of normal-hearing
(NH) children (Nakata et al., 2006). Additionally, aging degrades
acoustic temporal processing (Strouse et al., 1998; Buus and
Florentine, 1985; Frisina et al., 2001), which may account for poor
discrimination of voiced consonants in older and hearing-impaired
individuals (Gordon-Salant et al., 2006; Tremblay, Piskosz and Souza,
discrimination of voiced consonants in older and hearing-impaired
individuals (Gordon-Salant et al., 2006; Tremblay, Piskosz and Souza,
2003). These spectrotemporal deficits can impair cochlear-implant (CI)
users' ability to discriminate speakers and even speaker gender (Fu,
Chinchilla and Galvin, 2004; McDonald et al., 2003). Unfortunately,
hearing loss-induced
deficits are not limited to auditory perception.
Cochlear-implanted children demonstrate poor visual
sequencing skills (Conway et al., 2011) and impaired visual and
auditory working memory skills (Harris et al., 2013; Dawson, Busby and
McKay, 2002). Because working memory and sequencing abilities
provide a basis for phonological memory and sequencing, these
deficits likely contribute to poor speech perception and production in
hearing-impaired individuals. This is supported by research on the
effects of working memory capacity on speech intelligibility for
hearing-assisted individuals. Increased working memory capacity is
correlated with hearing aid users' ability to identify speech in noise,
especially as the amount of distortion produced by their device
increases (Arehart et al., 2013). Additionally, Harris et al. (2013) found
that verbal working memory proficiency can predict speech and language
outcomes in cochlear-implanted children. Together, research on
speech and language proficiency and working memory capacity
suggests that poor speech learning and proficiency in hearing-impaired
individuals emerge from a reduced capacity to retrieve, produce
and operate on meaningful acoustic and phonological models due to
degraded auditory information.
Hearing aids and CIs enhance the auditory speech information
available to profoundly deaf individuals by various methods of
frequency and temporal signal processing (Peterson, Pisoni and
Miyamoto, 2010; Rubinstein, 2004), but successful speech intelligibility
is highly variable across individuals and remains limited, plateauing
within the first year of use (Rouger et al., 2007). Additionally, CI users
appear unable, or do not learn, to efficiently use the enhanced speech
signal as they remain heavily dependent on visual speech cues, such
as lip reading, and perform poorly at understanding auditory-only
speech.
Various factors influence successful speech and language
outcome for CI users. Early implantation and phonological language
capacity contribute most to successful speech and language outcomes
in cochlear-implanted children and adults (Sarant et al., 2001; Lazard
et al., 2010). Limitations in success are partially due to CI users'
increased susceptibility to noise (Rubinstein, 2004) and the degraded
acoustic information provided by CIs (Stelmachowicz et al., 2004).
However, enriching the frequency information by modulating
high-frequency gain, number of channels and bandwidth does not
consistently improve hearing aid or CI users' speech intelligibility
(Friesen et al., 2001; Turner and Cummings, 1999). This suggests that
proficient speech perception depends on more than the acoustic
signal per se.

Assessing cochlear implant users' personal speech and language profile
Degraded auditory speech information for hearing-impaired and
cochlear implant (CI) users motivates enhanced visual speech-information
processing and optimized audiovisual speech integration. For example,
Rouger et al. (2007) found that at the time of initial implantation, CI
users lip read (speech-read) with an average 35% accuracy, compared
to 9% accuracy for normal-hearing (NH) subjects. In
addition to superior speech reading, hearing-impaired and CI users
demonstrate optimal audiovisual integration of spectrotemporal
speech information. In contrast, NH subjects were found to
probabilistically depend on either visual or auditory information when
exposed to degraded speech. This illustrates that CI users and NH
subjects adopt different speech perception strategies depending on
the strength of their visual speech perception.
Understanding a hearing-impaired (HI) individual's dependence
on visual speech may inform treatment strategies as HI individuals who
retain visual speech processing perform better upon cochlear
implantation compared to those who rely on semantic processing
instead (Lazard et al., 2010; Pisoni, 2000). This is because visual
speech processing exploits phonological speech representations
fundamental to auditory speech processing (Giraud and Lee, 2007).
The McGurk task described in Rouger et al. (2008) can reveal
specific and systematic interactions between visual and auditory speech
information in individuals. In this task, participants are presented with
incongruent audiovisual (McGurk) stimuli whereby the heard speech
sound does not match the seen mouth movements for that sound.
These stimuli are randomly interleaved with congruent audiovisual
speech stimuli and with audio-only and visual-only speech stimuli.
Participants are asked to repeat what they heard. Responses to
congruent and single-modality stimuli provide a comprehensive profile
of phoneme identification performance, phonemic confusability and
phonemic processes for perception of mode and place of articulation.
Responses to McGurk stimuli reveal to what extent participants' speech
percepts depend on visual versus auditory information.
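The mixture of congruent, single-modality and incongruent conditions described above can be sketched as a randomized trial schedule. The syllable inventory, repeat counts and function names below are illustrative assumptions, not the actual stimulus set or software of Rouger et al. (2008):

```python
import random

# Hypothetical syllable inventory (an assumption for illustration only).
SYLLABLES = ["aba", "ada", "aga", "ama", "ana"]

def build_trials(n_repeats=4, seed=0):
    """Build a shuffled list of congruent, single-modality and McGurk trials."""
    trials = []
    for s in SYLLABLES:
        trials += [{"audio": s, "visual": s, "cond": "congruent"}] * n_repeats
        trials += [{"audio": s, "visual": None, "cond": "audio-only"}] * n_repeats
        trials += [{"audio": None, "visual": s, "cond": "visual-only"}] * n_repeats
    # Incongruent (McGurk) trials: every ordered pair of distinct syllables.
    for a in SYLLABLES:
        for v in SYLLABLES:
            if a != v:
                trials.append({"audio": a, "visual": v, "cond": "mcgurk"})
    random.Random(seed).shuffle(trials)  # reproducible random interleaving
    return trials

trials = build_trials()
```

Each dictionary would drive one presentation; the participant's spoken response is then scored against the `audio` and `visual` fields.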
Rouger et al. applied this task to CI users and NH subjects and
found that, as expected, overall auditory and audiovisual speech
performance was poorer in CI users than in NH subjects, and that CI
users demonstrated greater phonemic confusion in audio-only conditions.
Additionally, although CI users and NH subjects rely heavily upon
auditory speech information for perceiving mode of articulation (e.g.
voicing, nasalization), CI users rely more upon visual speech
information for place of articulation than NH subjects do. For
example, when NH subjects saw a person mouthing a voiced bilabial
(/ama/) while hearing a voiced dental (/ada/), 82% perceived a
combination of the auditory and visual places of articulation
(/apta/). In contrast, over 98% of CI users perceived the visual place of
articulation (/aba/). In an extreme example, children who are born deaf
and implanted after 30 months ignore the auditory signal completely
(Schorr et al., 2005). CI users' degraded auditory and congruent
audiovisual speech perception, together with their enhanced reliance
on visual information, reflect their substitution of visual speech
information for the degraded auditory signal.
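The response patterns above (auditory capture, visual capture and combination percepts such as /apta/) can be scored mechanically by comparing places of articulation. The consonant-to-place table and function below are a minimal sketch under the assumption that stimuli are intervocalic consonants like those in the example, not the published scoring procedure:

```python
# Place of articulation for a small, assumed consonant set.
PLACE = {"b": "bilabial", "p": "bilabial", "m": "bilabial",
         "d": "dental", "t": "dental", "n": "dental"}

def places(syllable):
    """Set of articulation places in a syllable, e.g. 'apta' -> both places."""
    return {PLACE[c] for c in syllable if c in PLACE}

def classify_response(audio, visual, response):
    """Label a McGurk-trial response by which modality's place it follows."""
    a, v, r = places(audio), places(visual), places(response)
    if r >= a | v and len(a | v) > 1:
        return "combination"       # e.g. hearing /ada/, seeing /ama/ -> /apta/
    if r == v:
        return "visual capture"    # percept follows the seen articulation
    if r == a:
        return "auditory capture"  # percept follows the heard syllable
    return "other"
```

Under this scheme, a response of /apta/ to heard /ada/ plus seen /ama/ is scored as a combination (the NH pattern in the text), while /aba/ is scored as visual capture (the typical CI pattern).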
Auditory perceptual training may enhance CI users' reliance on
auditory speech information and attenuate their phonemic
confusability. Specifically, training may improve speech perception and
comprehension for individuals suffering sensorineural hearing loss by
refining spectral, temporal and phonemic models of auditory
information. Perceptual training has been shown to refine auditory
phonemic (Fenn, Nusbaum, Margoliash, 2003) and spectrotemporal
models (Sabin, Eddins, Wright, 2012) and enhance speech recognition
in NH subjects (Song et al., 2012), as well as in language-learning-impaired
and hearing-impaired individuals (Merzenich et al., 1996; Sullivan,
Thibodeau and Assmann, 2013).
Additionally, phoneme recognition can be enhanced in hearing
aid users beyond the benefit acquired by the hearing aid alone
(Stecker et al., 2006), and CI users can improve hearing-in-noise
with training (Ingvalson et al., 2013; Fu and Galvin, 2007).
Perceptual training can also reduce the amount of time needed to bind
auditory and visual information (Powers, Hillock and Wallace, 2009),
illustrating the malleability of multisensory percepts.
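Speech-in-noise training of the kind cited above is often run as an adaptive staircase that tracks the listener's threshold. The 1-up/2-down rule, step size and starting SNR below are generic laboratory conventions sketched for illustration, not parameters taken from the cited studies:

```python
def run_staircase(respond, start_snr=10.0, step=2.0, n_trials=30):
    """1-up/2-down staircase; respond(snr) -> True on a correct repetition."""
    snr, streak, track = start_snr, 0, []
    for _ in range(n_trials):
        track.append(snr)
        if respond(snr):
            streak += 1
            if streak == 2:   # two correct in a row -> harder (lower SNR)
                snr -= step
                streak = 0
        else:                 # any error -> easier (raise SNR)
            snr += step
            streak = 0
    return track              # hovers near the ~70.7%-correct threshold

# Simulated listener who is correct whenever the SNR exceeds 4 dB.
track = run_staircase(lambda snr: snr > 4.0)
```

In a real session, `respond` would present a sentence in noise at the given SNR and score the listener's repetition; the tail of the track estimates the trainee's current threshold.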
References
Bowen, C.B. Speech-Language Therapy. Worksheets: Contrasts; Minimal Pairs; Near Minimal Pairs. Retrieved 24 April, 2013 from: http://www.speech-language-therapy.com/index.php?option=com_content&view=article&id=13:contrasts&catid=9:resources&Itemid=117
Bradlow, A.R., Pisoni, D.B., Akahane-Yamada, R., Tohkura, Y. (1997). Training Japanese listeners to identify English /r/ and /l/: IV. Some effects of perceptual learning on speech production. JASA 101(4):2299-310.
Buus, S., Florentine, M. (1985). Gap detection in normal and impaired listeners: The effect of level and frequency. In Time Resolution in Auditory Systems (Michelsen, A., ed.), pp. 159-179. Springer-Verlag, New York.
Chang, E.F., Merzenich, M.M. (2003). Environmental noise retards auditory cortical development. Science 300(5618):498-502.

Conway, C.M., Pisoni, D.B., Anaya, E.M., Karpicke, J., Henning, S.C. (2011). Implicit sequence learning in deaf children with cochlear implants. Developmental Science 14(1):69-82.
Dawson, P.W., Busby, P.A., McKay, C.M. (2002). Short-term auditory memory in children using cochlear implants and its relevance to receptive language. JSLHR 45:789-801.
Eisner, F., McQueen, J.M. (2005). The specificity of perceptual learning in speech processing. Perception & Psychophysics 67:224-38.
Brattico, E., Kujala, T., Tervaniemi, M., Alku, P., Ambrosi, L., Monitillo, V. (2005). Long-term exposure to occupational noise alters the cortical organization of sound processing. Clinical Neurophysiology 116(1):190-203.
Fenn, K.M., Nusbaum, H.C., Margoliash, D. (2003). Consolidation during sleep of perceptual learning of spoken language. Nature 425:614-616.
Friesen, L.M., Shannon, R.V., Baskent, D., Wang, X. (2001). Speech recognition in noise as a function of the number of spectral channels: Comparison of acoustic hearing and cochlear implants. JASA 110(2):1150-63.
Frisina, R.D., Frisina, R.D. Jr., Snell, K.B., Burkard, R., Walton, J.P., Ison, J.R. (2001). Auditory temporal processing during aging. In Functional Neurobiology of Aging (Hof, P.R. and Mobbs, C.V., eds.). Academic Press, New York.
Giraud, A.L., Lee, H.J. (2007). Predicting cochlear implant outcome from brain organization in the deaf. Restorative Neurology and Neuroscience 25(3-4):381-90.
Goldrick, M., Larson, M. (2008). Phonotactic probability influences speech production. Cognition 107:1155-64.
Gordon-Salant, S., Yeni-Komshian, G.H., Fitzgibbons, P.J., Barrett, J. (2006). Age-related differences in identification and discrimination of temporal cues in speech segments. JASA 119(4):2455-2466.
Grant, K.W., Seitz, P.F. (1998). Measures of auditory-visual integration in nonsense syllables and sentences. JASA 104(4):2438-50.
Harris, M.S., Kronenberger, W.G., Gao, S., Hoen, H.M., Miyamoto, R.T., Pisoni, D.B. (2013). Verbal short-term memory development and spoken language outcomes in deaf children with cochlear implants. Ear & Hearing 34(2):179-192.
Harrison, R.V., Nagasawa, A., Smith, D.W., Stanton, S., Mount, R.J. (1991). Reorganization of auditory cortex after neonatal high frequency cochlear hearing loss. Hearing Research 54(1):11-19.

Ingvalson, E.M., Lee, B., Fiebig, P., Wong, P.C.M. (2013). The effects of short-term computerized speech-in-noise training on postlingually deafened adult cochlear implant recipients.
Arehart, K.H., Souza, P., Baca, R., Kates, J.M. (2013). Working memory, age, and hearing loss: Susceptibility to hearing aid distortion. Ear & Hearing 34(3):251-60.
Klinke, R., Kral, A., Heid, S., Tillein, J., Hartmann, R. (1999). Recruitment of the auditory cortex in congenitally deaf cats by long-term cochlear electrostimulation. Science 285(5434):1729-33.
Kraljic, T., Samuel, A.G. (2006). Generalization in perceptual learning for speech. Psychonomic Bulletin & Review 13(2):262-8.
Kujala, T., Shtyrov, Y., Winkler, I., Saher, M., Tervaniemi, M., Sallinen, M., Teder-Sälejärvi, W., Alho, K., Reinikainen, K., Näätänen, R. (2004). Long-term exposure to noise impairs cortical sound processing and attention control. Psychophysiology 41(6):875-881.
Lazard, D.S., Lee, H.J., Gaebler, M., Kell, C.A., Truy, E., Giraud, A.L. (2010). Phonological processing in post-lingual deafness and cochlear implant outcome. NeuroImage 49:3443-3451.
Massaro, D.W., Cohen, M.M. (1983). Phonological constraints in speech perception. Perception & Psychophysics 34:338-48.

McQueen, J.M. (1998). Segmentation of continuous speech using phonotactics. Journal of Memory and Language 39:21-46.
Merzenich, M.M., Jenkins, W.M., Johnston, P., Schreiner, C., Miller, S.L., Tallal, P. (1996). Temporal processing deficits of language-learning impaired children ameliorated by training. Science 271(5245):77-81.
Noreña, A., Eggermont, J.J. (2005). Enriched acoustic environment after noise trauma reduces hearing loss and prevents cortical map reorganization. Journal of Neuroscience 25(3):699-705.
Onishi, K.H., Chambers, K.E., Fisher, C. (2002). Learning phonotactic constraints from brief auditory experience. Cognition 83:B13-23.
Peterson, N.R., Pisoni, D.B., Miyamoto, R.T. (2010). Cochlear implants and spoken language processing abilities: Review and assessment of the literature. Restorative Neurology and Neuroscience 28(2):237-250.
Pisoni, D.B. (2000). Cognitive factors and cochlear implants: Some thoughts on perception, learning, and memory in speech perception. Ear & Hearing 21(1):70-8.
Polley, D.B., Steinberg, E.E., Merzenich, M.M. (2006). Perceptual learning directs auditory cortical map reorganization through top-down influences. Journal of Neuroscience 26:4970-4982.
Fu, Q.-J., Chinchilla, S., Galvin, J.J. (2004). The role of spectral and temporal cues in voice gender discrimination by normal-hearing listeners and cochlear implant users. JARO 5(3):253-60.
Fu, Q.-J., Galvin, J.J. III. (2007). Perceptual learning and auditory training in cochlear implant recipients. Trends in Amplification 11(3):193-205.
Rochet-Capellan, A., Richer, L., Ostry, D.J. (2011). Nonhomogeneous transfer reveals specificity in speech motor learning. Journal of Neurophysiology 107(6):1711-7.
Rouger, J., Fraysse, B., Deguine, O., Barone, P. (2008). McGurk effects in cochlear-implanted deaf subjects. Brain Research 1188:87-99.
Rouger, J., Lagleyre, S., Fraysse, B., Deneve, S., Deguine, O., Barone, P. (2007). Evidence that cochlear-implanted deaf patients are better multisensory integrators. PNAS 104(17):7295-300.
Rubinstein, J.T. (2004). How cochlear implants encode speech. Current Opinion in Otolaryngology & Head and Neck Surgery 12(5):444-448.
Sabin, A.T., Eddins, D.A., Wright, B.A. (2012). Perceptual learning evidence for tuning to spectrotemporal modulation in the human auditory system. Journal of Neuroscience 32(19):6542-9.
Samuel, A.G., Kraljic, T. (2009). Perceptual learning for speech. Attention, Perception, & Psychophysics 71(6):1207-18.

Sarant, J.Z., Blamey, P.J., Dowell, R.C., Clark, G.M., Gibson, W.P.R. (2001). Variation in speech perception scores among children with cochlear implants. Ear & Hearing 22(1):18-28.
Schorr, E.A., Fox, N.A., van Wassenhove, V., Knudsen, E.I. (2005). Auditory-visual fusion in speech perception in children with cochlear implants. PNAS 102:18748-50.
Scott, S.K., Blank, C.C., Rosen, S., Wise, R.J.S. (2000). Identification of a pathway for intelligible speech in the left temporal lobe. Brain 123(12):2400-6.
Shiller, D.M., Sato, M., Gracco, V.L., Baum, S.R. (2009). Perceptual recalibration of speech sounds following speech motor learning. JASA 125(2):1103-13.
Song, J.H., Skoe, E., Banai, K., Kraus, N. (2012). Training to improve hearing speech in noise: Biological mechanisms. Cerebral Cortex 22(5):1180-90.
Stecker, C.G., Bowman, G.A., Yund, W.E., Herron, T.J., Roup, C.M., Woods, D.L. (2006). Perceptual training improves syllable identification in new and experienced hearing aid users. JRRD 43(4):537-552.
Stelmachowicz, P.G., Pitman, A.L., Hoover, B.M., Lewis, D.A., Moeller, M.P. (2004). The importance of high-frequency audibility in the speech and language development of children with hearing loss. AOHNS 130(5):556-62.
Strouse, A., Ashmead, D.H., Ohde, R.N., Grantham, D.W. (1998). Temporal processing in the aging auditory system. JASA 104(4):2385-99.
Sullivan, J.R., Thibodeau, L.M., Assmann, P.F. (2013). Auditory training of speech recognition with interrupted and continuous noise maskers by children with hearing impairment. JASA 133(1):495-501.
Tremblay, K.L., Piskosz, M., Souza, P. (2003). Effects of age and age-related hearing loss on the neural representations of speech cues. Clinical Neurophysiology 114(7):1332-43.
Turner, C.W., Cummings, K.J. (1999). Speech audibility for listeners with high-frequency hearing loss. AJA 8:47-56.
Zhang, L.I., Bao, S., Merzenich, M.M. (2001). Persistent and specific influences of early acoustic environments on primary auditory cortex. Nature Neuroscience 4:1123-30.
