
COCHLEAR IMPLANT

PRESENTED BY

DR PRAVEEN KUMAR

What is a Cochlear Implant?


The cochlear implant (CI) is a prosthetic replacement for the inner ear (cochlea) and is only appropriate for people who receive minimal or no benefit from a conventional hearing aid.

The cochlear implant bypasses damaged parts of the inner ear and electrically stimulates the nerve of hearing. Part of the device is surgically implanted in the skull behind the ear, and tiny wires are inserted into the cochlea.

The Fundamental Concept of Cochlear Implant

To bypass the damaged hair cells.

Hearing aid concept: amplify sounds reaching the ear

Cochlear implant concept: direct stimulation of the auditory nerve

Indications:
SNHL: the deaf child
CHL, when surgery is not feasible due to various reasons


External part: microphone, speech processor, transmitter coil
Internal part: receiver-stimulator, electrode array

Types:

Hearing aids:
(a) Non-electrical / electrical
(b) Air conduction type (most common) / bone conduction type
(c) Implantable hearing aids

Cochlear implants (classified by number of electrodes and channels), e.g.:
Nucleus 24 Contour
Clarion CII
MED-EL Combi 40+

Advantages:

Hearing aid:
i. Cost effective compared with a cochlear implant
ii. Good patient compliance
iii. Can be used in patients where surgery is not feasible
iv. Fitted on an OPD basis

Cochlear implant:
i. Better efficacy in postlingual deaf cases
ii. Better results in children with prelingual deafness, observed in terms of speech intelligibility scores, language development rate and expressive skills
iii. More useful for patients with profound SNHL

Disadvantages:

Hearing aid:
i. May cause intolerable distortion of sound in patients with SNHL
ii. Difficult to use in patients with a discharging ear or otitis externa (the bone conduction type can be used in these cases)
iii. Recurrent infections of the external auditory canal and middle ear

Cochlear implant:
i. Cost factor
ii. Cannot be used in psychologically imbalanced individuals
iii. Involves a surgical procedure (technically difficult)
iv. Long postoperative rehabilitation programmes
v. Longer hospital stay

Complications:

Facial nerve palsy
Wound infection/dehiscence
Device failure (early/late)
CSF leak (rare)
Postoperative vertigo
Postoperative meningitis (rare)
Extrusion/exposure of device (rare)

Auditory Nerve Potentials


The work of Wever and Bray (1930) demonstrated that the electrical response recorded from the vicinity of the auditory nerve of a cat was similar in frequency and amplitude to the sounds to which the ear had been exposed.

The Importance of the Cochlea


Meanwhile, the Russian investigators Gersuni and Volokhov in 1936 examined the effects of an alternating electrical stimulus on hearing. They also found that hearing could persist following the surgical removal of the tympanic membrane and ossicles, and thus hypothesized that the cochlea was the site of stimulation.

Stimulating the Auditory Nerve


In 1950, Lundberg performed one of the first recorded attempts to stimulate the auditory nerve with a sinusoidal current during a neurosurgical operation. His patient could only hear noise.

Tonotopic Stimulation
Simmons, in 1966, provided a more extensive study in which electrodes were placed through the promontory and vestibule directly into the modiolar segment of the auditory nerve. The nerve fibers representing different frequencies could be stimulated. The subject demonstrated that, in addition to being able to discern the length of signal duration, some degree of tonality could be achieved.

The House 3M Single-Electrode Implant


In 1972, a speech processor was developed to interface with the single-electrode implant, and it was the first to be commercially marketed as the House/3M cochlear implant. More than 1,000 of these devices were implanted between 1972 and the mid-1980s. In 1980, the age criterion for use of this device was lowered from 18 to 2 years, and several hundred children were subsequently implanted.

Multi-Channel Implants
During the late 1970s, work was also being done in Australia, where Clark and colleagues were developing a multi-channel cochlear implant, later to be known as the Cochlear Nucleus Freedom. Multiple-channel devices were introduced in 1984 and enhanced the spectral perception and speech recognition capabilities compared with House's single-channel device.


Anatomy

Scala tympani
Scala vestibuli
Cochlear duct
Basilar membrane
Vestibular membrane
Tectorial membrane
Hair cells (outer/inner)
Cochlear nerve fibers

Physiology of Hearing


Anatomy of Sound

The spectrum of sound is shaped by the external ear, and the sound pressure is enhanced by the middle ear. Changes in pressure move the basilar membrane, which moves the tectorial membrane, which moves the stereocilia of the hair cells. Ions flow into the hair cells. Outer hair cells vibrate and boost the basilar membrane motion. Inner hair cells release neurotransmitter that triggers action potentials in the auditory nerve fibers that contact them, and these action potentials are transmitted to the brain.

Action potentials

If a low-frequency sound occurs, the peak of the basilar membrane motion is toward the apex of the cochlea, and the action potentials are phase locked to the low frequency. If a high-frequency sound occurs, the peak of the basilar membrane motion is toward the base of the cochlea, and if the frequency is high enough, the action potentials will not be phase locked to the sound.

[Figure: phase locking of action potentials to a low-frequency vs. a high-frequency tone]

The computer contains a bank of band-pass filters that splits the incoming sound into a series of frequency bands. The intensity of the sound in each frequency band is scaled by the amplitude compressors so that it fits within the dynamic range of the auditory nerve fibers (a point to which we shall return). Then the output of each band-pass filter is delivered to one electrode. Low-frequency sounds stimulate apical electrodes, and therefore more apical neurons; high-frequency sounds stimulate basal electrodes, and therefore more basal neurons.
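That chain can be sketched in a few lines of Python. This is a minimal illustration, not any manufacturer's algorithm: the band edges, the logarithmic compression constants and the T/C output levels are assumed values chosen for the example.

```python
# Illustrative CI front end: band-pass filter bank -> per-band envelope ->
# amplitude compression -> one output per electrode. All constants assumed.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def ci_front_end(x, fs, band_edges_hz, t_level=0.1, c_level=1.0):
    """Return one compressed envelope per band (first band = most apical)."""
    outputs = []
    for lo, hi in band_edges_hz:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfilt(sos, x)))   # per-band amplitude envelope
        env = env / (env.max() + 1e-12)          # normalise to [0, 1]
        # Log compression squeezes the wide acoustic range into the narrow
        # electrical range between T (threshold) and C (comfort) levels.
        comp = t_level + (c_level - t_level) * np.log1p(99 * env) / np.log1p(99)
        outputs.append(comp)
    return outputs

fs = 16000
t = np.arange(fs) / fs                           # 1 s test signal
x = np.sin(2 * np.pi * 300 * t) + 0.3 * np.sin(2 * np.pi * 2500 * t)
edges = [(100, 400), (400, 1000), (1000, 2500), (2500, 6000)]
envelopes = ci_front_end(x, fs, edges)           # envelopes[0] -> apical electrode
```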

The purpose of the bank of band-pass filters, connected to electrodes at different positions in the cochlea, is to represent the amplitude spectrum of sound (rather than its full waveform or its phase spectrum).

ELECTRODE POSITIONING
There are several ways in which the rate-place code in an implant differs from that in a normal ear. In a normal cochlea you have about 3,000 inner hair cells arrayed along the basilar membrane, each of which reports on incoming sound in terms of its position along the array, the timing of its firing pattern, and the rate at which it fires. This message is transmitted to the brain by many nerve fibers attached to each hair cell. In the implanted ear, the 12-22 electrodes only cover the first turn of the cochlea, so the full range of audible frequencies stimulates nerve fibers that, in the normal ear, respond to middle to high frequencies. The most apical nerve fibers are not stimulated. Whether this would be a problem was hard to predict. It may be that the brain expects to receive information about certain frequencies from certain places in the cochlea. Or it may be that we can learn to use the neurons that are stimulated to get information about whatever frequencies stimulate them.

In a normal cochlea, the neurons connected to each of those 3,000 inner hair cells respond to a band of frequencies about 1/3 octave wide. In the implanted ear there are at most 22 electrodes, so the entire audible frequency range has to be divided into 22 bands, each of which is fairly wide compared with the bandwidth of an inner hair cell channel in the normal ear. The number of electrodes determines the bandwidth each electrode carries.
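To make that concrete, here is a small sketch that divides an assumed full audible range (20 Hz to 20 kHz) into 22 logarithmically spaced bands; the range and the spacing rule are illustrative, not any device's actual frequency map.

```python
# Illustrative electrode band allocation: 22 log-spaced bands over an
# assumed 20 Hz - 20 kHz audible range.
import numpy as np

n_electrodes = 22
edges = np.geomspace(20.0, 20000.0, n_electrodes + 1)  # 23 band edges
widths_octaves = np.log2(edges[1:] / edges[:-1])

# ~0.45 octaves per band, broader than the ~1/3 octave attributed above to
# a single inner-hair-cell channel; fewer electrodes widen each band further.
print(f"octaves per band: {widths_octaves[0]:.2f}")
```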

Current from a single electrode spreads along the cochlea and excites many auditory nerve fibers.

Electrical current spread also increases the bandwidth to which each nerve fiber responds

The combined effect of the reduced number of frequency bands and current spread could be to make frequency selectivity at the level of the auditory nerve poor. Frequency selectivity of neurons stimulated by a CI may be poorer because: (1) the neurons are damaged; (2) the outer hair cells are malfunctioning; (3) the implant has a rather small number of electrodes, and they may not act independently.

[Figure: tuning curve (frequency in kHz) of an auditory nerve fiber stimulated by a cochlear implant]

Each filter + other devices + electrode = one channel. We would say that the cochlear implant has fewer frequency channels than a normal ear, and that each channel in the implant carries information about a broader range of frequencies.

Simulations with normal hearing listeners


It is possible to manipulate sounds so that the number of frequency channels available to a normal-hearing listener is reduced. You can't just filter the sound: the normal ear will still analyze whatever passes the filter with its full frequency resolution. Instead, speech is filtered into 1 to 4 frequency bands. Then the overall amplitude envelope in each band is used to modulate a band of noise. So all the listener knows is that, within each broad frequency band, the sound is going up and down in amplitude in a certain way; they lose all the other frequency information, just like a cochlear implant listener would. The demo plays the same sentence.
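The manipulation described here is essentially a noise vocoder. The sketch below is in the spirit of such simulations but is an assumed implementation: the band edges, filter orders and the 160 Hz envelope-smoothing cutoff are illustrative choices, not the original study's exact parameters.

```python
# Illustrative noise vocoder: split speech into a few bands, extract each
# band's envelope, and use it to modulate band-limited noise.
import numpy as np
from scipy.signal import butter, sosfilt

def noise_vocode(x, fs, n_bands=4, f_lo=100.0, f_hi=6000.0):
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    smooth = butter(2, 160.0, btype="lowpass", fs=fs, output="sos")
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = sosfilt(smooth, np.abs(sosfilt(band, x)))  # envelope only
        noise = rng.standard_normal(len(x))
        out += env * sosfilt(band, noise)   # noise confined to the same band
    return out
```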

Cochlear-implant simulation

[Figure: waveform and spectrogram of the original sound compared with the waveform and spectrogram of the simulated (vocoded) sound. From herrick_uedamodel/script_demo1: best 6 of 16 channels, 250 Hz pulse rate, 16 kHz sampling, H/U filterbank]

Shannon Implant Demo

[Audio demo: pairs of simultaneous sentences with the same F0, different F0, and different F0 and vocal tract length, presented both unprocessed and through a Shannon 4-channel implant simulation]
Simulation results: normal-hearing listeners could get close to perfect performance on consonant, vowel and even sentence identification with only 4 frequency bands, or channels.

Later studies showed that normal-hearing people needed more frequency bands to understand speech in noise. Another study (with different materials than in the Shannon et al. study) showed that in quiet (open symbols), speech recognition performance increased as the number of bands increased, up to about 16 bands. But when the materials were presented in noise, the normal-hearing listeners did not do as well: even with 16 frequency bands, they did worse than in quiet. So we need more frequency bands in noise. Implant listeners tested on the same speech materials in quiet got about 10% correct, equivalent to 9-10 bands in the normal-hearing group; it is as if CI users have 9-10 channels available to them, even though their implants provide them with 22 bands. Even more striking, implant listeners got only 6-7% correct in noise; it is as if they only have 4 frequency channels to use.

Engineering: attempts to improve frequency selectivity

Pulsatile processing
Electrode configuration (monopolar, bipolar)
Current steering

The theory is that current flow leads to channel interactions: we present different frequencies through different electrodes, but the electrodes all stimulate the same neurons. CI manufacturers have worked on ways to make the stimulation patterns more frequency specific.

PULSATILE PROCESSING
Instead of simply delivering an analog current (the waveform of the sound in each frequency band) to the nerve fibers, each electrode presents a series of pulses; hence "pulsatile". But the pulses are not presented exactly simultaneously: they are delivered in sequence, very rapidly. Because the pulses are never on at the same time, they cannot add together (so-called channel interaction).

This is a more detailed account of pulsatile processing with what is called continuous interleaved sampling (pulses are interleaved, not simultaneous). The top panel is the schematic of the device. The middle panel is the schematic of the parts of the processor: a filter bank, just like before, and then a device that takes the envelope of the sound and throws away the fine structure. The envelope waveform in each channel then gets multiplied by a series of pulses: when the waveform is big, the pulse is big; when the waveform is small, the pulse is small. The actual changes in the electrical signal waveform are shown in the bottom panel. The rightmost graphs show what is actually delivered to the nerve fibers in the cochlea.

Pulsatile Processing: Continuous Interleaved Sampling (CIS)
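A toy version of the interleaving step, assuming per-channel envelopes have already been extracted (for instance by a filter bank like the earlier sketch); the 900 pulses-per-second rate is an illustrative value.

```python
# Illustrative CIS interleaving: each channel's envelope amplitude-modulates
# a pulse train, and the trains are staggered in time so pulses on different
# electrodes interleave rather than coincide.
import numpy as np

def cis_pulse_trains(envelopes, fs, pulse_rate=900):
    n_ch, n = len(envelopes), len(envelopes[0])
    period = int(fs / pulse_rate)      # samples between pulses on one channel
    offset = max(1, period // n_ch)    # time stagger between channels
    trains = np.zeros((n_ch, n))
    for ch, env in enumerate(envelopes):
        idx = np.arange(ch * offset, n, period)  # this channel's pulse times
        trains[ch, idx] = env[idx]               # pulse height follows envelope
    return trains
```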

Electrode configuration
One attempt to improve frequency selectivity is to change the way the electric currents flow. In the monopolar configuration, the ground electrode is the most basal electrode, and it is the ground for all of the other electrodes; the resulting pattern of current flow is fairly broad. In a bipolar configuration, each electrode is paired with its own ground electrode (the one next to it in the array), which makes a much narrower pattern of current flow. A narrower pattern of current flow excites a more restricted set of neurons, and that is what we are after. In fact, people have narrower tuning curves with bipolar than with monopolar configurations.
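The monopolar/bipolar contrast can be illustrated with a toy current-spread model: excitation is assumed to decay exponentially with distance along the cochlea, and a bipolar pair is modeled as the difference between the active and ground electrodes' fields. The exponential form and the 3 mm space constant are assumptions for the sketch, not measured values.

```python
# Toy current-spread model: bipolar = difference of two monopolar fields,
# giving a narrower excitation pattern.
import numpy as np

def monopolar(x_mm, electrode_mm, decay_mm=3.0):
    return np.exp(-np.abs(x_mm - electrode_mm) / decay_mm)

def bipolar(x_mm, active_mm, ground_mm, decay_mm=3.0):
    return np.abs(monopolar(x_mm, active_mm, decay_mm)
                  - monopolar(x_mm, ground_mm, decay_mm))

x = np.linspace(0.0, 30.0, 301)        # position along the array, mm
wide = monopolar(x, 15.0)              # broad monopolar pattern
narrow = bipolar(x, 15.0, 16.0)        # ground on the adjacent electrode
```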

[Figure: current flow patterns in monopolar vs. bipolar configurations]

Advanced Bionics came up with the idea that you could divide the current between two electrodes to steer it toward different groups of neurons.

Idea: steer current to places between electrodes
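A sketch of that idea: weight the current between two adjacent electrodes and the peak of a toy excitation pattern moves between them. A Gaussian spread wider than the electrode spacing is assumed so the summed field stays single-peaked; all numbers are illustrative.

```python
# Toy current steering: alpha splits the current between electrodes e1, e2.
import numpy as np

def steered_field(x_mm, e1_mm, e2_mm, alpha, sigma_mm=1.5):
    """alpha=0 -> all current on e1; alpha=1 -> all on e2."""
    f1 = np.exp(-((x_mm - e1_mm) ** 2) / (2 * sigma_mm ** 2))
    f2 = np.exp(-((x_mm - e2_mm) ** 2) / (2 * sigma_mm ** 2))
    return (1 - alpha) * f1 + alpha * f2

x = np.linspace(10.0, 20.0, 1001)
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    peak = x[np.argmax(steered_field(x, 15.0, 16.0, alpha))]
    print(f"alpha={alpha:.2f} -> excitation peak near {peak:.2f} mm")
```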

Electrode Configurations:

Enhanced Bipolar Electrode (ENH): diagonal electrode pairs provide a wider electrode separation so that loudness growth can be achieved with bipolar stimulation.

ENH + Electrode Positioning System (ENH+EPS): the EPS pushes the electrode array towards the modiolus, where the spiral ganglion cell bodies reside.

High Focus Electrode + EPS (HF+EPS): longitudinally arranged plate electrodes orient the current field toward the spiral ganglion cell bodies; "dielectric partitions" are designed to reduce current spread to adjacent electrodes.

Processing strategies

Perception of Speech

Introduction to Speech Processing

The Speech Signal


Speech communication is the transfer of information via speech, either between persons or between humans and machines. Language is one of the most important human capabilities, and this makes speech an ideal form of communication between humans and machines.

SPEECH ORGANS
The Lungs and Thorax: generate the airflow that passes through the larynx and vocal tract.
The Larynx and Vocal Folds (Cords): obstruct airflow from the lungs to create turbulent noise or pulses of air.
The Vocal Tract: produces the many sounds of speech by modifying the spectral distribution of energy and by contributing to the generation of sound.

The Vocal Folds


Control the fundamental frequency (F0) of a speaker's voice by controlling the rate of vocal fold vibration when air passes between the folds.

Sounds produced with vocal fold vibration are called voiced.


Sounds without vocal fold vibration are called unvoiced. Turbulence may also be created using the vocal folds for the production of sounds like /h/ and for whispered sounds.

MANNER OF ARTICULATION

Manner of articulation describes the configuration of the vocal tract. The different manner-of-articulation categories are:

CATEGORY    DESCRIPTION                                    EXAMPLE
Vowel       Little constriction of the vocal tract         "bat"
Diphthong   Vowel with changing configuration              "bay"
Glide       Transient sounds with fixed starting points    "way"
Liquid      Greater obstruction than vowels                "ray"
Nasal       All the air passes through the nose            "may"
Fricative   Restricting airflow to create turbulence       "say"
Affricate   Plosive followed by fricative sound            "jay"
Plosive     Closure of air passage then release            "bay"

Indian Language properties

Scripts used are phonetic in nature
Better articulatory discipline
Systematic manner of production
Five or six distinct places of articulation
Various types of flaps/taps or trills
Fewer fricatives compared with English/European languages

Presence of retroflex consonants


A significant amount of vocabulary in Sanskrit with Dravidian or Austroasiatic origin gives indications of mutual borrowing and counter influences.

Place of Articulation
Place of articulation describes the configuration of the vocal tract that distinguishes between the phonemes within a manner of articulation group.
These are generally controlled by the position and shape of the tongue, though for some sounds the teeth and lips are important articulators.

Speech Sounds
Coarse classification is by phonemes. A phone is the acoustic realization of a phoneme. Allophones are context-dependent realizations of a phoneme.

Phoneme Hierarchy
Speech sounds (language dependent; about 50 phonemes in English):

Vowels: iy, ih, ae, aa, ah, ao, ax, eh, er, ow, uh, uw

Diphthongs: ay, ey, oy, aw

Consonants:
Plosive: p, b, t, d, k, g
Nasal: m, n, ng
Fricative: f, v, th, dh, s, z, sh, zh, h
Lateral liquid: l
Retroflex liquid: r
Glide: w, y

Speech Waveform Characteristics


Loudness
Voiced/unvoiced
Pitch (fundamental frequency)
Spectral envelope (formants)

Speech Waveform Characteristics Cont.


[Figure: waveforms of voiced speech (/ih/) vs. unvoiced speech (/s/)]

Short-Time Speech Analysis


Segments (or frames, or vectors) are typically 20 ms long.
Over such short segments, speech characteristics are approximately constant, which allows for relatively simple modeling.
Often overlapping segments are extracted, as in the sketch below.
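A minimal framing routine, assuming the common textbook choice of 20 ms frames with a 10 ms hop:

```python
# Slice a signal into overlapping short-time analysis frames.
import numpy as np

def frame_signal(x, fs, frame_ms=20.0, hop_ms=10.0):
    frame_len = int(fs * frame_ms / 1000)        # e.g. 320 samples at 16 kHz
    hop = int(fs * hop_ms / 1000)                # 50% overlap
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop:i * hop + frame_len] for i in range(n_frames)])

x = np.random.randn(16000)                       # 1 s of noise at 16 kHz
frames = frame_signal(x, 16000)                  # shape (99, 320)
```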

Categorical Perception
Experience of percept invariances in sensory phenomena that can be varied along a continuum.
Can be inborn or can be induced by learning.
Related to how neural networks in our brains detect the features that allow us to sort the things in the world into separate categories.
An area in the left prefrontal cortex has been identified as the place in the brain responsible for phonetic categorical perception.

Perception of Vowels
The /a/ vowel has the greatest intensity; the unvoiced // is the weakest consonant. Front vowels are perceived on the basis of the F1 frequency and the average of F2 and F3, whereas back vowels are perceived on the basis of the average of F1 and F2, as well as F3. So is it the absolute frequency values of the formants? Or the ratio of F2 to F1? Perhaps it is the invariant cues (frequency changes that occur with coarticulation).

[Diagram: front vowels cued by F1 and the F2/F3 average; back vowels cued by the F1/F2 average and F3]

CI Speech Coding Strategies


ACE (Advanced Combination Encoder): unique to Cochlear's Nucleus 24 CI system; optimizes detailed pitch and timing information of sound.
SPEAK (Spectral Peak): increases the richness of important pitch information by stimulating electrodes across the entire electrode array.
MPEAK: multipeak.
CIS (Continuous Interleaved Sampling): this high-rate strategy uses a fixed set of electrodes and emphasizes the detailed timing information of speech.

ACE (Advanced Combination Encoder) Strategy


Sound enters the speech processor through the microphone and is divided into a maximum of 22 frequency bands. Up to 20 narrow-band filters divide sound into corresponding frequency (pitch) ranges. Each frequency band stimulates a specific electrode along the electrode array; the electrode stimulated depends on the pitch of the sound. For example, in the word "show", the high-pitch sound (sh) causes stimulation of electrodes placed near the entrance of the cochlea, where hearing nerve fibers respond to high-pitch sounds. The low-pitch sound (ow) stimulates electrodes further into the cochlea, where hearing nerve fibers respond to low-pitch sounds. ACE varies the rate of stimulation of the electrodes, with a total maximum stimulation rate of 14,400 pulses per second.
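The core of this kind of "n-of-m" channel selection (also used by SPEAK below) is easy to sketch; choosing 8 maxima out of 22 channels here is illustrative, since the actual numbers are set per patient and per device.

```python
# Illustrative n-of-m maxima selection: per analysis frame, keep only the n
# filterbank channels with the largest envelope amplitudes.
import numpy as np

def select_maxima(band_envelopes, n=8):
    """band_envelopes: length-m array for one frame.
    Returns (sorted channel indices to stimulate, their amplitudes)."""
    idx = np.sort(np.argsort(band_envelopes)[-n:])   # n largest of m channels
    return idx, band_envelopes[idx]

frame = np.abs(np.random.randn(22))                  # envelopes of 22 bands
channels, amps = select_maxima(frame, n=8)
```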

SPEAK

Sound enters the speech processor through the microphone and is divided into 20 frequency bands. SPEAK selects the six to ten frequency bands containing maximum speech information. Each frequency band stimulates a specific electrode along the electrode array; the electrode stimulated depends on the pitch of the sound. For example, in the word "show", the high-pitch sound (sh) causes stimulation of electrodes placed near the entrance of the cochlea, where the hearing nerve fibers respond to high-pitch sounds. The low-pitch sound (ow) stimulates electrodes further into the cochlea, where the hearing nerve fibers respond to low-pitch sounds. SPEAK's dynamic stimulation along 20 electrodes allows the user to perceive the detailed pitch information of natural sound.

CIS
Sound enters the speech processor through the microphone. The sound is divided into 4, 6, 8 or 12 bands depending upon the number of channels used. Each band stimulates one specific electrode along the electrode array, sequentially. The same sites along the electrode are stimulated for every sound at a fast rate to deliver the rapid timing cues of speech.

SpeL strategy

This is a new approach to sound processing for cochlear implants, currently under investigation in Melbourne, Australia. The scheme aims to reduce the perceptual problems related to mapping the input dynamic range onto the limited electrical dynamic range of hearing, and to compensate for loudness summation effects. It derives its name from Specific Loudness, which describes the way loudness is distributed across frequencies or electrode positions. SpeL takes the novel approach of computing models of normal auditory perception and of perception with electric stimulation. These models are computed in real time, and a sound processing platform is being developed. The results confirm that the use of the models restores loudness perception close to normal over an input dynamic range of at least 50 dB, and therefore improves speech understanding.

Sensorineural Hearing Loss


Death of hair cells vs. ganglion cells
Otte et al. estimated that we need 10,000 ganglion cells, with 3,000 apically, to have good speech discrimination
Apical ganglion cells tend to survive better (? acoustic trauma)
Central nervous system plasticity

Anatomy of Speech
Mix of frequencies
Speech recognition is a top-down process
Formant frequencies: frequency maxima shaped by the vocal tract
F0 is the fundamental frequency
F1 & F2 contribute to vowel identification
F3: /l/, /r/ (lateral and retroflex glides)
F4 & F5: higher-frequency speech sounds
Some speech sounds are cued by amplitude: k, f, l, s

Structure of Cochlear Implant


1. External components

2. Internal components

Components of Cochlear Implant

Four Basic Parts of a Cochlear Implant


A microphone, which picks up sound from the environment;

A speech processor, which selects and arranges sounds picked up by the microphone;


A transmitter and receiver/stimulator, which receive signals from the speech processor and convert them into electric impulses; and an electrode array, which collects the impulses from the stimulator and delivers them to the auditory nerve.

Amplification
Occurs within the processor
Amplifiers are used to increase signal levels
The gain of the amplifier determines the amount of increase
Gain = ratio of output signal level to input signal level
Gain can increase or decrease the signal level

Compression
Impaired hearing has a decreased acoustical dynamic range: 10 to 25 dB
Linear and non-linear compression
The gain of the amplifier is changed so that the output-to-input ratio changes: automatic gain control
Automatic gain control keeps the output voltage in a certain range
A wide range of compressor types is in use
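A toy version of such a compressor, mapping an assumed ~60 dB acoustic input range, linearly in dB (so logarithmically in sound pressure), onto illustrative threshold (T) and comfort (C) output levels:

```python
# Illustrative input-to-output compression with clipping at T and C.
import numpy as np

def compress_dynamic_range(level_db_spl, in_min=30.0, in_max=90.0,
                           t_level=100.0, c_level=200.0):
    """Map an input level in dB SPL onto output units between T and C."""
    frac = np.clip((level_db_spl - in_min) / (in_max - in_min), 0.0, 1.0)
    return t_level + frac * (c_level - t_level)

for spl in (30, 50, 70, 90):
    print(f"{spl} dB SPL -> {compress_dynamic_range(spl):.0f} output units")
```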

Filtering
Filtering is on the basis of frequency: 100 to 4000 Hz
Three types: low-pass, high-pass, and band-pass
Two reasons for filtering: (1) remove unwanted information; (2) separate bands for independent processing
Extract frequency-dependent features
Divide the acoustic frequency spectrum into channels
Feature extraction systems filter F0, F1, and F2
Multichannel processing refers to multiple filtered bands
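The three filter types are one-liners with scipy's Butterworth designs; the cutoff frequencies below are illustrative.

```python
# Illustrative low-pass, high-pass and band-pass designs.
from scipy.signal import butter

fs = 16000
low_pass = butter(4, 4000, btype="lowpass", fs=fs, output="sos")
high_pass = butter(4, 100, btype="highpass", fs=fs, output="sos")
band_pass = butter(4, [300, 1200], btype="bandpass", fs=fs, output="sos")
# A feature-extraction system might center band-pass filters on the expected
# F0, F1 and F2 regions; a multichannel processor uses a bank of adjacent
# band-pass filters to split the spectrum into channels.
```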

Encoding
The signal is encoded for transmission to the receiver
Encoding preserves information and enables it to reach the auditory nerve
The analog signal first enters the processor
One encoding type changes analog to radio frequency
Another converts from analog to digital

Types of Cochlear Implants


Single vs. Multiple channels
Audio example of how a cochlear implant sounds with varying number of channels

Monopolar vs. Bipolar

Speech processing strategies:

Spectral peak (Nucleus)
Continuous interleaved sampling (Med-El, Nucleus, Clarion)
Advanced combination encoder (Nucleus)
Simultaneous analog strategy (Clarion)

Nucleus Freedom Body Worn Sound Processor


Sound Processing Module
Microphone(s)

Transmitting Cable/Coil
Coil & Magnet
Controller
Controller Shoe & Cable
Batteries or Rechargeable Battery Module

Nucleus Freedom Standard BTE Sound Processor


Sound Processing Module
Microphone(s)
Transmitting Coil/Cable
Coil & Magnet
Controller
Batteries or Rechargeable Battery Module

Nucleus Esprit 3G Sound Processor


Sound Processor

Microphone(s)
Transmitting Cable/Coil (missing in slide) Coil & Magnet (missing in slide)

Controls
Battery module

Nucleus Sprint Body Worn Sound Processor


Sound Processing Module & Controller
Microphone(s)
Short Transmitting Coil
Coil & Magnet
Long Transmitting Coil
Batteries or Rechargeable Battery Module

Advanced Bionics Harmony BTE Sound Processor and Components


Sound Processing Module & Controller
Microphone & Ear Hook
Transmitting Cable/Coil
Coil & Magnet
Rechargeable Battery Module

Advanced Bionics Harmony BTE Sound Processor and Components


There are two other models of BTE processors:
Auria
Platinum/CII BTE

MED-EL Ear-level Speech Processors


Tempo+/OPUS 1
Program/volume switches Sensitivity dial

OPUS 2
Switch free design FineTuner

MED-EL Tempo+ BTE Sound Processor


Sound Processing Module & Controller
Microphone(s)
Transmitting Coil & Magnet

Transmitting Cable
Ear Hook

Battery Module

MED-EL Opus BTE Sound Processor


Sound Processing Module & Microphone(s)
Coil & Magnet
Transmitting Cable
Ear Hook

Battery Module
Connecting piece

Anatomy of a Cochlear Implant

How Does a CI Work?


Sound is received by a microphone that rests over the ear like a behind-the-ear hearing aid. Sound is sent from the microphone to the signal processor by a thin cable. The signal processor translates the sound into electrical codes.

Codes are sent by a thin cable to the transmitter held to the scalp by its attraction to a magnet implanted beneath the skin.

The transmitter sends codes across the skin to a receiver/stimulator implanted in the mastoid bone. The receiver/stimulator converts the codes to electrical signals. Electrical signals are sent to the specified electrodes in the array within the cochlea to stimulate neurons.

Neurons send messages along the auditory nerve to the central auditory system in the brain where they are interpreted as sound.

Neural Responses to Sound


1. Temporal coding: provides information about timing cues (rhythm and intonation).

2. Place coding: relies on the tonotopic organization of the neural fibers.

3. Provides information about quality (the timbre of a speech signal, from sharp to dull).

Site of Stimulation
1. Extracochlear

2. Intracochlear
3. Retrocochlear (lateral recess of the fourth ventricle, over the cochlear nuclei)

Stimulus
a. Stimulus type:

- Analog (continuous)
- Digital (pulsatile)

b. Stimulus configuration
1. Bipolar: localized site of stimulation

2. Monopolar: stimulates a large population of neurons

Speech Coding
As speech is produced, the mouth, nose and pharynx modify the frequency spectrum so that peaks and formants are produced at certain frequencies. Speech processing uses 3 formants:

F0 = 100 to 200 Hz F1 = 200 to 1200 Hz

F2 = 550 to 3500 Hz

Number of Channels

1. Single channel: no place coding

2. Multi channel

Stimulation Mode
1. Simultaneous: more than one electrode is activated at the same instant (e.g. simultaneous analog strategy)
2. Sequential: electrodes are activated one after another in rapid succession (e.g. CIS, SPEAK)

Electrode Design
1. Single electrode 2. Multi electrode

Indication for Cochlear Implant


Adults
18 years old and older (no upper age limit)
Bilateral severe-to-profound sensorineural hearing loss (70 dB hearing loss or greater, with little or no benefit from hearing aids for 6 months)
Psychologically suitable
No anatomic contraindications
Not medically contraindicated

Indications for Cochlear Implantation -Children


12 months or older
Bilateral severe-to-profound sensorineural hearing loss with PTA of 90 dB or greater in the better ear
No appreciable benefit with hearing aids (parent survey when <5 years old, or 30% or less on sentence recognition when >5 years old)
Must be able to tolerate wearing hearing aids and show some aided ability
Enrolled in an aural/oral education program
No medical or anatomic contraindications
Motivated parents

Factors Affecting Patient Selection


a. Onset of deafness (congenital or adventitious)
b. Year of deafness
c. Length of sensory deprivation (i.e., no hearing aids)
d. Socioeconomic factors
e. Educational level
f. Individual ability to use minimal cues
g. General health

Factors Affecting Pt. (cont.)


h. Personality
i. Willingness to participate in rehabilitation program
j. Language skills k. Appropriate expectations

l. Desire to communicate in a hearing society


m. Psychological stability n. Cochlear patency

Audiologic Evaluation
1. Pure tone audiometry under headphones
2. Warble tone audiometry with a hearing aid in a monitored free field

3. Immittance testing
4. Speech recognition testing
5. Speech awareness testing

Audiologic Evaluation (cont.)


6. Environmental sounds (closed and open set)

7. Speech reading (lip reading) ability


8. Electrical response audiometry
9. Auditory discrimination
10. Transtympanic electrical stimulation (promontory or round window test)

Medical Evaluation
1. Clinical history and initial interview
2. Preliminary examination
3. Complete medical and neurologic examination
4. Cochlear imaging using computed tomography (CT) or magnetic resonance imaging (MRI)
5. Vestibular examination (electronystagmography)
6. Pathology tests
7. Psychologic or psychiatric assessment or both
8. Vision testing
9. Assessment for anesthetic procedures

CT Findings

ABSENCE OF COCHLEAR NERVE

Cochlear aplasia.

Facial nerve dehiscence.

Labyrinthine ossification in a patient with a history of meningitis and sensorineural hearing loss.


Cochlear dysplasia.

Contraindications
Incomplete hearing loss
Neurofibromatosis type II, mental retardation, psychosis, organic brain dysfunction, unrealistic expectations
Active middle ear disease
CT findings of cochlear agenesis (Michel deformity) or small IAC (CN VIII atresia)
Dysplasia is not necessarily a contraindication, but informed consent is a must
History of canal-wall-down (CWD) mastoidectomy
Labyrinthitis ossificans: follow with serial scans
Advanced otosclerosis

Surgical Procedure
The future site of the implant receiver is marked with methylene blue using a hypodermic needle. This site is at least 4 cm posterosuperior to the EAC, leaving room for a behind-the-ear controller. Next, a postauricular incision is made and carried down to the level of the temporalis fascia superiorly and to the level of the mastoid periosteum inferiorly. Anterior and posterior supraperiosteal flaps are then developed in this plane.

Procedure
Next, an anteriorly based periosteal flap, including temporalis fascia, is raised until the spine of Henle is identified.

Next, a superior subperiosteal pocket is undermined to accept the implant transducer


Using a mock-up of the transducer, the size of the subperiosteal superior pocket is checked

Procedure
Next, using a 6 mm cutting burr, a cortical mastoidectomy is drilled. It is not necessary to completely blue-line the sinodural angle, and doing so may interfere with proper placement of the implant transducer.

Procedure
Using a mock-up of the transducer for sizing, a well is drilled into the outer cortex of the parietal bone to accept the transducer magnet housing

Small holes are drilled at the periphery of the well to allow stay sutures to pass through. These sutures will be used to secure down the implant.
Stay sutures are then passed through the holes

Procedure
Using the incus as a depth landmark, the facial recess is then drilled out. Through the facial recess, the round window niche should be visualized.

Using a 1 mm diamond burr, a cochleostomy is made just anterior to the round window niche

Procedure
The transducer is then laid into the well and secured with the stay sutures. The electrode array is then inserted into the cochleostomy, and the accompanying guidewire is removed.

Procedure
Small pieces of harvested periosteum are packed into the cochleostomy around the electrode array, sealing the hole. Fibrin glue is then used to help secure the electrode array in place. The wound is then closed in layered fashion, and a standard mastoid dressing is applied.

PAEDIATRIC B/L COCHLEAR IMPLANTS


The potential benefits of bilateral implants are threefold. First, bilateral implantation ensures that the ear with the best postoperative performance is implanted. Second, it may allow preservation of some of the benefits of binaural hearing: the head shadow effect, binaural summation and redundancy, binaural squelch, and sound localization.

Third, it may avoid the effects of auditory deprivation on the unimplanted ear

Bimodal Listening
Bimodal listeners use a cochlear implant in one ear and a conventional hearing aid in the opposite ear.
Results of studies with bimodal devices paved the way for bilateral cochlear implantation

Head Shadow Effect


When speech and noise come from different directions, there is always a more favorable signal-to-noise ratio (SNR) at one ear

The head shadow effect amounts to about a 7 dB difference in the speech frequency range, but up to 20 dB at the highest frequencies.
With binaural hearing, the ear with the most favorable SNR is always available

Binaural Summation and Redundancy


Sounds that are presented to 2 ears simultaneously are perceived as louder due to summation
Thresholds are known to improve by 3 dB with binaural listening, resulting in doubling of perceptual loudness and improved sensitivity to fine differences in intensity

Binaural Squelch
The auditory nervous system is wired to help in noisy situations Binaural squelch is the result of brainstem nuclei processing timing, amplitude, and spectral differences between the ears to provide a clearer separation of speech and noise signals

The effect takes advantage of the spatial separation of the signal and noise source and the differences in timing and intensity that these create at each ear

Localization
Interaural timing is important for directionality of lowfrequency hearing For high frequency hearing, the head shadow effect is more important

Head and pinna shadow effects, pinna filtering effects, and torso absorption contribute to spectral differences that can help determine elevation of a sound source

Auditory Deprivation
Work with conventional hearing aids has demonstrated that if only 1 ear is aided, when there is hearing loss in both ears, speech recognition in the unaided ear deteriorates over time This effect has been shown in children with moderate and severe hearing impairments (Gelfand and Silman 1993)

Complications:
A. Intraoperative
1. Electrode array cannot be placed appropriately
2. Insertion trauma
3. Gusher

Complications (cont.):
B. Postoperative
1. Postauricular flap edema, necrosis or separation
2. Facial paralysis
3. Transient vertigo (more likely to occur with a totally nonfunctioning vestibular system)
4. Pain, usually associated with stimulation of Jacobson's nerve, the tympanic branch of the glossopharyngeal nerve
5. Facial nerve stimulation
6. Meningitis
7. Device extrusion

Rehabilitation
Tuning or mapping of the external processor to meet individual auditory requirements begins 3-4 weeks post-op:
1. Multisensory approach
2. Bimodal stimulation
3. Suprasegmental discrimination training
4. Segmental discrimination and recognition training
5. Speech tracking
6. Counseling

Post-Surgery Audiology Appointments


Initial Activation

Mapping/Programming sessions:
1 week post-activation
1 month post-activation
Every 3 months for the first year
After the first year: every 6 months for children, annually for adults

Mapping/Programming Defined:
Verb: the process of setting the electrical stimulation levels appropriate for the patient to hear soft and comfortably loud sounds.
Noun: (map) the product of mapping or programming, which determines how the cochlear implant will deliver stimulation.
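As a data structure, a map might look something like the sketch below. The field names, units and default values are hypothetical, not any manufacturer's actual format.

```python
# Hypothetical representation of a clinical map: per-electrode T (threshold)
# and C (comfort) stimulation levels plus a few global settings.
from dataclasses import dataclass
from typing import List

@dataclass
class ElectrodeMap:
    t_levels: List[int]            # softest audible level, per electrode
    c_levels: List[int]            # comfortably loud level, per electrode
    pulse_rate_hz: int = 900       # illustrative default
    strategy: str = "ACE"

    def in_range(self, electrode: int, level: int) -> bool:
        """Is a requested level between T and C for this electrode?"""
        return self.t_levels[electrode] <= level <= self.c_levels[electrode]

m = ElectrodeMap(t_levels=[100] * 22, c_levels=[180] * 22)
print(m.in_range(0, 150))          # True: between T=100 and C=180
```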

Mapping/Programming Session
Bilateral simultaneous: evaluation and programming of each individual speech processor and both speech processors together

Bilateral sequential: focus on the new ear

Goals for CI Programming


1. To provide audibility for the range of speech sounds 2. Comfort for all sounds (speech, environmental, music, etc.) 3. Ultimately to provide a means for communication and spoken language development 4. Balance loudness between ears

Validation of Programming
Functional gain testing in soundfield:
Responses to NBN or warble tones from 250-8000 Hz
Speech perception
All testing conducted with individual speech processors and binaurally

Validation of Programming
Speech Perception Tests
Ling thresholds ESP GASP MLNT LNT PB-K WIPI HINT-C HINT AzBio

Sentence Stimuli
Sentence material: always administer 2 lists, presented at 60 dB SPL.

HINT Sentences:
(A/The) boy fell from (a/the) window. 4/6
(A/The) wife helped her husband. 2/5
Big dogs can be dangerous. 3/5

AzBio Sentences:
He got arsenic poisoning from eating canned fish. 5/8
Visual cues are quite powerful. 3/5

Reprogramming
Check impedances at every visit
Track changes or stability over time

Telemetry relates to the ability of the electrode to deliver current to the surrounding tissue
Detection of short and open circuits

Telemetry results:
Impedances within normal limits
Short circuit
Open circuit

Objective Measures
Electrophysiologic (NRT/NRI/ART):
Measurement of how the nerve responds to stimulation
Use cautiously to create MAP(s)
Can be used to help train a child for listening games

ESRT:
Measurement of the middle ear reflex to loud sounds
Elicited electrically through the implant
Requires the patient to be free of ear infections and to remain fairly still

Ling Six (Seven) Sound Test: ah (/a/), oo (/u/), ee (/i/), sh, s, m


(Ling & Ling, 1978)

Consider NO SOUND as the 7th Sound


(Rosemarie Drous, Formerly of the Helen Beebe Speech & Hearing Center)

Ling Six Sound Test


[Chart: distance (1, 3, 6, 9, 12) at which each Ling sound (/u/ oo, /a/ ah, /i/ ee, sh, /s/ ss, /m/ mm) is detected/recognized/identified, tested with both CIs, L-only, and R-only]

Early Speech Perception (ESP)


(Moog & Geers, 1990)


Auditory Assessment
Meaningful Auditory Integration Scale (MAIS)
Robbins, Renshaw, & Berry, 1991

Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS)


Zimmerman-Phillips, Osberger & Robbins, 1997

Infant-Toddler Meaningful Auditory Integration Scale

Available from Advanced Bionics

10 Questions
0-4 Rating Scale (0=Never; 1=Rarely; 2=Occasionally; 3=Frequently; 4=Always)

Meaningful Auditory Integration Scale


(Available from Advanced Bionics Corporation)
Parent Interview
10 Questions (1a: younger than age 5 years / 1b: older than age 5 years)
0-4 Rating Scale (0=Never; 1=Rarely; 2=Occasionally; 3=Frequently; 4=Always)

PEACH
Parents Evaluation of Aural/Oral Performance of Children
Ching & Hill, 2007

11 PEACH items (6 Quiet; 5 Noise)
Frequency ratings (n=5) of reported behavior: Never/Seldom/Sometimes/Often/Always (0%, 25%, 50%, 75%, >75%)

PEACH
Abstract
The PEACH was developed to evaluate the effectiveness of amplification for infants and children with hearing impairment by a systematic use of parents observations. The internal consistency reliability was .88, and the test-retest correlation was .93. The PEACH can be used with infants as young as one month old and with school-aged children who have hearing loss ranging from mild to profound degree.

Test of Auditory Comprehension


Ages 4-17 years
Normative data based on age ranges and better-ear PTA
Stimuli on audiotape
Screening task to start
Hierarchical
Ceiling: 2 consecutive subtest failures

Intervention

Integration of Cochlear Implants &/or Hearing Aids and Auditory Intervention

A Perfect Marriage

Levels of Auditory Hierarchy


Auditory/Sequential Memory
Auditory Closure
Auditory Analysis
Auditory Blending
Auditory Figure-Ground
Auditory Tracking
Auditory Processing
Auditory Understanding/Comprehension
(adapted from Caleffe-Schenck)

Auditory Hierarchy
Detection: to indicate the presence/absence of sound. (Alarm Clock / Wake-Up / Marching Games)

Auditory Attention: to pay attention to auditory signals, especially speech, for an extended time.

Identification: to indicate an understanding of what has been labeled or named, or to label or name something. (L to L Sounds / Recognition / Identification)

Auditory Hierarchy
Auditory Memory / Sequential Memory: to store and recall auditory stimuli of different length or number in exact order.
Distance Hearing: to attend to sounds at a distance. (FM issue)
Localization: to localize the source of a sound. (Bird Call Localization)

Auditory Hierarchy
Auditory Figure-Ground: to identify a primary speaker against a background of noise.
Auditory Tracking: to follow along in the text of a book as it is read aloud by someone else, or in conversation. (see De Filippo & Scott, 1978)

Auditory Understanding / Auditory Comprehension: to synthesize the global meaning of spoken language and to relate it to known information.

Cochlear implants have not solved:


Noise, distance and reverberation
Speed, depth and complexity of language
Hardware problems: a CI will malfunction
Deafness: a child is deaf when the CI is off
The diversity of our deaf population

Cued Speech
Educators and parents must safeguard language development of deaf children
Because deaf children are diverse, and because cochlear implants don't conquer every obstacle, a visual representation of spoken language is essential.

Visual component in oral programs


Even oral and A/V programs use vision to clarify what is heard:
Auditory-Verbal practice mentions the "auditory sandwich"
Auditory-Oral programs use Mouth Time and Visible Speech
Gallaudet's programs use Visual Phonics
Some oral and auditory-verbal programs use Rhythmic Phonetics

Rehabilitation


Conclusions
Cochlear implants are designed to mimic the rate-place code in the acoustic ear. The frequency channels in the CI are fewer and broader than those in the acoustic ear. The number of frequency channels available to the typical user is even smaller than the number in the implant. Nonetheless, CI users understand speech fairly well in quiet, but have much more trouble in noise.

FUTURE RESEARCH
1. Continue investigating the strengths and limitations of present signal processing strategies, including CIS-type and SPEAK-type strategies, and develop signal processing techniques capable of transmitting more information to the brain.
2. Develop noise reduction algorithms that will help implant patients communicate better in noisy environments.
3. Identify factors that contribute to the variability in performance among patients. Knowing these factors may help us develop signal processing techniques that are patient specific.
4. Develop preoperative procedures that can predict how well a patient will perform with a cochlear implant.
5. Design electrode arrays capable of providing a high degree of specificity. Such electrode arrays would provide channel selectivity, which is now considered one of the limiting factors in performance.
