PRESENTED BY
DR PRAVEEN KUMAR
The cochlear implant bypasses damaged parts of the inner ear and electrically stimulates the nerve of hearing. Part of the device is surgically implanted in the skull behind the ear, and tiny wires are inserted into the cochlea.
Indications: SNHL (deaf child); CHL, when surgery is not feasible for various reasons
External part: microphone, speech processor, transmitter coil. Internal part: receiver-stimulator, electrode array.
Types
- Hearing aids: (a) non-electrical or electrical; (b) air conduction type (most common) or bone conduction type; (c) implantable hearing aids
- Cochlear implants (depending on number of electrodes and channels): Nucleus 24 Contour, Clarion C II, MED-EL Combi 40+

Advantages
- Hearing aids: i. cost effective as compared to a cochlear implant; ii. good patient compliance; iii. used in patients where surgery is not feasible; iv. fitted on an OPD basis
- Cochlear implants: i. better efficacy in postlingually deaf cases; ii. better results in children with prelingual deafness, observed in terms of speech intelligibility scores, language development rate, and expressive skills; iii. more useful for patients having profound SNHL

Disadvantages
- Hearing aids: i. may cause intolerable distortion of sound in patients with SNHL; ii. difficult to use in patients with a discharging ear or otitis externa (the bone conduction type can be used in these cases)
- Cochlear implants: i. cost factor; ii. cannot be used in psychologically imbalanced individuals; iii. involve a technically difficult surgical procedure; iv. long postoperative rehabilitation programmes; v. longer hospital stay
Complications :
- Facial nerve palsy
- Wound infection/dehiscence
- Device failure (early/late)
- CSF leak (rare)
- Postoperative vertigo
- Postoperative meningitis (rare)
- Extrusion/exposure of device (rare)
Tonotopic Stimulation
Simmons, in 1966, provided a more extensive study in which electrodes were placed through the promontory and vestibule directly into the modiolar segment of the auditory nerve. The nerve fibers representing different frequencies could be stimulated. The subject demonstrated that, in addition to being able to discern the length of signal duration, some degree of tonality could be achieved.
Multi-Channel Implants
During the late 70s, work was also being done in Australia, where Clark and colleagues were developing a multi-channel cochlear implant, later to be known as the Cochlear Nucleus Freedom. Multiple-channel devices were introduced in 1984 and enhanced spectral perception and speech recognition capabilities compared to House's single-channel device.
Anatomy
- Scala tympani
- Scala vestibuli
- Cochlear duct
- Basilar membrane
- Vestibular membrane
- Tectorial membrane
- Hair cells (outer/inner)
- Cochlear nerve fibers
Physiology of Hearing
Anatomy of Sound
The spectrum of sound is shaped by the external ear, and the sound pressure is enhanced by the middle ear. Changes in pressure move the basilar membrane, which moves the tectorial membrane, which moves the stereocilia of the hair cells. Ions flow into the hair cells. Outer hair cells vibrate and boost the basilar membrane motion. Inner hair cells release neurotransmitter that leads to action potentials in the auditory nerve fibers that contact the inner hair cells, and these action potentials are transmitted to the brain.
Action potentials
If a low-frequency sound occurs, the peak of the basilar membrane motion is toward the apex of the cochlea, and the action potentials are phase locked to the low frequency. If a high-frequency sound occurs, the peak of the basilar membrane motion is toward the base of the cochlea, and if the frequency is high enough, the action potentials will not be phase locked to the sound.
The computer contains a bank of band-pass filters that splits the incoming sound into a series of frequency bands. The intensity of the sound in each frequency band is scaled by the amplitude compressors so that it fits within the dynamic range of the auditory nerve fibers (a point to which we shall return). The output of each band-pass filter is then delivered to one electrode: low-frequency sounds stimulate apical electrodes, and therefore more apical neurons; high-frequency sounds stimulate basal electrodes, and therefore more basal neurons.
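The filter-bank front end described above can be sketched in a few lines. The channel count, band edges, and filter order below are illustrative assumptions, not the values of any particular device:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def make_filterbank(n_channels=8, fmin=200.0, fmax=7000.0, fs=16000):
    """One band-pass filter per electrode channel, with log-spaced edges
    (channel count and edges are illustrative, not any device's values)."""
    edges = np.geomspace(fmin, fmax, n_channels + 1)
    return [butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            for lo, hi in zip(edges[:-1], edges[1:])]

def band_intensities(signal, filterbank):
    """RMS level of the signal in each band; in a real processor this is
    what gets compressed and sent to the matching electrode."""
    return np.array([np.sqrt(np.mean(sosfilt(sos, signal) ** 2))
                     for sos in filterbank])

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 500 * t)   # a low-frequency tone
fb = make_filterbank(fs=fs)
levels = band_intensities(tone, fb)
# The most-stimulated "electrode" is an apical (low-index) one
print(int(levels.argmax()))
```

A high-frequency tone would instead produce its maximum in a high-index (basal) band, which is the tonotopic mapping the text describes.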
The purpose of the bank of band-pass filters, connected to electrodes in different positions in the cochlea, is to represent the amplitude spectrum of sound (rather than its waveform or phase spectrum).
ELECTRODE POSITIONING
There are several ways in which the rate-place code in an implant differs from that in a normal ear. In a normal cochlea you have about 3000 inner hair cells arrayed along the basilar membrane, each of which reports on incoming sound in terms of its position along the array, the timing of its firing pattern, and the rate at which it fires. This message is transmitted to the brain by many nerve fibers attached to each hair cell. In the implanted ear, the 12-22 electrodes only cover the first turn of the cochlea. The full range of audible frequencies therefore stimulates nerve fibers that respond to middle to high frequencies in the normal ear; the most apical nerve fibers are not stimulated. Whether this would be a problem was hard to predict. It may be that the brain expects to receive information about certain frequencies from certain places in the cochlea. Or it may be that we can learn to use the neurons that are stimulated to get information about whatever frequencies stimulate them.
In a normal cochlea, the neurons connected to each of those 3000 inner hair cells arrayed along the basilar membrane respond to a band of frequencies that is about 1/3 octave wide. In the implanted ear, there are at most 22 electrodes, so the entire audible frequency range has to be divided into 22 bands, each of which is fairly wide compared to the bandwidth of the inner hair cells in the normal ear. The number of electrodes determines the bandwidth each carries.
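The arithmetic behind "the number of electrodes determines the bandwidth each carries" is easy to check: splitting a frequency range evenly on a log scale gives each band a width of log2(fmax/fmin)/N octaves. A quick sketch, with nominal range limits:

```python
import numpy as np

def octaves_per_band(fmin, fmax, n_bands):
    # Width of each band, in octaves, when [fmin, fmax] is split into
    # n_bands equal bands on a log-frequency scale.
    return np.log2(fmax / fmin) / n_bands

# Nominal audible range, roughly 10 octaves (20 Hz to 20 kHz)
print(round(octaves_per_band(20, 20000, 22), 2))   # 22 electrodes: ~0.45 octave each
print(round(octaves_per_band(20, 20000, 4), 2))    # 4 channels: ~2.49 octaves each
```

Even with 22 electrodes, each band is wider than the roughly 1/3-octave tuning of a normal inner hair cell quoted in the text, and current spread (next paragraph) widens the effective bands further.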
Current from a single electrode spreads along the cochlea and excites many auditory nerve fibers. Electrical current spread also increases the bandwidth to which each nerve fiber responds.
The combined effect of the reduced number of frequency bands and current spread could make frequency selectivity at the level of the auditory nerve poor. Frequency selectivity of neurons stimulated by a CI may be poorer because: (1) the neurons are damaged; (2) the outer hair cells are malfunctioning; (3) the implant has a rather small number of electrodes, and they may not act independently.
Each filter + associated processing + electrode = one channel. We would say that a cochlear implant has fewer frequency channels than a normal ear, and that each channel in the implant carries information about a broader range of frequencies.
Cochlear-implant simulation

[Figure: input waveform (amplitude vs. time) and spectrograms (frequency vs. time) of the original and simulated signals. From herrick_uedamodel/script_demo1: best 6 of 16 channels, 250 Hz pulse rate, 16 kHz sampling, H/U filterbank.]
Simultaneous sentences

[Figure: conditions for separating two simultaneous sentences: same F0; different F0; different F0 and vocal tract.]
Simulation results: normal-hearing listeners could get close to perfect performance on consonant, vowel, and even sentence identification with only 4 frequency bands, or channels.
Later studies showed that normal-hearing people needed more frequency bands to understand speech in noise. This figure shows the results of another study of normal-hearing listeners (with different materials than in the Shannon et al. study). In quiet (open symbols), speech recognition performance increased as the number of bands increased, up to about 16 bands. But when the materials were presented in noise, the normal-hearing listeners did not do as well, and even with 16 frequency bands they did worse than in quiet: we need more frequency bands in noise. Implant listeners tested on the same speech materials in quiet scored at a level equivalent to 9-10 bands in the normal-hearing group; it is as if CI users have 9-10 channels available to them, even though their implants provide 22 bands. Even more striking, in noise implant listeners performed as if they had only 4 frequency channels to use.
Engineering: attempts to improve frequency selectivity
- Pulsatile processing
- Electrode configuration (monopolar, bipolar)
- Current steering
The theory is that current flow leads to channel interaction.
PULSATILE PROCESSING
Instead of simply delivering an analog current (the waveform of the sound in each frequency band) to the nerve fibers, each electrode presents a series of pulses, hence "pulsatile". The pulses are not presented exactly simultaneously; they are delivered in sequence, but very rapidly. Because the pulses are not on simultaneously, they can't add together (so-called channel interaction).
This is a more detailed account of pulsatile processing, with what is called continuous interleaved sampling (pulses are interleaved, not simultaneous). The top panel is the schematic of the device. The middle panel is the schematic of the parts of the processor: a filter bank, just as before, and then a device that takes the envelope of the sound and throws away the fine structure. The envelope waveform in each channel then gets multiplied by a series of pulses: when the waveform is big the pulse is big, and when the waveform is small the pulse is small. The actual changes in the electrical signal waveform are shown in the bottom panel. The rightmost graphs show what is actually delivered to the nerve fibers in the cochlea.
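The interleaved-sampling idea can be sketched as a toy simulation. The channel count, band edges, and pulse rate below are illustrative assumptions: band-pass filter, keep only the envelope, then sample each envelope with a pulse train staggered per channel so that no two electrodes fire at the same instant:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def cis_channels(signal, fs, n_channels=4, pulse_rate=900):
    """Toy CIS: band-pass filter, discard the fine structure by keeping
    only the envelope, then sample each envelope with a per-channel
    pulse train offset in time (interleaved, not simultaneous)."""
    edges = np.geomspace(200.0, 7000.0, n_channels + 1)  # assumed band edges
    pulses = np.zeros((n_channels, len(signal)))
    period = int(fs / pulse_rate)            # samples between pulses on one channel
    for ch, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfilt(sos, signal)))      # envelope only
        offset = ch * period // n_channels               # stagger the channels
        pulses[ch, offset::period] = env[offset::period]
    return pulses

fs = 16000
t = np.arange(fs) / fs
pulses = cis_channels(np.sin(2 * np.pi * 440 * t), fs)
# At any sample, at most one channel carries a pulse, so pulses on
# different electrodes cannot add together
print(bool((np.count_nonzero(pulses, axis=0) <= 1).all()))
```

Because the per-channel offsets are distinct within one pulse period, the channels never overlap in time, which is exactly the property the text says prevents channel interaction.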
Electrode configuration
One attempt to improve frequency selectivity is to change the way the electric currents flow. In the monopolar configuration, the most basal electrode is the ground for all of the other electrodes, which means the pattern of current flow is fairly broad. In a bipolar configuration, each electrode is paired with its own ground electrode, the one next to it in the array, which makes a much narrower pattern of current flow. A narrower pattern of current flow excites a more restricted set of neurons, and that is what we are after. In fact, people have narrower tuning curves with bipolar than with monopolar configurations.
Monopolar
Bipolar
Advanced Bionics came up with the idea that you could divide the information between two electrodes to steer the current toward different groups of neurons.
Electrode Configurations:
- Enhanced Bipolar Electrode (ENH): diagonal electrode pairs provide a wider electrode separation so that loudness growth can be achieved with bipolar stimulation.
- ENH + Electrode Positioning System (ENH+EPS): the EPS pushes the electrode array toward the modiolus, where the spiral ganglion cell bodies reside.
- High Focus Electrode + EPS (HF+EPS): longitudinally arranged plate electrodes orient the current field toward the spiral ganglion cell bodies; "dielectric partitions" are designed to reduce current spread to adjacent electrodes.
Processing strategies
Perception of Speech
SPEECH ORGANS
- The lungs and thorax: generate the airflow that passes through the larynx and vocal tract.
- Larynx and vocal folds/cords: obstruct airflow from the lungs to create turbulent noise or pulses of air.
- Vocal tract: produces the many sounds of speech by modifying the spectral distribution of energy and contributing to the generation of sound.
MANNER OF ARTICULATION
Manner of articulation describes the configuration of the vocal tract. The different manner-of-articulation categories are:
Category - Description - Example
- Vowel: little constriction of the vocal tract
- Diphthong: vowel with changing configuration
- Glide: transient sounds with fixed starting points
- Liquid: greater obstruction than vowels
- Nasal: all the air passes through the nose
- Fricative: restricting airflow to create turbulence
- Affricate: plosive followed by a fricative sound ("jay")
- Plosive: closure of the air passage, then release ("bay")
- Scripts used are phonetic in nature
- Better articulatory discipline
- Systematic manner of production
- Five or six distinct places of articulation
- Various types of flaps/taps or trills
- Fewer fricatives compared to English/European languages
Place of Articulation
Place of articulation describes the configuration of the vocal tract that distinguishes between the phonemes within a manner of articulation group.
These are generally controlled by the position and shape of the tongue, though for some sounds the teeth and lips are important articulators.
Speech Sounds
Coarse classification of speech sounds uses phonemes. A phone is the acoustic realization of a phoneme. Allophones are context-dependent phonemes.
Phoneme Hierarchy
Speech sounds (language dependent; about 50 in English)
- Vowels: iy, ih, ae, aa, ah, ao, ax, eh, er, ow, uh, uw
- Consonants:
  - Plosive: p, b, t, d, k, g
  - Nasal: m, n, ng
  - Fricative: f, v, th, dh, s, z, sh, zh, h
  - Lateral liquid: l
  - Retroflex liquid: r
  - Glide: w, y
[Figure: spectrograms of /ih/ and /s/ showing the spectral envelope and formants.]
Categorical Perception
Categorical perception is the experience of percept invariances in sensory phenomena that can be varied along a continuum. It can be inborn or can be induced by learning, and is related to how neural networks in our brains detect the features that allow us to sort things in the world into separate categories. An area in the left prefrontal cortex has been localized as the place in the brain responsible for phonetic categorical perception.
Perception of Vowels
The /a/ vowel has the greatest intensity, with the unvoiced // as the weakest consonant. Front vowels are perceived on the basis of F1 frequency and the average of F2 and F3, whereas back vowels are perceived on the basis of the average of F1 and F2, as well as F3. So is it the absolute frequency values of the formants? Or the ratio of F2 to F1? Perhaps it is the invariant cues (frequency changes that occur with coarticulation).
SPEAK

Sound enters the speech processor through the microphone and is divided into 20 frequency bands. SPEAK selects the six to ten frequency bands containing maximum speech information. Each frequency band stimulates a specific electrode along the electrode array; the electrode stimulated depends on the pitch of the sound. For example, in the word "show" the high-pitched sound (sh) causes stimulation of electrodes placed near the entrance of the cochlea, where the hearing nerve fibers respond to high-pitched sounds. The low-pitched sound (ow) stimulates electrodes further into the cochlea, where the hearing nerve fibers respond to low-pitched sounds. SPEAK's dynamic stimulation along 20 electrodes allows the listener to perceive the detailed pitch information of natural sound.

CIS

Sound enters the speech processor through the microphone and is divided into 4, 6, 8, or 12 bands, depending upon the number of channels used. Each band stimulates one specific electrode along the electrode array, sequentially. The same sites along the electrode array are stimulated for every sound, at a fast rate, to deliver the rapid timing cues of speech.
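SPEAK's "keep the bands with the most speech information" step is an n-of-m selection over the filter-bank outputs, in contrast to CIS, which stimulates every channel each cycle. A minimal sketch; the 20 band levels are made-up numbers for one analysis frame:

```python
import numpy as np

def select_maxima(band_levels, n_select=6):
    """SPEAK-style n-of-m selection: keep only the n bands with the most
    energy this analysis cycle; the other electrodes stay silent."""
    keep = np.argsort(band_levels)[-n_select:]
    out = np.zeros_like(band_levels)
    out[keep] = band_levels[keep]
    return out

# 20 made-up band levels standing in for one analysis frame
levels = np.array([0.10, 0.90, 0.20, 0.80, 0.05, 0.70, 0.30, 0.60, 0.40, 0.50,
                   0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.12, 0.22, 0.32, 0.42])
sel = select_maxima(levels)
print(np.count_nonzero(sel))   # 6 electrodes stimulated this cycle
```

Which six electrodes fire changes from frame to frame as the spectrum of the input changes, which is the "dynamic stimulation" the SPEAK description refers to.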
SpeL strategy

This is a new approach to sound processing for cochlear implants, currently under investigation in Melbourne, Australia. The scheme aims to reduce the perceptual problems related to mapping the input dynamic range to the limited electrical dynamic range of hearing, and to compensate for loudness summation effects. It derives its name from Specific Loudness, which describes the way loudness is distributed across frequencies or electrode positions. SpeL takes the novel approach of computing models of normal auditory perception and of perception with electric stimulation. These models are computed in real time, and a sound-processing platform is being developed. The results confirm that the use of the models restores loudness perception close to normal over an input dynamic range of at least 50 dB, and therefore improves speech understanding.
Anatomy of Speech
- Speech is a mix of frequencies; speech recognition is a top-down process
- Formant frequencies: frequency maxima determined by the vocal tract
- F0 is the fundamental frequency
- F1 & F2 contribute to vowel identification
- F3: l, r (lateral and retroflex glides)
- F4 & F5: higher-frequency speech sounds
- Some speech sounds are distinguished by amplitude: k, f, l, s
2. Internal components
A speech processor, which selects and arranges sounds picked up by the microphone;
A transmitter and receiver/stimulator, which receive signals from the speech processor and convert them into electric impulses; and electrodes, which collect the impulses from the stimulator and deliver them to the auditory nerve.
Amplification
- Occurs within the processor
- Amplifiers are used to increase signal levels
- The gain of the amplifier determines the amount of increase
- Gain = ratio of output signal level to input signal level
- Gain can increase or decrease the signal level
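The gain ratio above is usually quoted in decibels. A minimal sketch, using the voltage-style 20*log10 convention:

```python
import math

def gain_db(level_out, level_in):
    # Amplifier gain in decibels: 20*log10 of the output/input ratio
    # (voltage-style convention).
    return 20 * math.log10(level_out / level_in)

print(round(gain_db(2.0, 1.0), 1))   # doubling the signal: +6.0 dB
print(round(gain_db(0.5, 1.0), 1))   # gain below 1 attenuates: -6.0 dB
```

The second call illustrates the last bullet: a gain ratio below 1 decreases the signal level, which shows up as a negative dB value.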
Compression
Impaired hearing has a decreased acoustical dynamic range: 10 to 25 dB. Both linear and non-linear compression are used. The gain of the amplifier is changed so that the output-to-input ratio changes: automatic gain control (AGC), which keeps the output voltage within a certain range. A wide range of compressor types is in use.
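A minimal static-compression sketch: below an arbitrary threshold the level passes through unchanged, and above it the output grows more slowly than the input. The threshold and 3:1 ratio are illustrative, not any device's actual AGC settings:

```python
def compress_db(level_db, threshold_db=60.0, ratio=3.0):
    """Static compression curve: below the threshold the level passes
    through unchanged; above it, every `ratio` dB of extra input yields
    only 1 dB of extra output (threshold and ratio are illustrative)."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

print(compress_db(50.0))   # below threshold: 50.0
print(compress_db(90.0))   # 30 dB over threshold squeezed to 10: 70.0
```

This is how a wide acoustic input range is squeezed into the narrow 10-25 dB electrical range mentioned above: loud inputs are attenuated progressively more than soft ones.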
Filtering
- Filters select on the basis of frequency: 100 to 4000 Hz
- Three types: low pass, high pass, and band pass
- Two reasons for filtering: 1) remove unwanted information; 2) separate bands for independent processing
- Extract frequency-dependent features
- Divide the acoustic frequency spectrum into channels
- Feature extraction systems filter F0, F1, and F2
- Multichannel processing refers to multiple filtered bands
Encoding
The signal is encoded for transmission to the receiver; encoding preserves information and enables it to reach the auditory nerve. The analog signal first enters the processor. One type of device changes analog to radio frequency; another converts from analog to digital.
External speech processor components: microphone(s), controls, transmitting cable/coil, coil & magnet, controller, shoe & cable, ear hook, connecting piece, and batteries or a rechargeable battery module.

OPUS 2

Switch-free design; controlled with the FineTuner remote.
Codes are sent by a thin cable to the transmitter held to the scalp by its attraction to a magnet implanted beneath the skin.
The transmitter sends codes across the skin to a receiver/stimulator implanted in the mastoid bone. The receiver/stimulator converts the codes to electrical signals. The electrical signals are sent to the specified electrodes in the array within the cochlea to stimulate neurons.
Neurons send messages along the auditory nerve to the central auditory system in the brain where they are interpreted as sound.
Site of Stimulation
1. Extracochlear
2. Intracochlear
3. Retrocochlear (lateral recess of the fourth ventricle, over the cochlear nuclei)
Stimulus
a. Stimulus type:
- Analog (continuous)
- Digital (pulsatile)
b. Stimulus configuration
1. Bipolar: localized site of stimulation
Speech Coding
As speech is produced, the mouth, nose & pharynx modify the frequency spectrum so that peaks and formants are produced at certain frequencies. Speech processing used 3 formants:
F2 = 550 to 3500 Hz
Number of Channels
2. Multi channel
Stimulation Mode
1. Simultaneous: more than one electrode is activated at a given instant. 2. Sequential: electrodes are activated one after another in succession (CIS, SPEAK).
Electrode Design
1. Single electrode 2. Multi electrode
Audiologic Evaluation
1. Pure tone audiometry under headphones
2. Warble tone audiometry with a hearing aid in a monitored free field
3. Immittance testing
4. Speech recognition testing 5. Speech awareness testing
Medical Evaluation
1. Clinical history and initial interview
2. Preliminary examination
3. Complete medical and neurologic examination
4. Cochlear imaging using computed tomography (CT) or magnetic resonance imaging (MRI)
5. Vestibular examination (electronystagmography)
6. Pathology tests
7. Psychologic or psychiatric assessment or both
8. Vision testing
9. Assessment for anesthetic procedures
CT Findings
Cochlear aplasia.
Labyrinthine ossification in a patient with a history of meningitis and sensorineural hearing loss.
Cochlear dysplasia.
Contraindications
- Incomplete hearing loss
- Neurofibromatosis type II, mental retardation, psychosis, organic brain dysfunction, unrealistic expectations
- Active middle ear disease
- CT findings of cochlear agenesis (Michel deformity) or small IAC (CN VIII atresia); dysplasia is not necessarily a contraindication, but informed consent is a must
- History of canal-wall-down (CWD) mastoidectomy
- Labyrinthitis ossificans: follow with serial scans
- Advanced otosclerosis
Surgical Procedure
The future site of the implant receiver is marked with methylene blue using a hypodermic needle. This site is at least 4 cm posterosuperior to the EAC, leaving room for a behind-the-ear controller. Next, a postauricular incision is made and carried down to the level of the temporalis fascia superiorly and to the level of the mastoid periosteum inferiorly. Anterior and posterior supraperiosteal flaps are then developed in this plane.
Procedure
Next, an anteriorly based periosteal flap, including temporalis fascia, is raised until the spine of Henle is identified.
Procedure
Next, using a 6 mm cutting burr, a cortical mastoidectomy is drilled. It is not necessary to completely blueline the sinodural angle, and doing so may interfere with proper placement of the implant transducer.
Procedure
Using a mock-up of the transducer for sizing, a well is drilled into the outer cortex of the parietal bone to accept the transducer magnet housing. Small holes are drilled at the periphery of the well to allow stay sutures to pass through; these sutures will be used to secure the implant. The stay sutures are then passed through the holes.
Procedure
Using the incus as a depth gauge, the facial recess is then drilled out. Through the facial recess, the round window niche should be visualized. Using a 1 mm diamond burr, a cochleostomy is made just anterior to the round window niche.
Procedure
The transducer is then laid into the well and secured with the stay sutures. The electrode array is then inserted into the cochleostomy, and the accompanying guidewire is removed.
Procedure
Small pieces of harvested periosteum are packed into the cochleostomy around the electrode array, sealing the hole. Fibrin glue is then used to help secure the electrode array in place. The wound is then closed in layered fashion, and a standard mastoid dressing is applied.
Third, it may avoid the effects of auditory deprivation on the unimplanted ear
Bimodal Listening
Bimodal listeners use a cochlear implant in one ear and a conventional hearing aid in the opposite ear.
Results of studies with bimodal devices paved the way for bilateral cochlear implantation
The head shadow effect is about a 7 dB difference in the speech frequency range, but up to 20 dB at the highest frequencies.
With binaural hearing, the ear with the most favorable SNR is always available
Binaural Squelch
The auditory nervous system is wired to help in noisy situations Binaural squelch is the result of brainstem nuclei processing timing, amplitude, and spectral differences between the ears to provide a clearer separation of speech and noise signals
The effect takes advantage of the spatial separation of the signal and noise source and the differences in timing and intensity that these create at each ear
Localization
Interaural timing is important for directionality of low-frequency hearing. For high-frequency hearing, the head shadow effect is more important.
Head and pinna shadow effects, pinna filtering effects, and torso absorption contribute to spectral differences that can help determine elevation of a sound source
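The interaural timing cue discussed above can be estimated with Woodworth's spherical-head approximation, ITD = (a/c)(sin(theta) + theta), where a is the head radius, c the speed of sound, and theta the source azimuth. The head radius used here is a textbook average, not a measured value:

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference: ITD = (a/c) * (sin(theta) + theta). The head radius is a
    textbook average, not a measured value."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / c) * (math.sin(theta) + theta)

print(round(itd_seconds(90) * 1e6))   # source directly to one side: ~656 microseconds
print(round(itd_seconds(0) * 1e6))    # source straight ahead: 0
```

Sub-millisecond differences of this size are what the brainstem exploits at low frequencies; at high frequencies, where phase cues become ambiguous, the interaural level difference from the head shadow dominates instead.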
Auditory Deprivation
Work with conventional hearing aids has demonstrated that when there is hearing loss in both ears but only one ear is aided, speech recognition in the unaided ear deteriorates over time. This effect has been shown in children with moderate and severe hearing impairments (Gelfand and Silman, 1993).
Complications:
A. Intraoperative
1. The electrode array cannot be placed appropriately
2. Insertion trauma
3. Gusher
Complications (cont.):
B. Postoperative
1. Postauricular flap edema, necrosis, or separation
2. Facial paralysis
3. Transient vertigo (more likely to occur with a totally nonfunctioning vestibular system)
4. Pain, usually associated with stimulation of Jacobson's nerve, the tympanic branch of the glossopharyngeal nerve
5. Facial nerve stimulation
6. Meningitis
7. Device extrusion
Rehabilitation
Tuning or mapping of the external processor to meet individual auditory requirements begins 3-4 weeks post op.
1. Multisensory approach
2. Bimodal stimulation
3. Suprasegmental discrimination training
4. Segmental discrimination and recognition training
5. Speech tracking
6. Counseling
Mapping/Programming Defined:
Verb: the process of setting the electrical stimulation levels appropriate for the patient to hear soft and comfortably loud sounds. Noun: (MAP) the product of mapping or programming, which determines how the cochlear implant will deliver stimulation.
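A MAP's core job, placing each stimulus within the electrical dynamic range between the measured threshold (T) and comfort (C) levels of an electrode, can be sketched as a simple linear mapping. The T/C values and units below are hypothetical, not clinical data:

```python
def map_level(input_fraction, t_level, c_level):
    """Map a normalized input (0 = just audible, 1 = loudest allowed) into
    the electrical dynamic range between the threshold (T) and comfort (C)
    levels stored in the MAP for one electrode (units are hypothetical
    clinical units)."""
    return t_level + input_fraction * (c_level - t_level)

# Hypothetical T and C levels for one electrode
print(map_level(0.0, 100, 180))   # softest sound maps to the T level: 100.0
print(map_level(1.0, 100, 180))   # loudest sound maps to the C level: 180.0
```

Because T and C are measured per electrode during a programming session, the same acoustic input can produce different current levels on different electrodes, which is why the MAP must be re-verified as impedances change over time.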
Mapping/Programming Session
Bilateral simultaneous
Evaluation and programming of each individual speech processor and both speech processors together
Validation of Programming
Functional gain testing in soundfield
Responses to NBN or warble tones from 250-8000 Hz; speech perception. All testing is conducted with individual speech processors and binaurally.
Validation of Programming
Speech Perception Tests
Ling thresholds ESP GASP MLNT LNT PB-K WIPI HINT-C HINT AzBio
Sentence Stimuli
Sentence material: always administer 2 lists, presented at 60 dB SPL.
HINT sentences:
- (A/The) boy fell from (a/the) window. 4/6
- (A/The) wife helped her husband. 2/5
- Big dogs can be dangerous. 3/5
AzBio sentences:
- He got arsenic poisoning from eating canned fish. 5/8
- Visual cues are quite powerful. 3/5
Reprogramming
Check impedances at every visit
Track changes or stability over time
Telemetry relates to the ability of the electrode to deliver current to the surrounding tissue
Detection of short and open circuits
Telemetry
Results
Possible results: impedances within normal limits, short circuit, or open circuit.
Objective Measures
Electrophysiologic (NRT/NRI/ART):
- Measurement of how the nerve responds to stimulation
- Use cautiously to create MAP(s)
- Can be used to help train a child for listening games
ESRT:
- Measurement of the middle ear reflex to loud sounds
- Elicited electrically through the implant
- Requires the patient to be free of ear infections and to remain fairly still
Ling sounds: /a/ "ah", /i/ "ee", /ʃ/ "sh", /s/ "ss", /m/ "mm"
Auditory Assessment
Meaningful Auditory Integration Scale (MAIS)
Robbins, Renshaw, & Berry, 1991
PEACH
Parents' Evaluation of Aural/Oral Performance of Children
Ching & Hill, 2007
11 PEACH items (6 quiet; 5 noise). Frequency ratings (n=5) of reported behavior: Never/Seldom/Sometimes/Often/Always (0%, 25%, 50%, 75%, >75%).
PEACH
Abstract
The PEACH was developed to evaluate the effectiveness of amplification for infants and children with hearing impairment through systematic use of parents' observations. The internal consistency reliability was .88, and the test-retest correlation was .93. The PEACH can be used with infants as young as one month old and with school-aged children who have hearing loss ranging from mild to profound degree.
Intervention
A Perfect Marriage
Auditory Hierarchy
Detection: to indicate the presence/absence of sound (Alarm Clock / Wake-Up / Marching Games)
Auditory Attention: to pay attention to auditory signals, especially speech, for an extended time.
Identification: to indicate an understanding of what has been labeled or named, or to label or name something. (L to L Sounds // Recognition / Identification)
Auditory Hierarchy
Auditory Memory / Sequential Memory: to store and recall auditory stimuli of different length or number in exact order. Distance Hearing: to attend to sounds at a distance (FM issue). Localization: to localize the source of sound (Bird Call Localization).
Auditory Hierarchy
Auditory Figure-Ground: to identify a primary speaker from a background of noise. Auditory Tracking: to follow along in the text of a book as it is read aloud by someone else, or in conversation (see De Filippo & Scott, 1978).
Auditory Understanding / Auditory Comprehension: to synthesize the global meaning of spoken language and to relate it to known information.
Cued Speech
Educators and parents must safeguard language development of deaf children
Because deaf children are diverse, and because cochlear implants don't conquer every obstacle, a visual representation of spoken language is essential.
Rehabilitation
Conclusions
Cochlear implants are designed to mimic the rate-place code in the acoustic ear. The frequency channels in the CI are fewer and broader than those in the acoustic ear. The number of frequency channels available to the typical user is even smaller than the number in the implant. Nonetheless, CI users understand speech fairly well in quiet, but have much more trouble in noise.
FUTURE RESEARCH
1. Continue investigating the strengths and limitations of present signal processing strategies, including CIS-type and SPEAK-type strategies, and continue development of signal processing techniques capable of transmitting more information to the brain.
2. Develop noise reduction algorithms that will help implant patients communicate better in noisy environments.
3. Identify factors that contribute to the variability in performance among patients. Knowing these factors may help us develop signal processing techniques that are patient specific.
4. Develop pre-operative procedures that can predict how well a patient will perform with a cochlear implant.
5. Design electrode arrays capable of providing a high degree of specificity. Such electrode arrays will provide channel selectivity, which is now considered to be one of the limiting factors in performance.