Hearing Aids
Hearing aids, or hearing instruments, are typically prescribed to people who have some residual hearing and physically normal, healthy ears, and they are available in a wide range of styles. The main function of a hearing aid is to make a wide range of sounds audible without making them uncomfortably loud for the user. To achieve this, the hearing aid comprises, as a minimum, a microphone that picks up surrounding sounds, an amplifier that increases the level of sounds depending on the input level, and a receiver that delivers the amplified sound to the ear. To operate, the hearing aid needs power, which it gets from a battery. Most modern hearing aids are digital, meaning that the analogue signal picked up by the microphone is converted to digital form before being amplified or otherwise processed to best meet the needs of the user. While some receivers require the digitally processed sound to be converted back into an analogue signal before it is delivered to the ear, others produce the analogue signal directly from the digitally processed sound. Digital technology makes it possible to combine many special features in rather small instruments. This is achieved through very complex signal processing schemes, which are implemented as mathematical operations.
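The minimal chain described above (microphone, level-dependent amplifier, receiver) can be illustrated for the amplify step alone. This is a toy sketch with an arbitrary fixed 20 dB gain, not a model of any real instrument, which would vary the gain with input level and run many other algorithms:

```python
import numpy as np

def process_block(samples, gain_db=20.0):
    """Apply a fixed gain to a block of digitised microphone samples.

    Real hearing aids vary the gain with input level (compression);
    this shows only the basic amplify step of the digital path.
    """
    gain = 10 ** (gain_db / 20.0)      # dB -> linear amplitude factor
    out = samples * gain
    return np.clip(out, -1.0, 1.0)     # receiver cannot exceed full scale

# A quiet 1 kHz tone sampled at 16 kHz, amplified by 20 dB:
t = np.arange(160) / 16000.0
mic = 0.01 * np.sin(2 * np.pi * 1000 * t)
ear = process_block(mic, gain_db=20.0)
```

The clip at the end stands in for the physical limit of the receiver: no amount of digital gain can push the output beyond what the receiver can reproduce.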
Styles
Hearing aids are commonly categorised according to how they are worn and their size.
www.nal.gov.au/hearing-devices_tab_hearing-aids.shtml
1/6
4/15/13
Spectacle
Although not common these days, BTE style instruments can be mounted on spectacle frames via a special adapter. The tube that connects the ear mould to the hearing instrument is in this case attached to the frame near the adapter. The obvious advantage of this combined solution is that the area behind the ear is less crowded if a BTE style hearing aid is the most suitable hearing solution. The disadvantage is that if either the glasses or the hearing instrument needs repair, the user is left without both for the duration of the repair.
Further Reading:
Dillon H. (2001). Hearing Aids. Sydney, Boomerang Press: pp 10-11, 44-45, 434-447.
Features
While the main objective of hearing aids is to increase the audibility of sounds that have become inaudible, modern hearing aids offer a variety of sophisticated features designed partly to ease listening in a wide range of environments, and partly to restore hearing abilities that may be distorted either by the hearing problem or by the presence of the hearing aid in the ear. Specific features include compression, feedback management, directional microphones, noise suppression, trainability, environment sensing, binaural linkage, and data logging.
Feedback Management
Feedback, or whistling, is familiar to many hearing aid users. It frequently occurs when covering the aided ear, or when inserting or removing the aid while it is switched on. Feedback occurs when the amplified sound reaches the microphone, creating a feedback loop. Amplified sound may escape the ear canal through vents in the aid or earmould, or through other leakage from a loosely fitted aid or mould. Unintended leakage may also occur during talking or chewing due to jaw movements. High levels of amplification increase the likelihood of feedback. In some cases, feedback may limit the amount of amplification the user can be fitted with, significantly compromising the benefit obtainable from the aid.

To maximise the amplification that can be provided before feedback, many hearing aids offer a feedback management feature. Three common approaches are gain reduction, notch filters, and phase cancellation. While phase cancellation is probably the most effective, none of the three completely eliminates feedback; rather, they allow higher gain levels to be reached before feedback becomes a problem.

Feedback management by gain reduction: The aim of this approach is to reduce gain at the frequencies where feedback occurs. Commonly, gain is reduced in the frequency band in which feedback is detected, by an amount that depends on the magnitude of the feedback signal. In some implementations, a different strategy is used to reduce the gain in response to feedback for low-level inputs.

Feedback management by notch filters: As above, the aim is to reduce gain at the frequencies where feedback occurs. In this case, the gain reduction is achieved with a sharp notch filter at the feedback frequency.

Feedback management by phase cancellation: In this approach, a signal identical to the feedback signal in frequency and amplitude, but opposite in phase, is generated. Adding the generated signal to the feedback path cancels the original feedback signal.
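The notch-filter approach can be sketched with SciPy's standard `iirnotch` design. The 3 kHz feedback frequency and the Q of 30 are illustrative assumptions, not values from any product:

```python
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 16000.0            # sample rate, Hz
feedback_freq = 3000.0  # frequency at which whistling was detected (illustrative)

# Design a sharp notch at the feedback frequency (Q controls notch width).
b, a = iirnotch(feedback_freq, Q=30.0, fs=fs)

# One second of signal containing speech-band energy plus a feedback whistle:
t = np.arange(int(fs)) / fs
speech_band = 0.5 * np.sin(2 * np.pi * 500 * t)
whistle = 0.5 * np.sin(2 * np.pi * feedback_freq * t)
cleaned = lfilter(b, a, speech_band + whistle)
```

The 500 Hz component passes essentially unchanged while the 3 kHz whistle falls inside the notch and is removed; the narrowness of the notch is what limits the damage done to surrounding speech frequencies.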
Further Reading:
1. Chung K (2004). Challenges and recent developments in hearing aids. Part II. Feedback and occlusion effect reduction strategies, laser shell manufacturing processes, and other signal processing technologies. Trends in Amplification, 8(4):126-145.
2. Dillon H (2001). Hearing Aids. Sydney, Boomerang Press, pp 107-111, 198-202.
Directional Microphones
One of the biggest problems for people with hearing loss and hearing aids is difficulty understanding speech in noisy situations. Two hearing aid technologies can help listeners hear better in noise: FM systems and directional microphones. The default microphones in hearing aids are omni-directional, meaning they are equally sensitive to sounds from all directions. Directional microphones are designed to pick up sounds coming from in front of the listener better than sounds coming from other directions. In this way, directional microphones can improve the wearer's speech understanding in noise because they reduce the level of noise coming from beside or behind the wearer. Many kinds of directionality are offered in current hearing instruments. They are known by their specific polar characteristic, which illustrates the sensitivity of the microphone to sounds arriving from different directions (360°). Common polar characteristics include cardioid (see Figure below), hyper-cardioid and super-cardioid. Directional microphones also have some disadvantages: they are more sensitive to wind noise, they may produce audible "static" noise in quiet environments, and they reduce the available volume in the hearing instrument. Therefore, there are situations in which directional microphones are preferred and others in which omni-directional microphones are preferred. To meet the demand for different directional modes in different situations, some hearing aids offer:

Switchable directionality: the user can manually switch between directional and omni-directional microphones;

Automatic directionality: based on the acoustic environment, the hearing aid switches between omni-directional and directional modes without input from the wearer; or
More recently, frequency-specific directionality has been introduced in hearing aids. In this mode, the hearing aid uses the output from the directional microphone over part of the frequency range (usually the high frequencies) and from the omni-directional microphone over another part (usually the low frequencies). While this compromise is less efficient at improving speech understanding in noise, this implementation, in which the spectral shape of a sound varies with its direction of arrival, may enhance the wearer's ability to localise sounds. Directional microphone hearing aids are most effective when the wearer: faces the speaker, in both the left-right and up-down dimensions; is within about 2 metres of the speaker (closer if the room is very reverberant); and positions him/herself so that dominant noise sources are behind or beside him/her, and not behind the speaker.
Diagram of an omni-directional microphone (blue line) and a cardioid directional microphone (dashed red line) measured in free space.
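The cardioid curve in such a diagram follows a standard textbook formula: sensitivity proportional to (1 + cos θ)/2, where θ is the direction of arrival relative to the front. A small sketch of the idealised free-space pattern, not a measurement of any particular instrument:

```python
import numpy as np

def cardioid_sensitivity(theta):
    """Sensitivity of an ideal cardioid microphone at arrival angle
    theta (radians, 0 = directly in front of the wearer)."""
    return (1.0 + np.cos(theta)) / 2.0

# Sensitivity from the front, the side, and directly behind:
front = cardioid_sensitivity(np.radians(0.0))    # full sensitivity
side = cardioid_sensitivity(np.radians(90.0))    # half sensitivity
rear = cardioid_sensitivity(np.radians(180.0))   # rear sounds rejected
```

Hyper- and super-cardioid patterns replace the equal weighting of the constant and cosine terms with other mixes, trading rear rejection against sensitivity at the sides.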
Further Reading:
1. Chung K (2004). Challenges and recent developments in hearing aids. Part I. Speech understanding in noise, microphone technologies and noise reduction algorithms. Trends in Amplification, 8(3):110-124.
2. Dillon H (2001). Hearing Aids. Sydney, Boomerang Press, pp 25-28, 188-191.
3. Kates JM (2008). Digital Hearing Aids. San Diego, Plural Publishing Inc, pp 75-113.
4. Ricketts TA (2001). Directional Hearing Aids. Trends in Amplification, 5(4):139-176.
Noise Suppression
Listening in background noise can be tiring for a hearing aid user. The overall aim of noise suppression algorithms is to increase listening comfort in noisy environments without compromising speech understanding. This is achieved by reducing gain in frequency bands where the noise level is estimated to exceed the speech level. Noise suppression algorithms consist of a signal detection component and a decision-making rule.

In the signal detection component, the temporal pattern, or modulation, of the incoming sound is analysed to determine whether noise or speech is the dominant signal in a frequency band. This is possible because speech has a slow modulation rate (measured in Hz) and large temporal fluctuations (a modulation depth, measured in dB), while most noises have either a near-constant temporal characteristic (low modulation depth) or a modulation rate greater than that of speech. A newer signal detection method analyses the sound for synchronous energy (comodulation) that is specifically produced by the opening and closing of the vocal folds during the voicing of vowels and voiced consonants.

The decision rule specifies the signal-to-noise ratio that triggers gain reduction in each frequency band, how much gain reduction occurs, and the speed with which it occurs.

Generally, noise suppression algorithms work well for continuous stationary noises, but may struggle to distinguish speech from short transient noises, such as a door slamming or clattering cutlery, or from background noise that itself consists of speech, such as babble or party noise. However, some newer devices do successfully manage the former problem and offer relief from sudden, loud sounds.
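The detection-and-decision structure described above can be sketched for a single frequency band. The modulation-depth cue is the standard one described in the text, but the 8 dB threshold and 12 dB maximum reduction below are illustrative assumptions, not values from any product:

```python
import numpy as np

def modulation_depth_db(band_signal, fs, frame_ms=20):
    """Estimate envelope fluctuation: the spread between loud and quiet
    frames, in dB. Speech fluctuates strongly; steady noise does not."""
    frame = int(fs * frame_ms / 1000)
    n = len(band_signal) // frame
    frames = band_signal[:n * frame].reshape(n, frame)
    levels = 10 * np.log10(np.mean(frames ** 2, axis=1) + 1e-12)
    return np.percentile(levels, 95) - np.percentile(levels, 5)

def decide_gain_db(depth_db, threshold_db=8.0, max_reduction_db=12.0):
    """Decision rule: low modulation depth -> treat the band as noise."""
    if depth_db >= threshold_db:
        return 0.0                 # speech-like: leave gain alone
    return -max_reduction_db       # noise-like: reduce gain in this band

fs = 16000
t = np.arange(fs) / fs
steady_noise = 0.1 * np.sin(2 * np.pi * 300 * t)  # constant-level "noise"
speech_like = steady_noise * (0.05 + np.abs(np.sin(2 * np.pi * 3 * t)))  # 3 Hz envelope
```

Applied to these two signals, the rule cuts the gain for the steady signal (near-zero modulation depth) and leaves the speech-like, strongly modulated signal untouched.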
Further Reading:
1. Bentler et al (2006). Special issue: Digital noise reduction. Trends in Amplification, 10(2):67-104.
2. Chung K (2004). Challenges and recent developments in hearing aids. Part I. Speech understanding in noise, microphone technologies and noise reduction algorithms. Trends in Amplification, 8(3):110-124.
Trainable Aids
Further Reading:
1. Zakis JA, McDermott HJ and Dillon H (2007). The design and evaluation of a hearing aid with trainable amplification parameters. Ear and Hearing, 28(6):812-830.
2. Dillon H, Zakis J, McDermott H, Keidser G, Dreschler W and Convery E (2006). The trainable hearing aid: What will it do for clients and clinicians? Hearing Journal, 59:30-36.
Compression
A sensorineural hearing loss is usually associated with a phenomenon called recruitment. Someone with recruitment can't hear soft sounds, may have difficulty hearing medium-level sounds, and yet hears loud sounds just as loudly as a normal hearer. If a hearing aid applies the same gain (volume) to all input sounds, regardless of whether they are soft, medium or loud, medium sounds may be audible and comfortable, but soft sounds may still be inaudible and loud sounds may be uncomfortably loud. To overcome this, it is necessary to compress the large range of input levels a hearing aid wearer encounters in everyday life into the smaller range of levels they find audible and comfortable (their dynamic range). The compressor in a hearing aid achieves this by automatically altering the gain (volume) of the hearing aid depending on the level of the input signal: the amount of amplification decreases as the signal level increases. Compression is therefore often referred to as an automatic volume control, which may be slow or fast acting. As the hearing instrument cannot know exactly how loudly the wearer would like to hear a given sound, many people prefer to have a manual volume control, which they can adjust themselves, in addition to the automatic volume control feature in their hearing aid.

The technical name for an automatic volume control that operates over a large range of input levels is Wide Dynamic Range Compression (WDRC). WDRC improves the audibility of soft speech sounds by applying more gain (amplification) to soft sounds, while loud sounds, which may be uncomfortable for the wearer, receive less gain. This also reduces the chance of damaging the wearer's hearing if the hearing aid volume is set higher than the recommended level.
As soft sounds are made louder, the hearing aid wearer may find that many low-level environmental noises, such as air conditioners, fridges and computers, are now audible. When hearing aid wearers are fitted with this type of compression, they must be made aware of the possibility of hearing these additional noises.

Multi-channel compression is the term used when the incoming sound is divided into multiple frequency channels, each with its own compression characteristics. This type of compression benefits people who hear some frequencies better than others, e.g. those with good low-frequency hearing who require much more amplification in the high frequencies. Multi-channel compression allows the appropriate amount of compression to be applied at each frequency.

Technically, compression is defined by a static and a dynamic characteristic. The static characteristic refers to the relationship between the input and output levels, while the dynamic characteristic refers to the time it takes the compressor to react to a change in input level. In particular, the static characteristic is described by the compression ratio, i.e. the change in input level (in dB) needed to produce a 1 dB change in output level, and the compression threshold, i.e. the input level above which the hearing aid starts compressing. The dynamic characteristic is described by the attack and release times, i.e. the times it takes the compressor to react to an increase and a decrease in input level, respectively.

Since recruitment worsens as hearing loss becomes more severe, it would seem that people with severe/profound hearing loss in particular could benefit from WDRC. Applying WDRC to this population, with the aim of compressing a large range of input levels into a very narrow dynamic range of hearing, would require fairly high compression ratios. Unfortunately, high compression ratios tend to severely distort important spectral and temporal cues of speech. Further, a recent study conducted at NAL found that hearing aid users with severe/profound hearing loss preferred very low compression ratios in the low frequencies and moderate compression ratios in the high frequencies.
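The static characteristic can be written down directly from these definitions. This is a sketch; the 30 dB gain, 45 dB threshold and 2:1 compression ratio are illustrative values, not a fitting prescription:

```python
def wdrc_output_level(input_db, gain_db=30.0, threshold_db=45.0, ratio=2.0):
    """Static input/output curve of a simple WDRC compressor.

    Below the compression threshold the aid applies linear gain;
    above it, each `ratio` dB of input raises the output by only 1 dB.
    """
    if input_db <= threshold_db:
        return input_db + gain_db
    return threshold_db + gain_db + (input_db - threshold_db) / ratio

# Soft (40 dB), medium (65 dB) and loud (90 dB) input levels:
soft, medium, loud = (wdrc_output_level(x) for x in (40.0, 65.0, 90.0))
```

With these values, a 50 dB spread of input levels (40 to 90 dB) is mapped into a 27.5 dB spread of output levels, illustrating how compression squeezes everyday sounds into a narrower dynamic range.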
Further Reading:
1. Dillon H (2001). Hearing Aids. Sydney, Boomerang Press.
2. Dillon H (1996). Compression? Yes, but for low or high frequencies, for low or high intensities, and with what response times? Ear and Hearing, 17(4):287-307.
3. Kuk FK (1996). Theoretical and practical considerations in compression hearing aids. Trends in Amplification, 1(1):5-39.
Multiple Programs
Many modern hearing aids are equipped with multiple programs that each present a different amplification characteristic with different
Further Reading:
1. Keidser G (1995). The relationship between listening conditions and alternative amplification schemes for multiple memory hearing aids. Ear and Hearing, 16(6):575-586.
2. Keidser G, Dillon H and Byrne D (1995). Candidates for multiple frequency response characteristics. Ear and Hearing, 16(6):562-574.
3. Keidser G, Dillon H and Byrne D (1996). Guidelines for fitting multiple memory hearing aids. Journal of the American Academy of Audiology, 7(6):406-418.
4. Keidser G, Limareff H, Simmons S, Gul C, Hayes Z, Sawers C, Thomas B, Holland K and Korchek K (2005). Clinical evaluation of Australian Hearing's guidelines for fitting multiple memory hearing aids. Australian & New Zealand Journal of Audiology, 27(1):51-68.
Binaural Linkage
When a pair of bilaterally fitted hearing instruments operate independently of each other, there is a potential for the wearer to receive conflicting information in each ear about where sounds are coming from. Binaural linkage means that two bilaterally fitted hearing aids are in communication with each other, so that the signal processing performed in each instrument takes into account the sound arriving at both instruments. The intention of using the combined information from the two instruments is to improve binaural hearing.

Binaural hearing allows listeners to take advantage of spatial listening. In particular, good binaural hearing enables: a) comparison of the sounds arriving at each ear to detect the interaural time and level differences used to determine the direction sounds are coming from, and b) an improved ability to focus on a target sound source. Thus, binaural hearing increases awareness of the environment and reduces mental strain when following conversation in complex listening environments. There is also an expectation that a greater signal-to-noise ratio (SNR) advantage can be achieved by combining the information arriving at the two instruments than when the hearing aids operate independently. An SNR advantage is particularly important to listeners with hearing deficits, who are typically deprived of the sensory acuity that allows people with normal hearing to deal with complex acoustic environments. The combined benefit of improved SNR and spatial hearing is likely to improve the ability of hearing-impaired listeners to deal with multi-talker listening situations.

There are commercial hearing aids available in which wireless communication takes place between bilaterally fitted instruments. Some of these utilise binaural linkage to enable simultaneous volume control and program adjustments across the two instruments, and to optimise sound classification. At least one product uses information about the level of sound arriving at each ear to synchronise adaptive gain adjustments across ears. However, there are currently no commercially available instruments that combine the sounds from both sides of the head in order to improve the SNR; development in this area is still at an early stage.

As digital hearing aid technology advances, binaural features are likely to become significantly more complex. Future strategies may include recognising and adapting to different environments, integrating artificial sounds to improve the clarity of speech (augmented reality), and directional responses producing significantly greater SNR improvements. Such strategies have the potential to allow hearing aid users to hear better than unaided normal-hearing listeners. As a result, such technology, developed for hearing aids, may become increasingly desirable to people with normal hearing as demand for assistive listening grows in increasingly noisy environments.
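The interaural time difference mentioned in (a) above can be illustrated with a simple cross-correlation between the two ear signals. This is a textbook sketch with synthetic signals, not a description of any product's binaural processing:

```python
import numpy as np

fs = 48000                 # sample rate, Hz
delay = 24                 # 0.5 ms interaural delay (illustrative): the sound
                           # reaches the left ear first, the right ear later

rng = np.random.default_rng(0)
left = rng.standard_normal(4800)                          # 0.1 s of "sound"
right = np.concatenate([np.zeros(delay), left[:-delay]])  # same sound, delayed

# Slide the left signal against the right and find the best-matching lag.
lags = np.arange(-50, 51)
corr = [np.dot(left[50 - lag : 4750 - lag], right[50:4750]) for lag in lags]
itd_samples = int(lags[np.argmax(corr)])
itd_ms = 1000.0 * itd_samples / fs
```

The lag at the correlation peak recovers the imposed delay, and it is this kind of interaural comparison, normally made by the auditory system, that linked instruments aim to preserve.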
Further Reading:
1. Markides A (1977). Binaural Hearing Aids. London, Academic Press Inc.
2. Blauert J (1999). Spatial Hearing: The Psychophysics of Human Sound Localization. Cambridge, Massachusetts, The MIT Press.
3. Kates JM (2008). Digital Hearing Aids. San Diego, Plural Publishing Inc, pp 401-439.
Data Logging
Many modern hearing aids are equipped with a "datalogger". The datalogger does not directly affect the hearing aid response, but is used to collect information about the hearing aid user's usage pattern in daily life. It is anticipated that this information may be used by the clinician to fine-tune the hearing aid to the user's specific needs. The datalogger may collect information about how much the hearing aid is used, in which acoustic environments, and on which programs (if multiple programs are a feature of the instrument). It may also be used to track adjustments made to the response by the hearing aid user. For example, the audiologist may use datalogging to review the client's usage pattern of the volume control. Currently, there are no evidence-based reviews available for this feature.
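Conceptually, a datalogger is a set of counters that the aid updates while it runs. The categories below are hypothetical examples, as the actual logged quantities are manufacturer-specific:

```python
from collections import Counter

class DataLogger:
    """Accumulate usage statistics for later review by the clinician."""

    def __init__(self):
        self.seconds_per_environment = Counter()
        self.seconds_per_program = Counter()
        self.volume_changes = 0

    def tick(self, environment, program, seconds=1):
        """Record an interval of use in the given environment and program."""
        self.seconds_per_environment[environment] += seconds
        self.seconds_per_program[program] += seconds

    def volume_adjusted(self):
        """Count a manual volume-control adjustment by the wearer."""
        self.volume_changes += 1

# One hour in quiet on program 1, half an hour of speech-in-noise on program 2:
log = DataLogger()
log.tick("quiet", "program 1", seconds=3600)
log.tick("speech in noise", "program 2", seconds=1800)
log.volume_adjusted()
```

At a follow-up appointment, the clinician could read these counters out to see, for example, that the wearer rarely uses a given program or frequently turns the volume down in noise.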