In everyday life people experience a multitude of acoustic environments: quiet conversations at home, lunch with a friend in a noisy restaurant, or asking for directions on a busy street corner. The acoustic parameters in these different environments can vary significantly and change dynamically. A hearing instrument wearer’s listening experience depends on how well their hearing instrument settings match their acoustic environment. As their sound scene changes, the response of their hearing instruments may need to be adapted to the new environment. This can be done manually by the wearer or automatically by the hearing instruments. For the hearing instruments to adapt automatically, they need to be able to correctly identify the changing environment and then make appropriate adjustments.
Unitron’s signal processing philosophy
“At Unitron, we believe that a sophisticated automatic program, capable of characterizing the listening scenario and adjusting performance accordingly, offers the advantage of ease of use and reduces the risk of potential errors, such as selecting the wrong manual program or not changing the active program at all.”
(Cornelisse, 2017)
Unitron hearing instruments have been capable of accurately classifying acoustic environments for years, and Unitron’s sound processing technology has continued to evolve to better optimize the sound in different environments. The directional microphone mode is one of the features that can have the biggest impact on performance, particularly in high complexity environments. For years, Unitron hearing instruments have engaged and adjusted their directional response according to the classified environment. With the Vivante™ platform, this ability to adapt the directional response to the wearer’s acoustic environment has been further improved with the addition of HyperFocus, which offers the greatest potential boost in signal-to-noise ratio (SNR) in the most complex environments.
To implement Unitron’s philosophy, the first step is accurately identifying the listening scenario, which can be characterized across multiple aspects of the acoustic environment, e.g., the overall level of sound, the presence or absence of speech and/or background noise, the SNR, and the location of speech relative to the listener. Cornelisse (2017) described how any listening scenario can be quantified along three key dimensions: 1) overall level, 2) type of noise, and 3) SNR.
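As a rough illustration, these three dimensions can be treated as a simple feature vector describing any listening scenario. The minimal Python sketch below shows this idea; the field names, units, and example values are illustrative assumptions, not the representation used inside Unitron hearing instruments.

```python
# Illustrative only: a listening scenario reduced to the three dimensions
# described by Cornelisse (2017). Field names, units, and values are assumed.
from dataclasses import dataclass

@dataclass
class ListeningScenario:
    overall_level_db: float  # 1) overall level of the scene (dB SPL)
    noise_type: str          # 2) type of noise, e.g. "none", "babble", "music"
    snr_db: float            # 3) estimated signal-to-noise ratio (dB)

quiet_chat = ListeningScenario(overall_level_db=55.0, noise_type="none", snr_db=20.0)
busy_restaurant = ListeningScenario(overall_level_db=75.0, noise_type="babble", snr_db=3.0)
```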
Hayes (2021) compared the performance of Unitron’s Conversational Classifier to that of normal-hearing listeners and found that Unitron’s classification system was able to accurately classify both simple and complex listening environments. In this study, 26 different acoustic environments were first classified by 20 normal-hearing listeners to establish a baseline. The classifiers used in the premium hearing instruments from Unitron and four other major manufacturers were then compared against this baseline. Unitron’s classifier was highly consistent with the normal-hearing listeners.
The three dimensions described by Cornelisse (2017) are the basis of Unitron’s Conversational Classifier. This classification system has evolved over time, and Vivante hearing instruments are now capable of classifying up to eight listening environments, including the most recent addition, conversation in loud noise, which employs the HyperFocus microphone mode:
- Conversation in quiet
- Conversation in a small group
- Conversation in a crowd
- Conversation in noise
- Conversation in loud noise
- Quiet
- Noise
- Music
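To make the classification step concrete, the toy sketch below maps the three dimensions from Cornelisse (2017), plus a speech-presence flag, onto the eight environment labels listed above. Every threshold and rule is invented purely for illustration; Unitron’s actual Conversational Classifier is proprietary and far more sophisticated.

```python
# Toy rule-based mapping from the three scenario dimensions to the eight
# Vivante environment labels. All thresholds are invented for illustration.
def classify(overall_level_db: float, noise_type: str,
             snr_db: float, speech_present: bool) -> str:
    if noise_type == "music":
        return "Music"
    if not speech_present:
        return "Quiet" if overall_level_db < 50 else "Noise"
    if noise_type == "none":
        return "Conversation in quiet"
    if overall_level_db >= 80:
        return "Conversation in loud noise"  # the environment that engages HyperFocus
    if overall_level_db >= 70:
        return "Conversation in noise"
    return ("Conversation in a small group" if snr_db > 10
            else "Conversation in a crowd")

print(classify(overall_level_db=75, noise_type="babble",
               snr_db=3, speech_present=True))  # -> Conversation in noise
```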
No matter how accurate the classification, identifying the acoustic environment alone is not enough. If a conversation in a noisy environment is identified, it is still vital to determine the location of the conversation partner in order to apply the correct directional response. Walden et al. (2004) reported that in 20% of listening situations the listener does not face the talker. Similarly, Hayes (2022) reported that in complex listening situations people are not facing the direction of speech (25% of the time) almost as often as they are (30% of the time). Hayes (2022) also reported that the percentage of time with no target talker was highly correlated with the time classified as Noise only by Unitron’s classifier.
Before discussing how Vivante hearing instruments adapt their directionality based on the classified environment and the location of speech, it is important to understand some directional microphone basics. The directional response of modern hearing instruments is created by combining the inputs of two or more microphones located at different physical positions on the hearing instrument. This combination will be referred to as the directional beamformer.
Prior to the launch of the Vivante platform, Unitron hearing instruments used a traditional beamforming mode in which the two microphones of a single hearing instrument create the directional response. The signals from the two microphones are combined, and because of the delays between them (an external delay caused by the physical distance between the microphones and an internal delay applied during signal processing), the sensitivity to inputs from different directions varies (Ricketts, 2005). Since the directional response is created using the microphones of a single hearing instrument, this is referred to as a monaural beamformer.
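The underlying principle can be sketched in a few lines: delay the rear microphone internally by the acoustic travel time between the two microphones, then subtract it from the front microphone, and sound arriving from directly behind cancels out. The sample rate, microphone spacing, and code below are illustrative assumptions for a textbook delay-and-subtract beamformer, not Unitron’s actual processing.

```python
import numpy as np

fs = 20_000        # sample rate (Hz); assumed for illustration
d = 0.012          # microphone spacing (m); typical order of magnitude for a RIC
c = 343.0          # speed of sound (m/s)
ext_delay = d / c  # external (acoustic) delay between the microphones

def delay_and_subtract(front: np.ndarray, rear: np.ndarray,
                       internal_delay_s: float, fs: int) -> np.ndarray:
    """Delay the rear microphone internally, then subtract it from the front.
    An internal delay equal to d/c gives a cardioid with a null to the rear."""
    n = int(round(internal_delay_s * fs))
    rear_delayed = np.concatenate([np.zeros(n), rear[:len(rear) - n]])
    return front - rear_delayed

# A tone arriving from directly behind reaches the rear microphone first
# and the front microphone d/c seconds later; the beamformer cancels it.
t = np.arange(0, 0.05, 1 / fs)
s = np.sin(2 * np.pi * 500 * t)
lag = int(round(ext_delay * fs))
rear_mic = s
front_mic = np.concatenate([np.zeros(lag), s[:len(s) - lag]])
out = delay_and_subtract(front_mic, rear_mic, ext_delay, fs)
print(np.max(np.abs(out)))  # ~0: sound from the back is cancelled
```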
In a binaural fitting, data exchange between the two hearing instruments is used to coordinate the directional response of each hearing instrument. This allows the directional responses of a pair of Unitron hearing instruments to work together as a binaural system.
With the launch of Vivante, Unitron now has a binaural beamforming mode called HyperFocus. This directional microphone effect is created by combining the audio from all four microphones in a pair of hearing instruments. This differs from the data exchange historically used: the full audio signal is exchanged between the two hearing instruments to create a narrower directional response than is possible with a traditional monaural beamformer.
Perceptually, this mix of the left and right hearing instrument signals leads to the impression that all sources are located in one place, to the front, and is experienced as a narrow beam with less interfering noise from the back and particularly from the sides (Derleth et al., 2021).
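Conceptually, the last step of a binaural beamformer can be pictured as mixing the two monaural beams, which is also why both ears end up hearing the same signal. The sketch below, assuming a simple equal-weight mix of the left and right beamformer outputs, illustrates only this idea and is not Unitron’s implementation.

```python
import numpy as np

def binaural_beam(left_beam: np.ndarray, right_beam: np.ndarray,
                  mix: float = 0.5) -> np.ndarray:
    """Mix the two monaural beamformer outputs into one shared signal.
    mix = 0.5 (equal weighting) is assumed here for illustration."""
    return mix * left_beam + (1.0 - mix) * right_beam

# Both instruments then present this same mixed signal, which narrows the
# beam but also reduces the natural left/right localization cues.
```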
Unitron’s system can accurately classify the wearer’s acoustic environment and can also detect the location of speech. Hayes (2022) found that on average people spend about 26% of their time in complex environments. We know whether the wearer is in a simple or complex environment; if the environment is complex, we know whether speech is present, and if speech is present, we know what direction it is coming from.
Integra OS is the sophisticated automatic system within the Vivante platform that adjusts multiple parameters within the hearing instruments in response to changes in the acoustic environment. One of the parameters it automatically adjusts is the directional response of the hearing instruments.
Low complexity environments
The goal in low complexity listening environments is to provide awareness of environmental sounds while maintaining the acoustic cues required for sound localization. When the SNR is high, it does not need to be increased by the directional system. When a lower complexity environment is detected, the microphone mode used by Unitron Vivante products is Pinna Effect 2, which was developed to recreate the directional response of the average human ear. As with human ears, the directional response differs between the left and right hearing instruments, so Pinna Effect 2 is only available for hearing instruments fit in a pair. Pinna Effect 2 was designed to compensate for the localization cues that are typically lost with hearing instruments.
High complexity environments
In a complex environment, the objective depends on the presence or absence of speech. If speech is detected, we want the directional response to increase the SNR of the target talker; if speech is not detected, we want to apply mild directionality to reduce some of the background noise while maintaining environmental awareness. In Vivante hearing instruments this is achieved automatically within Integra OS, which activates the AutoFocus 360 microphone mode.
No target talker
When no speech is detected in a complex environment, AutoFocus 360 applies a symmetrical, fixed-wide, forward-facing directional response. The beamforming pattern is intended to reduce background noise from the back while maintaining environmental awareness from the front and sides. Compared to Pinna Effect 2, this mode reduces the ambient background noise, but not as much as a fully engaged front-facing beamformer.
Talker from the side
When speech is located to either the right or left in a complex environment, an asymmetrical response is applied. On the side where speech is detected, a side-facing beamformer pattern is applied to emphasize speech from that side. On the other side, an adaptive front beamforming pattern is applied to reduce noise from that side and the back. For example, if speech is located to the right, the right hearing instrument will focus to the right and the left hearing instrument will focus to the front. The effect is increased saliency of the talker to the side, while providing overall reduction of ambient background noise.
Talker from the back
When speech is located to the back in a complex environment, both hearing instruments apply a rear-facing directional beam, which focuses to the back while maintaining some audibility for sounds from the front.
The effect is increased saliency of the talker to the back, while providing overall reduction of ambient background noise from other directions. In this case, although signals from the front are reduced, they remain audible in order to maintain a balance between focus on the talker to the back and awareness of other off-axis sounds, especially sounds from the front.
Talker from the front
When speech is located to the front in a complex environment, the directional response of Vivante hearing instruments depends on the overall level of the environment. In moderately loud environments, the directional response of both hearing instruments is a traditional beamformer with an adaptive front pattern.
If the environment is very loud, a dedicated Integra OS environment will automatically engage HyperFocus, the binaural beamformer, to provide maximal directional performance. For rechargeable products with a built-in accelerometer, the response also depends on whether or not the hearing instrument wearer is moving: HyperFocus will not engage if the hearing instruments detect that the wearer is walking.
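The mode-selection behavior described in the sections above can be summarized as a simple decision tree. The sketch below restates that logic in Python, returning a (left, right) pair of responses; the mode labels, function signature, and single walking flag are simplified assumptions for illustration, not Unitron’s implementation.

```python
# Condensed restatement of the directional mode selection described above.
def select_directional_response(complex_env: bool, speech_present: bool,
                                speech_direction: str, very_loud: bool,
                                wearer_walking: bool) -> tuple[str, str]:
    if not complex_env:
        return ("Pinna Effect 2", "Pinna Effect 2")          # low complexity
    if not speech_present:
        return ("fixed-wide forward", "fixed-wide forward")  # AutoFocus 360, no talker
    if speech_direction == "left":
        return ("side-facing", "adaptive front")             # asymmetrical response
    if speech_direction == "right":
        return ("adaptive front", "side-facing")
    if speech_direction == "back":
        return ("rear-facing", "rear-facing")
    # speech from the front
    if very_loud and not wearer_walking:
        return ("HyperFocus", "HyperFocus")                  # binaural beamformer
    return ("adaptive front", "adaptive front")

print(select_directional_response(complex_env=True, speech_present=True,
                                  speech_direction="front", very_loud=True,
                                  wearer_walking=False))  # -> HyperFocus on both sides
```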
Talker from the front in loud noise
With the addition of HyperFocus to the Vivante platform, Integra OS now has an additional microphone mode to help wearers in their most challenging environments. With all features at default strength, HyperFocus can provide an average SNR improvement of 2.8 dB across the audiometric frequencies (250–8000 Hz) compared to the Fixed Wide microphone mode, and an improvement of 1.2 dB compared to AutoFocus 360, for speech located to the front (Unitron, 2023). The SNR benefit was estimated using the signal-inversion technique described by Hagerman and Olofsson (2004).
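For readers unfamiliar with the signal-inversion technique, the idea is to run the same speech-plus-noise mixture through the device twice, the second time with the noise polarity inverted; summing and differencing the two recorded outputs then separates the processed speech from the processed noise. The sketch below uses a placeholder `process` function and synthetic signals as stand-ins, and assumes the processing behaves identically across the two passes.

```python
import numpy as np

def process(x: np.ndarray) -> np.ndarray:
    # Placeholder for a recording of the hearing instrument output;
    # a simple gain stands in for the real (possibly nonlinear) processing.
    return 0.8 * x

def estimate_snr_db(speech: np.ndarray, noise: np.ndarray) -> float:
    y1 = process(speech + noise)   # pass 1: speech plus noise
    y2 = process(speech - noise)   # pass 2: same speech, noise inverted
    speech_out = 0.5 * (y1 + y2)   # the noise cancels in the sum
    noise_out = 0.5 * (y1 - y2)    # the speech cancels in the difference
    return 10 * np.log10(np.sum(speech_out**2) / np.sum(noise_out**2))

rng = np.random.default_rng(0)
s = rng.standard_normal(48_000)        # stand-in for a speech signal
n = 0.5 * rng.standard_normal(48_000)  # stand-in for noise at half the amplitude
print(round(estimate_snr_db(s, n), 1)) # ~6.0 dB for this 2:1 amplitude ratio
```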
Since HyperFocus provides maximal directionality, why not always use it when speech is from the front? There are several reasons. Binaural beamformers use more current than traditional beamformers because of the full audio exchange between the pair of hearing instruments required to create this beamforming mode. In addition, because a binaural beamformer is created by combining the input signals from all four microphones in a pair of hearing instruments, both hearing instruments in the pair output the same signal, which impacts localization cues (Derleth et al., 2021). Finally, although a binaural beamformer provides the best improvement in SNR for a talker directly in front of the listener, it reduces awareness of off-axis sounds, which is undesirable when a conversation is not taking place.
HyperFocus is only available as part of the automatic program for Vivante hearing instruments at technology level 9; it is available in a manual program at both the 9 and 7 technology levels. Within a manual program, the strength of HyperFocus can be adjusted. At maximum strength, the SNR benefit provided by HyperFocus is 3.7 dB, an additional 0.5 dB of SNR benefit compared to the default setting (Unitron, 2023).
Unitron’s philosophy is to create a sophisticated automatic system that is capable of characterizing a listening scenario and adjusting accordingly, with the goals of increasing ease of use and reducing potential errors. Our classification system can accurately classify both simple and complex listening environments. Unitron hearing instruments can detect the presence or absence of speech and, when speech is present, detect its direction. Integra OS and AutoFocus 360 allow Vivante hearing instruments to use the environment classification and speech location to intelligently adapt their directional response to the wearer’s acoustic environment. The performance for each target location is a balance between focus to the target location, reduction of the overall level of ambient background noise, maintenance of awareness of off-axis sounds, and reduction of audible transitions as the target location changes. The addition of HyperFocus, our most aggressive binaural microphone mode, allows Integra OS to better respond in the most complex listening environments.
To access a demo that allows you to listen to and compare the different microphone modes available in Unitron Vivante hearing instruments go to
Cornelisse, L. (2017). A conceptual framework to align sound performance with the listener’s needs and preferences to achieve the highest level of satisfaction with amplification. http://dx.doi.org/10.13140/RG.2.2.10315.08486
Derleth, P., Georganti, E., Latzel, M., Courtois, G., Hofbauer, M., Raether, J., & Kuehnel, V. (2021). Binaural signal processing in hearing aids. Seminars in Hearing, 42(3), 206–223. https://doi.org/10.1055/s-0041-1735176
Hagerman, B., & Olofsson, A. (2004). A method to measure the effect of noise reduction algorithms using simultaneous speech and noise. Acta Acustica united with Acustica, 90, 356–361.
Hayes, D. (2021). Environmental classification in hearing aids. Seminars in Hearing, 42(3), 186–205. https://doi.org/10.1055/s-0041-1735175
Hayes, D. (2022). Hey! I’m over here: Log It All and the direction of speech. Unitron. https://www.unitron.com/content/dam/echo/en_us/learn/UH_FieldStudyNews_LogIt_AllDirectionOfSpeech_EN.pdf
Ricketts, T. A. (2005). Directional hearing aids: Then and now. Journal of Rehabilitation Research and Development, 42(4 Suppl 2), 133–144.
Unitron (2023). 23EQ Verification Report - Moxi V9 R RIC Directional Performance. PDL-13831 [2]. Unpublished internal company document.
Walden, B. E., Surr, R. K., Cord, M. T., & Dyrlund, O. (2004). Predicting hearing aid microphone preference in everyday listening. Journal of the American Academy of Audiology, 15(5), 365–396. https://doi.org/10.3766/jaaa.15.5.4