In Focus February-March 2026
The 53rd Erlangen Colloquium hosted by WSA
What's in focus: Introduction
The 53rd Erlangen Colloquium: Inspiring Scientific Exchange
On February 5th and 6th, 125 scientists, researchers, and developers gathered at Erlangen’s historic Redoutensaal for the 53rd Erlangen Colloquium, hosted by WSA. The event featured the latest advances in audiology, signal processing, and hearing aid development.
The program comprised 15 high-quality talks and five posters. Two renowned international keynote speakers highlighted the event:
- Erin Picou (Vanderbilt University, TN, USA) offered an inspiring perspective on how hearing aid processing influences users’ emotion perception and mood.
- Jesper Rindom Jensen (Aalborg University, Denmark) provided powerful insights into combining classical signal processing with deep neural networks to advance AI-driven hearing aid applications.
Read more about these two inspiring talks in the sections below.
Explore: Keynote presentation
Hearing More Than Words: Why Access to Sound Means Access to Emotion
Keynote talk by Erin N. Picou, Vanderbilt University Medical Center, Nashville, TN, USA
For most people, the auditory world is rich with emotional meaning. We experience joy through music, calm through nature sounds, amusement in shared laughter, and social connection through subtle changes in vocal emotion. These affective experiences support communication, regulate mood, and contribute to overall well-being. Access to clear and rich auditory input is essential for fully experiencing this emotional landscape.
Emerging research demonstrates that adults with hearing loss experience a muted perception of emotion. Compared to similarly aged peers with typical hearing, they show reduced ability to recognize emotion in others’ voices and diminished emotional responses to non-speech sounds such as music and environmental stimuli. These muted perceptions are associated with downstream consequences, including increased social isolation, poorer mental health, and reduced quality of life.
Critically, many individuals with hearing loss are unaware of these differences in emotion perception. While misunderstanding speech is often corrected through direct feedback, misinterpreted emotional cues and missed opportunities for joy are less obvious and therefore less likely to be addressed in clinical care. As a result, traditional clinical measures focused primarily on speech understanding may underestimate the broader impact of hearing loss on emotional experiences. Accordingly, it is incumbent upon scientists and clinicians to understand the factors that drive these affective phenomena and to advance effective hearing technologies that improve access to sound.
The presentation synthesized recent programmatic and collaborative research examining how access to high-quality sound influences vocal emotion recognition, emotional responses to non-speech sounds, and mood as an affective state. Focusing on adults with hearing loss, it also explored the implications of these findings for hearing aid design and clinical practice. Ultimately, the talk argued that improving access to sound is not only about improving communication, but also about restoring access to feeling and, in doing so, enhancing human connection and overall well-being.
Explore: Keynote presentation
From Speech Presence to Low Rank Filters: Integrated AI Solutions for Modern Audio Processing
Keynote talk by Jesper Rindom Jensen, Audio Analysis Lab, Department of Electronic Systems, Aalborg University, Denmark
Advances in artificial intelligence are creating new opportunities to improve how sound is processed and enhanced in everyday environments. In this talk, the speaker described how AI can complement established audio signal processing methods to support clearer communication, more responsive hearing devices, and audio technologies that better understand and adapt to their surroundings.
The talk highlighted recent progress in using AI to estimate when someone is speaking and what the surrounding noise environment looks like, key pieces of information that many noise reduction and enhancement systems rely on. The speaker also introduced new approaches to simplifying models of acoustic environments, enabling devices to react more quickly to changing conditions without degrading sound quality.
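To give a flavor of the low-rank idea named in the talk title, here is a minimal sketch, not the method presented in the keynote: it compresses a hypothetical multichannel acoustic filter matrix with a truncated singular value decomposition. The matrix H, its dimensions, and the chosen rank r are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a filter matrix relating, say, 8 microphones
# to a 256-tap processing window (values are random placeholders).
H = rng.standard_normal((8, 256))

# Truncated SVD keeps only the r strongest components, giving a
# low-rank surrogate that is cheaper to apply and to update.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
r = 3  # illustrative rank; in practice chosen from the singular-value decay
H_lowrank = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

rel_err = np.linalg.norm(H - H_lowrank) / np.linalg.norm(H)
print(f"rank-{r} approximation, relative error: {rel_err:.3f}")
```

The appeal of such a factorization is that storing and updating a rank-r surrogate scales with r rather than with the full filter dimensions, which is one generic way a device can track changing acoustics more quickly.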
In addition, the presentation covered emerging work on predicting speech, an approach that can give hearing devices and other audio systems more time to respond under tight latency constraints. Finally, the talk outlined how these statistical and AI-based methods can support higher-level analyses, such as assessing conversational engagement or alignment between talkers, using both traditional speech activity cues and more advanced AI-derived representations.
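As a simple illustration of signal prediction in general, and again only a sketch under toy assumptions rather than the speaker's approach, the snippet below fits a short linear predictor by least squares and forecasts one sample ahead. The synthetic signal, the model order, and the helper fit_linear_predictor are all hypothetical.

```python
import numpy as np

def fit_linear_predictor(x: np.ndarray, order: int) -> np.ndarray:
    """Fit coefficients a so that x[n] is approximated by a weighted
    sum of the previous `order` samples (ordinary least squares)."""
    # Regression matrix: each row holds the most recent `order` samples,
    # newest first, for one prediction target.
    X = np.stack([x[n - order:n][::-1] for n in range(order, len(x))])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

# Toy "speech-like" signal: a decaying sinusoid plus noise.
rng = np.random.default_rng(1)
n = np.arange(800)
x = np.sin(0.12 * n) * np.exp(-n / 600) + 0.05 * rng.standard_normal(len(n))

a = fit_linear_predictor(x, order=12)
# One-step-ahead forecast from the most recent 12 samples.
x_next = a @ x[-12:][::-1]
print(f"predicted next sample: {x_next:.4f}")
```

Forecasting even a few samples ahead buys a device processing headroom: work on the predicted signal can begin before the real samples arrive, which is what makes prediction attractive under tight latency budgets.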
Overall, the talk offered a forward-looking perspective on integrated AI-driven audio processing, combining data-driven models and classical signal processing principles to enable smarter, faster, and more supportive audio technologies.