Machine-Mediated Listening (MaMeLi)
Description
MaMeLi investigates systems in which machine learning algorithms separate and enhance speech for listeners in challenging acoustic environments. The project examines how training these algorithms with room acoustic simulations of varying levels of realism affects their robustness and performance when applied to real-world data. On the user side, we study how simulated room acoustics influence speech intelligibility, listening effort, and the perceived naturalness of augmented speech. Listening tests in realistic, multisensory setups allow us to determine the required level of acoustic realism for effective machine-mediated listening in everyday situations. Our results will guide the design of future augmented hearing and speech enhancement technologies.
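The training pipeline described above pairs clean speech with simulated room acoustics of varying realism. As a minimal, hypothetical sketch of the lowest-realism end of that spectrum, the snippet below generates a stochastic room impulse response (exponentially decaying noise shaped by a target reverberation time) and convolves it with a dry signal to produce reverberant training input; the function names and parameter values are illustrative, not taken from the project.

```python
import numpy as np

def synthetic_rir(fs=16000, rt60=0.4, length_s=0.5, seed=0):
    """Simple stochastic room impulse response: exponentially decaying
    Gaussian noise, a common low-realism surrogate for measured or
    geometrically simulated RIRs. (Illustrative sketch, not MaMeLi code.)"""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * length_s)) / fs
    decay = np.exp(-6.908 * t / rt60)  # 60 dB of decay over rt60 seconds
    rir = rng.standard_normal(t.size) * decay
    return rir / np.max(np.abs(rir))

def reverberate(dry, rir):
    """Convolve dry speech with the RIR to create a reverberant version."""
    return np.convolve(dry, rir)

# Toy stand-in for a speech signal: 1 s of Gaussian noise at 16 kHz.
fs = 16000
dry = np.random.default_rng(1).standard_normal(fs)
rir = synthetic_rir(fs)
wet = reverberate(dry, rir)
```

Higher-realism conditions in the study would replace `synthetic_rir` with geometric or wave-based room simulations, keeping the same convolution step.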
People
Thomas Deppisch
Project partner
Acoustics Lab
News
Professor Johannes M. Arend from Acoustics Lab receives Lothar-Cremer Award
Professor Johannes M. Arend was honoured for his innovative and groundbreaking work in the fields of binaural technology and virtual acoustics
Visit the electrical engineering laboratories in spring 2026
You are welcome to visit the Robotics Lab, the Acoustics Lab, the Electronics-ICT laboratory, and the ePowerHub laboratory!
Postdoctoral researcher Eloi Moliner makes history as a 5-time award winner
Eloi Moliner is one of the most decorated doctoral researchers in Aalto University's history – we would like to highlight his success and contributions to the field of audio signal processing