Personalized Auditory Reality
In this work, we introduce Personalized Auditory Realities, a new research field that investigates methods for manipulating one's acoustic surroundings. Within such an auditory reality, users can freely modify their acoustic scene by enhancing relevant sounds, suppressing irrelevant ones, or adding new ones. The perceived acoustic environment follows the paradigm of augmented reality, in which real sounds are combined with added sound sources. While previous research in video and audio analysis has addressed related topics, no method or system exists today that realizes perceptually convincing Personalized Auditory Realities. To achieve this ambitious goal, we combine and extend interdisciplinary research spanning acoustics, digital signal processing, and data science (machine learning), all in close relation to auditory perception and quality. A system that integrates these technologies at high quality requires methods to: 1) decompose real-world acoustic scenes, 2) represent the scenes as audio objects that can be manipulated, and 3) recompose the scenes with added audio objects. Our research addresses both headphone-based and loudspeaker-based sound reproduction. In this work, we describe the state of the art, the system requirements, and first results of a system for headphone auralization.
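The decompose/represent/recompose requirements can be illustrated with a toy object-based pipeline. Everything here is a hypothetical sketch, not the authors' system: the names `AudioObject`, `decompose`, and `recompose` are assumptions, the real source-separation step is stubbed out (the scene is assumed to arrive pre-separated), and signals are plain mono sample lists.

```python
from dataclasses import dataclass


@dataclass
class AudioObject:
    """A single manipulable sound source in the scene (illustrative only)."""
    name: str
    samples: list   # mono PCM samples
    gain: float = 1.0


def decompose(scene_objects):
    # Placeholder for requirement 1: in a real system this would be a
    # source-separation step; here the scene is assumed pre-separated.
    return list(scene_objects)


def recompose(objects, length):
    # Requirement 3: mix all objects back into one signal,
    # applying each object's gain (requirement 2: manipulation).
    mix = [0.0] * length
    for obj in objects:
        for i, s in enumerate(obj.samples[:length]):
            mix[i] += obj.gain * s
    return mix


# Usage: enhance the relevant source, suppress the irrelevant one,
# and add a virtual source, as described in the abstract.
speech = AudioObject("speech", [0.25, 0.5, 0.25, 0.0])
noise = AudioObject("noise", [0.125, 0.125, 0.125, 0.125])
objects = decompose([speech, noise])
objects[0].gain = 2.0   # enhance relevant sound
objects[1].gain = 0.0   # suppress irrelevant sound
objects.append(AudioObject("chime", [0.0, 0.25, 0.0, 0.0]))  # added source
mix = recompose(objects, 4)
```

In a perceptually convincing system, the gains would of course be replaced by spatial rendering (e.g. binaural auralization for headphones), but the object-level structure of the manipulation stays the same.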