Speech intelligibility in audio-visual environments
Virtual acoustic environments are increasingly used for evaluating hearing devices in complex acoustic conditions. For this purpose, we developed an interactive low-delay real-time simulation method for reproduction via multi-channel loudspeaker systems or headphones. The method combines a time-domain simulation of the direct sound path with a geometric image source model, accounting for air absorption and the Doppler effect for all primary and image sources. To establish the feasibility of the approach, the interaction between the reproduction method and technical and perceptual hearing aid performance measures was investigated using computer simulations for regular circular loudspeaker arrays with 4 to 72 channels. The results demonstrate the potential of the method for hearing aid evaluation. Visual content was added to the acoustic simulation using game-engine-based real-time multimedia technology, allowing listener performance to be assessed in interactive audiovisual environments. To demonstrate the approach, speech intelligibility was measured in spatial multi-talker audiovisual conditions, requiring the subject first to identify the target of interest and then to track it in order to understand the target’s utterances. The results show that ecologically valid data require consistent acoustic and visual information, with the visual environment delivering lip-reading cues and consistent gestures such as head movements.
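The two core ingredients named above, a geometric image source model and a time-domain (time-varying delay) simulation from which the Doppler effect emerges, can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a first-order shoebox room and linear-interpolation fractional delays, and all function names are illustrative.

```python
import math

C = 343.0  # assumed speed of sound in air (m/s)

def first_order_image_sources(src, room):
    """Mirror a source across the six walls of a shoebox room
    (first-order image source model). src = (x, y, z); room = (Lx, Ly, Lz)."""
    images = []
    for axis in range(3):
        for wall in (0.0, room[axis]):
            img = list(src)
            img[axis] = 2.0 * wall - img[axis]  # reflect across the wall plane
            images.append(tuple(img))
    return images

def propagation_delay(src, rcv):
    """Propagation delay in seconds from a source to a receiver position."""
    return math.dist(src, rcv) / C

def doppler_read(signal, delays, fs):
    """Render a (possibly moving) source by reading the input through a
    time-varying fractional delay line with linear interpolation.
    delays[n] is the propagation delay (s) at output sample n; in a
    time-domain simulation the Doppler effect arises automatically when
    this delay changes over time."""
    out = []
    for n, tau in enumerate(delays):
        pos = n - tau * fs                  # fractional read position
        i = math.floor(pos)
        frac = pos - i
        s0 = signal[i] if 0 <= i < len(signal) else 0.0
        s1 = signal[i + 1] if 0 <= i + 1 < len(signal) else 0.0
        out.append((1.0 - frac) * s0 + frac * s1)
    return out
```

In a full simulation, each primary and image source would get its own time-varying delay line of this kind, plus distance- and frequency-dependent attenuation for air absorption; higher reflection orders follow by recursively mirroring the image sources themselves.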