10th Speech in Noise Workshop, 11-12 January 2018, Glasgow

Automatic scene classification improves speech perception of CI users in simulated real world listening scenarios

Anja Eichenauer(a), Uwe Baumann, Tobias Weissgerber
University Hospital Frankfurt, Audiological Acoustics, Germany

(a) Presenting

Introduction — Speech perception in everyday listening conditions is demanding due to the presence of multiple noise sources and room reverberation. Cochlear implant (CI) users generally experience poorer speech perception than normal-hearing listeners. Current CI audio processors apply signal processing algorithms to improve speech perception in such complex listening scenes. However, measurements that determine the benefit of these algorithms are typically performed under free-field conditions in setups consisting of only a few sound sources. Consequently, the impact of this signal processing on hearing performance in real-world scenarios has not yet been determined. The aim of the present study was to assess the benefit of an automatic scene classification algorithm (Cochlear SCAN) on speech perception in CI users.
Methods — 10 unilateral CI users with Cochlear Nucleus 6 speech processors and a control group of 5 normal-hearing (NH) subjects participated in the study. The Oldenburg sentence matrix test (OLSA) was used to assess speech reception thresholds (SRTs) in an "everyday life simulation" task consisting of acoustic scenes of differing complexity (e.g. multiple sources, reverberation). A 128-channel loudspeaker setup in an anechoic chamber was used for sound reproduction. Reverberation (simulation of an auditorium, reverberation time RT = 1.2 s) was reproduced based on reflection patterns computed with the room simulation software ODEON (Lyngby, Denmark). Within the simulation task, the order of test conditions was randomized every five sentences. SRTs were calculated individually for each acoustic scene and averaged to obtain a mean everyday-life performance SRT.
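A minimal sketch (Python, not the authors' software) of how per-scene SRTs can be pooled into a single everyday-life score as described above; the scene labels, SRT values, and block-randomization loop are hypothetical placeholders illustrating one reading of the procedure:

    # Illustrative only: pooling per-scene SRTs into one "everyday life" score.
    # Scene labels and SRT values below are hypothetical placeholders.
    import random
    from statistics import mean

    # Hypothetical per-scene SRTs (dB SNR) for one listener
    srt_per_scene = {
        "free_field_single_noise": 2.4,
        "free_field_multi_source": 1.1,
        "auditorium_reverb_RT1.2s": 4.7,
    }

    # One reading of the test procedure: sentences are presented in blocks
    # of five, with the active scene re-drawn in randomized order.
    scenes = list(srt_per_scene)
    random.shuffle(scenes)
    for scene in scenes:
        for trial in range(5):
            pass  # present one OLSA sentence in this acoustic scene

    # Unweighted average across scenes = mean everyday-life performance SRT
    mean_srt = mean(srt_per_scene.values())
    print(f"Mean everyday-life SRT: {mean_srt:.1f} dB SNR")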
Results — The mean SRT across all test conditions for the NH group was -9.9 dB SNR. Compared to free-field conditions, NH SRTs degraded by up to 5 dB in reverberation (depending on the acoustic scene). The mean SRT of the CI group was 1.9 dB SNR without SCAN and 0.4 dB SNR with SCAN. Without SCAN, CI users showed a degradation in reverberant conditions (up to 5.5 dB) comparable to that of the NH subjects.
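For orientation, the SCAN benefit and the CI-NH gap quoted in the Conclusion below follow directly from these group means (a back-of-envelope check using only the numbers reported above):

    # Back-of-envelope arithmetic from the reported group means (dB SNR)
    nh_mean = -9.9      # NH group
    ci_no_scan = 1.9    # CI group without SCAN
    ci_with_scan = 0.4  # CI group with SCAN

    scan_benefit = ci_no_scan - ci_with_scan  # 1.5 dB improvement with SCAN
    ci_nh_gap = ci_no_scan - nh_mean          # 11.8 dB, i.e. "almost 12 dB"
    print(f"SCAN benefit: {scan_benefit:.1f} dB, CI-NH gap: {ci_nh_gap:.1f} dB")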
Conclusion — CI users and NH subjects showed similar detrimental effects of reverberation on speech perception. However, the mean SRT of the CI users was almost 12 dB worse than that of the NH subjects (1.9 dB vs. -9.9 dB SNR). Automated signal processing based on scene classification can improve speech perception of CI users in both free-field and reverberant conditions.
