Do noise-induced latency shifts of the auditory brainstem response to speech reflect degradation in neural synchrony?
Understanding speech in noisy environments is crucial for human communication. Noise is known to increase the latency of the auditory brainstem response (ABR) to speech sounds, and this has been suggested to reflect noise-induced degradation in the neural temporal processing of speech. However, speech ABR latencies are also strongly influenced by the distribution of response contributions from different cochlear regions, tuned to different frequencies. This is because cochlear regions tuned to lower frequencies respond considerably more slowly than regions tuned to higher frequencies. Thus, if noise masking changed the cochlear distribution of the speech ABR, this might provide an alternative explanation for the noise-induced increase in speech ABR latency. The aim of the current experiment was to investigate this possibility by examining the effect of noise masking on speech-evoked ABRs from frequency-restricted cochlear regions. We used the ‘derived-band’ technique to obtain speech ABRs from octave-wide regions centred at 0.7, 1.4, 2.8 and 5.6 kHz. Frequency-restricted (‘derived-band’) and unrestricted (‘broadband’) speech ABRs were recorded both in quiet and in noise. Consistent with previous findings, noise masking caused a significant increase in the latency of the broadband speech ABR. In contrast, the noise effects on the latencies of the derived-band speech ABRs were invariably small and inconsistent across bands. Instead, the predominant effect of noise on the derived-band speech ABRs was to change the distribution of their amplitudes: with increasing noise level, the largest amplitude shifted from the 2.8-kHz band down to the 0.7-kHz band. These results suggest that the noise-induced latency increase of the speech ABR is caused primarily by a cochlear place mechanism.