Semantic predictability and brain state modulate neural representations of speech in noise
When listening to speech against competing talkers, the human brain draws on compensatory cognitive resources. I will demonstrate that both the cognitive resources provided by a listener’s brain state and the cognitive resources demanded by signal quality modulate electroencephalographic (EEG) signatures of speech processing. In the first study, we revisited an effect as old as EEG itself: closing the eyes increases the power of parieto-occipital alpha oscillations (8–12 Hz), which are – coincidentally? – also a proxy of focusing attention on the auditory modality. To test whether closing the eyes benefits auditory attention to speech, participants attended to one of two streams of spoken numbers in darkness. Although eye closure per se did not enhance perceptual sensitivity, it increased the tendency to judge probes as attended in participants who exhibited stronger attentional modulation of alpha power with closed eyes. Our findings demonstrate that closing the eyes can enhance the neural dynamics of auditory attention by supporting the adaptive inhibition of the dominant visual system. Next, I will give an outlook on my future research into the neural underpinnings of building up predictions for forthcoming speech segments from semantic context (e.g., “The ship sails the sea”, where the final keyword is highly predictable). To test how semantic context alters speech comprehension under adverse listening conditions, participants attended to one of two narrative speech streams, both varying in sound intensity over time. We expect the signal-to-noise ratio to moderate the mapping of semantic predictability onto the power of neural oscillations related to auditory attention and semantic prediction. In sum, I will argue that a listener’s brain state and the use of predictions efficiently balance the recruitment of limited cognitive resources during speech comprehension.