The human brain has the remarkable ability to integrate prior knowledge with sensory input to shape our subjective experience. This enables us to quickly make sense of incoming information and adapt accordingly – while failure to do so can lead to perceptual illusions. So how does the brain combine expectations and sensory information during human communication? Over the years, scientists have proposed a number of neural mechanisms. According to predictive coding theory, the brain continually compares sensory input against predictions derived from prior experience, and the resulting mismatch – the prediction error – drives neuronal computations. However, the precise mechanisms by which neural circuits exploit prior experience remain elusive.
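The idea of integrating prior knowledge with sensory input is often formalised as Bayesian cue combination. The following is a minimal illustrative sketch, not the project's actual model: it fuses a Gaussian prior with a Gaussian sensory estimate, weighting each by its precision (inverse variance), so a confident prior pulls the percept toward the expectation while reliable input dominates when the senses are trustworthy.

```python
def combine(prior_mean, prior_precision, sensory_mean, sensory_precision):
    """Precision-weighted fusion of a Gaussian prior and a Gaussian
    sensory likelihood: the standard Bayesian account of how expectations
    and input jointly determine a percept.

    Returns the posterior mean and posterior precision.
    """
    posterior_precision = prior_precision + sensory_precision
    posterior_mean = (prior_precision * prior_mean
                      + sensory_precision * sensory_mean) / posterior_precision
    return posterior_mean, posterior_precision

# Example: a weak prior (precision 1) centred at 0 and a reliable
# sensory estimate (precision 3) of 2 yield a percept near the input.
percept, certainty = combine(0.0, 1.0, 2.0, 3.0)  # → (1.5, 4.0)
```

Note that the percept lands between prior and input, closer to whichever source is more precise; this is the behaviour the fMRI studies described below probe in face- and speech-sensitive brain regions.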
Decoding brain responses in speech and face recognition
Undertaken with the support of the Marie Skłodowska-Curie programme, the EXPECTBRAIN project combined different state-of-the-art methodologies to analyse brain activity patterns and decode brain responses during speech and face recognition. “The goal was to specify the neural mechanisms implicated in perception and misperception,” states the Marie Skłodowska-Curie fellow Helen Blank. The work focused on the role of prior expectations in speech and face recognition.

In one study, human participants were trained to associate images of scenes and faces. Using functional magnetic resonance imaging (fMRI), researchers measured brain responses after the learning phase during testing, when individuals were presented with particular scenes or faces. Results demonstrated that the human brain represents prior face expectations in face-sensitive regions in a manner directly proportional to the certainty of the expectation.

In another study, prior expectations were manipulated with a similar approach by presenting written words before degraded spoken words. By reading the word, participants formed an expectation against which they could compare the incoming spoken word. Blank explains: “Hearing the correct word enabled participants to easily perceive the spoken word, even if degraded; written words can also bias perception of a degraded spoken word, especially when there is partial overlap between written and spoken word.”

Behavioural and fMRI data were then fed into computational models to test alternative theoretical assumptions. With respect to speech perception, the brain seems to use prediction errors to determine whether sensory input matches or deviates from prior expectations. If the error is weak, it confirms the prior expectation; if the error is strong, the prior is discarded.
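The prediction-error logic described above can be sketched as a toy model. This is purely illustrative and not the study's fitted model; the feature vectors, the precision weight, and the error threshold are all assumptions chosen for clarity.

```python
import numpy as np

def perceive(prior, sensory, prior_precision=4.0, error_threshold=1.0):
    """Toy predictive-coding step: compare noisy sensory input with a
    prior expectation via the prediction error.

    prior, sensory: 1-D feature vectors (e.g. acoustic features of a word).
    prior_precision: how strongly the prior is weighted when it is trusted.
    error_threshold: mean absolute error above which the prior is discarded.
    """
    prediction_error = sensory - prior
    if np.mean(np.abs(prediction_error)) < error_threshold:
        # Weak error: the input confirms the prior, so the percept is a
        # precision-weighted average biased toward the expectation.
        w = prior_precision / (prior_precision + 1.0)
        return w * prior + (1.0 - w) * sensory
    # Strong error: the prior is discarded and perception follows the input.
    return sensory

# A degraded word close to the expected one is perceived near the prior;
# a clearly mismatching word overrides the expectation entirely.
matching = perceive(np.array([1.0, 1.0]), np.array([1.2, 0.8]))   # → [1.04, 0.96]
mismatch = perceive(np.array([1.0, 1.0]), np.array([5.0, -3.0]))  # → [5.0, -3.0]
```

The hard threshold stands in for what is, in the actual theory, a graded precision-weighting of prediction errors; it is kept here only to make the weak-versus-strong distinction described in the text explicit.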
Face and speech recognition provide an ideal context for investigating how expectations influence perception, since the respective face- and speech-sensitive regions of the human brain are well described. In addition, both face and speech stimuli can easily be manipulated in a controlled fashion. Understanding how the human brain combines expectations and sensory information during human communication is particularly important for certain clinical conditions. Although prior expectations can improve speech perception, they can also lead to misperception, with outcomes ranging from shared amusement to serious miscommunication. In hearing-impaired individuals, misperception is a frequent phenomenon that may result in social withdrawal and isolation, with severe consequences for well-being. EXPECTBRAIN's results also have implications for neuropsychiatric conditions associated with unusual perceptual experiences, and for machine speech-recognition efforts in engineering.
EXPECTBRAIN, brain, face recognition, speech, fMRI, hearing impairment, sensory input, speech perception, neural circuits