
Identification of different neuro-cognitive mechanisms of prediction in language comprehension

Periodic Reporting for period 2 - NeuroPred (Identification of different neuro-cognitive mechanisms of prediction in language comprehension)

Reporting period: 2021-04-01 to 2022-03-31

People are able to comprehend language at an amazing speed. Sports commentators sometimes utter more than 300 words per minute (five words per second), yet we usually have no problem following their commentary. This is despite the fact that language is highly ambiguous, both at the lexical level (for example, "bank" may refer to the shore of a river or to a financial institution) and at the phonemic level (the same speech sound can correspond to very different phonemes, depending on the context). Moreover, in typical situations we do not hear speech in absolute silence; rather, we struggle to pick words out of background noise that is often louder than the speech itself. In sum, successful language comprehension requires a lot of reconstruction and informed guessing about the message intended by the speaker. This is possible to a large extent because we, as listeners, predict incoming language. Based on our extensive knowledge of the world, the language, the speaker, and the context of the conversation, we can predict that some words are more likely than others to be uttered by a given speaker, in a given context, at a given position in a sentence. Thanks to this, when a soccer commentator mentions a "ball", our brains expect them to be talking about the spherical game accessory that all the players are running after, not a social gathering for dancing.
The overarching aim of this line of research is to better understand the mechanisms of predictive processing. What specific types of information are used to inform predictions? How are the different sources of information combined to form unified predictions? What brain mechanisms are involved? These are fundamental questions about the computations underlying human language comprehension. In the longer term, this line of research may also translate into more practical applications: how we teach languages, how well we understand different language impairments, and how effectively we can rehabilitate individuals who suffer from them.
In a first study, we asked how flexible predictions are. When we hear a word that mismatches our lexical and semantic predictions, are our brains able to quickly reroute them in a different direction? And does this redirection also involve suppressing the previous, no longer viable, predictions? To this end, we ran an EEG study in which participants read short sentences presented on a screen, for example, “He always hid an extra set of keys under a …”. From additional behavioral tests, we learned that people expect such a sentence to be continued with “mat” or, to a lesser extent, with “rug”. We were interested in whether preceding these nouns with an adjective, such as “rubber” or “Persian”, would quickly update the degree to which participants expected each noun. In particular, we asked whether an adjective promoting a noun (“rubber mat”, “Persian rug”) would increase participants’ predictions of that noun and, more crucially, whether an adjective promoting the other noun (for example, “rubber rug” or “Persian mat”) would suppress those predictions. We focused on the brain’s reaction to the noun, specifically the amplitude of the N400 component, an index of the degree to which the meaning of a given word was activated by the preceding part of the sentence. We found that the adjectives indeed modulated the N400 amplitude to the noun. This showed us that predictions can be flexibly redirected on very short timescales, on a word-to-word basis.
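The “additional behavioral tests” mentioned above are typically cloze tasks: many participants complete the sentence fragment, and a word’s predictability is simply the share of participants who produced it. A minimal sketch of that computation (the response counts here are invented for illustration, not data from the study):

```python
from collections import Counter

def cloze_probabilities(responses):
    """Estimate each word's cloze probability: the fraction of
    participants who completed the sentence fragment with that word."""
    counts = Counter(word.lower() for word in responses)
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

# Hypothetical norming responses for
# "He always hid an extra set of keys under a ..."
responses = ["mat"] * 14 + ["rug"] * 4 + ["rock"] * 2
probs = cloze_probabilities(responses)
print(probs["mat"])  # 0.7 -> "mat" is the most expected continuation
```

A word with cloze probability near 0.7, like “mat” here, counts as highly predictable; “rug” at 0.2 is expected only “to a lesser extent”, mirroring the design described above.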

In the second study, we asked a more fundamental question: do these predictions occur at the semantic or the lexical level? Is the brain helped by any degree of overlap in meaning between the sentence context and the actual word, even if the word itself is completely unpredictable? For example, both "dog" and "tree" are improbable continuations of the sentence "He invited a famous ...", but the first continuation may still fit the context better, because "dog" is animate and thus more likely to be invited than "tree". If predictions are formulated at the semantic level, then the processing of the two words should differ, because "dog" shares more semantic features with the context than "tree" does. If predictions are formulated at the lexical level, then both words should lead to similar processing difficulty, because both are improbable continuations of the sentence. We addressed these questions by employing GPT-2, a state-of-the-art computational model of English (similar, for example, to the models that help Google understand users' search queries), which is able to "understand" a sentence and estimate the probability of any word at any position in a way that is sensitive to overlap in semantic features. For example, in the sentence above, the model clearly estimates that "dog" is far more probable than "tree", even though both words have very small probabilities. In all experiments, participants read short sentences while their EEG was recorded. The final word of each sentence varied in its predictability, and many of the sentence endings were unpredictable (though their probability as estimated by the language model still varied).
Overall, our analyses of the N400 amplitude to the sentence endings showed that predictions are formulated at the level of semantic features (see the attached figure: brain waves elicited by the final word of the tested sentences, depending on their probability as estimated by experiment participants and by the language model; the differences between the waveforms occur in the N400 component).
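For readers unfamiliar with ERP analysis: the N400 is conventionally quantified as the mean EEG voltage in a time window roughly 300–500 ms after word onset. A minimal NumPy sketch of that measurement, on a synthetic single-electrode epoch (the signal and sampling rate are invented, not data from the study):

```python
import numpy as np

def n400_amplitude(epoch, srate, window=(0.3, 0.5)):
    """Mean voltage in the N400 time window.

    epoch  : 1-D array, one electrode's voltage trace, sample 0 = word onset
    srate  : sampling rate in Hz
    window : (start, stop) of the measurement window in seconds
    """
    start = int(window[0] * srate)
    stop = int(window[1] * srate)
    return float(np.mean(epoch[start:stop]))

# Synthetic 1-second epoch at 500 Hz with a negative-going deflection
# peaking around 400 ms, mimicking a large N400 to an unexpected word.
srate = 500
t = np.arange(srate) / srate
epoch = -5.0 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05**2))  # microvolts

amp = n400_amplitude(epoch, srate)  # clearly negative mean amplitude
```

Comparing this mean amplitude across conditions (predictable vs. unpredictable endings, or endings with high vs. low model-estimated probability) is what produces the waveform differences shown in the attached figure.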
Apart from addressing the questions outlined above, our research also yielded a few unexpected findings with substantial impact. For example, in the first line of studies, which tested whether predictions can be rapidly updated, we also looked for indices of the updating process itself. However, when we analyzed brain responses to the adjectives themselves, we found no variation due to the degree to which the adjectives updated predictions about the upcoming noun. This showed us that our initial assumption that adjectives update predictions about the noun was wrong. The results were instead consistent with a different mechanism, whereby adjectives are not immediately integrated with the context but are stored in semantic short-term memory until the noun is encountered; when that happens, the adjective modifies and guides the process of accessing the meaning of the noun in its context. This is a significant finding because it provides an exception to the rule that sentence comprehension is incremental, that is, that the understanding of a sentence is gradually built up and enriched at each word.

The studies described above primarily focused on the advancement of knowledge rather than on solving a specific practical problem. However, one direct application is our demonstration that machine-learning models of language (such as GPT-2) predict upcoming words in a manner similar to humans. This has consequences for the design of future studies of language comprehension, and it holds potential for computer-assisted communication interfaces for people with an impaired ability to produce language.
ERPs evoked by sentence-final words, depending on the words' probability