Periodic Reporting for period 2 - NeuroPred (Identification of different neuro-cognitive mechanisms of prediction in language comprehension)
Reporting period: 2021-04-01 to 2022-03-31
The overarching aim of this line of research is to better understand the mechanisms of predictive processing. What specific types of information are used to inform predictions? How are the different sources of information combined to form unified predictions? What are the brain mechanisms involved? These are fundamental questions about the computations involved in human language comprehension. In the longer term, this line of research may translate into more practical applications: how we teach language, how well we understand different language impairments, and how well we can rehabilitate individuals who suffer from them.
In the second study, we asked a more fundamental question: do these predictions occur at the semantic or the lexical level? Is the brain helped by any degree of overlap in meaning between the semantic context and the actual word, even if, in general, the word itself is completely unpredictable? For example, both "dog" and "tree" are improbable continuations of the sentence "He invited a famous ...", yet the first continuation may fit the context better because "dog" is animate and thus more likely to be invited than "tree". If predictions are formulated at the semantic level, then the processing of the two words will differ, because "dog" shares more semantic features with the context than "tree". If predictions are formulated at the lexical level, then both words should lead to similar processing difficulty, because both are improbable sentence continuations.

We addressed these questions by employing GPT-2, a state-of-the-art computational model of English (similar, for example, to the models Google uses to interpret users' search queries), which can "understand" a sentence and estimate the probability of any word at any position in it, in a way that is sensitive to overlap in semantic features. For example, in the sentence above, the model clearly estimates that "dog" is far more probable than "tree", even though both words have very small probabilities. In all experiments, participants read short sentences while their EEG was recorded. The final word of each sentence varied in its predictability, and many of the sentence endings were unpredictable (though their probabilities as estimated by the language model still varied). Overall, our analyses of the N400 amplitude elicited by the sentence endings showed that predictions are formulated at the level of semantic features (see the attached figure: brain waves elicited by the final word of the tested sentences, depending on their probability as estimated by experiment participants and by the language model; the differences between the waveforms occur in the N400 component).
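To make the role of the language model concrete, below is a minimal sketch, not the project's actual analysis code, of how such continuation probabilities could be estimated with the publicly available GPT-2 model via the Hugging Face transformers library. The example sentence and words are taken from above; the function name and the choice of the small "gpt2" checkpoint are illustrative assumptions.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def continuation_probability(context: str, word: str) -> float:
    """Probability GPT-2 assigns to `word` as the continuation of `context`."""
    # The leading space matters: GPT-2's BPE vocabulary encodes " dog", not "dog".
    word_ids = tokenizer.encode(" " + word)
    ids = tokenizer.encode(context)
    prob = 1.0
    with torch.no_grad():
        # Multiply sub-word probabilities in case the word splits into several tokens.
        for wid in word_ids:
            logits = model(torch.tensor([ids])).logits[0, -1]
            prob *= torch.softmax(logits, dim=-1)[wid].item()
            ids = ids + [wid]
    return prob

context = "He invited a famous"
for word in ["dog", "tree"]:
    print(word, continuation_probability(context, word))

Run on the example sentence, a script like this yields a low probability for both continuations, but a markedly higher one for "dog" than for "tree", which is the semantic-feature sensitivity the study exploited.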
The studies described above primarily focused on advancing knowledge rather than solving a specific practical problem. However, one direct outcome is the demonstration that machine-learning models of language (such as GPT-2) predict upcoming words in a manner similar to humans. This has consequences for the design of further studies of language comprehension and has the potential to inform computer-assisted interfaces for people with an impaired ability to produce language.