
Parallel Orthographic Processing and Reading

Periodic Reporting for period 4 - POP-R (Parallel Orthographic Processing and Reading)

Reporting period: 2022-04-01 to 2022-09-30

Reading is perhaps the most complex skill that humans have to learn, and unfortunately, almost 10% of the population fail to achieve success in this endeavour. Understanding why some people never attain complete mastery of this skill is therefore a crucial aim for the cognitive sciences. Reading is both a visual and a linguistic skill, and orthographic processing occupies the key interface between visual and linguistic processing. In written languages that introduce extra between-word spacing, words are the building blocks of reading, and in those languages that use an alphabetic script, letters are the building blocks of words. Hence the importance, for understanding reading, of understanding orthographic processing: the processing of information about letter identities and letter positions. Much prior research attempting to link low-level visual processes with the higher-level cognitive processes involved in reading has therefore focused on letter-level and word-level processing. However, this research has not been well connected with the study of even higher-level (sentence, text) processes involved in reading. The POP-R (Parallel Orthographic Processing and Reading) project aims to correct this by linking basic orthographic processing with the higher-level processes involved in sentence and text comprehension. A key feature of this new approach to understanding reading is the hypothesis that orthographic information spanning several words (separated by spaces) is processed in parallel and fed into a single channel for subsequent orthographic processing and word identification. Moreover, the results obtained so far in this project point to a much greater extent of parallel processing in reading than assumed by the traditional, one-word-at-a-time approach that has dominated theorising until now.
Some key findings:
In Snell et al. (Psychological Review, 2018), we present a computational model of eye movements and word identification during reading, OB1-reader. Key features of OB1 are as follows: (1) parallel processing of multiple words, modulated by an attentional window of adaptable size; (2) coding of input through a layer of open bigram nodes that represent pairs of letters and their relative position; (3) activation of word representations based on constituent bigram activity, competition with other word representations and contextual predictability; (4) mapping of activated words onto a spatiotopic sentence-level representation to keep track of word order; and (5) saccade planning, with the saccade goal being dependent on the length and activation of surrounding word units, and the saccade onset being influenced by word recognition.
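The open-bigram coding in point (2) can be illustrated with a minimal sketch. This is our own illustrative Python, not the OB1-reader implementation; the `max_gap` parameter is an assumption, reflecting the common open-bigram convention of limiting how many letters may intervene between the two letters of a pair.

```python
from itertools import combinations

def open_bigrams(word, max_gap=2):
    """Return the set of ordered letter pairs (open bigrams) in a word.

    Each bigram preserves the relative order of its two letters;
    max_gap limits the number of intervening letters allowed
    (an illustrative assumption, not a parameter of OB1-reader itself).
    """
    return {word[i] + word[j]
            for i, j in combinations(range(len(word)), 2)
            if j - i <= max_gap + 1}

# open_bigrams("form") -> {'fo', 'fr', 'fm', 'or', 'om', 'rm'}
# "form" and "from" share 5 of their 6 bigrams, which shows why a code
# based on relative letter order tolerates small changes in letter position.
```

Because transposing two letters preserves most ordered pairs, word representations activated via such bigram codes remain robust to positional noise, which is central to how OB1 handles uncertainty about letter position.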

In Mirault et al. (Psychological Science, 2018), we report a novel transposed-word effect in speeded grammaticality judgments made about five-word sequences. The critical ungrammatical test sequences were formed by transposing two adjacent words from either a grammatical base sequence (e.g. “The white cat was big” became “The white was cat big”) or an ungrammatical base sequence (e.g. “The white cat was slowly” became “The white was cat slowly”). These were intermixed with an equal number of correct sentences for the purpose of the grammaticality judgment task. In a laboratory experiment (N = 57) and an online experiment (N = 94), we found that ungrammatical decisions were harder to make when the ungrammatical sequence originated from a grammatically correct base sequence.
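The construction of the critical test sequences can be sketched as follows. This is a hypothetical helper for illustration, not the authors' actual stimulus-preparation code:

```python
def transpose_words(sentence, i):
    """Swap the adjacent words at positions i and i+1 (0-indexed).

    Illustrates how the ungrammatical test sequences in the
    transposed-word paradigm are derived from a base sequence.
    """
    words = sentence.split()
    words[i], words[i + 1] = words[i + 1], words[i]
    return " ".join(words)

# Grammatical base -> ungrammatical test sequence:
# transpose_words("The white cat was big", 2) -> "The white was cat big"
# Ungrammatical base -> ungrammatical test sequence:
# transpose_words("The white cat was slowly", 2) -> "The white was cat slowly"
```

The key comparison is that both test sequences are ungrammatical, so any difference in judgment difficulty must reflect the grammaticality of the base sequence they were derived from.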

In Wen et al. (Cognition, 2019), we tested different accounts of the sentence superiority effect (Snell & Grainger, Cognition, 2017), the observation that single words are easier to identify in a briefly presented syntactically correct word sequence than in a scrambled version of the same set of words. Interactive-activation models of sentence comprehension can account for this phenomenon by implementing parallel processing of word identities. The cascaded and interactive nature of such processing allows sentence-level structures to influence ongoing word processing. Alternatively, prior observations of a sentence superiority effect in post-cued word-in-phrase identification might be due to sophisticated guessing of word identities on the basis of partial information about the target word and the surrounding context. Here, for the first time, we used electrophysiological recordings to plot the time-course of the sentence superiority effect. According to an interactive-activation account of this phenomenon, the effect should be visible in the N400 component, thought to reflect the mapping of word identities onto higher-level semantic and syntactic representations. Such evidence for changes in highly automatised linguistic processing is not predicted by a sophisticated-guessing account. Our results lend support to the interactive-activation account.

In Declerck et al. (Psychonomic Bulletin & Review, 2020), we asked whether syntactic representations are shared across languages, and how that might inform the nature of syntactic computations. To investigate these issues, we presented French-English bilinguals with mixed-language word sequences for 200 ms and asked them to report the identity of one word at a post-cued location. The words either formed an interpretable grammatical sequence via shared syntax (e.g. ses feet sont big, where the French words ses and sont translate into his and are, respectively) or an ungrammatical sequence with the same words (e.g. sont feet ses big). Word identification was significantly better in the grammatical sequences: a bilingual sentence superiority effect. These results not only provide support for shared syntax, but also reveal a remarkable ability of bilinguals to simultaneously connect words from their two languages through these shared syntactic representations.

In Mirault et al. (Psychophysiology, 2020), we asked the following question: when reading, can the next word in the sentence (word n + 1) influence how you read the word you are currently looking at (word n)? Serial models of sentence reading state that this generally should not be the case, whereas parallel models predict that it should. Here we focused on perhaps the simplest and strongest Parafoveal-on-Foveal (PoF) manipulation: word n + 1 is either the same as word n or a different word. Participants read sentences for comprehension, and when their eyes left word n, the repeated or unrelated word at position n + 1 was swapped for a word that provided a syntactically correct continuation of the sentence. We recorded EEG and eye movements, and time-locked the analysis of fixation-related potentials (FRPs) to fixation of word n. We found robust PoF repetition effects on gaze durations on word n, and also on the initial landing position on word n. Most importantly, we also observed significant effects in FRPs, reaching significance at 260 ms post-fixation of word n. Given the timing of this effect, we argue that it is driven by orthographic processing of word n + 1 while readers were still looking at word n, plus the spatial integration of orthographic information extracted from these two words in parallel.
Given that all of our research has used novel paradigms or novel applications of existing paradigms, the work performed so far has consistently gone beyond the state of the art, and there is no reason to believe that this will change.