Objective
Understanding spoken language involves a complex set of processes that transform auditory input into a meaningful interpretation. Our percept is not of acoustic-phonetic detail but of the speaker's intended meaning. This transition occurs on millisecond timescales, with remarkable accuracy, and without any awareness of the complex computations on which it depends. How is this achieved? What are the processes and representations that support the transition from sound to meaning, and what are the neurobiological systems in which they are instantiated?

In this proposal, we combine advanced techniques from neuroimaging, multivariate statistics and computational linguistics to probe directly the dynamic patterns of neural activity, over bilateral fronto-temporal and parietal cortices, that are elicited by spoken words and sentences. Combined MEG + EEG imaging, linked to parallel fMRI studies, captures the real-time electrophysiological activity of the brain. Representational Similarity Analysis (RSA) and related multivariate techniques make it possible to probe the different types of neural computation that support these dynamic processes of incremental interpretation. Computational linguistic analyses of language corpora allow us to build quantifiable models of different dimensions of language interpretation, from phonetics and phonology to argument structure and anaphora, and to test for their presence, using RSA, as the utterance unfolds in real time. By these means we aim to determine directly the nature of the intermediate processes involved in the transition from early perceptual processing, through different representational states, to a meaningful representation of an utterance; the dynamic spatio-temporal relationship between these processes; and their evolution over time.
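In outline, RSA compares the pairwise dissimilarity structure of neural response patterns with the dissimilarity structure predicted by a linguistic model, repeated at each time point to track when a given representational dimension emerges. The following is a minimal illustrative sketch in Python, not the project's actual pipeline; the data shapes, function names, and the choice of Spearman correlation are all assumptions made for the example.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def neural_rdm(patterns):
    # Condensed representational dissimilarity matrix (1 - Pearson r)
    # from neural patterns of shape (n_conditions, n_sensors).
    return pdist(patterns, metric="correlation")

def rsa_timecourse(meg_data, model_rdm):
    # meg_data : array (n_conditions, n_sensors, n_times)
    # model_rdm: condensed model dissimilarities, e.g. from a
    #            phonetic or semantic feature model.
    # Returns one Spearman rho per time point.
    n_times = meg_data.shape[-1]
    rhos = np.empty(n_times)
    for t in range(n_times):
        rho, _ = spearmanr(neural_rdm(meg_data[:, :, t]), model_rdm)
        rhos[t] = rho
    return rhos

# Toy usage: 20 spoken-word conditions, 64 sensors, 100 time samples,
# all filled with random numbers purely to show the shapes involved.
rng = np.random.default_rng(0)
meg = rng.standard_normal((20, 64, 100))
model = pdist(rng.standard_normal((20, 5)))  # stand-in feature model
print(rsa_timecourse(meg, model).shape)      # (100,)

A peak in the resulting timecourse would indicate the latency at which the neural pattern geometry best matches the model, which is how model RDMs can be tested against the unfolding utterance.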
Funding Scheme
ERC-ADG - Advanced Grant

Host institution
CB2 1TN Cambridge
United Kingdom