Although written language is not part of our genetic endowment, literate adults process an impressive amount of information as they read, and do so rapidly and nearly error-free. How this happens is largely unknown and represents a fundamental issue for theories of human learning. Building on data from nonhuman primates, human infants and psycholinguistic experiments on word-internal structure, STATLEARN tests the hypothesis that one fundamental cognitive mechanism underlies visual word identification: statistical learning. Human infants learn to chunk smaller perceptual units (e.g., oriented lines) into larger, meaningful objects (e.g., tools, faces) by taking advantage of recurrent patterns in their distribution. As developing readers, they would apply this very same mechanism to a newly encountered type of visual object: letters. On this basis, they would progressively build higher-order orthographic units, which eventually make visual word identification astonishingly efficient in adult readers.
The project is composed of four work packages. The first aims at identifying which principle(s) drive statistical learning, contrasting overall frequency, contextual diversity, and letter transitional probabilities. Because these factors co-vary in real languages, a second work package will have adult readers learn artificial languages, in which we can build in whatever statistical properties we need to test. A third package will seek signs of statistical learning directly in the performance of developing readers. A fourth package will assess positional constraints on the identification of morphemes (e.g., kind and ness in kindness). These work packages include behavioural, eye-tracking, ERP, MEG and fMRI work. Bringing together evidence from such a wide array of approaches will allow us to understand how statistical learning unfolds, and what kind of representations it brings into the human reading system.
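To make the three measures contrasted in the first work package concrete, the sketch below computes them for letter bigrams over a tiny word list. The word list, function name, and exact formulas are illustrative assumptions, not the project's actual corpora or analyses: frequency is taken as the bigram's token count, contextual diversity as the number of distinct words containing it, and transitional probability as P(second letter | first letter).

```python
from collections import Counter, defaultdict

def letter_stats(words):
    """Toy illustration (hypothetical): distributional measures for letter bigrams."""
    bigram_freq = Counter()          # overall frequency: token count of each bigram
    bigram_words = defaultdict(set)  # contextual diversity: distinct words containing it
    first_letter = Counter()         # denominator for P(second | first)
    for word in words:
        for a, b in zip(word, word[1:]):
            bigram_freq[a + b] += 1
            bigram_words[a + b].add(word)
            first_letter[a] += 1
    return {
        bg: {
            "frequency": f,
            "contextual_diversity": len(bigram_words[bg]),
            "transitional_probability": f / first_letter[bg[0]],
        }
        for bg, f in bigram_freq.items()
    }

words = ["kindness", "kind", "darkness", "blindness"]
stats = letter_stats(words)
print(stats["nd"])
# "nd" occurs 3 times, in 3 distinct words; P(d | n) = 3/6 = 0.5
```

Even this toy case shows why the measures must be teased apart experimentally: in natural vocabularies a frequent bigram tends to occur in many words and to have a high transitional probability, so only artificial languages (work package two) can decorrelate them.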
Field of science
- /humanities/languages and literature/languages - general