Languages of the world have their own unique ways of putting sounds together. For example, word-initial /ŋ/ sounds bizarre to an English speaker’s ear but not to a Cantonese speaker’s (e.g. /ŋoi/ “love”). These constraints on where sounds tend to occur and how they are typically sequenced within a word are called “phonotactic”. Phonotactic constraints are a source of significant regularity in the sound system of any language. How do humans exploit this phonotactic regularity when they produce speech? This project will provide a general yet unprecedented theoretical framework for how this happens across a variety of communicative situations. We will develop three converging approaches to achieve this goal. First, we will bring together insights from linguistic theory and computational linguistics to develop a comprehensive characterization of phonotactic dependencies in English. Second, we will investigate how phonotactic information is represented in the human mind and how it is used in speaking. To achieve this, we will generate concrete predictions using cognitive psychological models of speech production and reading aloud, thus bridging the gap between two fields of scientific enquiry that have long been treated as independent. We will simultaneously exploit tools and tasks from cognitive psychology and experimental phonetics to test these predictions empirically through behavioural experimentation. Finally, we will capitalise on advances in the science of learning to determine how phonotactic dependencies shape the acquisition of literacy. This research will shed light on the mechanisms involved in the processing of sounds during reading, speaking, and the acquisition of these skills, and will inform the development of novel methods for teaching and enhancing literacy in first and second languages.