Sound sources within the environment produce an aggregate waveform that enters each ear. To make sense of the world, a listener must separate this input into source-related components. This research proposal concerns the processes by which sensory information is converted into a perceptual representation relevant for behaviour. Studies that target different stages of perceptual processing often employ different tools, different preparations (e.g. humans vs. animal models), and different measures of activation.
This makes comparison and integration across the different stages very difficult. This proposal brings together researchers studying different aspects of how perceptual representations are created: David McAlpine is an electrophysiologist specialising in sub-cortical auditory processing, Tim Griffiths is an expert in fMRI, and Maria Chait's background is in psychophysics and MEG of human auditory cortex. Together, we will develop a unified research programme, designing experimental stimuli and paradigms that can be used to probe neural activity at multiple stages in the auditory pathway, in order to better understand the processes by which perceptual representations of the acoustic environment are constructed. Specifically, we describe two paradigms. In the first, we study change detection: neural adjustment to acoustic changes at different stages of processing. With this technique we hope to explore the perceptually relevant dimensions of sound.
In the second paradigm, we study how these dimensions of sound bind together to create coherent representations of multi-featured sound sources. MEG and psychophysics are first used to study the dynamics of human brain responses to such sounds, and how behaviourally relevant representations arise from sensory input. With this information we formulate hypotheses, tested with fMRI in humans and electrophysiology in animal models, concerning the brain structures and computations involved in auditory perception.