Auditory Scene Analysis (ASA) promises to provide the front-end needed for robust automatic speech recognition devices. A more powerful approach than the current monaural one would be to include binaural cues: the timing and intensity differences between the signals arriving at the two ears.
Listeners use these cues to locate sounds in space and to group sounds that originate from the same spatial location. This project aims to:
1. model this process in a physiologically plausible manner;
2. incorporate binaural cues into a larger model of ASA; and
3. develop testing methodologies for assessing the performance of the model.
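To make the binaural cues concrete: the timing cue is the interaural time difference (ITD), which can be estimated by cross-correlating the two ear signals and taking the lag with maximal correlation. The sketch below is purely illustrative and is not the Brown and Cooke model; the function name, sampling rate, and the 1 ms lag limit are assumptions for the example.

```python
import numpy as np

def estimate_itd(left, right, fs, max_itd=1e-3):
    """Estimate the interaural time difference (seconds) between two ear
    signals as the cross-correlation lag with maximal correlation.
    Lags are restricted to +/- max_itd (~1 ms spans a human head width).
    Illustrative sketch only, not the project's physiological model."""
    max_lag = int(round(max_itd * fs))
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        # Positive lag: the right-ear signal lags the left (source to the left).
        shifted = np.roll(right, -lag)
        # Trim the edges so wrapped samples from np.roll are ignored.
        c = float(np.dot(left[max_lag:-max_lag], shifted[max_lag:-max_lag]))
        if c > best_corr:
            best_lag, best_corr = lag, c
    return best_lag / fs

# A 500 Hz tone that reaches the left ear 0.5 ms before the right.
fs = 16000
t = np.arange(0, 0.1, 1 / fs)
sig = np.sin(2 * np.pi * 500 * t)
left = sig
right = np.roll(sig, int(0.0005 * fs))   # right ear delayed by 8 samples
itd = estimate_itd(left, right, fs)      # recovers the 0.5 ms delay
```

A full binaural front-end would apply this per frequency channel (e.g. after a gammatone filterbank) and combine ITD with the corresponding intensity difference, but the single-channel version above captures the core computation.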
The first database primarily intended for studying computational ASA has recently been collected and analysed at Sheffield. This corpus will provide an ideal data source for the project.
The chosen ASA model was developed by Guy Brown and Martin Cooke at Sheffield; Guy Brown will act as the scientist in charge of this project.