LISTEN will provide users with intuitive access to personalised and situated audio information spaces while they naturally explore everyday environments. A new form of multi-sensory content is proposed to enhance the sensual, emotional and pedagogical impact of a broad spectrum of applications ranging from art shows to marketing or entertainment events. This is achieved by augmenting the physical environment with a dynamic soundscape, which users experience over motion-tracked wireless headphones. Immersive audio-augmented environments are created by combining high-definition spatial audio rendering technology with advanced user modelling methods. These allow the content to be adapted to each user's individual spatial behaviour. The project will produce several prototypes and a VR-based authoring tool. Technological innovations will be validated under laboratory conditions, whilst the prototypes will be evaluated in public exhibitions.
The main objectives are (1) to develop a new multi-sensory form of content - the immersive audio-augmented environment, (2) to create the knowledge and technology to produce and experience this new type of content, and (3) to validate immersive audio-augmented environments in real-world applications. Perceptually consistent auditory augmentation of visually dominated exhibition spaces will be achieved by combining wide-area high-definition wireless multi-user motion tracking with user modelling, dynamic soundscape generation, binaural auditory rendering and wireless digital headphones. A VR-based authoring tool will be developed and used to evaluate the LISTEN approach by means of virtual and physical prototypes. Design guidelines for creating LISTEN environments will be developed for pedagogical, commercial, and artistic applications. LISTEN is a content-oriented multi-disciplinary research project directly involving creators and designers.
DESCRIPTION OF WORK
Auditory augmentation of visually dominated everyday environments is a new and very promising approach to creating user-friendly information systems accessible to everybody - users just wear wireless headphones and walk around. To create such an intuitive human-machine interface and the corresponding content, several concepts and technologies need to be researched, developed, and/or integrated. Examples include audio content authoring, motion tracking, binaural rendering, room acoustic simulation, world and user modelling, and wireless headphone technology. Work is divided into 9 work packages (WPs). An iterative design and development approach making use of advanced virtual prototyping techniques is employed. In the design WP, a specification of the overall system architecture, the component interfaces and the user interface will be produced. The modelling WP will develop the world and user modelling components as well as a VR-based authoring system. Among several binaural rendering architectures, the best one for the project will be determined in the rendering WP. Room acoustic simulation techniques will be integrated with a novel approach to merging real and virtual auditory scenes. The display WP is concerned with developing wireless motion-tracked headphones for many concurrent users. Tracking will be based on a SAW chirp impulse compression microwave RADAR system. A dedicated WP will integrate the system components. Virtual and physical prototypes will be developed in co-operation with curators, artists and designers. The main output of the prototyping WP will be a public exhibition. Evaluation of the developed system components as well as the usability and acceptance of LISTEN environments will be carried out in the evaluation WP. The last WP is reserved for dissemination measures, including the organisation of 3 expert workshops, maintenance of a project web site, presentation of LISTEN environments at conferences and fairs, and scientific publication of results.
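To illustrate the kind of computation a motion-tracked binaural renderer performs, the following is a minimal sketch, not taken from the LISTEN project itself: it derives simple interaural cues (a Woodworth spherical-head time difference and constant-power level panning) from a tracked listener pose and a virtual source position. The function name, head radius, and 2-D geometry are illustrative assumptions; a production renderer would instead use measured HRTF filters and room acoustic simulation as described above.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C
HEAD_RADIUS = 0.0875    # m; assumed average adult head radius

def binaural_cues(listener_pos, listener_yaw, source_pos):
    """Crude binaural cues for one point source (illustrative only).

    listener_pos, source_pos: (x, y) in metres; listener_yaw in radians.
    Returns (itd_seconds, left_gain, right_gain); positive ITD means
    the sound reaches the left ear first.
    """
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    # Source azimuth relative to the listener's facing direction,
    # wrapped to [-pi, pi]; positive = source to the listener's left.
    az = math.atan2(dy, dx) - listener_yaw
    az = (az + math.pi) % (2.0 * math.pi) - math.pi
    # Fold rear azimuths into the frontal half-plane for the ITD model.
    theta = az
    if theta > math.pi / 2:
        theta = math.pi - theta
    elif theta < -math.pi / 2:
        theta = -math.pi - theta
    # Woodworth spherical-head approximation of the interaural
    # time difference.
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (math.sin(theta) + theta)
    # Constant-power panning as a stand-in for the interaural
    # level difference.
    left_gain = math.sqrt(0.5 * (1.0 + math.sin(az)))
    right_gain = math.sqrt(0.5 * (1.0 - math.sin(az)))
    return itd, left_gain, right_gain
```

Driven at the tracker's update rate, such a function lets the rendered scene stay world-anchored: as the listener turns or walks, the cues for each virtual source are recomputed so the source appears to remain fixed in the physical space.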
Funding Scheme: CSC - Cost-sharing contracts