Nowadays, a wide variety of techniques can be used to generate and analyze sounds. However, urgent requirements (coming from the world of ubiquitous, mobile, pervasive technologies and mixed reality in general) raise some fundamental yet unanswered questions:
-how to synthesize sounds that are perceptually adequate in a given situation (or context)?
-how to synthesize sound for direct manipulation or other forms of control?
-how to analyze sound to extract information that is genuinely meaningful?
-how to model and communicate sound embedded in multimodal content in multisensory experiences?
-how to model sound in context-aware environments?
The core research challenge emerging from the scenario depicted above is that sound and sense remain two separate domains, and methods to bridge them with two-way paths are lacking: from Sound to Sense, and from Sense to Sound. The Coordination Action S2S^2 has been conceived to prepare the scientific ground on which to build the next generation of research on sound and its perceptual/cognitive correlates. So far, a number of fast-moving sciences, ranging from signal processing to experimental psychology, and from acoustics to cognitive musicology, have touched on the S2S^2 arena only here and there.
What is still missing is an integrated multidisciplinary and multidirectional approach. Only by coordinating the actions of the most active contributors in the different subfields of the S2S^2 arena can we hope to elicit fresh ideas and new paradigms. The potential impact on society is considerable, as a number of mass-application technologies are already stagnating because of the existing gap between sound and sense. To name a few: sound/music information retrieval and data mining (whose importance exceeds P2P exchange technologies), virtual and augmented environments, expressive multimodal communication, and intelligent navigation.
Funding Scheme: CA - Coordination action