S2S²
Semantic Interaction with Music Audio Contents

Nowadays, a wide variety of techniques can be used to generate and analyse sounds. However, urgent requirements, coming from ubiquitous, mobile, and pervasive technologies and from mixed reality in general, raise some fundamental yet unanswered questions:

  • how to synthesize sounds that are perceptually adequate in a given situation (or context)?
  • how to synthesize sound for direct manipulation or other forms of control?
  • how to analyse sound to extract information that is genuinely meaningful?
  • how to model and communicate sound embedded in multimodal content in multisensory experiences?
  • how to model sound in context-aware environments?

A core research challenge emerges from the scenario depicted above: sound and sense remain two separate domains, and methods to bridge them with two-way paths are lacking — from Sound to Sense, and from Sense to Sound (S2S²). The Coordination Action S2S² has been conceived to prepare the scientific ground on which to build the next generation of research on sound and its perceptual and cognitive reflexes. So far, a number of fast-moving sciences, ranging from signal processing to experimental psychology, from acoustics to cognitive musicology, have touched on the S2S² arena here and there. What is still missing is an integrated, multidisciplinary, and multidirectional approach. Only by coordinating the actions of the most active contributors in the different subfields of the S2S² arena can we hope to elicit fresh ideas and new paradigms.

Research staff

Partners

Sponsor

European Commission

FET-Open, EU 6th FP

Key facts