CORDIS - EU research results

Content archived on 2023-03-23

Antonio Camurri - Better understanding non-verbal communication

Understanding the brain mechanisms underpinning non-verbal communication, particularly in the context of creativity, is a challenge that requires the combined talents of a multidisciplinary team.

How do you express the bliss felt at a concert? Why is one execution of a piece of music better than another? Is an ensemble of musicians engaging its audience? Antonio Camurri, professor of human-computer interaction at the Department of Computing, Bioengineering, Robotics and Systems Engineering (DIBRIS) at the University of Genoa, Italy, believes that it is possible to evaluate quantitatively the tricky issues underpinning creative social communication and co-creativity. The centre he founded, Casa Paganini-InfoMus, coordinated the SIEMPRE project, which was funded by the EU under the FET scheme and completed in June 2013. The project aimed at developing new theoretical frameworks, computational methods and algorithms for the analysis of creative social behaviour within small groups of people. Camurri talks to youris.com about how cross-fertilisation between the performing arts and science can lead to a better understanding of how the brain works.

What makes your project unique?

It is not easy to find projects with a similar approach, mixing artistic and humanistic theories with scientific and technological methods. We study non-verbal emotions and social signals, and we use music and the performing arts as a means to investigate brain mechanisms. Other research centres study, for example, the brain's reaction to musical stimuli through brain imaging. What we do is study these phenomena in natural, ecological contexts.

What was the goal of your research?

We aimed at measuring the non-verbal expression of emotions and of social signals, such as leadership, co-creation, cohesion or entrainment in a small group of people. To do so, we looked at multimodal signals including audio, the movement of performers' bodies, and physiological signals such as respiration, heartbeat or muscle tension.

Which scenarios have you used?

We studied ensemble music performance and audience experience.
We measured expert and non-expert string quartets during live performance, different sections of an orchestra with different conductors, and audiences watching music performances. String quartets in particular have also been used as a model by economists: they are a small group with no a priori established leadership, a typical self-managed group that has to cooperate to reach a common goal. We identified features that determine who the leader is, and measured entrainment in a group – whether they are in tune with one another or not – as well as how much their entrainment affects the engagement of an audience during a performance. We found that by measuring the movement of the musicians' heads in a string quartet, and the direction in which they look, we can derive significant cues on entrainment and on leadership. We also measured the performance of an orchestra with different conductors, to study the effectiveness of their leadership.

Why do you use artistic performance?

We try to use art to inspire scientific research. For this project in particular, we used art as a test bed to avoid the complexity of the symbolic use of language. Music is a well-known case of human interactive and social activity in which non-verbal communication plays a fundamental role. Unlike speech, it is one of the few expressive activities allowing simultaneous participation. An ensemble performance is one of the most closely synchronised activities that human beings engage in.

What kind of results have you obtained?

Among others, we have developed algorithms and software modules integrated in the EYESWEB open software platform. It is designed to explore new real-time multimodal systems and mobile social applications, and it includes libraries that help programmers measure non-verbal expressive gestures and non-verbal social signals in real time.
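The idea of deriving entrainment and leadership cues from head movement can be sketched in a heavily simplified form. The sketch below is illustrative only and is not the SIEMPRE algorithms: it assumes each musician's head movement has already been reduced to a one-dimensional time series, measures entrainment as mean pairwise correlation, and picks a "leader" as the musician whose past motion best predicts the others' future motion.

```python
import numpy as np

def entrainment_index(signals):
    """Mean pairwise Pearson correlation across musicians.

    `signals` is a (musicians x time-samples) array of, e.g., head-movement
    amplitude. Values near 1 suggest strong entrainment; near 0, none."""
    n = signals.shape[0]
    corr = np.corrcoef(signals)
    return corr[~np.eye(n, dtype=bool)].mean()  # average off-diagonal entries

def leader_index(signals, max_lag=10):
    """Toy leadership cue: the musician whose signal, shifted forward in
    time, best correlates with everyone else's (i.e. the others follow)."""
    n, t = signals.shape
    scores = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            scores[i] += max(
                np.corrcoef(signals[i, :t - lag], signals[j, lag:])[0, 1]
                for lag in range(1, max_lag + 1))
    return int(np.argmax(scores))

# Demo: musician 0 "leads"; the others reproduce the motion a few samples later.
def delayed(x, d):
    return np.concatenate([np.full(d, x[0]), x[:-d]])

rng = np.random.default_rng(0)
motion = np.cumsum(rng.standard_normal(500))   # synthetic 1-D head trajectory
quartet = np.vstack([motion, delayed(motion, 3),
                     delayed(motion, 4), delayed(motion, 5)])
```

On this synthetic quartet, `entrainment_index(quartet)` is close to 1 and `leader_index(quartet)` picks musician 0. A real system would of course work on noisy multi-dimensional motion-capture data and a more robust synchronisation measure.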
An example of its application is a system that measures the behaviour of a group to build social queries for searching music content in an archive. We presented this project at ICT 2013, an EU event on Horizon 2020.

Could you give me some examples of interesting applications of your work?

We are working on rehabilitation therapies based on serious games, which are games designed for a purpose other than pure entertainment, in an EU ICT project called ASC-INCLUSION. Its goal is to help children affected by autism to recognise emotions shown in videos. Actors express six simple feelings: happiness, sadness, rage, fear, shame and disgust. The children have to identify these emotions and then express them themselves. With a Kinect sensor, we build a representation of the body and its movement in order to identify emotional states. This way the computer can interpret whether the child's body expresses the correct feeling, and it awards points in a videogame environment. Another foreseeable application is a negotiation table: we might evaluate how much a negotiating group is converging and who is hindering an agreement. An employer could use these tools to improve teamwork. Finally, we can optimise human interaction with robots, a very important asset for an ageing population. We are profoundly interwoven with the world of the arts, and we have collaborated with theatres, festivals and museums to create hands-on experiences. In contemporary music, there are consolidated examples of increasing the expressive potential of an instrument through the gestures and behaviour of the performer. We are basically giving new degrees of freedom to the musical language and to the artists.
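The serious-game loop described above – derive movement features from the body, guess the expressed emotion, award points on a match – can be sketched as follows. Everything here is hypothetical: the feature names, thresholds and scoring are invented for illustration, whereas a real system like the one described would extract features from the Kinect skeleton stream and use a trained classifier rather than hand-written rules.

```python
from dataclasses import dataclass

@dataclass
class BodyFeatures:
    """Invented, heavily simplified movement descriptors, each in 0..1."""
    expansion: float    # how open/spread the posture is
    speed: float        # overall quantity of motion
    verticality: float  # upright (1) vs collapsed/slumped (0)

def guess_emotion(f):
    """Crude rule-of-thumb mapping from movement features to one of the six
    emotions named in the article (illustrative only, not a real model)."""
    if f.expansion > 0.7 and f.speed > 0.6:
        return "happiness" if f.verticality > 0.5 else "rage"
    if f.speed > 0.7:
        return "fear"
    if f.verticality < 0.3:
        return "sadness" if f.speed < 0.3 else "shame"
    return "disgust"

def score_round(target, features, points=10):
    """Award points when the body appears to express the target emotion."""
    return points if guess_emotion(features) == target else 0
```

For example, an open, fast, upright movement scores as happiness (`score_round("happiness", BodyFeatures(0.9, 0.8, 0.9))` returns 10), while a slow, slumped posture reads as sadness.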

Countries

Italy