CORDIS - EU research results

How Hands Help Us Hear

Periodic Reporting for period 1 - HearingHands (How Hands Help Us Hear)

Reporting period: 2022-09-01 to 2025-02-28

When we have a conversation, we not only produce sound but also move to the rhythm of our speech. This is so inherent to human communication that we even gesture when speaking on the phone, even though the receiver cannot see our hand movements. However, what these simple rhythmic movements contribute to spoken communication remains unclear. In other words, does the receiver benefit from seeing even simple beat gestures when trying to work out what is being said?

The HearingHands program addresses this question, viewing the timing of simple gestures as a form of multimodal prosody. We hypothesize that the temporal alignment of hand gestures to speech prosody (so-called ‘gesture-speech coupling’) directly influences what we hear. Our objectives are [WP1] to chart the PREVALENCE of gesture-speech coupling as a multimodal prominence cue in production and perception across a typologically diverse set of languages; [WP2] to capture the VARIABILITY in production and perception of gesture-speech coupling in both neurotypical and atypical populations; [WP3] to determine the CONSTRAINTS that govern gestural timing effects in more naturalistic communicative settings. These objectives will be achieved through cross-linguistic comparisons of gesture-speech production and perception, tests of multimodal integration in autistic and neurotypical individuals, and psychoacoustic tests of gestural timing effects employing eye-tracking and virtual reality. Outcomes are expected to reveal that even the simplest flicks of the hands can guide the listener in spoken word recognition and speech segmentation.
We have successfully demonstrated that:
- beat gestures serve as a cue to lexical stress in both Dutch and Spanish, distinguishing CONtent from conTENT, and CANto from canTÓ;
- this effect of beat gestures occurs in real time: it is temporally anchored to the beat apex and arises while the word is still unfolding;
- this effect of beat gestures can be reliably detected in a mini-test of under 10 min;
- it can also be triggered by a human-like artificially-generated moving avatar;
- beat gestures can even have a lasting impact on spoken word recognition, shaping subsequent audio-only speech perception through recalibration;
- in Mandarin (a lexical tone language), gesture apexes align to the vowel onset rather than to pitch peaks, unlike in stress languages;
- in Mandarin, producing a gesture raises the f0 across the entire lexical tone contour.
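The Mandarin timing finding rests on comparing how closely gesture apexes sit to two candidate acoustic anchors. As an illustration only, the following sketch computes mean absolute asynchrony between apexes and each anchor; all timestamps and the helper function are hypothetical, not project data or project code.

```python
# Hypothetical sketch: which acoustic anchor do gesture apexes align to?
# All timestamps (in seconds) are invented for illustration.

def mean_abs_asynchrony(apexes, anchors):
    """Mean absolute time difference between each gesture apex and its anchor."""
    return sum(abs(a - b) for a, b in zip(apexes, anchors)) / len(apexes)

# Per-token timestamps for one imaginary speaker: the gesture apex,
# the vowel onset, and the f0 (pitch) peak of the target syllable.
gesture_apex = [0.42, 1.10, 1.88, 2.51]
vowel_onset  = [0.40, 1.12, 1.85, 2.53]
pitch_peak   = [0.55, 1.24, 2.01, 2.66]

to_vowel = mean_abs_asynchrony(gesture_apex, vowel_onset)
to_peak  = mean_abs_asynchrony(gesture_apex, pitch_peak)

print(f"mean asynchrony to vowel onset: {to_vowel * 1000:.1f} ms")
print(f"mean asynchrony to pitch peak:  {to_peak * 1000:.1f} ms")
# With these toy numbers, apexes sit much closer to vowel onsets,
# mirroring the kind of contrast the report describes for Mandarin.
```

In a real analysis the anchors would come from acoustic annotation (e.g. forced alignment for vowel onsets, pitch tracking for f0 peaks), and the comparison would be made per speaker with appropriate statistics.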
We are currently exploring gesture-speech integration in autistic as compared to neurotypical participants. This may reveal relevant differences in multisensory speech processing in autism. Another avenue with potential applied impact concerns the use of virtual avatars to enrich the acoustic speech signal, especially in challenging listening conditions.