
Goal-directed eye-head coordination in dynamic multisensory environments

Periodic Reporting for period 4 - ORIENT (Goal-directed eye-head coordination in dynamic multisensory environments)

Reporting period: 2021-07-01 to 2022-12-31

Problem: Rapid object identification is crucial for the survival of all organisms, but it poses daunting challenges when many stimuli compete for attention and multiple sensory and motor systems are involved in processing, programming, and generating an eye-head gaze-orienting response to a selected goal. How do normal and sensory-impaired brains decide which signals to integrate (“goal”) and which to suppress (“distracter”)? Audiovisual (AV) integration only helps for spatially and temporally aligned stimuli. However, sensory inputs differ markedly in their reliability, reference frames, and processing delays, confronting the brain with considerable spatial-temporal uncertainty. Vision and audition use coordinates that misalign whenever the eyes and head move, and their sensory acuities vary across space and time in essentially different ways. As a result, assessing AV alignment poses major computational problems, which so far have only been studied for the simplest stimulus-response conditions.
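
To make the computational core of this problem concrete, the sketch below implements the standard reliability-weighted (maximum-likelihood) cue-combination rule for a visual and an auditory location estimate. It is a minimal illustration under Gaussian-noise assumptions, not the project's actual model, and all numerical values are hypothetical.

```python
import numpy as np

def integrate_av(x_v, var_v, x_a, var_a):
    """Maximum-likelihood fusion of a visual and an auditory location
    estimate, each modelled as a Gaussian with its own variance.
    Each cue is weighted by its reliability (1/variance)."""
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    x_av = w_v * x_v + (1.0 - w_v) * x_a
    var_av = 1.0 / (1.0 / var_v + 1.0 / var_a)  # fused variance < either cue alone
    return x_av, var_av

# Hypothetical example: a precise visual cue (10 deg azimuth, sd 2 deg)
# and a noisier auditory cue (16 deg azimuth, sd 6 deg).
x_av, var_av = integrate_av(x_v=10.0, var_v=2.0**2, x_a=16.0, var_a=6.0**2)
print(f"fused estimate: {x_av:.1f} deg, sd {np.sqrt(var_av):.1f} deg")
```

Such fusion is only warranted when the two cues are judged to arise from a single source; deciding whether they do, given misaligned reference frames, differing acuity profiles, and processing delays, is precisely the computational problem described above.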

Impact: Understanding the underlying neuro-computational principles and control mechanisms is crucial for diagnosing and alleviating disorders in sensory-impaired or motor-impaired patients.

Approach: We tackle these problems by applying dynamic eye-head coordination paradigms in complex environments, while systematically manipulating the visual-vestibular-auditory context and uncertainty. I vary AV goal/distracter statistics, stimulus motion, and active vs. passively evoked body movements.
We perform advanced psychophysics on healthy subjects and on patients with well-defined sensory (auditory, visual, or vestibular) disorders. We probe the sensorimotor strategies of normal and impaired systems by quantifying their acquisition of priors about the (changing) environment and their use of feedback about active or passively induced self-motion of the eyes and head.
I challenge current eye-head control models by incorporating top-down adaptive processes and eye-head motor feedback in realistic cortical-midbrain networks.
In collaboration with the Robotics Institute in Lisbon, my computational modelling will be critically tested on an autonomously learning humanoid robot, equipped with binocular foveal vision, ocular and neck-muscular systems with multiple degrees of freedom, and human-like audition.
The project started in January 2017 by setting up the collaboration with the Visual Lab of the Robotics Institute at the Instituto Superior Técnico (IST) in Lisbon.
The MoU was signed in Sept. 2017, and in Sept. 2018 IST became the second beneficiary.

Subproject 1: Human psychophysics

My multisensory two-axis vestibular chair became available for the psychophysics in June 2018. PhD student 1 was appointed in Sept. 2017, and Postdoc 1 in Oct. 2018. The first results on the neural mechanisms of sound localisation in noisy environments and on Bayesian mechanisms for audiovisual integration were published in 2017-2019. The PI presented this work at several international conferences and invited seminars, e.g. at the NCM meetings in Santa Fe, USA, in Toyama, Japan, in Rovereto, Italy, in Kosice, Slovakia, and in Alicante, Spain.
We hired Prof. A. Snik per Sept. 1, 2017 as an expert audiologist to work on sensory-deprived patients in collaboration with our applicants. As a world-recognized expert on auditory technology and audiology, he fits perfectly with the Action's aims. In Jan. 2019, we attracted Postdoc 2 to work on audiovisual psychophysics and plasticity/adaptation.
To set up the auditory patient work, we hired two PhD researchers from Jan. to June 2019 to perform sound-localisation studies with hearing-impaired patients. Postdocs 1 and 2 also performed auditory motion experiments.

Subproject 2: Computational modelling
During the first months of the Action (Feb.-May 2017), the PI appointed a PhD student to work on a computational model of the midbrain by implementing a novel spiking neural network algorithm. Six manuscripts have arisen from this work. In April 2019, the PI appointed Postdoc 3, who extended the spiking-network modelling to 3D eye-head coordination.

Subproject 3: Humanoid robotic model
The collaboration with Prof. Bernardino went very well. Between April and Oct. 2017, a master student designed and tested a first prototype robotic eye (thesis report, see website). From April 2018 to Oct. 2019, this work continued with two new students: one working on kinematics and mechanical improvements, the other on visual-image processing and positional stabilisation. The coordinator and Bernardino recruited PhD 2 for Subproject 3 in Sept. 2018, and we actively searched for Postdoc 4, who could start by the end of 2020.

During the final period 4 (months 54-72), we worked towards a successful end of the project, despite the strong detrimental influence of the Covid-19 pandemic, which had seriously impaired the experimental work in both Nijmegen and Lisbon. At the time of writing (April 23, 2023), 47 research papers have resulted from this Action, with 4 additional papers currently under review, which is an excellent result.
In addition, the PI took the initiative to publish, together with 7 co-authors from the EU and USA, the seminal modelling work on the oculomotor system of Prof. David A. Robinson (this world-leading researcher passed away in Nov. 2018 at the age of 92), as a full issue of Elsevier's Progress in Brain Research (Vol. 267, 435 pages). It appeared in Feb. 2022.
The work on the robotic eye-head system in Subproject 3 in Lisbon went very well and is still ongoing: PhD 2 constructed a highly improved biomimetic prototype of the human eye with six muscles and motors, and seven master students finished their research theses with valuable results (see the project's website for all thesis reports). Four papers on our robotic eye were submitted in 2022/2023, and two more are currently in preparation. The PI aims to continue this fruitful collaboration in his new ERC project proposal.

The project has an open website: http://www.mbfys.ru.nl/~johnvo/OrientWeb/orient.html where the project's background, ongoing work, demos, prototypes, results, the team, research papers, and student progress reports are updated regularly.
So far, 47 research papers have appeared in peer-reviewed journals (published Open Access and included in the Portal): 33 papers from Subproject 1, 10 from Subproject 2, and 4 from Subproject 3. I expect 5-7 more papers from this Action in 2023.

My ORIENT project uncovered three important novel phenomena (a minimal illustrative sketch of each follows below):
(i) The auditory system can autonomously learn target statistics, without any explicit exogenous (visual) feedback, to rapidly adapt its priors. It compares internal spatial representations of the sensory input (weighting the acoustic localisation cues) with the statistics of its own (eye-head orienting) responses.
(ii) We were the first to show that human head tracking of moving sounds is highly accurate, responds to continuous position and, importantly, velocity (slip) errors, is rapidly adaptive, and may optimize a speed-effort trade-off.
(iii) In my robotics collaboration with the Lisbon team, we explained the eye's 3D kinematics, known as Listing's law (3D eye orientations have no cyclo-torsion), by applying optimal-control principles and reinforcement learning to our six-degrees-of-freedom biomimetic robotic eye.
These novel discoveries have formed the basis for a new ERC AdvG proposal.
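
For finding (i), the sketch below gives one simple way to picture autonomous prior adaptation: a Gaussian prior over target azimuth is updated from the statistics of the system's own orienting responses, without any exogenous visual feedback. This is an illustrative caricature, not the published model; the learning rate and all numbers are hypothetical.

```python
import numpy as np

def update_prior(mu, var, response, lr=0.1):
    """Exponentially weighted update of a Gaussian prior over target
    azimuth, driven solely by the system's own orienting responses
    (no external feedback). lr is a hypothetical learning rate."""
    mu_new = (1 - lr) * mu + lr * response
    var_new = (1 - lr) * var + lr * (response - mu) ** 2
    return mu_new, var_new

rng = np.random.default_rng(1)
mu, var = 0.0, 900.0                 # broad initial prior (sd = 30 deg)
for _ in range(200):                 # targets actually cluster around +20 deg
    resp = rng.normal(20.0, 5.0)     # the system's own localisation response
    mu, var = update_prior(mu, var, resp)
print(f"adapted prior: mean {mu:.1f} deg, sd {np.sqrt(var):.1f} deg")
```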
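
For finding (ii), here is a minimal feedback-control sketch of head tracking that feeds back both the position error and the velocity (slip) error of a moving sound. The acceleration-command form and the gains are assumptions for illustration, not the model fitted to the head-tracking data.

```python
# Hypothetical gains and time step; the real values were estimated from data.
K_P, K_V, DT = 3.0, 3.0, 0.01        # position gain, velocity gain, step (s)

def track(target_pos, target_vel, n_steps=500):
    """Head pursuit of a moving sound using both position and
    velocity (slip) error feedback, as in finding (ii)."""
    head_pos, head_vel = 0.0, 0.0
    for t in range(n_steps):
        pos_err = target_pos(t * DT) - head_pos
        vel_err = target_vel(t * DT) - head_vel
        head_vel += DT * (K_P * pos_err + K_V * vel_err)  # acceleration command
        head_pos += DT * head_vel
    return head_pos

# Target moving at a constant 20 deg/s, starting from 0 deg:
final = track(lambda t: 20.0 * t, lambda t: 20.0)
print(f"head azimuth after 5 s: {final:.1f} deg (target at 100.0 deg)")
```

With the velocity-error term included, the simulated head settles onto the moving target with no steady-state lag; dropping that term leaves a persistent pursuit error, which is why the slip-error sensitivity in finding (ii) matters.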
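
For finding (iii), a short sketch of the kinematic constraint itself: under Listing's law the eye's rotation axis stays in a fixed plane orthogonal to the primary line of sight, so the torsional component of the rotation vector is zero for every gaze direction. The axis convention (x = line of sight, y/z spanning Listing's plane) is an assumption for illustration; the optimal-control and reinforcement-learning machinery that makes the robotic eye obey this law does not fit in a few lines.

```python
import numpy as np

def listing_quaternion(theta_h, theta_v):
    """Eye orientation obeying Listing's law: the rotation axis lies in
    Listing's plane (the y-z plane here), so the torsional (x) component
    of the axis is zero. Angles in radians."""
    axis = np.array([0.0, theta_v, theta_h])    # no torsional component
    angle = np.linalg.norm(axis)
    if angle == 0.0:
        return np.array([1.0, 0.0, 0.0, 0.0])   # identity quaternion
    u = axis / angle
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * u))

def torsion(q):
    """Torsional component of the rotation vector r = q_vec / q0."""
    return q[1] / q[0]

q = listing_quaternion(np.deg2rad(20.0), np.deg2rad(-10.0))
print(f"torsion: {torsion(q):.6f}")  # 0 for any orientation in Listing's plane
```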
Figure captions:
SUB1: Two-axis vestibular chair for our human multisensory psychophysics studies
SUB2: Computational model of the midbrain superior colliculus (Kasap & Van Opstal, Biol. Cybernet., 2017)
SUB3: The robotic eye prototype, built in Lisbon (with 6 extraocular muscles)
SUB3: Inside view of the prototype 3D robotic eye
SUB3: Fish-eye camera view from the 3D robotic eye prototype