Optimal Control of Eye Movements

Final Report Summary - OPTIMEYES (Optimal Control of Eye Movements)

The brain has a remarkable capacity to form internal representations that can be used to interpret stimuli and design efficient actions. A major challenge in neuroscience and cognitive science is to characterise these internal representations in terms of what type of computations are performed, how these are learned, and how the neural machinery supports these computations. Probabilistic computations have recently been implicated in a range of behavioural and neurophysiological phenomena, including perception, concept learning, causal learning in humans and infants, and adaptations in the sensorimotor system. At the core of probabilistic computing is the recognition of the uncertainty that arises through inference and planning: noise and ambiguity are inherent in the physical world and in the way we perceive it. These uncertainties can take different forms, but the principle for handling them is invariant: Bayesian statistics provides us with the tools to quantify the arising uncertainties and to use this information effectively to devise optimal actions.
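The principle can be stated compactly with Bayes' rule (standard notation, not taken from the report): a prior belief over the state of the world is combined with the likelihood of noisy sensory data to yield a posterior distribution that quantifies the remaining uncertainty and can then inform action selection.

```latex
% theta: state of the environment, d: noisy sensory data
p(\theta \mid d) = \frac{p(d \mid \theta)\, p(\theta)}{\int p(d \mid \theta')\, p(\theta')\, \mathrm{d}\theta'}
```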

In our research we aimed to characterise the consequences of Bayesian computations in action generation, by analysing the planning of eye movements, in decision making in psychophysics tasks, and in neural response characteristics.

Effect of probabilistic inference on eye movement planning

In order to collect information about the environment effectively, humans constantly update the position of their eyes. The visual system primarily uses discrete eye movements, saccades, to relocate the eye. Relocating gaze is a general method for reducing the uncertainty that characterises environmental stimuli: lower-resolution peripheral vision provides less information about the parts of a stimulus that fall on off-foveal regions of the retina than about the parts projected directly onto the fovea, so fixating novel parts of the visual scene with the fovea provides increased information. Recent studies have shown that the strategy humans use to explore their visual environment is close to optimal: humans integrate information about past actions and current stimuli to plan the subsequent fixation position. We aimed to demonstrate that variability in behaviour and seeking optimal strategies are not contradictory; rather, they are tied together through the idea of stochastic sampling.
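One way to make such a near-optimal strategy concrete is the sketch below; it is only an illustration under assumed ingredients (a one-dimensional search display, observation noise that grows with eccentricity, and a Monte Carlo estimate of post-glimpse uncertainty), not the project's actual model. Each candidate fixation is scored by the expected entropy of the posterior over target locations after one noisy glimpse, and the fixation with the lowest expected entropy is selected.

```python
# A minimal sketch, not the project's actual model: choosing the next fixation
# to reduce uncertainty about where a search target is.  The visibility model
# (observation noise growing with eccentricity) is an assumption for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 12                                    # candidate target locations
locations = np.arange(N, dtype=float)
posterior = np.full(N, 1.0 / N)           # current belief about the target location

def noise_sd(fixation):
    """Observation noise at each location grows with distance from fixation."""
    return 0.5 + 0.5 * np.abs(locations - fixation)

def bayes_update(prior, obs, sd):
    """Posterior over the target given one glimpse: obs[j] ~ N(1 if target==j else 0, sd[j])."""
    log_post = np.log(prior)
    for i in range(N):
        template = (locations == locations[i]).astype(float)   # expected glimpse if target is at i
        log_post[i] += np.sum(-0.5 * ((obs - template) / sd) ** 2)
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

def entropy(p):
    return float(-(p * np.log(p + 1e-12)).sum())

def expected_entropy(fixation, prior, n_mc=200):
    """Monte Carlo estimate of the post-glimpse posterior entropy for a candidate fixation."""
    sd = noise_sd(fixation)
    total = 0.0
    for _ in range(n_mc):
        target = rng.choice(N, p=prior)
        obs = (locations == locations[target]).astype(float) + rng.normal(0.0, sd)
        total += entropy(bayes_update(prior, obs, sd))
    return total / n_mc

scores = [expected_entropy(f, posterior) for f in locations]
print("next fixation:", locations[int(np.argmin(scores))])   # central fixations tend to win here
```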

Stochastic sampling is a method in statistics for approximating probabilistic computations that would otherwise be intractable. In our research we exploited the fact that the structure of the variability in human behaviour reveals the type of computations people perform. We constructed models to predict the patterns of errors for two alternative forms of computation: stochastic sampling, in which the uncertainty present in inferences gives rise to a form of variability that reflects the statistical characteristics of that uncertainty; and attentional mechanisms that constrain the effectively evaluated visual field, thereby introducing an error that depends directly on variability in the layout of the stimulus but not on other forms of uncertainty. Analysing the eye movements of human participants, we demonstrated that a sampling account predicts variability in actions better than an attentional account. This result provides evidence that the variability in eye movements is related to uncertainties arising through probabilistic inference.
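The distinguishing signature of the sampling account can be illustrated with a toy simulation (Gaussian posteriors of assumed widths, not the stimuli or analysis used in the study): if each saccade target is a sample from the current posterior over the target location, then the trial-to-trial spread of saccade endpoints tracks the width, and hence the uncertainty, of that posterior.

```python
# A toy illustration of the sampling account (assumed posteriors, for illustration only):
# saccade endpoints drawn as samples from the posterior inherit its spread.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-10.0, 10.0, 2001)

for width in (1.0, 3.0):                               # narrow vs broad posterior
    posterior = np.exp(-0.5 * (x / width) ** 2)
    posterior /= posterior.sum()
    endpoints = rng.choice(x, size=5000, p=posterior)  # sampled saccade targets
    print(f"posterior sd {width:.1f} -> endpoint sd {endpoints.std():.2f}")
```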

Quantifying internal representations

Humans routinely make judgments about different aspects of their environment, spanning a spectrum as diverse as relational, modality, or qualitative judgments. These judgments are supported by elaborate mental representations that summarise our knowledge about the task to be accomplished. Some components of this knowledge are task-specific (e.g. utilities), while other components depend solely on the statistics of the stimuli (e.g. subjective probabilities) and are therefore task-independent. While every component of a mental representation affects the judgments that humans make, characterising mental representations through the patterns of judgments is rendered challenging by the limited data these judgments provide. Furthermore, since these representations are shaped by the experiences an individual collects, the representations are expected to vary from one subject to another. Inferring mental representations therefore amounts to a subject-by-subject analysis of judgments, and the viability of the analysis can be verified by assessing whether the inferred task-independent component of a mental representation is indeed invariant across tasks.

We proposed a Bayesian framework that captures two levels of uncertainty when assessing mental representations: first, the uncertainty an observer has about the stimulus; second, the uncertainty of the experimenter, which arises from the limited data a discrete decision provides about the internal state of the observer. We built an ideal observer model that formalises how an individual's subjective distribution manifests itself in their (potentially noisy) decision-making patterns, and Bayes' rule is then used to formalise our, the experimenter's, uncertainty about their true subjective distribution. Using this method we could quantify the subjective distributions of individual subjects and demonstrate that the subjective distribution is invariant across different tasks for a given subject while it changes across subjects, both within and across tasks. This result is a first but significant step towards the quantitative assessment of mental representations in methodologies as diverse as pure psychophysics or imaging studies.
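A minimal sketch of this two-level scheme is given below; the task (judging which of two stimuli is more probable under the subject's subjective Gaussian), the decision-noise model, and all parameter values are assumptions introduced for illustration, not the study's design. The first level simulates the observer's noisy discrete choices; the second level is the experimenter's Bayesian inference, from those choices alone, of a posterior over the mean of the subjective distribution.

```python
# A minimal sketch of the two-level inference (hypothetical task and parameters).
import numpy as np

rng = np.random.default_rng(2)
true_mu = 1.5                     # mean of the subject's subjective Gaussian (sd fixed at 1)

def p_choose_first(s1, s2, mu, beta=3.0):
    """Ideal observer with decision noise: logistic choice on the subjective log-density difference."""
    d = -0.5 * (s1 - mu) ** 2 + 0.5 * (s2 - mu) ** 2
    z = np.clip(beta * d, -30.0, 30.0)        # keep the logistic numerically tame
    return 1.0 / (1.0 + np.exp(-z))

# Level 1: the observer's uncertainty -> simulate noisy choices on random stimulus pairs.
s1, s2 = rng.normal(0, 3, 300), rng.normal(0, 3, 300)
choices = rng.random(300) < p_choose_first(s1, s2, true_mu)

# Level 2: the experimenter's uncertainty -> grid posterior over mu given only the choices.
mu_grid = np.linspace(-4.0, 4.0, 401)
log_post = np.empty_like(mu_grid)
for i, mu in enumerate(mu_grid):
    p = p_choose_first(s1, s2, mu)
    log_post[i] = np.sum(np.where(choices, np.log(p), np.log(1.0 - p)))
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("experimenter's posterior mean for mu:", round(float((mu_grid * post).sum()), 2))
```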

Neural representations of uncertainty

Behavioural experiments have provided convincing evidence for Bayesian computations in the brain, but ultimately we also need to understand how neurones implement these computations. Recent advances in theoretical neuroscience have demonstrated that the adaptation of the neural representation in the visual cortex to the statistics of the stimuli reveals characteristics of Bayesian models of vision, and this approach has proved effective in providing a mapping between stimulus features and mean neural responses. This mapping, however, remained agnostic about major aspects of neural responses that were classified as stimulus-independent components: spontaneous activity and response variability.

We proposed that probabilistic Bayesian inference provides a plausible explanation for both of these stimulus-independent aspects of neural response statistics. In a collaborative study we demonstrated that identifying spontaneous activity with a distribution reflecting previous experiences (the prior distribution in Bayesian statistics), and evoked activity with the possible interpretations of the current state of the environment (the posterior distribution), can explain age- and stimulus-statistics-related changes in the similarity of the two distributions.
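The kind of comparison involved can be sketched as follows, with entirely synthetic activity patterns (the firing probabilities, the binarisation into activity "words", and the Kullback-Leibler divergence as the similarity measure are all assumptions for illustration): estimate the distribution of multi-neuron activity words during spontaneous activity and during stimulation averaged over stimuli, and quantify how closely the two match.

```python
# A schematic illustration with made-up data (not the study's recordings):
# comparing the distribution of spontaneous activity patterns with the
# stimulus-averaged distribution of evoked patterns via KL divergence.
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_samples = 6, 50000

def pattern_distribution(binary_patterns):
    """Empirical distribution over the 2**n possible binary activity words."""
    codes = binary_patterns.dot(1 << np.arange(binary_patterns.shape[1]))
    return np.bincount(codes, minlength=2 ** binary_patterns.shape[1]) / len(codes)

def kl(p, q, eps=1e-9):
    """KL divergence with a small regulariser to avoid division by zero."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

spontaneous = (rng.random((n_samples, n_neurons)) < 0.20).astype(int)   # assumed firing prob 0.20
evoked      = (rng.random((n_samples, n_neurons)) < 0.25).astype(int)   # assumed firing prob 0.25
print("KL(evoked || spontaneous):",
      round(kl(pattern_distribution(evoked), pattern_distribution(spontaneous)), 4))
```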

In a modelling study we demonstrated that the characteristics of variability and covariability, as measured by trial-by-trial changes in neural responses, are highly consistent with the 'sampling hypothesis' when applied to probabilistic inference. This approach establishes a strong tie between neural variability in the visual cortex and perceptual uncertainty, and provides a rational account for neurophysiological phenomena that were hitherto attributed to noise.
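A toy demonstration of why sampling ties variability to uncertainty (the encoding and the numbers are assumed, not those of the modelling study): if each neuron's momentary response reports one coordinate of a sample drawn from the current posterior, then trial-by-trial response covariability mirrors the covariance, and hence the uncertainty, of that posterior.

```python
# A toy demonstration of the sampling hypothesis with an assumed linear encoding:
# responses that report posterior samples inherit the posterior's correlations.
import numpy as np

rng = np.random.default_rng(4)
posterior_cov = np.array([[1.0, 0.6],
                          [0.6, 1.0]])             # uncertainty about two stimulus features
samples = rng.multivariate_normal([0.0, 0.0], posterior_cov, size=10000)
responses = 5.0 + 2.0 * samples                    # momentary responses of two "neurons"
print("trial-by-trial response correlation:", round(np.corrcoef(responses.T)[0, 1], 2))
```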

Our results have strong implications for theories of how information is represented throughout the cortex.