"Our visual system can extract, in a “pop-out” fashion, contrast edges and contours as well as higher-order information and statistical regularities from our sensory environment. This primal low-level perceptual analysis is achieved despite the continuously updated flow of images received from the environment and the perturbations introduced by the eye movements we generate to scan our surroundings. Underlying this remarkable ability are cortically built-in perceptual grouping mechanisms that can link together image features arising from the same physical source (e.g. the same rigid object) irrespective of partial masking, and that can encode more abstract perceptual attributes of the visual scene (e.g. collinearity, global motion) irrespective of self-generated eye movements. Object recognition, and many other perceptual capabilities such as figure-ground segregation, would be impossible without appropriate and immediate perceptual grouping. Although the visual system seems to perform these tasks effortlessly, i.e. without requiring targeted attention, perceptual grouping remains one of the main on-the-fly computational challenges in robotics and other artificial vision systems. Despite being attributed to the first integration stages of visual cortical processing, the elementary neural mechanisms responsible for perceptual grouping are still poorly understood. In primary visual cortex, long-range horizontal axons link distant neurons and could thus play a role in global perception and grouping, but the overall functional impact of these cortical horizontal connections is unknown. In this proposal we will use in vivo intracellular electrophysiology and voltage-sensitive imaging in the mammalian visual cortex to explore, in a multiscale approach ranging from the synaptic to the cell-assembly level, the cortical emergence of the Gestalt laws that guide our everyday low-level perception."