Vision and Navigation in Mouse Cortex

Periodic Reporting for period 1 - VisNav (Vision and Navigation in Mouse Cortex)

Reporting period: 2016-05-01 to 2018-04-30

When an animal navigates in an environment, it uses its sense of position to find its way. In the brain, this sense of position relies on neurons located in the hippocampus. Each of these neurons is activated selectively when the animal is in a particular position in the environment. Together, these neurons form a map of the environment and report the position of the animal on this map.
To construct its spatial map, the hippocampus relies on multiple cues, and key among these are visual landmarks. The hippocampus must therefore give special weight to visual information, which is processed by the visual cortex. The visual cortex and other sensory areas of the cortex, in turn, are classically thought to provide the hippocampus with purely sensory information, independent of the animal's position in the environment.
In recent years, recordings in awake mice showed that neurons in visual cortex are modulated by non-visual factors such as the run speed of the animal. These results made us hypothesize that visual cortex neurons may not simply analyze the visual scene according to local visual features but also carry information about the position of the animal.
We placed mice in a virtual reality environment and trained them to lick a water spout when they reached a specific location in the environment to obtain a water reward. In the original version of the corridor (developed by Aman Saleem), the visual landmarks displayed on the walls were arranged so that the visual scenes in the first and second halves of the corridor were identical, repeating 40 cm apart.
While the mice performed the task, we recorded simultaneously from neurons in the CA1 region of the hippocampus and in the primary visual cortex (V1) using silicon probes. As expected, place cells in the hippocampus fired in specific regions of the corridor, allowing us to accurately decode the position of the animal from their activity. Surprisingly, however, the position of the animal could also be decoded accurately from the activity recorded in V1, despite the repetition of the visual scene; V1 neurons in fact often preferred one particular position over the visually matching position 40 cm away. These results thus indicated that V1 responses are modulated by the position of the animal in the environment.
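The report does not specify the decoding method used; as an illustration only, position can be decoded from population activity with a minimal template-matching decoder that assigns each population vector to the spatial bin whose mean firing-rate profile it best matches. All arrays and numbers below are hypothetical, not data from the study:

```python
import numpy as np

def decode_position(templates, pop_vector):
    """Decode position by matching one population activity vector
    against per-position mean firing-rate templates.

    templates: (n_positions, n_neurons) mean rates per spatial bin
    pop_vector: (n_neurons,) rates observed in one time window
    Returns the index of the best-matching spatial bin.
    """
    # Mean-center, then correlate the observed vector with each
    # position's template and pick the highest-correlation bin.
    t = templates - templates.mean(axis=1, keepdims=True)
    v = pop_vector - pop_vector.mean()
    num = t @ v
    denom = np.linalg.norm(t, axis=1) * np.linalg.norm(v)
    corr = num / np.where(denom == 0, 1, denom)  # guard flat templates
    return int(np.argmax(corr))

# Toy example: 4 spatial bins, 3 neurons
templates = np.array([[5., 0., 1.],
                      [1., 4., 0.],
                      [0., 1., 5.],
                      [3., 3., 3.]])
obs = np.array([0.8, 3.9, 0.2])        # resembles bin 1's template
print(decode_position(templates, obs))  # -> 1
```

With such a decoder, the finding above corresponds to population vectors from the two visually matching halves of the corridor being assigned to different spatial bins.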
Further experiments (performed by Mika Diamanti) showed that this spatial modulation of V1 responses could not be explained by the position of the reward site: when mice simply ran through the corridor in the absence of reward, a majority of V1 neurons still preferred either the first or the second half of the corridor, although the visual scene was identical. Nor could the modulation be explained by differences in running speed: running speed affected the amplitude of V1 responses but not the relative amplitude of responses in the two visually matching segments of the corridor.
The positions encoded in CA1, and in V1, were also correlated with the animal’s subjective estimate of its position along the corridor, as inferred from the animal’s decision to lick: when the animal licked too early, V1 and CA1 responses indicated a position that was further along the track; conversely, when the animal licked too late, V1 and CA1 indicated a position that was behind the animal.
Previous research showed that the positions encoded by CA1 place cells are influenced by surrounding visual cues but also by the distance travelled in the environment (idiothetic cues). Since the positions encoded by V1 neurons appeared coherent with the positions indicated by CA1 place cells, we next asked whether V1 responses are also influenced by the distance travelled in the environment, regardless of visual cues. We trained a different set of mice to perform a similar spatial task to the one used previously but in a virtual corridor that had no visual ambiguity. Once the animal learned to lick selectively in the reward region, we manipulated the virtual reality so that on a subset of trials, the animal had to run a different distance than usual to reach the reward location, while visual landmarks remained at the same position along the corridor.
As expected, this manipulation affected place cells in the hippocampus: CA1 place cells fired earlier on the track when the distance was longer and later when it was shorter. Surprisingly, however, a similar behavior was observed in visual cortex: the response profiles of a majority of V1 neurons shifted spatially when the animal ran a shorter or longer distance, although the visual cues remained in the same place. As a consequence, when we examined the positions encoded at the population scale in either CA1 or V1, we found that both the hippocampus and the visual cortex indicated a position that was too far ahead of the actual position of the animal when the distance run was longer than usual and too far behind when the distance was shorter. This effect of distance run on decoded positions was coherent between V1 and CA1 and intermediate between the position expected from a purely visual model and that expected from a pure distance model.
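Where a decoded position falls between the two model predictions can be summarized by a single index. The sketch below is an illustrative assumption, not the study's actual analysis; the function name and numbers are hypothetical:

```python
def model_weight(decoded, visual_pred, distance_pred):
    """Place a decoded position on the axis between a purely visual
    model's prediction and a pure distance model's prediction.

    Returns 0.0 if decoding follows the visual landmarks exactly,
    1.0 if it follows the distance run exactly, and an intermediate
    value when both cues contribute.
    """
    return (decoded - visual_pred) / (distance_pred - visual_pred)

# Hypothetical shortened trial: the landmarks predict 60 cm, the
# distance run predicts 80 cm, and the decoder reads out 72 cm.
print(model_weight(72.0, 60.0, 80.0))  # -> 0.6
```

An index near 0.5, as in this toy case, would correspond to the intermediate behavior described above, with decoded positions pulled partway from the landmarks toward the distance run.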
Our results thus indicated that the spatial representation in V1 is consistent with that encoded in the brain's navigation system, both when the animal makes mistakes and when we bias the spatial representation in the hippocampus by changing the distance run. We finally asked whether the errors made by V1 and CA1 in estimating the actual position of the animal were correlated beyond the common influence of position and behavioral factors on V1 and CA1 responses. We computed the covariance between V1 and CA1 position errors while accounting for the position of the animal, its running speed, and the position of the eye. The residual errors were significantly correlated: when CA1 encoded a position ahead of or behind the average decoded position, V1 tended to make the same mistake more often than not, demonstrating that V1 and CA1 spatial representations are intrinsically correlated.
In summary, we found that, at least in mice, the visual cortex is sensitive not only to visual stimuli but also to navigation signals such as the distance traveled in the environment. This spatial modulation of V1 responses could have different origins: it might correspond to feedback from the brain's navigation system (e.g. hippocampus or entorhinal cortex), or it could develop through learning, by adaptation of intracortical connectivity in visual areas. It remains to be established how this spatial modulation of V1 responses builds up over time as the animal becomes familiar with the environment. Still, our results suggest that, like the hippocampus, sensory cortices may encode information about the position of the animal, and that navigation signals are far more widespread in the brain than previously anticipated.