Periodic Reporting for period 5 - GEOCOG (Cognitive Geometry: Deciphering neural concept spaces and engineering knowledge to empower smart brains in a smart society)
Reporting period: 2023-05-01 to 2024-01-31
In our experimental framework, we investigate whether these domain-general principles support a wide range of fundamental cognitive functions, from spatial navigation to memory formation, with a specific focus on the knowledge acquisition and concept learning explored in this project. The idea of a “concept space” suggests that the brain represents abstract knowledge (concepts, relationships, and experiences) within a high-dimensional cognitive space. This idea builds on the notion of a cognitive map, originally used to describe spatial navigation in the hippocampal-entorhinal system, and extends it to non-spatial domains.
In this framework, concepts are represented as points in a continuous space, and their relationships (such as similarity or semantic distance) correspond to geometric distances within this space. Grid-like coding in the entorhinal cortex, well known from spatial navigation, may thus provide a general neural mechanism for structuring these abstract spaces, enabling the brain to represent not only physical environments but also abstract, conceptual knowledge.
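To make this geometric view concrete, the toy sketch below (illustrative only; the concepts, features, and values are invented, not project data) represents concepts as points in a feature space and reads off semantic relatedness as Euclidean distance.

```python
import numpy as np

# Toy "concept space": each concept is a point in a feature space.
# The features [has_wings, is_small, is_domestic] are invented for illustration.
concepts = {
    "sparrow": np.array([0.9, 0.8, 0.1]),
    "eagle":   np.array([0.9, 0.2, 0.0]),
    "cat":     np.array([0.0, 0.6, 0.9]),
}

def semantic_distance(a, b):
    """Geometric (Euclidean) distance as a proxy for semantic distance."""
    return np.linalg.norm(concepts[a] - concepts[b])

print(semantic_distance("sparrow", "eagle"))  # small: closely related concepts
print(semantic_distance("sparrow", "cat"))    # larger: less related concepts
```

In this scheme, learning a new concept amounts to placing a new point in the space, and generalization follows from proximity to existing points.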
Nau et al. (2018, Nat Neurosci) showed that entorhinal grid-like codes, traditionally known for supporting spatial navigation, also represent gaze trajectories during visual exploration. This suggests a general mechanism for mapping continuous information beyond physical space. Julian & Doeller (2021, Nat Neurosci) demonstrated that the hippocampus and entorhinal cortex store contextual memories, enabling flexible retrieval of information tied to different environments via processes such as remapping and realignment. Notably, trial-by-trial changes in these patterns predicted context-dependent behavior in ambiguous situations, highlighting how the hippocampal–entorhinal system organizes and retrieves information to guide behavior under uncertainty. Building on these findings, Garvert et al. (2023, Nat Neurosci) revealed that the brain dynamically acquires abstract knowledge by flexibly updating hippocampal cognitive maps based on task demands, shifting between spatial and predictive relational structures. The orbitofrontal cortex tracked which map best explained outcomes and guided updates in hippocampal representations, illustrating a general mechanism for adapting relational knowledge to support inference and decision-making.
Bellmund et al. (2022, Nat Commun) further showed that the hippocampus encodes temporal relations of events based on constructed timelines and generalizes these relations across similar sequences, combining mnemonic construction with abstract structural knowledge to support flexible memory and reasoning about time. Polti et al. (2022, eLife) found that the hippocampus rapidly encodes task-specific temporal regularities, guiding sensorimotor timing and enabling real-time behavioral adaptation. Extending this, Polti et al. (2023, bioRxiv) demonstrated that entorhinal grid-like signals track task regularities and behavioral biases, such as regression to the mean, during time estimation. This suggests that grid-like neural representations encode task structure by integrating sensory evidence with prior expectations, enabling predictive adjustments of behavior.
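The regression-to-the-mean bias described above is commonly captured by Bayesian integration of a noisy duration measurement with a prior over the experienced distribution of durations. The sketch below is a minimal illustration of that textbook model with invented parameters, not the analysis used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior over experienced durations (seconds) and sensory noise; values invented.
prior_mean, prior_sd = 1.0, 0.3
noise_sd = 0.2

def bayesian_estimate(true_duration):
    """Posterior mean for a Gaussian prior combined with a Gaussian likelihood."""
    measurement = true_duration + rng.normal(0.0, noise_sd)
    w = prior_sd**2 / (prior_sd**2 + noise_sd**2)  # weight on the measurement
    return w * measurement + (1.0 - w) * prior_mean

for d in (0.6, 1.0, 1.4):
    est = np.mean([bayesian_estimate(d) for _ in range(5000)])
    print(f"true {d:.1f}s -> mean estimate {est:.2f}s")
# Short intervals are overestimated and long ones underestimated:
# estimates regress toward the prior mean.
```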
Collectively, these studies reveal that the hippocampal-entorhinal system employs a generalizable, map-like coding scheme—including grid-like representations and predictive mapping—to acquire, organize, and leverage abstract knowledge structures. This uncovers a core neural mechanism by which the brain transforms discrete experiences into flexible, relational knowledge, supporting inference, generalization, and imagination across spatial and conceptual domains.
First, DeepMReye (Frey et al., 2021, Nat Neurosci) predicts gaze position directly from fMRI data, eliminating the need for expensive eye-trackers. The method achieves sub-imaging temporal resolution, works on existing datasets, and uniquely decodes gaze even when the eyes are closed, enabling broad applications in laboratories and hospitals.
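As a much-simplified stand-in for the underlying idea (DeepMReye itself trains a convolutional network and ships its own toolbox API, neither of which is reproduced here), the sketch below maps synthetic multi-voxel eyeball patterns to gaze coordinates with a multi-output ridge regression.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in for the decoder's input: multi-voxel patterns from the
# eyeballs (n_volumes x n_voxels), here generated from a random linear map.
n_volumes, n_voxels = 600, 200
gaze = rng.uniform(-10, 10, size=(n_volumes, 2))   # (x, y) gaze in degrees
mixing = rng.normal(size=(2, n_voxels))
voxels = gaze @ mixing + rng.normal(0.0, 0.3, size=(n_volumes, n_voxels))

X_train, X_test, y_train, y_test = train_test_split(voxels, gaze, random_state=0)

# A multi-output ridge regression, used purely to illustrate the
# voxels -> gaze mapping that the real CNN learns.
model = Ridge(alpha=1.0).fit(X_train, y_train)
print("R^2 on held-out volumes:", model.score(X_test, y_test))
```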
Second, a convolutional neural network classifier (Frey et al., 2021, eLife) decodes sensory and behavioral variables directly from wide-band neural data, outperforming traditional Bayesian decoders with minimal preprocessing. Applying it to CA1 recordings uncovered a novel head-direction signal in interneurons, highlighting its power to reveal new neural representations.
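Again as a hedged illustration rather than the published model (which is a convolutional network operating on wavelet-transformed wide-band traces), the sketch below decodes a binary behavioral state from synthetic band-power features to make the decoding setup concrete.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic band-power features (n_windows x n_channels*n_bands) for two
# behavioral states; the published decoder works on raw wide-band data.
n_windows, n_features = 400, 64
labels = rng.integers(0, 2, size=n_windows)        # e.g. moving vs. still
features = rng.normal(size=(n_windows, n_features))
features[labels == 1, :8] += 0.8                   # state-dependent power shift

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, features, labels, cv=5)
print("decoding accuracy:", scores.mean())
```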
Third, we introduced a deep neural network model (Frey et al., 2023, CVPR) that mimics cortical and hippocampal processing beyond vision. Using a virtual environment modeled on the human 'Four Mountains' task, we tested how DNNs reconcile different representational schemes for spatial orientation, pushing forward biologically inspired AI.
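One simple way to compare such representational schemes is to test whether activations for the same place are stable across viewpoints (allocentric, place-like coding) or instead vary with the viewpoint (egocentric, view-like coding). The sketch below runs this comparison on invented activation vectors; it is not the CVPR model.

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_similarity(acts):
    """Mean pairwise cosine similarity between activation vectors."""
    acts = acts / np.linalg.norm(acts, axis=1, keepdims=True)
    sims = acts @ acts.T
    return sims[np.triu_indices(len(acts), k=1)].mean()

# Invented activations for one place seen from four viewpoints.
place_code = rng.normal(size=(1, 50)) + 0.1 * rng.normal(size=(4, 50))  # allocentric
view_code = rng.normal(size=(4, 50))                                    # egocentric

print("place-like code, across-view similarity:", mean_similarity(place_code))
print("view-like code,  across-view similarity:", mean_similarity(view_code))
# A place-like (allocentric) code is far more similar across viewpoints.
```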
Together, these developments showcase the synergy of neuroscience and AI: leveraging brain-inspired algorithms to advance machine learning, while using AI to decode, model, and better understand complex neural mechanisms.