
Learning to read the code of large neural populations

Final Report Summary - NEURO-POPCODE (Learning to read the code of large neural populations)

Information is carried in the brain by sequences of electrical pulses, or spikes, that neurons send to one another. Much of our understanding of the language of the brain, also known as the “neural code”, has been based on detailed studies of the spiking patterns of single cells. Yet our sensations, thoughts, and actions are carried and executed by the joint activity of large groups of neurons. Understanding this language can be thought of as building a dictionary from external stimuli to neuronal responses and vice versa. Learning such a dictionary is hard because of the huge number of possible activity patterns, or “words”, that large groups of neurons may use, and because neurons are noisy and may respond to the same stimulus with different spiking patterns. A further practical difficulty is the vastness of the space of possible stimuli that the brain might encounter, yet needs to represent and decode.
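As a rough illustration of the first difficulty, the following minimal sketch (assuming only that each neuron's activity in a short time bin is reduced to spike or no-spike; the bin width and population sizes are arbitrary) shows how quickly the number of possible population “words” grows with the number of neurons:

    bin_ms = 20                      # assumed time-bin width (ms), for illustration only
    for n_neurons in (10, 40, 100):
        n_words = 2 ** n_neurons     # one binary spike/no-spike symbol per neuron per bin
        print(f"{n_neurons:>3} neurons -> {n_words:.3e} possible words per {bin_ms} ms bin")

Even for a few dozen neurons the number of possible words vastly exceeds what any experiment can sample directly, which is why simplified models of the population vocabulary are needed.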

Our work has focused on finding the “design principles” that govern the code of large groups of neurons. We have shown that the activity patterns of groups of about 100 cells can be strongly correlated at the group level, even though the typical correlations between pairs of cells are relatively weak. The vocabulary of such groups thus emerges from many seemingly weak relations between small subsets of cells that add up to shape the collective code of the whole group.
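The following toy simulation (not the project's actual model; all parameters are invented for illustration) shows how weak correlations between pairs of cells, here induced by a shared latent input, can nonetheless add up to strong correlations at the level of the whole population:

    import numpy as np

    rng = np.random.default_rng(0)
    n_neurons, n_trials = 100, 50_000
    p_lo, p_hi, p_shared = 0.02, 0.08, 0.5    # assumed firing probabilities

    # A shared latent state weakly modulates every neuron's firing probability.
    high_state = rng.random(n_trials) < p_shared
    p_spike = np.where(high_state, p_hi, p_lo)[:, None]
    spikes = rng.random((n_trials, n_neurons)) < p_spike     # trials x neurons

    # Typical correlation between pairs of cells (weak).
    corr = np.corrcoef(spikes.T)
    mean_pair_corr = corr[np.triu_indices(n_neurons, k=1)].mean()

    # Probability that the whole population is silent: observed, versus the
    # prediction of a model in which all cells are independent.
    p_silent_obs = (spikes.sum(axis=1) == 0).mean()
    p_silent_ind = np.prod(1.0 - spikes.mean(axis=0))

    print(f"mean pairwise correlation : {mean_pair_corr:.3f}")
    print(f"P(all silent), observed   : {p_silent_obs:.2e}")
    print(f"P(all silent), independent: {p_silent_ind:.2e}")

In this toy example the pairwise correlations are only around two percent, yet the probability of collective events such as complete silence deviates from the independent prediction by roughly an order of magnitude.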

The main effort of this project has been to identify the principles that govern the vocabulary of very large groups of neurons, and to map the relations between the neural vocabulary and the stimuli it represents.

In the first part of the work we focused on the vertebrate retina as an example system and mapped its responses to artificial and natural stimuli. We introduced a new family of mathematical models that describe the joint responses of cells to their stimuli with very high accuracy. In particular, we have shown that groups of cells sometimes encode stimuli in an independent manner and sometimes as a group, and that our model can capture both behaviors. Moreover, our models gave a highly accurate description of the noise that shapes the response properties of these population codes. Equipped with accurate models of stimulus encoding by neuronal populations, we then turned to map the way codewords are used to convey information and how they may be decoded.
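One conceptually simple way to ask whether cells encode a stimulus independently or as a group is to compare the held-out likelihood of an independent encoding model against that of a joint one. The sketch below is not the project's model family; the simulated pair of cells with partly shared trial-to-trial noise is an assumption for illustration only:

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(n_trials):
        """Two binary cells whose responses to a repeated stimulus share some noise."""
        shared = rng.random(n_trials) < 0.3
        cell1 = (rng.random(n_trials) < 0.2) | shared
        cell2 = (rng.random(n_trials) < 0.2) | shared
        return np.stack([cell1, cell2], axis=1).astype(int)

    train, test = simulate(5000), simulate(5000)

    # Independent model: product of each cell's marginal spike probability.
    p1, p2 = train.mean(axis=0)
    ll_independent = np.mean(np.log(np.where(test[:, 0] == 1, p1, 1 - p1))
                             + np.log(np.where(test[:, 1] == 1, p2, 1 - p2)))

    # Joint model: empirical probability of each of the four possible response words.
    word = lambda resp: 2 * resp[:, 0] + resp[:, 1]
    p_word = np.bincount(word(train), minlength=4) / len(train)
    ll_joint = np.mean(np.log(p_word[word(test)]))

    print(f"held-out log-likelihood per trial, independent model: {ll_independent:.4f}")
    print(f"held-out log-likelihood per trial, joint model      : {ll_joint:.4f}")

When the joint model wins on held-out data, the cells' trial-to-trial variability is coordinated and they effectively encode the stimulus as a group; when the two models are indistinguishable, an independent description suffices.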

We then built a ‘thesaurus’ for the neural code of large groups of cells, in which we learned which “neural population words” are semantically similar according to the set of stimuli they are used to encode. This thesaurus revealed that the codebook of groups of tens of neurons in the retina is organized into ~150 semantic clusters. Importantly, we showed that this thesaurus cannot be learned from any simple, intuitive measure of similarity between the population words themselves. We have shown that the thesaurus makes it possible to infer the meaning of words that have not been seen before, and to decode the stimuli that gave rise to these novel words.
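A conceptual sketch of the thesaurus idea (the toy conditional distributions, the use of the Jensen-Shannon divergence, and the word and stimulus counts below are assumptions for illustration, not the project's exact procedure): two population words are treated as synonyms when the distributions of stimuli they are used to encode are similar, regardless of how similar the words themselves look.

    import numpy as np

    rng = np.random.default_rng(2)
    n_words, n_stimuli = 8, 20       # toy sizes, for illustration only

    # Toy P(stimulus | word): each row is the distribution of stimuli that
    # preceded one particular population word.
    p_stim_given_word = rng.dirichlet(np.ones(n_stimuli) * 0.3, size=n_words)

    def js_divergence(p, q):
        """Jensen-Shannon divergence between two discrete distributions."""
        m = 0.5 * (p + q)
        kl = lambda a, b: np.sum(np.where(a > 0, a * np.log(a / b), 0.0))
        return 0.5 * kl(p, m) + 0.5 * kl(q, m)

    # Semantic distance between every pair of population words.
    dist = np.array([[js_divergence(p_stim_given_word[i], p_stim_given_word[j])
                      for j in range(n_words)] for i in range(n_words)])

    # The most "synonymous" pair of distinct words.
    np.fill_diagonal(dist, np.inf)
    i, j = np.unravel_index(np.argmin(dist), dist.shape)
    print(f"most semantically similar words: {i} and {j} (JS divergence {dist[i, j]:.3f})")

Clustering such a semantic distance matrix, rather than any distance defined directly on the words themselves, is what organizes the codebook into the semantic clusters described above.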

We used the same family of models to learn a distance metric between stimuli, based on the neural responses they elicit. We found that the brain's metric on stimuli differs considerably from intuitive notions of what makes stimuli similar. This approach carries high potential for new and more accurate decoding algorithms for neural activity and for neural prostheses.
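A toy sketch of such a response-based metric (hypothetical: the Poisson responses, the tuning values, and the d'-like discriminability measure are assumptions for illustration, not the project's method): two stimuli are close in this metric when the population responses they evoke are hard to tell apart, irrespective of how similar the stimuli themselves appear.

    import numpy as np

    rng = np.random.default_rng(3)
    n_cells, n_trials = 50, 1000     # toy sizes, for illustration only

    def population_responses(rates):
        """Poisson spike counts of the population over repeated presentations."""
        return rng.poisson(rates, size=(n_trials, len(rates)))

    def neural_distance(rates_a, rates_b):
        """Discriminability (d'-like) of the responses evoked by two stimuli."""
        ra, rb = population_responses(rates_a), population_responses(rates_b)
        diff = ra.mean(axis=0) - rb.mean(axis=0)
        pooled_var = 0.5 * (ra.var(axis=0) + rb.var(axis=0)) + 1e-9
        return np.sqrt(np.sum(diff ** 2 / pooled_var))

    rates_s1 = rng.uniform(1.0, 5.0, n_cells)                              # stimulus 1
    rates_s2 = np.clip(rates_s1 + rng.normal(0, 0.2, n_cells), 0.1, None)  # evokes similar responses
    rates_s3 = rng.uniform(1.0, 5.0, n_cells)                              # evokes very different responses

    print(f"neural distance, stimulus 1 vs 2: {neural_distance(rates_s1, rates_s2):.2f}")
    print(f"neural distance, stimulus 1 vs 3: {neural_distance(rates_s1, rates_s3):.2f}")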

We extended these models to study population coding in the cortex, focusing in particular on temporal activity patterns and on how the code changes during learning.

Finally, we have developed a new family of models that enabled us to accurately describe the activity of hundreds of cells in the cortex, which is essential for studying the code of large neural circuits. This model also suggests how neural circuits in the brain may implement these computations and learn to assess the likelihood of their own inputs.