
Visual perception in deep neural networks

Machine vision brought closer through human modelling

If machines can ever truly see, it may begin with an EU team’s modelling of how real brains work.


Human vision is the result of very complex neurological processes, achieved by a collection of specialised but relatively simple brain modules acting together. Something similar can be replicated in computers, giving them a kind of vision. Computer vision is not new and has been applied across industries, from security systems to autonomous spacecraft and cars. However, these systems are limited and may fail in novel situations. For example, if a self-driving car has no visual data about deserts, it may struggle to apply its knowledge of urban landscapes to that environment. In that case, the vehicle could become confused and make mistakes. Genuinely reliable and autonomous computer vision is still some way off. Beyond the obvious applications for machines, the study of computer vision also improves understanding of how human vision works. The EU-funded DEEPCEPTION project, undertaken with the support of the Marie Skłodowska-Curie programme, has been working on both sides of the problem. Project researchers developed models of machine vision that emulate and illustrate processes in the human brain.

Deep neural nets

Neural nets are inspired by biological systems: a network of simple processing units functions analogously to neurons (brain cells). Such networks use algorithms to recognise patterns without being specifically programmed to do so. A ‘deep neural net’, on which the project concept relies, is similar but involves many layers of processing and is trained for a particular task. DEEPCEPTION’s task was teaching computers to recognise objects in photographs. Researchers compared the deep neural net’s responses against those of real primate brains (monkey and human) viewing the same images. “If the computer model accurately represents the real biological process, then the response from the neural net and the brain should match,” explains project leader Jonas Kubilius. The research team built a suite of benchmarks that allows evaluation and quantification of how well these two processes match. The team’s integrative neural and behavioural benchmark, called Brain-Score, is the world’s largest to date. Using the insights gained from this comparison, researchers then built a computer model, called CORnet, that scored highly on the benchmarks.
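The idea of scoring how well a model’s responses match the brain’s can be sketched in a few lines. The following is a simplified illustration, not the actual Brain-Score implementation (which uses cross-validated partial-least-squares regression): it fits a linear map from model features to measured neural responses on a training split of images, then reports the median per-neuron correlation on held-out images. All array names and shapes here are illustrative assumptions.

```python
import numpy as np

def neural_predictivity(model_features, neural_responses, train_frac=0.8, seed=0):
    """Score how well model activations predict neural responses.

    model_features: (n_images, n_features) activations from a model layer
    neural_responses: (n_images, n_neurons) recorded firing rates
    Returns the median held-out Pearson correlation across neurons.
    (Simplified stand-in for a Brain-Score-style metric.)
    """
    rng = np.random.default_rng(seed)
    n = model_features.shape[0]
    idx = rng.permutation(n)
    split = int(train_frac * n)
    tr, te = idx[:split], idx[split:]

    # Fit a least-squares linear map: features -> neural responses
    W, *_ = np.linalg.lstsq(model_features[tr], neural_responses[tr], rcond=None)

    # Predict held-out responses and correlate, neuron by neuron
    pred = model_features[te] @ W
    p = pred - pred.mean(axis=0)
    m = neural_responses[te] - neural_responses[te].mean(axis=0)
    r = (p * m).sum(axis=0) / (
        np.linalg.norm(p, axis=0) * np.linalg.norm(m, axis=0)
    )
    return float(np.median(r))
```

On synthetic data where the "neural" responses really are a noisy linear function of the features, the score approaches 1; for an unrelated model it hovers near 0 — which is exactly the separation a benchmark like this needs.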

The most accurate model

Currently, few models of human vision can accurately predict neural or behavioural responses. The DEEPCEPTION model outperformed more complicated computer vision systems and closely matched the best current understanding of how object recognition works in the primate visual system. “I was most proud when our model was able to predict neural responses on a completely new data set,” adds Kubilius. “Such tests on new data provide a stringent means to falsify models.” If a model cannot predict anything beyond the data it was trained on, it does not represent real understanding. Conversely, if a model makes good predictions on a completely new data set, that is a positive sign of its accuracy. The project yielded an improved model of primate vision. Although DEEPCEPTION had no commercial goals, the tools it developed will help its own and other researchers build even more accurate models.
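The falsification logic Kubilius describes can be made concrete with a small numerical sketch (synthetic data and shapes are illustrative assumptions, not project data): a model fitted on one data set is scored on an entirely new one; a model that captures real structure keeps predicting well, while its score on unrelated data collapses.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_linear(X, Y):
    """Least-squares linear map from inputs X to responses Y."""
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def score(W, X, Y):
    """Mean Pearson correlation between predicted and measured responses."""
    P = X @ W
    P = P - P.mean(axis=0)
    M = Y - Y.mean(axis=0)
    r = (P * M).sum(axis=0) / (
        np.linalg.norm(P, axis=0) * np.linalg.norm(M, axis=0)
    )
    return float(r.mean())

# Fit on one data set where responses follow a true underlying structure...
X_train = rng.standard_normal((300, 40))
W_true = rng.standard_normal((40, 8))
Y_train = X_train @ W_true + 0.1 * rng.standard_normal((300, 8))
W = fit_linear(X_train, Y_train)

# ...then test on a completely new data set drawn from the same structure,
X_new = rng.standard_normal((100, 40))
Y_new = X_new @ W_true + 0.1 * rng.standard_normal((100, 8))
good = score(W, X_new, Y_new)   # stays high: the model generalises

# ...versus new data with no relation to the model at all.
Y_unrelated = rng.standard_normal((100, 8))
bad = score(W, X_new, Y_unrelated)  # near zero: the model is falsified
```

The gap between `good` and `bad` is what makes held-out data a stringent test: a model that merely memorised its training set would fail on both.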

Keywords

DEEPCEPTION, vision, neural net, machine, deep neural net, primate, machine vision, modelling, human brain, Brain-Score, CORnet
