Going Deep and Blind with Internal Statistics

Periodic Reporting for period 4 - DeepInternal (Going Deep and Blind with Internal Statistics)

Reporting period: 2022-11-01 to 2023-10-31

In the past decade, since the revival of Deep Neural Networks (DNNs), there has been unprecedented progress, with breakthrough results in Computer Vision in both high-level and low-level vision tasks. Nevertheless, most of this impressive performance stems from the ability to train DNNs on huge amounts of training data (often tediously hand-labelled). This restricts the applicability of current Deep-Learning methods to problems and domains where enough training data exist, and renders them inapplicable where very little or no training data are available, or where data labeling (manual or automatic) is ill-defined.

In the past few years, during this project, my students and I have shown that DNNs can be trained with very little training data, often with no prior training examples whatsoever. We have shown that DNNs can be trained on examples extracted directly from the single available test image. We have shown how to combine the power of unsupervised Internal Data Recurrence with the sophistication and inference power of Deep Learning, to obtain the best of both worlds. This self-supervised learning approach gives rise to true “Zero-Shot” Learning, and has already made an impact on the scientific community (both Computer Vision and Deep Learning), as well as in other domains (e.g. reconstruction of data from brain activity), and is likely to have far-reaching applications for society. Some of these are detailed next.
During this project, we have developed new approaches and theories for Self-Supervised Deep Learning, by exploiting the internal redundancy inside a single natural image/video. We coined this “Deep Internal Learning”. The strong recurrence of information inside a single natural image/video provides powerful internal examples which suffice for training Deep Networks, without any prior examples or training data. This new “Deep Internal Learning” paradigm gives rise to true “Zero-Shot Learning”. We have demonstrated the power of this approach on a range of problems, including super-resolution (in images – CVPR’2018; in videos – ECCV’2020), image segmentation, transparent layer separation, blind image dehazing (CVPR’2019), image retargeting (ICCV’2019), blind super-resolution (NeurIPS’2019), diverse image & video generation (CVPR’2022 & ECCV’2022), diverse video interpolation and extrapolation (ICML’2023), and more. We have also shown how such self-supervision can be used for reconstructing images from brain recordings (fMRI) with very little external training data (NeurIPS’2019), for image classification from fMRI brain activity (NeuroImage’2022), and for video reconstruction from fMRI (arXiv’2022).
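To make the “Deep Internal Learning” idea concrete, here is a minimal, illustrative sketch in the spirit of zero-shot super-resolution: a small CNN is trained only on example pairs extracted from the single test image (by downscaling it and learning to undo the degradation), and is then applied to that same image to upscale it. This is a simplified sketch under stated assumptions, not the published implementation; the network, loss, and hyper-parameters below are hypothetical.

```python
# Illustrative sketch of "Deep Internal Learning" in the spirit of zero-shot
# super-resolution: train a small CNN only on pairs extracted from the single
# test image (no external training data). Hyper-parameters are hypothetical;
# the published method additionally uses random crops/augmentations at
# multiple internal scales.
import torch
import torch.nn as nn
import torch.nn.functional as F

def zero_shot_sr(img, scale=2, steps=1000, lr=1e-3):
    """img: (1, C, H, W) tensor in [0, 1]; returns an upscaled (1, C, sH, sW) image."""
    net = nn.Sequential(                                  # tiny image-specific CNN
        nn.Conv2d(img.shape[1], 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, img.shape[1], 3, padding=1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=lr)

    for _ in range(steps):
        # Internal example: the test image itself is the "high-res" target,
        # and its downscaled-then-naively-upscaled version is the input.
        lr_img = F.interpolate(img, scale_factor=1 / scale, mode='bicubic',
                               align_corners=False)
        inp = F.interpolate(lr_img, size=img.shape[-2:], mode='bicubic',
                            align_corners=False)
        loss = F.l1_loss(net(inp) + inp, img)             # learn the residual detail
        opt.zero_grad(); loss.backward(); opt.step()

    # Inference: apply the image-specific network to upscale the test image itself.
    up = F.interpolate(img, scale_factor=scale, mode='bicubic', align_corners=False)
    return (net(up) + up).clamp(0, 1)
```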

More recently, we have shown that the notion of “Internal Learning” can further be applied to recover the training data of a trained classifier directly from the parameters of the network (NeurIPS’2022, ICLR Workshop’2023, NeurIPS’2023). Our findings have serious negative implications for Data Privacy in Deep Learning.
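For intuition only, the toy sketch below illustrates one way such a reconstruction could be posed: search for candidate inputs and non-negative coefficients whose weighted per-sample gradients add up to the trained parameters (a stationarity-style condition). This is an assumption-laden simplification for a binary classifier, not the published algorithm; every name and hyper-parameter here is hypothetical.

```python
# Toy illustration (NOT the published method): try to recover training samples
# of a trained binary classifier by searching for inputs x_i, assumed labels y_i,
# and coefficients lam_i >= 0 such that the trained parameters are (roughly)
# explained by a weighted sum of per-sample gradients:
#     theta  ~  sum_i  lam_i * y_i * grad_theta f_theta(x_i)
import torch

def reconstruct_candidates(model, num_candidates=20, input_shape=(1, 28, 28),
                           steps=5000, lr=0.01):
    params = list(model.parameters())
    theta = torch.cat([p.detach().reshape(-1) for p in params])   # trained weights

    xs = torch.randn(num_candidates, *input_shape, requires_grad=True)
    lam = torch.zeros(num_candidates, requires_grad=True)
    ys = torch.tensor([1.0, -1.0] * (num_candidates // 2))        # assumed labels
    opt = torch.optim.Adam([xs, lam], lr=lr)

    for _ in range(steps):
        combo = torch.zeros_like(theta)
        for i in range(num_candidates):
            logit = model(xs[i:i + 1]).squeeze()                  # scalar output
            grads = torch.autograd.grad(logit, params, create_graph=True)
            flat = torch.cat([g.reshape(-1) for g in grads])
            combo = combo + torch.relu(lam[i]) * ys[i] * flat
        loss = ((combo - theta) ** 2).sum()                       # match the weights
        opt.zero_grad(); loss.backward(); opt.step()

    return xs.detach()   # candidate reconstructions of training samples
```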

During the project, the team published 18 research papers summarizing different aspects of the project goals, 15 of which were peer-reviewed and published in leading international journals and/or conferences, including: Nature Communications, NeuroImage, NeurIPS, CVPR, ICCV, ECCV, ICML. These papers represent excellent progress compared to the original plan, and beyond. We have made the code and data of the published papers available (where applicable). We further have a few new papers currently in preparation or under review (not yet published), which contain additional exciting breakthroughs.
• We developed the theory of “Deep Internal Learning” for image and video data, thus enabling fully unsupervised Deep Learning when no training data whatsoever are available.

• We showed the applicability of “Deep Internal Learning” to a wide range of inference tasks, and to a wide variety of network architectures. We further showed that these types of networks often lead to SOTA (state-of-the-art) results on out-of-distribution data for a variety of tasks.

• We showed that deep learning can also be performed with only modest amounts of supervised training data by exploiting self-supervision.

• We developed state-of-the-art (SOTA) Image & Video Reconstruction from fMRI brain activity. This was done using a very small amount of supervised training data.

• We have developed the first-ever large-scale Image Classification (to more than 100 classes!) from fMRI brain activity. This was applied to the classification of classes never seen during training.

• We have developed the first-ever method for reconstructing training data directly from the parameters of a trained neural network, without any prior assumptions or additional side information. Our findings have serious negative implications for Data Privacy in Deep Learning. As such, this line of work, although new, is already drawing a lot of attention in the Deep Learning community.
A GAN trained on a single input image generates many new images with the same patch distribution.
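As a rough illustration of the single-image generation idea described in the caption above, the sketch below trains a small GAN on one image: a fully-convolutional patch discriminator judges local patch statistics, so the generator learns to synthesize new images with a similar internal patch distribution. This is a hypothetical single-scale simplification with a toy architecture, not the published multi-scale model.

```python
# Hypothetical single-scale sketch of a GAN trained on ONE image. A patch
# discriminator (fully convolutional, one score per patch location) pushes the
# generator to match the internal patch distribution of that single image.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.LeakyReLU(0.2))

class Generator(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, ch), conv_block(ch, ch),
                                  nn.Conv2d(ch, 3, 3, padding=1), nn.Tanh())
    def forward(self, z):            # z: noise map with the image's spatial size
        return self.body(z)

class PatchDiscriminator(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, ch), conv_block(ch, ch),
                                  nn.Conv2d(ch, 1, 3, padding=1))
    def forward(self, x):            # returns a map of per-patch real/fake scores
        return self.body(x)

def train_single_image_gan(img, steps=2000, lr=5e-4):
    """img: (1, 3, H, W) tensor scaled to [-1, 1] -- the ONLY training data."""
    g, d = Generator(), PatchDiscriminator()
    opt_g = torch.optim.Adam(g.parameters(), lr=lr)
    opt_d = torch.optim.Adam(d.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()

    for _ in range(steps):
        z = torch.randn_like(img)                      # fresh noise map each step
        fake = g(z)

        # Discriminator step: real patches come only from the single image.
        real_s, fake_s = d(img), d(fake.detach())
        d_loss = bce(real_s, torch.ones_like(real_s)) + \
                 bce(fake_s, torch.zeros_like(fake_s))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator step: fool the patch discriminator.
        fake_s = d(fake)
        g_loss = bce(fake_s, torch.ones_like(fake_s))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    return g   # g(torch.randn_like(img)) then samples new images of similar patches
```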