Periodic Reporting for period 2 - DeepFace (Understanding Deep Face Recognition)
Reporting period: 2018-11-01 to 2020-04-30
The proposal has focused on three domains of research: (i) the study of methods that promote effective transfer learning; (ii) the study of the tradeoffs that govern the optimal utilization of the training data, and of how the properties of the training data affect the optimal network design; and (iii) the post-transfer utilization of the learned deep networks, where, given the representations of a pair of face images, we seek to compare them in the most accurate way. Emphasis was placed on grounding the work in a theoretical framework.
If the project is successful, new methodologies will be introduced that make transfer learning much more effective and enable more accurate deep learning. In addition, grounding our results in concrete theorems would lead to a more principled practice of object recognition. Furthermore, as detailed below, with the evolving societal focus on ethics in AI, we have become much more concerned with explainable models and fairness.
In addition, we shifted our interest to a particular case of transfer learning, in which one maps visually between domains in a completely unsupervised way. We propose both practical algorithms [2,11,12,23,24,27] and methods that are grounded, as suggested in the proposal, in theoretical reasoning [20,25]. In these contributions, face datasets serve as the main testbed for the various methods. Unsupervised methods have been further applied elsewhere [13,16,18] and specifically in medical imaging [7,10], where we also applied advanced supervised methods. Image generation is a major task in unsupervised learning, and we presented excellent conditional image generation results. In addition, we studied unsupervised learning of facial transformations and worked on lip synchronization in facial videos.
Our work on learning more robust representations using multiverse networks continued with applications to NLP, and has branched into a method for conditioning training and into theoretical analysis of deep neural networks. A current emphasis in our lab is another form of network adaptivity, called hypernetworks, with which we obtain state-of-the-art results in 3D reconstruction.
Together with the research community's shift, following societal concerns, toward privacy, fairness, and interpretability, the topic of explainability has become a major interest of ours. Our work sheds light on the reasoning of black-box classifiers [6,19] and also designs explainable recommendation systems.
[1] S. Gur, T. Shaharabany, L. Wolf. End to End Trainable Active Contours via Differentiable Rendering. ICLR, 2020.
[2] R. Mokady, S. Benaim, L. Wolf, A. Bermano. Masked Based Unsupervised Content Transfer. ICLR, 2020.
[3] Y. Shalev, L. Wolf. End to End Lip Synchronization with a Temporal AutoEncoder. WACV, 2020.
[4] I. Malkiel, L. Wolf. Maximal Multiverse Learning for Promoting Cross-Task Generalization of Fine-Tuned Language Models. In submission, 2020.
[5] E. Shulman, L. Wolf. Meta Decision Trees for Explainable Recommendation Systems. AIES, 2020.
[6] W-J. Nam, S. Gur, J. Choi, L. Wolf, S-W. Lee. Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks. AAAI, 2020.
[7] S. Gur, L. Wolf, L. Golgher, P. Blinder. Microvascular Dynamics from 4D Microscopy Using Temporal Segmentation. Pacific Symposium on Biocomputing (PSB), 2020.
[8] G. Littwin, L. Wolf. Deep Meta Functionals for Shape Representation. ICCV, 2019.
[9] O. Ashual, L. Wolf. Specifying Object Attributes and Relations in Interactive Scene Generation. ICCV, 2019. Best paper honorable mention.
[10] S. Gur, L. Wolf, L. Golgher, P. Blinder. Unsupervised Microvascular Image Segmentation Using an Active Contours Mimicking Neural Network. ICCV, 2019.
[11] T. Cohen, L. Wolf. Bidirectional One-Shot Unsupervised Domain Mapping. ICCV, 2019.
[12] S. Benaim, M. Khaitov, T. Galanti, L. Wolf. Domain Intersection and Domain Difference. ICCV, 2019.
[13] S. Gur, L. Wolf. Single Image Depth Estimation Trained via Depth from Defocus Cues. CVPR, 2019.
[14] B. Klein, L. Wolf. End-to-End Supervised Product Quantization for Image Search and Retrieval. CVPR, 2019. Preliminary preprint arXiv:1711.08589.
[15] E. Littwin, L. Wolf. On the Convex Behavior of Deep Neural Networks in Relation to the Layers' Width. ICML 2019 Workshop on Deep Phenomena, 2019.
[16] M. Michelashvili, S. Benaim, L. Wolf. Semi-supervised Monaural Singing Voice Separation with a Masking Network Trained on Synthetic Mixtures. ICASSP, 2019.
[17] O. Press, T. Galanti, S. Benaim, L. Wolf. Emerging Disentanglement in Auto-Encoder Based Unsupervised Image Content Transfer. ICLR, 2019.
[18] L. Wolf, S. Benaim, T. Galanti. Unsupervised Learning of the Set of Local Maxima. ICLR, 2019.
[19] L. Wolf, T. Galanti, T. Hazan. A Formal Approach to Explainability. AIES, 2019.
[20] T. Galanti, L. Wolf. Generalization Bounds for Unsupervised Cross-Domain Mapping with WGANs. Integration of Deep Learning Theories workshop at NeurIPS, 2018.
[21] S. Benaim, L. Wolf. One-Shot Unsupervised Cross Domain Translation. NeurIPS, 2018.
[22] E. Littwin, L. Wolf. Regularizing by the Variance of the Activations' Sample-Variances. NeurIPS, 2018.
[23] S. Benaim, T. Galanti, L. Wolf. Estimating the Success of Unsupervised Image to Image Translation. ECCV, 2018.
[24] N. Hadad, L. Wolf, M. Shahar. A Two-Step Disentanglement Method. CVPR, 2018.
[25] T. Galanti, L. Wolf, S. Benaim. The Role of Minimal Complexity Functions in Unsupervised Learning of Semantic Mappings. ICLR, 2018.
[26] I. Peleg, L. Wolf. Structured GANs. WACV, 2018.
[27] S. Benaim, L. Wolf. One-Sided Unsupervised Domain Mapping. NIPS, 2017.