Deep learning, in the form of artificial neural networks, is one of the most rapidly evolving fields in machine learning, with wide-ranging impact on real-world applications. Neural networks can efficiently represent complex predictors, and are nowadays routinely trained successfully. Unfortunately, our scientific understanding of neural networks is quite rudimentary. Most methods used to design and train these systems are based on rules of thumb and heuristics, and there is a drastic theory-practice gap in our understanding of why these systems actually work. We believe this poses a significant risk to the long-term health of the field, as well as an obstacle to widening the applicability of deep learning beyond what current methods achieve. The goal of this project is to develop principled tools for understanding, designing, and training deep learning systems, based on rigorous theoretical results. This is a major challenge, and any progress along these lines is expected to have a substantial impact on both the theory and the practice of creating such systems. To do so, we focus on three inter-related sources of performance loss in neural network learning: the optimization error (that is, how to train a given network in a computationally efficient manner); the estimation error (how to guarantee that a network trained on a finite training set will perform well on future examples); and the approximation error (how architectural choices affect the type of functions a network can compute).
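One standard way to make this three-part decomposition concrete (a textbook formulation from statistical learning theory; the notation below is illustrative and not taken from the project description) is to split the excess risk of the learned predictor. Writing $R(\cdot)$ for the risk (expected loss), $\mathcal{H}$ for the class of predictors realizable by the chosen architecture, $h^{*}$ for the overall risk minimizer, $h_{\mathcal{H}}$ for the best predictor in $\mathcal{H}$, $h_{n}$ for the empirical risk minimizer on a training set of size $n$, and $\hat{h}$ for the predictor actually returned by the training algorithm:

\[
R(\hat{h}) - R(h^{*})
  \;=\; \underbrace{R(\hat{h}) - R(h_{n})}_{\text{optimization error}}
  \;+\; \underbrace{R(h_{n}) - R(h_{\mathcal{H}})}_{\text{estimation error}}
  \;+\; \underbrace{R(h_{\mathcal{H}}) - R(h^{*})}_{\text{approximation error}}.
\]

Under this (assumed) formulation, controlling any one of the three terms, for a fixed computational and data budget, directly tightens the overall performance guarantee.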