
Robust statistical methodology and theory for large-scale data

Project description

New statistical methods to reduce uncertainty in Big Data analytics

Large-scale data are usually messy: the fraction of inaccurate entries tends to grow with data volume. Reliable conclusions may be difficult to draw when data are collected under different conditions, or when some data are missing or corrupted. The EU-funded RobustStats project aims to develop robust statistical methodology and theory to address these Big Data challenges. In transfer learning, the researchers will develop methods that exploit the relationship between the source and target distributions. They will also devise tests for the form of the missing-data mechanism and provide practical tools for handling heterogeneous missingness and corruptions in classification labels. Finally, they will introduce data-perturbation techniques for robust inference with large-scale data.
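As a toy illustration of the kind of source-target relationship transfer learning can exploit (a standard covariate-shift baseline, offered only as a sketch and not the project's own methodology), source-domain samples can be reweighted by the target-to-source density ratio:

```python
import random

def importance_weighted_mean(xs, weight):
    """Self-normalised importance-weighted estimate of the target
    mean from source-domain draws xs, where weight(x) is the
    density ratio p_target(x) / p_source(x)."""
    w = [weight(x) for x in xs]
    return sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)

# Assumed toy setup: source is Uniform(0, 1); target has density 2x
# on (0, 1), so the density ratio is weight(x) = 2x and the true
# target mean is 2/3.
random.seed(0)
source = [random.random() for _ in range(100_000)]
print(importance_weighted_mean(source, lambda x: 2 * x))  # close to 2/3
```

The reweighting works only when the source distribution covers the target's support and the density ratio is known or can be estimated; in practice that estimation is itself a central difficulty.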

Objective

Modern technology allows large-scale data to be collected in many new forms, and their underlying generating mechanisms can be extremely complex. In fact, an interesting (and perhaps initially surprising) feature of large-scale data is that it is often much harder to feel confident that one has identified a plausible statistical model. This is largely because there are so many possible forms of model violation, and both visual and more formal statistical checks can become infeasible. It is therefore vital for trust in conclusions drawn from large studies that statisticians ensure their methods are robust. The RobustStats proposal will introduce new statistical methodology and theory for a range of important contemporary Big Data challenges.

In transfer learning, we wish to make inference about a target data population, but some (typically, most) of our training data come from a related but distinct source distribution. The central goal is to find appropriate ways to exploit the relationship between the source and target distributions.

Missing and corrupted data play an ever more prominent role in large-scale data sets, because the proportion of cases with no missing attributes is typically small. We will address the key challenges of testing the form of the missingness mechanism, and of handling heterogeneous missingness and corruptions in classification labels.

The robustness of a statistical procedure is intimately linked to model misspecification. We will advocate two approaches to studying model misspecification: one via the idea of regarding an estimator as a projection onto a model, and the other via oracle inequalities.

Finally, we will introduce new methods for robust inference with large-scale data based on the idea of data perturbation. Such approaches are attractive ways of exploring a space of distributions in a model-free way, and we will show that aggregating the results of carefully selected perturbations can be highly effective.
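A minimal sketch of why perturb-and-aggregate schemes confer robustness (a standard median-of-means construction, used here purely as an illustration rather than as the project's method): randomly partition the data into blocks, average each block, and take the median of the block averages. A few gross corruptions can ruin the overall mean but leave most block means, and hence their median, untouched.

```python
import random
import statistics

def median_of_means(xs, n_blocks=10):
    """Randomly partition xs into n_blocks blocks, average each
    block, and return the median of the block averages."""
    xs = list(xs)
    random.shuffle(xs)  # the random perturbation step
    blocks = [xs[i::n_blocks] for i in range(n_blocks)]
    return statistics.median(statistics.fmean(b) for b in blocks)

# 97 clean observations plus 3 gross corruptions
data = [1.0] * 97 + [1000.0] * 3
print(statistics.fmean(data))  # about 31: ruined by the corruptions
print(median_of_means(data))   # 1.0: at most 3 of the 10 block means are contaminated
```

Whatever the shuffle, the 3 corruptions can contaminate at most 3 of the 10 blocks, so the median of the block means is always a clean block's average.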

Host institution

THE CHANCELLOR MASTERS AND SCHOLARS OF THE UNIVERSITY OF CAMBRIDGE
Net EU contribution
€ 2 050 068,00
Address
TRINITY LANE THE OLD SCHOOLS
CB2 1TN Cambridge
United Kingdom


Region
East of England > East Anglia > Cambridgeshire CC
Activity type
Higher or Secondary Education Establishments
Total cost
€ 2 050 068,00

Beneficiaries (1)