A critical technological challenge for emerging information systems is to acquire, analyze, and learn from the ever-increasing volumes of high-dimensional data produced by natural and man-made phenomena. Sampling, streaming, and recording in even the most basic applications now produce a data deluge that severely stresses available analog-to-digital converters, digital communication and storage resources, and easily swamps back-end processing and learning systems.
Surprisingly, while the ambient data dimension is large in many problems, the relevant information typically resides in a much lower-dimensional space. Viewed combinatorially and geometrically, natural constraints often cause data to cluster along low-dimensional structures, such as unions of subspaces or manifolds, with only a few degrees of freedom relative to their ambient size. This powerful notion suggests the potential for highly efficient processing and learning methods that capture and exploit the inherent model, or the data's "information level."
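As a concrete illustration of this notion (a minimal sketch, not part of the project itself): a signal that is sparse, i.e., lies on a union of low-dimensional coordinate subspaces, can be recovered from far fewer random linear measurements than its ambient dimension. The dimensions, random seed, and the use of orthogonal matching pursuit below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 80, 5                # ambient dimension, measurements, sparsity
x = np.zeros(n)
support = rng.choice(n, k, replace=False)
x[support] = rng.normal(size=k)     # k-sparse signal: only k degrees of freedom

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x                                   # m << n compressive measurements

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit by least squares on the chosen support.
S, r = [], y.copy()
for _ in range(k):
    S.append(int(np.argmax(np.abs(A.T @ r))))
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    r = y - A[:, S] @ coef

x_hat = np.zeros(n)
x_hat[S] = coef
print(np.linalg.norm(x_hat - x) / np.linalg.norm(x))  # small relative error
```

Although only 80 of 200 coordinates are observed, the sparse structure pins down the signal: the measurement count scales with the information level (roughly k log n) rather than with the ambient dimension n.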
To this end, we seek to revolutionize the scientific and practical modi operandi of data acquisition and learning by developing a new optimization and analysis framework based on nascent low-dimensional models, with broad applications: from inverse problems to analog-to-information conversion, and from automated representation learning to statistical regression. We attack the curse of dimensionality in specific ways, not only by relying on the blessing of dimensionality via concentration of measure, but also by exploiting geometric structure and the diminishing returns (i.e., submodularity) within learning objectives. We believe only such an approach can provide the theoretical scaffold for a future-proof processing and learning framework that scales its operation to the problem's information level, promising substantial reductions in hardware complexity, communication, storage, and computational resources.
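The diminishing-returns property mentioned above can be made concrete with a toy coverage objective (a canonical submodular function; the sensors and covered areas below are hypothetical, and the greedy rule is the standard one for monotone submodular maximization, not the project's specific algorithm):

```python
# Submodularity = diminishing returns: for a submodular set function f,
# the marginal gain of adding an element never increases as the base set grows.

sets = {                        # hypothetical sensors and the areas they cover
    "s1": {1, 2, 3},
    "s2": {3, 4},
    "s3": {4, 5, 6},
    "s4": {1, 6},
}

def coverage(selection):
    """f(S) = number of distinct areas covered by the selected sensors."""
    covered = set()
    for s in selection:
        covered |= sets[s]
    return len(covered)

def gain(selection, s):
    """Marginal gain of adding sensor s to the current selection."""
    return coverage(selection | {s}) - coverage(selection)

# The marginal gain of "s2" shrinks as the selection grows:
print(gain(set(), "s2"))           # 2
print(gain({"s1"}, "s2"))          # 1
print(gain({"s1", "s3"}, "s2"))    # 0

# Greedy selection exploits this structure: for monotone submodular
# objectives under a cardinality constraint it is (1 - 1/e)-optimal.
selected = set()
for _ in range(2):
    best = max(sets, key=lambda s: gain(selected, s))
    selected.add(best)
print(selected, coverage(selected))  # covers all 6 areas with 2 sensors
```

This diminishing-returns structure is what lets simple greedy schemes scale learning objectives to the problem's information level with provable approximation guarantees.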