
Lifting Methods for Global Matching Problems

Periodic Reporting for period 2 - LiftMatch (Lifting Methods for Global Matching Problems)

Reporting period: 2019-09-01 to 2021-02-28

This project advocates and develops a certain methodology, called "lifting", for providing means to solving, or approximately solving problems involving different kinds of irregular data such as point clouds, graphs or surfaces. Irregular data is prevalent in nature and science and is used to model or represent different physical objects or phenomena ranging from molecules, and 3D shapes to social graphs and networks.

The basic operation of analyzing, comparing, or relating such irregular data is often a challenging task, since these objects do not have a canonical representation and/or can undergo an arbitrary deformation or transformation. The lifting methodology builds on the observation that in certain cases these problems become easier, or more tractable, when embedded in higher dimensions. This project develops different lifting methods to analyze irregular data. So far the project has contributed methods for shape matching using convex and concave relaxations. It has provided new means to analyze and learn graph and hyper-graph data, and developed accompanying theory. Furthermore, the project has explored several methods to analyze surface data by representing it with multiple charts, topological covers, and implicit representations.

The project outcomes are computational methods (including code), with corresponding theory, to learn and analyze irregular data that include point clouds, graphs, hyper-graphs, and surfaces.
The work so far can be divided into three groups:

Group A: Shape matching. In this group we have developed formulations, algorithms, and numerical solvers for certain matching problems, including the doubly stochastic relaxation, a popular linear-programming lift, and have investigated a concave relaxation that builds on a surprising measure-concentration phenomenon.
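To illustrate the doubly stochastic relaxation idea, the sketch below relaxes a hard point-matching problem over permutation matrices to the Birkhoff polytope of doubly stochastic matrices, approximated here with generic Sinkhorn row/column normalization. This is a minimal, hypothetical example of the relaxation, not the project's actual solver.

```python
import numpy as np

def sinkhorn(C, n_iters=200):
    """Map a cost matrix C to an (approximately) doubly stochastic matrix
    by alternating row and column normalization of exp(-C).
    Generic sketch of the doubly stochastic relaxation, not the
    project's specific algorithm."""
    X = np.exp(-C)
    for _ in range(n_iters):
        X /= X.sum(axis=1, keepdims=True)  # normalize rows to sum to 1
        X /= X.sum(axis=0, keepdims=True)  # normalize columns to sum to 1
    return X

# Toy problem: match 4 points to a permuted copy of themselves.
P = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
perm = [2, 0, 3, 1]
Q = P[perm]
C = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
X = sinkhorn(C / 0.01)  # small temperature drives X toward a permutation
print(X.argmax(axis=1))  # recovered assignment, the inverse of `perm`
```

At low temperature the relaxed solution concentrates near a vertex of the Birkhoff polytope, i.e. a permutation matrix, which is the sense in which the relaxation "lifts" a combinatorial problem into a continuous one.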

Group B: Graphs and hyper-graphs. In this group we introduced and analyzed a novel family of neural network architectures for learning on graphs and hyper-graphs. These architectures learn lifts from the original data representation to higher-dimensional tensor spaces, and maps between these spaces. We proved that lifts to high dimensions are required to achieve universal (i.e. maximally expressive) models, and showed that in some cases lower-dimensional lifts suffice to achieve expressiveness.
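As a concrete illustration of the kind of layer involved, the sketch below builds a tiny order-2 permutation-equivariant linear layer as a weighted combination of a few simple basis operations (identity, transpose, broadcast row/column sums, broadcast total sum), and its equivariance can be checked numerically. This is a hypothetical simplification using only some basis elements, not the project's full architecture.

```python
import numpy as np

def equi_layer(A, w):
    """A small order-2 permutation-equivariant linear layer on an n x n
    matrix A (e.g. an adjacency matrix): a weighted sum of a handful of
    equivariant basis operations. Hypothetical sketch, not the full layer."""
    n = A.shape[0]
    ones = np.ones((n, n))
    ops = [
        A,                                       # identity
        A.T,                                     # transpose
        A.sum(1, keepdims=True) * ones / n,      # broadcast row sums
        A.sum(0, keepdims=True) * ones / n,      # broadcast column sums
        A.sum() * ones / n ** 2,                 # broadcast total sum
    ]
    return sum(wi * op for wi, op in zip(w, ops))

# Equivariance check: relabeling nodes before or after the layer agrees.
rng = np.random.default_rng(1)
A = rng.random((5, 5))
w = rng.random(5)
Pm = np.eye(5)[rng.permutation(5)]          # random permutation matrix
out_perm_first = equi_layer(Pm @ A @ Pm.T, w)
out_layer_first = Pm @ equi_layer(A, w) @ Pm.T
print(np.allclose(out_perm_first, out_layer_first))
```

Because each basis operation commutes with node relabeling, any weighted combination does too; this is the linear-layer building block that the lifted tensor architectures compose.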

Group C: Surfaces. In this group of works we studied various surface representations, such as multiple charts, topological covers, and level sets of volumetric functions. We used these representations for analyzing and learning surface data, surface reconstruction, and generation.
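The level-set (implicit) representation can be regularized with an Eikonal loss: a signed distance function satisfies the Eikonal equation ||grad f|| = 1, so penalizing deviations from unit gradient norm at ambient-space samples encourages an implicit function's zero level set to behave like a genuine distance field. The sketch below estimates this loss with finite differences; it is a generic illustration, not the project's training code.

```python
import numpy as np

def eikonal_loss(f, pts, eps=1e-4):
    """Mean squared deviation of ||grad f|| from 1 at sample points `pts`,
    with gradients estimated by central finite differences.
    Generic sketch of an Eikonal regularizer for implicit surfaces."""
    grads = np.stack([
        (f(pts + eps * e) - f(pts - eps * e)) / (2 * eps)
        for e in np.eye(pts.shape[1])
    ], axis=1)
    return np.mean((np.linalg.norm(grads, axis=1) - 1.0) ** 2)

# A true signed distance function (unit sphere) vs. a non-SDF implicit
# with the same zero level set.
sdf_sphere = lambda p: np.linalg.norm(p, axis=1) - 1.0
sq_implicit = lambda p: np.linalg.norm(p, axis=1) ** 2 - 1.0

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))       # ambient-space samples
print(eikonal_loss(sdf_sphere, pts))   # small: gradient norm is 1 a.e.
print(eikonal_loss(sq_implicit, pts))  # large: gradient norm is 2||p||
```

In a learning setting, f would be a neural network and this term would be minimized alongside a data-fitting term on surface samples.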
In all three groups (A, B, C) the project introduced new methodologies and achieved state-of-the-art results. Until the end of the project we expect further developments in groups B and C, mainly introducing new applications and new problem formulations, as well as continuing to improve the previously introduced methods. We expect to improve our graph and surface data representations and to consider a wider range of problems, e.g. applications to physics. We also aim to generalize our methods to higher-dimensional manifolds.
Defining a convolution operator on point clouds using a lift to a higher dimensional space.
Learning a surface by reducing Eikonal loss in the ambient space.
Visualization of the 15 element basis of all 2nd order equivariant linear layers.
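The count of 15 basis elements for order-2 (matrix-to-matrix) equivariant linear layers comes from counting partitions of the 4 combined indices, i.e. the Bell number B(4) = 15. The short check below computes Bell numbers via the standard Bell triangle; the function name is illustrative only.

```python
def bell(n):
    """Compute the n-th Bell number using the Bell triangle:
    each row starts with the last entry of the previous row, and each
    subsequent entry adds the entry above it."""
    row = [1]
    for _ in range(n):
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]

print(bell(4))  # 15 basis elements for 2nd-order equivariant layers
```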