
Invariant Representations for High-Dimensional Signal Classifications

Final Report Summary - INVARIANTCLASS (Invariant Representations for High-Dimensional Signal Classifications)

Machine learning over large signals such as images or time-series requires reducing the data dimensionality without discarding the information needed for classification or regression tasks. Such a dimensionality reduction can be guided by prior information about the invariance properties of the classification or regression problem. The InvariantClass project developed new multiscale invariant representations for supervised and unsupervised learning.

We introduced the wavelet scattering transform, which is invariant to signal translations and stable to deformations. Scattering coefficients are computed with a deep neural network architecture, by cascading wavelet filters with non-linear modulus operators. This representation was generalised to define invariants over larger groups of transformations, including rotations, scalings and frequency shifts, while preserving stability to deformations. The scattering transform provides a mathematical model for the first layers of deep convolutional neural networks.
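
As a simplified illustration of this cascade (not the project's implementation), the following NumPy sketch computes zeroth-, first- and second-order scattering coefficients of a one-dimensional signal, using Gaussian band-pass filters and a global average as stand-ins for the Morlet wavelets and local low-pass filtering used in practice.

```python
import numpy as np

def gabor_bank(n, num_scales):
    """Analytic band-pass filters defined in the Fourier domain, one per dyadic
    scale. A simplified stand-in for the Morlet wavelets used in practice."""
    freqs = np.fft.fftfreq(n)
    bank = []
    for j in range(num_scales):
        center = 0.25 / 2 ** j            # centre frequency halves at each scale
        width = center / 2.0
        bank.append(np.exp(-((freqs - center) ** 2) / (2 * width ** 2)))
    return bank

def scattering_1d(x, num_scales=4):
    """Order-0/1/2 scattering coefficients: cascade of wavelet filtering,
    complex modulus, and averaging (a global mean here, for exact translation
    invariance; a local low-pass average is used in practice)."""
    bank = gabor_bank(len(x), num_scales)
    x_hat = np.fft.fft(x)
    coeffs = [x.mean()]                                   # S0
    for j1, psi1 in enumerate(bank):
        u1 = np.abs(np.fft.ifft(x_hat * psi1))            # U1 = |x * psi_{j1}|
        coeffs.append(u1.mean())                          # S1
        u1_hat = np.fft.fft(u1)
        for psi2 in bank[j1 + 1:]:                        # cascade to coarser scales
            u2 = np.abs(np.fft.ifft(u1_hat * psi2))       # U2 = ||x * psi_{j1}| * psi_{j2}|
            coeffs.append(u2.mean())                      # S2
    return np.array(coeffs)

# Translation invariance: a circularly shifted signal gives the same coefficients.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)
print(np.allclose(scattering_1d(x), scattering_1d(np.roll(x, 100))))
```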

We studied applications of scattering invariant descriptors to a wide range of classification and regression problems, for images, audio and biomedical signals as well as financial time-series. We also showed that these invariants provide suitable representations for linearly regressing quantum molecular energies over large databases. All algorithms are implemented in open-source software, which is widely distributed for research and industrial applications.
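
For illustration, scattering descriptors can be combined with a standard linear regressor. The sketch below uses the open-source Kymatio package (one available implementation of the scattering transform, not necessarily the software referred to above) together with scikit-learn, and random placeholder signals and targets in place of real molecular databases.

```python
import numpy as np
from kymatio.numpy import Scattering1D
from sklearn.linear_model import Ridge

# Placeholder data: in the project, invariants of this kind were used to regress
# quantum molecular energies; here signals and targets are random stand-ins.
T, J, Q = 2 ** 10, 6, 8
rng = np.random.default_rng(0)
signals = rng.standard_normal((50, T))
targets = rng.standard_normal(50)

scattering = Scattering1D(J=J, shape=T, Q=Q)              # translation-invariant descriptor
features = np.array([scattering(s).mean(axis=-1) for s in signals])  # time-averaged coefficients

model = Ridge(alpha=1.0).fit(features, targets)           # linear regression on invariants
print(model.score(features, targets))
```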

Scattering invariants have also been applied to unsupervised learning. A first application concerned signals defined over a graph whose topology is learned from data. We then concentrated on the estimation of stochastic models of data distributions. We introduced maximum entropy models conditioned by scattering invariants, to characterise non-Gaussian processes with long-range interactions. We studied applications to generative models of image and audio textures, to multifractal models of physical processes such as turbulent flows, and to financial time-series.
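
The following sketch illustrates the principle behind such generative texture models, under simplifying assumptions: an image is adjusted by gradient descent until its spatially averaged scattering coefficients match those of a target texture, a microcanonical-style construction that only approximates the maximum entropy models studied in the project. It assumes the Kymatio and PyTorch packages and uses a random placeholder in place of a real texture image.

```python
import torch
from kymatio.torch import Scattering2D

N = 64
scattering = Scattering2D(J=3, shape=(N, N))

target = torch.rand(1, N, N)                           # placeholder texture image
target_stats = scattering(target).mean(dim=(-1, -2))   # spatially averaged scattering coefficients

x = torch.rand(1, N, N, requires_grad=True)            # start from white noise
optimizer = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    # Drive the synthesised image towards the target's scattering statistics.
    loss = ((scattering(x).mean(dim=(-1, -2)) - target_stats) ** 2).sum()
    loss.backward()
    optimizer.step()

print(float(loss))   # small residual: x reproduces the target's scattering statistics
```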

Scattering transforms have been extended by incorporating phase information, with rectifier non-linearities. The introduction of phase has considerably improved models of coherent geometric structures, and has been applied to approximate the geometry of point processes. More complex geometries have been addressed with a dictionary learning algorithm over scattering coefficients. This provides a mathematical framework for understanding signal generation with convolutional autoencoders.
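
As a toy illustration of why rectifiers retain phase (a sketch of the principle only, not the project's exact construction), the snippet below compares the complex modulus, which maps coefficients with different phases to the same value, with a small bank of phase-shifted rectifiers that still distinguishes them.

```python
import numpy as np

def modulus(z):
    """Complex modulus: insensitive to the phase of z (phase information is lost)."""
    return np.abs(z)

def phase_rectifiers(z, num_phases=4):
    """Rectifiers applied after rotating the phase of z: relu(Re(e^{-i a} z)) for
    several angles a. Unlike the modulus, these responses still determine the phase."""
    alphas = 2 * np.pi * np.arange(num_phases) / num_phases
    return np.stack([np.maximum(np.real(z * np.exp(-1j * a)), 0.0) for a in alphas])

# Two coefficients with the same modulus but different phases:
z1, z2 = 2.0 * np.exp(1j * 0.3), 2.0 * np.exp(1j * 1.7)
print(modulus(z1) == modulus(z2))                               # True: modulus cannot tell them apart
print(np.allclose(phase_rectifiers(z1), phase_rectifiers(z2)))  # False: rectifiers keep the phase
```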