
Towards compressive information processing systems

Final Report Summary - CRISP (Towards compressive information processing systems)

The CRISP project targets the application of compressed sensing (CS) to complex information processing systems. Future systems will have to handle unprecedented amounts of information while increasingly suffering from limited communication and computational resources. The project addresses the use of CS in this context, with the goal of developing theory and algorithms that define a unified CS-based framework for compression, encryption, communication, signal reconstruction, signal analysis, information extraction and decision-making, and distributed signal processing, all in an effective and energy-efficient manner. The project activities have been carried out by a multidisciplinary research team with expertise in mathematics, computer science, signal processing, information theory, optics, and electronics, enabling the development of novel CS-based methodologies and techniques.

We have shown that compressed signal representations are amenable to extracting information about a signal without reconstructing it. We have developed a set of tools for performing inference directly on compressed data, e.g. classifying the signal, performing search and retrieval tasks, estimating its complexity, and carrying out basic signal processing operations on it. We have also shown that CS can effectively serve as a cryptosystem, with the sensing matrix playing the role of an encryption key; we have established theoretical bounds that quantify the possible information leakage in many scenarios of practical interest. Combining encryption and processing gives rise to a family of signal processing techniques that operate directly on compressed ciphertext, which is expected to be very useful in many secure computing applications. These ideas have been applied to several use cases, most notably the compression and processing of camera fingerprints, i.e. the tiny traces of imperfections inherent in each optical sensor, which can be extracted from pictures and used to link a picture to the device that captured it. The techniques developed during the project have enabled the design of a large-scale camera identification system; a follow-up ERC proof-of-concept project has demonstrated its functionality on an unprecedented scale, and a start-up named ToothPic has been launched to commercialize it. Moreover, we have developed a technology for generating cryptographic keys that can only be recovered by a given smartphone; this has clear potential applications in areas such as two-factor authentication and digital document signing, to mention just a few.
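As a toy illustration of these two ideas, the sketch below (our own simplification, not the project's actual cryptosystem or inference algorithms) treats a random Gaussian sensing matrix as a shared secret and runs a correlation detector directly on the compressed measurements, exploiting the fact that random projections approximately preserve inner products:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 256                      # signal length, number of measurements

# Toy model: the sensing matrix A doubles as a shared secret key;
# without knowing A, the measurements y reveal little about x.
A = rng.standard_normal((m, n)) / np.sqrt(m)

x = np.zeros(n)                       # a 10-sparse "fingerprint" signal
x[rng.choice(n, 10, replace=False)] = 1.0
y = A @ x                             # compressive acquisition / "encryption"

# Inference without reconstruction: random projections approximately
# preserve inner products (Johnson-Lindenstrauss), so correlation-based
# matching can be performed on the compressed data itself.
template = x + 0.05 * rng.standard_normal(n)   # noisy copy of x
decoy = np.zeros(n)                            # unrelated sparse signal
decoy[rng.choice(n, 10, replace=False)] = 1.0

score_match = (A @ template) @ y
score_decoy = (A @ decoy) @ y
print(score_match > score_decoy)      # the matching template scores higher
```

The same mechanism underlies compressed-domain fingerprint matching: the matcher never needs the plaintext signal, only its projections.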

We have also worked on the communication aspects of CS, including its extension to distributed systems, which is very promising for sensor network applications and the Internet of Things. We have developed several algorithms with provable quality guarantees that perform distributed reconstruction of an ensemble of signals acquired using CS.

We have striven to improve the quality of CS reconstruction, which is one of the limiting factors to the wide adoption of CS. We have employed signal models more general than plain sparsity, including Bayesian models coupled with improved sparsifying norms, and obtained significant improvements in both reconstruction accuracy and speed. We have also addressed the problem of sensing and reconstructing a very large signal that cannot be completely buffered, and have developed iterative techniques to solve it.
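For context on the kind of reconstruction machinery involved, the classic iterative soft-thresholding algorithm (ISTA) recovers a sparse signal from compressive measurements by alternating a gradient step on the data-fidelity term with a shrinkage step. The sketch below is a minimal textbook baseline of this kind, not one of the project's improved algorithms:

```python
import numpy as np

def ista(A, y, lam=0.05, n_iter=500):
    """Minimize 0.5*||y - A x||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x + A.T @ (y - A @ x) / L          # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(1)
n, m, k = 400, 120, 8                          # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
y = A @ x_true                                 # m << n compressive measurements
x_hat = ista(A, y)
err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```

Richer priors (Bayesian models, structured sparsity) improve on this baseline precisely because plain l1 shrinkage biases the recovered coefficients and ignores signal structure.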

We have demonstrated several of the ideas described above, including a single-pixel imaging sensor, a wireless indoor localization scheme based on sparsity, a GPU implementation of CS reconstruction algorithms, and a testbed for distributed CS.
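For intuition on the single-pixel sensor: each exposure optically multiplies the scene by a programmable binary mask and sums the masked light onto one photodetector, yielding a single scalar measurement. The toy model below (our own simplification, using as many masks as pixels so that plain linear inversion suffices, whereas a real CS sensor uses far fewer masks plus a sparse reconstruction) illustrates the measurement model:

```python
import numpy as np

rng = np.random.default_rng(2)
side = 16
n = side * side                      # number of image pixels

image = rng.random((side, side))     # hypothetical scene
x = image.ravel()

# Each row of M is one binary mask pattern shown on the modulator;
# each measurement is the total light hitting the single photodetector.
M = rng.integers(0, 2, size=(n, n)).astype(float)
y = M @ x                            # one scalar per mask/exposure

# With n well-chosen masks the linear system is invertible and the image
# is recovered exactly; compressed sensing instead uses m << n masks
# together with a sparsity prior.
x_rec = np.linalg.solve(M, y)
print(np.allclose(x_rec, x))
```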

The CRISP project embraces reproducible-research policies and freely distributes the software and data used in the experiments published in the scientific papers describing the project outcomes. More information can be found on the project website.