Periodic Reporting for period 4 - VERICOMP (Foundations of Verifiable Computing)
Reporting period: 2023-07-01 to 2024-12-31
The main goal of this project is to make progress towards a comprehensive understanding of proof systems: to explore their complexity, with a focus on proofs that can be generated efficiently and verified super-efficiently; to explore their practical applicability; and to investigate their connections with foundational questions in cryptography, complexity theory and data science.
A comprehensive understanding of the power and the limitations of proof systems can have dramatic scientific impact, with applications to complexity theory, cryptography and data science. Further, such proof systems also have the potential for facilitating trust in algorithms, automated systems, and statistical analyses, which is of growing societal importance.
• An optimal batch verification theorem. This new protocol allows for the verification of k statements at a cost that grows only poly-logarithmically with k. The statements need to be in UP, meaning that there is at most a single valid witness (or classical proof) for each statement. This is important both as a foundational question and because batch verification is a powerful tool towards constructing powerful proof systems for general computations.
• Cryptographically sound arguments. A central pursuit in the study of cryptography is proving the soundness of cryptographic systems based on well-founded mathematical assumptions, such as the hardness of factoring or of lattice problems. One central question along these lines is proving the security of the Fiat-Shamir transformation, which can be used to minimize interaction in a cryptographic protocol.
The project proposed the first secure instantiation of this transformation based on lattice assumptions. It led to several subsequent breakthroughs in constructing and proving security for non-interactive protocols, resolving important open problems in the field.
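To illustrate the general idea of the Fiat-Shamir transformation (not the project's lattice-based construction): an interactive protocol's random challenge is replaced by a hash of the transcript, so the prover can produce the whole proof alone. A minimal Python sketch, using a toy Schnorr identification protocol with hypothetical, insecurely small parameters:

```python
import hashlib
import secrets

# Toy group parameters (hypothetical, far too small for real security):
# p prime, q = 11 divides p - 1, and g generates the order-q subgroup.
p, q, g = 23, 11, 2

def challenge(*parts):
    """Fiat-Shamir: derive the challenge by hashing the transcript so far."""
    h = hashlib.sha256("|".join(map(str, parts)).encode()).hexdigest()
    return int(h, 16) % q

def keygen():
    x = secrets.randbelow(q - 1) + 1   # secret key
    return x, pow(g, x, p)             # (sk, pk) with pk = g^x mod p

def prove(x, y):
    """Non-interactive proof of knowledge of x such that y = g^x mod p."""
    r = secrets.randbelow(q)
    a = pow(g, r, p)            # first message ("commitment")
    c = challenge(g, y, a)      # challenge from the hash, not from a verifier
    z = (r + c * x) % q         # response
    return a, z

def verify(y, proof):
    a, z = proof
    c = challenge(g, y, a)      # verifier recomputes the same challenge
    return pow(g, z, p) == (a * pow(y, c, p)) % p
```

Soundness of this heuristic is exactly what is hard to prove from standard assumptions; the project's contribution was a secure instantiation of the hash function from lattice assumptions.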
The project also constructed the first argument systems that can out-perform unconditionally sound proof systems based only on the “minimal” cryptographic assumption that a one-way function exists. This was achieved by constructing constant-round arguments for bounded-depth computations from one-way functions.
• Verifiable data science. Traditionally, research on proof systems has focused on the computational complexity of the task at hand. That is, the critical resources were computational (e.g. running time or space). On the other hand, data science is becoming a major focus of computer science and algorithmic research, and there the critical resource is often access to an unknown data distribution. The project has developed proof systems for data science tasks such as machine learning, property testing and distribution testing. These new developments allow the outcomes of complex analyses to be verified using only very limited access to the underlying distribution.
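As a toy illustration of the flavor of such verification (far simpler than the project's protocols): a verifier can check a prover's claimed statistic of an unknown distribution using only a bounded number of fresh samples. The function names and parameters below are illustrative assumptions, not the project's API.

```python
import random

def verify_claimed_mean(sample, claimed_mean, n=2000, eps=0.1):
    """Toy verifier: accept a prover's claimed mean of an unknown
    [0,1]-valued distribution iff the empirical mean of n fresh
    samples is eps-close to the claim. By Hoeffding's inequality,
    an honest claim is rejected with probability <= 2*exp(-2*n*eps^2)."""
    empirical = sum(sample() for _ in range(n)) / n
    return abs(empirical - claimed_mean) <= eps
```

The point of the project's proof systems is that far more complex claims (e.g. about a learned model) can be verified with similarly limited sample access, with the heavy lifting delegated to the prover.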
• Sound foundations for algorithmic fairness. Trust in automated systems is a growing scientific and societal concern. One focus is the issue of algorithmic fairness: are the results or resource allocations computed by algorithms fair, or do they discriminate against disadvantaged or protected populations? Our project has been tackling these concerns by formulating provable fairness guarantees, and constructing protocols and algorithms that are sound, in the sense that they meet these definitions, and can thus be verified. This builds a bridge between the social / societal fairness literature and the cryptographic literature on proof systems. It has already led to new work on fair risk prediction in medical contexts.
• The complexity of finding a Nash equilibrium. The project established a new connection between proof systems and the hardness of computational problems related to finding Nash Equilibria, a basic problem in game theory and economics (as well as, more broadly, the complexity class PPAD). Together with follow-up works, this led to a new understanding that these problems are no easier to solve than cryptographically hard lattice problems, resolving a long standing open question.
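For intuition about the underlying computational problem: in a 2x2 bimatrix game with no pure equilibrium, a fully mixed Nash equilibrium can be found in closed form from each player's indifference condition. The hardness results above concern large general games, where no such shortcut is known. A minimal sketch (the function is illustrative, assuming a fully mixed equilibrium exists):

```python
def mixed_nash_2x2(A, B):
    """Fully mixed equilibrium of a 2x2 bimatrix game, where A[i][j] and
    B[i][j] are the row and column players' payoffs for row i, column j.
    Returns (p, q): p = Pr[row player plays row 0], q = Pr[column 0]."""
    # Row player mixes so the column player is indifferent between columns:
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # Column player mixes so the row player is indifferent between rows:
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q
```

For matching pennies (A = [[1,-1],[-1,1]], B = -A) this yields the familiar (1/2, 1/2) equilibrium.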
• Characterizing the complexity of doubly-efficient interactive proofs: obtaining a complete understanding of the runtime, communication and round complexity needed to verify general computations, and the possible trade-offs.
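A canonical building block behind doubly-efficient interactive proofs is the sum-check protocol: the verifier checks a claimed sum of a low-degree polynomial over the Boolean cube with only logarithmically many rounds and a single evaluation of the polynomial. A minimal Python sketch, with a toy field and the honest prover simulated inline (this illustrates the classical protocol, not the project's new results):

```python
import random
from itertools import product

P = 2**31 - 1   # toy prime field modulus

def interp_eval(ys, x):
    """Lagrange-evaluate at x the unique polynomial of degree len(ys)-1
    taking values ys at the points 0, 1, ..., len(ys)-1 (all mod P)."""
    total, k = 0, len(ys)
    for i in range(k):
        num = den = 1
        for j in range(k):
            if j != i:
                num = num * (x - j) % P
                den = den * (i - j) % P
        total = (total + ys[i] * num * pow(den, P - 2, P)) % P
    return total

def sumcheck(g, n, d):
    """Sum-check for S = sum of g(b) over b in {0,1}^n, where g has degree
    <= d (d >= 1) in each variable. Returns (S, verifier_accepted)."""
    S = sum(g(*b) for b in product((0, 1), repeat=n)) % P
    claim, prefix = S, []
    for i in range(n):
        # Prover: the univariate restriction g_i(X), summed over the
        # remaining Boolean variables, sent as its values at X = 0..d.
        ys = [sum(g(*prefix, x, *rest)
                  for rest in product((0, 1), repeat=n - i - 1)) % P
              for x in range(d + 1)]
        # Verifier: consistency check, then a fresh random challenge.
        if (ys[0] + ys[1]) % P != claim:
            return S, False
        r = random.randrange(P)
        claim = interp_eval(ys, r)
        prefix.append(r)
    # Final check: a single evaluation of g at the random point.
    return S, g(*prefix) % P == claim
```

The verifier's work is dominated by n rounds of O(d) field operations plus one evaluation of g, which is what makes verification so much cheaper than recomputing the sum.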
• A theory of efficient verification for data science, building on our results for verifying machine learning and property testing problems. A rich theory of verifying analyses with limited access to the data is an exciting and timely new frontier. This can have impact well beyond theoretical computer science; for example, it can be used to efficiently verify the results of scientific experiments.
• Cryptographic argument systems. The project's advances, and subsequent works, have reshaped our understanding of the cryptographic assumptions that are sufficient for non-interactive proof systems. In particular, well-founded lattice assumptions have been found to be sufficient. The field is now well-positioned to develop proof systems based on more relaxed assumptions, as well as on more diverse (but incomparable) ones.
• Provable societal guarantees for algorithms. The project has made important strides in providing fairness guarantees that are mathematically rigorous and verifiable. The focus has been on accuracy as a measure of fairness; this sets the stage for a comprehensive theory of algorithmic fairness and for similar advances in the study of privacy for algorithms.