## Periodic Reporting for period 1 - MCT (Metacomputational Complexity Theory)

Reporting period: 2020-03-01 to 2022-02-28

Understanding the power of computation is one of the biggest challenges in science and society. Computational complexity theory is dedicated to investigating this goal, but central questions of the field, such as the famous P versus NP problem, remain notoriously elusive. It is still consistent with our knowledge that all problems of practical interest are solvable by extremely efficient algorithms.

Complexity theory attempts to identify what makes a problem computationally hard by proving lower and upper bounds on the complexity of concrete computational models such as Boolean circuits or propositional proof systems. However, even after several decades of intense research, progress on exhibiting an explicit computationally hard problem remains incremental. In fact, several barrier results have been discovered, and these are recognized as a serious obstacle to proving strong complexity lower bounds. Nevertheless, the barrier results also revealed new structural properties of complexity lower bounds and connected them to a wide range of areas such as learning theory, cryptography and mathematical logic.

The present project, Metacomputational Complexity Theory (MCT), continued the development of complexity lower bounds with an emphasis on their structural properties. It investigated complexity-theoretic properties of problems which themselves capture central problems in complexity theory, e.g. the complexity of proving complexity lower bounds.

The overall objectives of the project are divided into two groups.

1. Hardness magnification: exploring the limits and consequences of the emerging theory of hardness magnification, which arose from investigating meta-computational aspects of complexity lower bounds and has received a lot of attention as a promising approach to overcoming previously existing barriers to proving lower bounds.

2. Structural theory: strengthening and developing new connections between methods for proving complexity lower bounds and other central concepts of computer science, such as efficient learning algorithms, cryptographic primitives and the automatability of proof search.

The MCT project led to substantial progress on these objectives. In particular, the project provided a better understanding of the potential of hardness magnification by expanding its consequences in areas such as learning theory and proof complexity. Further, the project developed a more robust theory of natural proofs, establishing stronger connections between complexity lower bounds, cryptographic pseudorandom generators and learning algorithms. As one of the highlights of the project, a conditional equivalence between learning algorithms and the automatability of proof search was established in a meta-mathematical setting.

The core of the project consisted of work on several research papers. We give a brief description of three of them:

[1] 'Strong co-nondeterministic lower bounds for NP cannot be proved feasibly', co-authored with Rahul Santhanam;

In [1] we show, unconditionally, that a theory formalizing probabilistic polynomial-time reasoning cannot prove, for any non-deterministic polynomial-time machine M, that M is inapproximable by co-nondeterministic circuits of sub-exponential size. Since the theory is capable of formalizing much of contemporary complexity theory, our result gives evidence that proving lower bounds for meta-computational problems will require methods that go beyond probabilistic polynomial-time reasoning.

[2] 'Learning algorithms from circuit lower bounds';

In [2] we investigated new methods for extracting learning algorithms from circuit lower bounds. The paper reveals a close connection between a question of Rudich about turning 'demibits' into 'superbits' and the existence of an algorithm transforming distinguishers that break pseudorandom generators into efficient learning algorithms. The main result of the paper gives a new characterization of natural proofs, which is equivalent to the existence of efficient learning algorithms. Finally, the paper provides a new proof of a generalized learning speedup, which can be interpreted as a hardness magnification theorem. Intriguingly, we identify an attractive aspect of this particular instance of hardness magnification: it avoids the so-called locality barrier, which affected previous magnification theorems.
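For context, the notion of a natural proof referenced above is the standard one of Razborov and Rudich (background terminology rather than a contribution of the project); in LaTeX notation it can be stated as follows.

```latex
% Razborov--Rudich (background definition): a property $R = \{R_n\}$ of
% Boolean functions $f\colon\{0,1\}^n \to \{0,1\}$ is natural and useful
% against a circuit class $\mathcal{C}$ if it satisfies:
\begin{itemize}
  \item \textbf{Constructivity:} given the truth table of $f$ (a string of
        length $2^n$), membership $f \in R_n$ is decidable in time
        $\mathrm{poly}(2^n)$;
  \item \textbf{Largeness:} $R_n$ contains at least a $2^{-O(n)}$ fraction
        of all $n$-variable Boolean functions;
  \item \textbf{Usefulness:} every $f \in R_n$ requires
        $\mathcal{C}$-circuits of size $n^{\omega(1)}$.
\end{itemize}
```

The characterization in [2] concerns properties of this kind and their relation to efficient learners; the exact parameters used in the paper may differ from this textbook formulation.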

[3] 'Learning algorithms versus automatability of Frege systems', co-authored with Rahul Santhanam;

The main result of [3] is an equivalence between P-provable automatability of Frege systems and P-provable learning algorithms for polynomial-size circuits, under the assumption that P is a sufficiently strong proof system. This shows that in the context of meta-mathematics it is possible to establish a connection between central concepts of complexity theory which we do not know how to establish otherwise. A corollary of the theorem is a proof complexity collapse, which can be interpreted as proof complexity magnification.
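For readers less familiar with proof complexity, automatability is the standard notion of Bonet, Pitassi and Raz (background terminology, not a result of [3]); we write the proof system as $Q$ here to avoid a clash with the theory P above.

```latex
% Background definition: a propositional proof system $Q$ is
% \emph{automatable} if proof search for $Q$ is feasible relative to the
% length of the shortest proof, i.e. there is an algorithm that, given any
% propositional tautology $\tau$, outputs some $Q$-proof of $\tau$ in time
\[
  \mathrm{poly}\bigl(|\tau| + s_Q(\tau)\bigr),
\]
% where $s_Q(\tau)$ denotes the size of a shortest $Q$-proof of $\tau$.
```

The "P-provable" qualifier in [3] then refers to the existence of such a proof-search algorithm being provable, in a suitable formalization, within the proof system P.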

The outcomes of the project were disseminated in leading international journals and conferences. Paper [1] appeared at the Symposium on Theory of Computing (STOC) 2021. Two further papers have been accepted (subject to minor revisions) to the 'Journal of the ACM' and published in the journal 'Theory of Computing'. Paper [3] has been accepted to ICALP 2022, and [2] is currently under review at a journal. The publication of research papers was accompanied by presentations at workshops, seminars and conferences. For example, [1] was presented at STOC 2021, and [3] was presented at complexity theory seminars in Oxford and Prague. Further presentations are planned.

As part of the project, we co-organized a workshop bringing together researchers in logic and complexity. We also co-organized a 'Complexity Network' connecting students and researchers located near Oxford. The dissemination of the project was further supported by activities targeting a popular audience, such as an article in the 'Inspired Research' magazine and a YouTube video presenting [1].

The results of the MCT project go significantly beyond the state of the art in their respective research areas. For example, paper [3] gives strong evidence for the desired equivalence between efficient learning algorithms and the automatability of proof search. The result relies crucially on meta-mathematics and promises further development in the area.

Hardness magnification, one of the main theories developed during the project, has attracted a lot of attention. A workshop at the Symposium on Theory of Computing (2020) and a workshop at Rutgers University (2022) were partially dedicated to the topic.

I believe that the results of the MCT project have the potential to substantially impact the design of learning algorithms, and in the future I would like to provide a more elaborate theoretical framework for practical applications of the approach developed in the MCT project.

I hope that the MCT project will eventually help bring us closer to a full understanding of the power of algorithms and reasoning.
