
COMPUTATIONAL COMPLEXITY MEETS AUTOMATA THEORY

Final Report Summary - MINICOMPLEXITY (COMPUTATIONAL COMPLEXITY MEETS AUTOMATA THEORY)

Computational Complexity classifies computational problems according to difficulty. It studies a rich map of complexity classes, defined over a variety of computation modes (e.g. deterministic, alternating, probabilistic, interactive, quantum) and time bounds: P, EXP, NEXP, EEXP, …; NP, coNP, Δ2P, Σ2P, Π2P, …, PH, AP; ZPP, RP, coRP, BPP, PP; IP; BQP; and more. Its goal is to answer a long list of fundamental open questions about the relationships between these classes: P vs NP, P vs PH, P vs AP, BPP vs P, BPP vs NP, NP vs coNP, etc. The most famous among them is P vs NP, formally a question about Turing machines (TMs) and time (i.e. number of steps):

Is every fast (i.e. polynomial-time) nondeterministic TM equivalent to a fast deterministic TM?
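
In standard textbook notation (a formalization assumed here for clarity, not notation taken from the project), the question asks whether the following two classes coincide:

\[
  \mathrm{P} \;=\; \bigcup_{k \ge 1} \mathrm{DTIME}\!\left(n^{k}\right),
  \qquad
  \mathrm{NP} \;=\; \bigcup_{k \ge 1} \mathrm{NTIME}\!\left(n^{k}\right),
  \qquad
  \mathrm{P} \stackrel{?}{=} \mathrm{NP}.
\]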

In the late 70s, Sakoda and Sipser proposed a miniature version of P vs NP, whose resolution could yield insight into the original question itself. This is the version that we get when the roles of TMs and time are played by two-way finite automata (2FA) and size (i.e. number of states):

Is every small (i.e. polynomial-size) nondeterministic 2FA equivalent to a small deterministic 2FA?
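
To make the objects of this question concrete, here is a minimal sketch of a two-way deterministic finite automaton (in Python, with assumed names such as run_2dfa; it is illustrative only and not code from the project): the machine reads its input between two endmarkers, may move its head left or right, and its size is simply its number of states.

# Minimal illustrative sketch (assumed names, not project code): simulating a
# two-way deterministic finite automaton (2DFA) whose input is delimited by
# the endmarkers '<' and '>'.  delta maps (state, symbol) to (state, move),
# where move is -1 (left) or +1 (right); a missing entry means "reject".

def run_2dfa(delta, start, accepting, word):
    """Return True iff the 2DFA reaches an accepting state on `word`."""
    tape = '<' + word + '>'
    state, pos = start, 0
    seen = set()                      # visited (state, position) configurations
    while state not in accepting:
        if (state, pos) in seen or (state, tape[pos]) not in delta:
            return False              # looping forever or undefined move: reject
        seen.add((state, pos))
        state, move = delta[(state, tape[pos])]
        pos = max(0, min(len(tape) - 1, pos + move))
    return True

# Example: a 7-state 2DFA for { a^k : k is divisible by 2 and by 3 }.
# It sweeps right counting mod 2, rewinds to the left endmarker, then sweeps
# right again counting mod 3.  For two small moduli this saves nothing, but
# with primes p1, ..., pm the same idea gives a 2DFA of about p1 + ... + pm
# states, while a one-way DFA needs p1 * ... * pm states.
delta = {
    ('p0', '<'): ('p0', +1), ('p0', 'a'): ('p1', +1), ('p1', 'a'): ('p0', +1),
    ('p0', '>'): ('rew', -1),                    # length is even: rewind
    ('rew', 'a'): ('rew', -1), ('rew', '<'): ('q0', +1),
    ('q0', 'a'): ('q1', +1), ('q1', 'a'): ('q2', +1), ('q2', 'a'): ('q0', +1),
    ('q0', '>'): ('acc', +1),                    # length also divisible by 3
}
print([k for k in range(13) if run_2dfa(delta, 'p0', {'acc'}, 'a' * k)])
# -> [0, 6, 12]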

The question is known as 2D vs 2N, where classes 2D and 2N are the analogs of P and NP for 2FA and size. Despite some early progress soon after it was posed, the question received little attention in the 80s and 90s. However, important advances occurred after 2000. In 2009, Kapoutsis outlined a broader research program: to extend the Sakoda-Sipser miniaturization beyond determinism versus nondeterminism and study the 2FA-size analogs of all major TM-time complexity classes (Fig. 1b). This would create a new field of research within the Theory of Computation, at the intersection of Computational Complexity and Automata Theory.
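
One common way to make 2D and 2N precise, following the Sakoda-Sipser style of size complexity (the exact conventions below are assumed for illustration and vary slightly across papers), is to work with families of languages:

\[
  \mathrm{2D} \;=\; \bigl\{\, (L_h)_{h \ge 1} \;:\; \text{there are 2DFAs } (M_h)_{h \ge 1} \text{ with } L(M_h) = L_h \text{ and } |M_h| = \mathrm{poly}(h) \,\bigr\},
\]
\[
  \mathrm{2N} \;=\; \bigl\{\, (L_h)_{h \ge 1} \;:\; \text{there are 2NFAs } (M_h)_{h \ge 1} \text{ with } L(M_h) = L_h \text{ and } |M_h| = \mathrm{poly}(h) \,\bigr\},
\]

where |M_h| denotes the number of states of M_h. Since every 2DFA is a 2NFA, 2D ⊆ 2N holds trivially; the 2D vs 2N question asks whether the converse inclusion 2N ⊆ 2D holds as well.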

The principal goal of MINICOMPLEXITY has been to get that research program off the ground. Specifically, our goal has been to lay solid foundations for the new field of 2FA-size complexity and to vigorously initiate its study, in three phases:

DEFINE - Where we define the 2FA of each mode (deterministic, alternating, probabilistic, interactive, quantum), in a way that (a) models general 2FA computations robustly via invariance theorems, and (b) carefully retains all known connections to TM-time/space complexity. The outcome is a map of robust 2FA-size complexity classes, along with all trivial inclusions between them.
UPDATE - Where we update the defined map with (a) inclusions/separations that follow from known results in 2FA-size complexity and (b) straightforward inclusions/separations that follow from known ideas in TM-time/space complexity.
ENRICH - Where we enrich the updated map with (a) new concepts and objects: new types of reductions, new complete problems, new high-level advances, and (b) new inclusions/separations, proven via novel general algorithmic or lower bound techniques.

The desired outcome has been a rich version of the map of Fig. 1b, where (a) all classes are defined robustly, (b) all known or straightforward inclusions/separations are incorporated, (c) new reductions and complete problems are introduced, (d) new high-level advances are made, and (e) new inclusions/separations are proved.

This map should be disseminated via publications in peer-reviewed international journals or conferences, via presentations at international conferences or seminars, and via a dedicated website.