
Systematic mapping of the complexity landscape of hard algorithmic graph problems

Periodic Reporting for period 5 - SYSTEMATICGRAPH (Systematic mapping of the complexity landscape of hard algorithmic graph problems)

Reporting period: 2022-01-01 to 2022-12-31

Graph-theoretical models are natural tools for the description of road networks, circuits, communication networks, and abstract relations between objects, hence algorithmic graph problems appear in a wide range of computer science applications. As most of these problems are computationally hard in their full generality, research in graph algorithms, approximability, and parameterized complexity usually aims at identifying restricted variants and special cases, which are at the same time sufficiently general to be of practical relevance and sufficiently restricted to admit efficient algorithmic solutions. The goal of the project is to put the search for tractable algorithmic graph problems into a systematic and methodological framework to obtain a unified algorithmic understanding by mapping the entire complexity landscape of a particular problem domain.

Completely classifying the complexity of each and every algorithmic problem appearing in a given formal framework would necessarily reveal every possible algorithmic insight relevant to the formal setting, with the potential to discover novel algorithmic techniques of practical interest. The project achieved substantial progress in understanding hard combinatorial problems in different problem domains, for example, for homomorphism problems, edge-disjoint packing problems, and the kernelization of graph modification problems.
The project so far has achieved significant scientific progress on the topics laid out in the proposal. During the period 2017-2020, 7 journal and 21 conference publications acknowledging the support of the project appeared, many of them in prestigious venues such as the STOC/FOCS/SODA/ICALP conferences and the journals “SIAM Journal on Computing” and “ACM Transactions on Algorithms.” We highlight here only some of the main results that are most representative of the project.

The structure of the project proposal envisions two distinct types of contributions. First, the overarching goal of the project is to obtain complete dichotomy results that classify each problem in an infinite complexity landscape as either computationally “hard” or computationally “easy”. Hard problems usually have easy special cases: if we restrict the input instance in a certain way, we may arrive at a tractable problem. Dichotomy theorems can be thought of as a systematic way of searching for tractable special cases of a problem. We define a family of problems by considering all possible restrictions (of a certain type) on the input; a dichotomy theorem characterizes the complexity of every resulting restricted problem. In this way, a dichotomy theorem formally identifies every possible easy special case of the problem. If the family contains even a single problem whose complexity is unknown, a complete classification is out of reach. Therefore, the secondary goal of the project is to remove such roadblocks towards dichotomies by settling the complexity of these notorious problems.
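As a concrete illustration of a dichotomy (a classic textbook result, not one of the project's own), consider the Hell–Nešetřil theorem for H-COLORING: for a fixed loopless graph H, deciding whether an input graph has a homomorphism to H is polynomial-time solvable if H is bipartite and NP-complete otherwise. The criterion separating the two sides is itself efficiently checkable, so every problem in this infinite family can be classified mechanically; the following Python sketch (an illustration under these assumptions, not project code) does exactly that.

```python
from collections import deque

def h_coloring_complexity(h_edges):
    """Classify H-COLORING for a fixed loopless graph H via the classic
    Hell-Nesetril dichotomy: polynomial-time solvable iff H is bipartite,
    NP-complete otherwise.  (Textbook dichotomy, shown only to illustrate
    the concept; not a result of this project.)"""
    adj = {}
    for u, v in h_edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # H is bipartite iff it can be 2-colored; a BFS 2-coloring fails
    # exactly when H contains an odd cycle.
    color = {}
    for start in adj:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    queue.append(w)
                elif color[w] == color[u]:
                    return "NP-complete"
    return "polynomial-time solvable"

print(h_coloring_complexity([(1, 2), (2, 3), (3, 1)]))  # triangle: NP-complete
print(h_coloring_complexity([(1, 2), (2, 3), (3, 4)]))  # path: polynomial-time solvable
```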

Besides results of these two types, the project has resolved important open questions in the field of parameterized algorithms, including the W[1]-hardness of the EVEN SET problem. This problem was one of the very few remaining open questions in the highly influential list appearing in Downey and Fellows’ monograph published in 1999.
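To make the statement concrete: in the standard formulation, EVEN SET asks whether, given a family of sets over a universe and a parameter k, there is a nonempty selection S of at most k universe elements that intersects every given set in an even number of elements (equivalently, whether a binary linear code contains a nonzero codeword of weight at most k). The Python sketch below (for illustration only) shows the obvious brute force, which enumerates n^O(k) candidate sets; the W[1]-hardness result indicates that, under standard complexity assumptions, no f(k)·n^O(1) algorithm exists.

```python
from itertools import combinations

def even_set_brute_force(sets, universe, k):
    """EVEN SET by brute force: try every nonempty S of size <= k and
    check that every given set contains an even number of elements of S.
    Runs in n^O(k) time -- essentially what W[1]-hardness says we are
    stuck with.  (Illustrative sketch, standard formulation assumed.)"""
    for size in range(1, k + 1):
        for S in combinations(universe, size):
            if all(len(set(S) & F) % 2 == 0 for F in sets):
                return set(S)
    return None

# S = {1, 2} meets both sets in an even number of elements.
print(even_set_brute_force([{1, 2, 3}, {1, 2}], [1, 2, 3], 2))  # {1, 2}
```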

Kernelization of graph modification problems

The progress of the project is well demonstrated by our results on the kernelization complexity of graph modification problems. A key goal in the field of parameterized algorithms is to obtain kernelization results: preprocessing algorithms that do not solve the problem fully, but are guaranteed to reduce the size of the instance. Kernelization is particularly well studied in the area of parameterized graph modification problems. We started the ambitious project of settling the kernelization complexity of H-free graph modification problems, where the goal is to make the graph free of induced copies of H by adding/removing edges. While we have not yet achieved a complete classification, our results under submission removed major roadblocks towards this goal by proving a number of incompressibility results and showing that only a small number of graphs H remain whose complexity needs to be understood.
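For readers unfamiliar with kernelization, the textbook example is Buss's kernel for VERTEX COVER (standard course material, not a result of this project): two simple reduction rules shrink any instance to an equivalent one with at most k^2 edges, without ever solving the problem. A minimal Python sketch:

```python
def buss_kernel(edges, k):
    """Buss's kernelization for VERTEX COVER: preprocessing that provably
    shrinks the instance to at most k^2 edges without solving it.
    Returns a reduced equivalent instance (edges', k'), or None if the
    instance is a provable NO-instance.  (Textbook sketch.)"""
    edges = {frozenset(e) for e in edges}
    reduced = True
    while reduced and k >= 0:
        reduced = False
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        for v, d in degree.items():
            # Rule: a vertex of degree > k is in every vertex cover of
            # size <= k, so take it into the cover and delete its edges.
            if d > k:
                edges = {e for e in edges if v not in e}
                k -= 1
                reduced = True
                break
    # Every remaining vertex has degree <= k, so k vertices can cover at
    # most k^2 edges; more remaining edges certify a NO-instance.
    if k < 0 or len(edges) > k * k:
        return None
    return edges, k

# A star with 5 leaves and k=1: the center has degree > 1, so the rule
# takes it and the kernel becomes the empty instance with budget 0.
print(buss_kernel([(0, i) for i in range(1, 6)], 1))  # (set(), 0)
```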

Algorithmically exploiting bounded treewidth

There is a wide literature on designing efficient algorithms under the assumption that the input instance has a tree decomposition of small width. It is natural to investigate the range of problems that admit such improved algorithms and to quantify the possible improvements. Our results published in [SICOMP ‘18] and [ACM TALG ‘18] were among the first to realize that such lower bounds are possible, proving asymptotically tight lower bounds for a number of fundamental combinatorial problems. The success of these initial investigations suggests the feasibility of a more systematic analysis of how treewidth affects problem complexity. In particular, a fundamental result of our [SICOMP ‘18] paper is that c-COLORING for a fixed number c of colors requires c^w dependence on the width w of the tree decomposition.

Since c-COLORING can be interpreted as a special case of a homomorphism problem, it is natural to try to generalize the lower bound to this context. We started our investigations with the reflexive list H-homomorphism problem and presented a complete characterization, for every fixed H, of how the complexity depends on the width of the tree decomposition. The characterization required new lower bounds and matching algorithmic results that combine the (fairly obvious) idea of considering only incomparable vertices in H and the (not so obvious) idea of exploiting certain types of separators in H. This result is a good demonstration of the main conceptual message of the project: we have identified two algorithmic ideas in the landscape of H-homomorphism problems, and the matching lower bounds show that, in a formal sense, there are no further algorithmic ideas that can improve the running time in this context.
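To illustrate where the c^w term comes from, the following Python sketch shows the standard dynamic programming for c-COLORING over a rooted tree decomposition (a generic textbook scheme, included only to illustrate the state space, not the project's algorithm): the table at each node stores the proper colorings of the bag that extend to the subtree below it, hence at most c^|bag| entries per node.

```python
from itertools import product

def c_colorable(edges, bags, children, root, c):
    """Decide c-colorability by dynamic programming over a rooted tree
    decomposition: the table at node t keeps every proper coloring of the
    bag that extends to the subgraph covered by the subtree of t.  Each
    table has at most c**len(bag) entries -- the source of the c^w term."""
    edge_set = {frozenset(e) for e in edges}

    def solve(t):
        bag = bags[t]
        child_tables = [solve(s) for s in children.get(t, [])]
        table = []
        for colors in product(range(c), repeat=len(bag)):
            f = dict(zip(bag, colors))
            # keep f only if it is proper on the edges inside the bag ...
            if any(frozenset((u, v)) in edge_set and f[u] == f[v]
                   for u in bag for v in bag if u != v):
                continue
            # ... and every child table has a coloring agreeing with f on
            # the shared vertices (the separator between the two bags)
            if all(any(all(g[v] == f[v] for v in f.keys() & g.keys())
                       for g in tab)
                   for tab in child_tables):
                table.append(f)
        return table

    return bool(solve(root))

# A 4-cycle has treewidth 2; with this decomposition it is 2-colorable.
print(c_colorable(edges=[(1, 2), (2, 3), (3, 4), (4, 1)],
                  bags={0: (1, 2, 3), 1: (1, 3, 4)},
                  children={0: [1]}, root=0, c=2))  # True
```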
Perhaps the most novel aspect of the results developed in the project is exhibiting the feasibility of proving dichotomy theorems for graph problems. As explained above, a dichotomy theorem characterizes the complexity of every problem in a family obtained by restricting the input in all possible ways (of a certain type), and thereby formally identifies every possible easy special case of the problem. The results of the project indicate that the search for dichotomy theorems for graph problems should receive more attention, especially in the context of fixed-parameter tractability: we have presented various ways in which one can formulate such questions and showed that some of these questions can be answered with reasonable research effort.