Periodic Reporting for period 4 - ReduceSearch (Rigorous Search Space Reduction)
Reporting period: 2023-07-01 to 2023-12-31
REDUCESEARCH aims to re-shape the theory of effective preprocessing with a focus on search-space reduction. The goal is to develop a toolkit of algorithmic preprocessing techniques that reduce the search space, along with rigorous mathematical guarantees on the amount of search-space reduction that is achieved in terms of quantifiable properties of the input. The three main algorithmic strategies are: (1) reducing the size of the solution that the solver has to find, by already identifying parts of the solution during the preprocessing phase; (2) splitting the search space into parts with limited interaction, which can be solved independently; and (3) identifying redundant constraints and variables in a problem formulation, which can be eliminated without changing the answer.
These three strategies raise the scientific study of preprocessing to the next level.
The first area of investigation concerns preprocessing strategies that aim to reduce the search space of the follow-up algorithm by identifying parts of an optimal solution during the preprocessing phase. The project yielded two key publications that provide preprocessing algorithms with two different types of search-space reduction guarantees. The first result in this direction is based on a combinatorial structure called the antler decomposition, which lifts the successful notion of crown reduction (used for preprocessing the Vertex Cover problem) to the setting of the undirected Feedback Vertex Set problem. A second result introduces a framework of c-essential vertices to capture vertices that cannot be avoided when computing a c-approximation for a vertex subset minimization problem on graphs, resulting in guarantees that the search space can be reduced in proportion to the number of c-essential vertices present in the input.
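The idea of committing part of the solution during preprocessing can be illustrated with a much simpler classic than the antler decomposition: the Buss rule for Vertex Cover, which forces any vertex of degree greater than the budget k into the cover. The sketch below is only a stand-in for the project's techniques, and all names in it are illustrative.

```python
# Illustration of strategy (1): identifying part of the solution during
# preprocessing. This is the classic Buss rule for Vertex Cover, not the
# project's antler-based method: any vertex of degree > k must belong to
# every vertex cover of size <= k, so it can be committed to the solution
# up front and removed from the graph.

def buss_preprocess(edges, k):
    """Return (forced_vertices, remaining_edges, remaining_budget)."""
    forced = set()
    while True:
        # Count degrees in the current edge set.
        degree = {}
        for u, v in edges:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        high = [v for v, d in degree.items() if d > k]
        if not high or k == 0:
            break
        v = high[0]
        forced.add(v)                          # v is in every small cover
        edges = [e for e in edges if v not in e]
        k -= 1                                 # budget shrinks accordingly
    return forced, edges, k

# A star with center 0 and 5 leaves, plus one disjoint edge; with k = 2
# the center has degree > 2 and is forced into every cover of size <= 2.
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (6, 7)]
forced, rest, k_left = buss_preprocess(edges, 2)
```

After preprocessing, the follow-up solver only has to search the much smaller remaining instance with the reduced budget.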
The second pillar of the project concerns decomposition strategies to split the search space into parts that can be solved nearly-independently. In a sequence of several papers, the project has built up an algorithmic theory around the notion of H-treewidth, which extends the classic notion of treewidth by making it aware of a graph class H that is considered 'simple' for the optimization problem of interest. This results in algorithms that decompose the graph into parts belonging to the simple class H, attached to the remainder of the graph through small separators, thereby yielding algorithms that are provably efficient on a much larger class of inputs than before.
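The payoff of decomposition is easiest to see in the extreme case of parts with no interaction at all: connected components can be solved independently and their answers summed, turning 2^(a+b) brute-force work into 2^a + 2^b. H-treewidth generalizes this to parts that interact only through small separators. The following minimal sketch (not the project's algorithms) applies the component-wise idea to minimum Vertex Cover:

```python
# Illustration of strategy (2): splitting the search space into parts
# with no interaction and solving each part independently.
from itertools import combinations

def components(vertices, edges):
    """Split the vertex set into connected components via DFS."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, comps = set(), []
    for s in vertices:
        if s in seen:
            continue
        stack, comp = [s], set()
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def min_vertex_cover_size(part, edges):
    """Brute force over one part only; exponential in the part size."""
    vs = sorted(part)
    local = [e for e in edges if e[0] in part]
    for size in range(len(vs) + 1):
        for cand in combinations(vs, size):
            chosen = set(cand)
            if all(u in chosen or v in chosen for u, v in local):
                return size
    return len(vs)

def solve_by_decomposition(vertices, edges):
    # The optimum of the whole graph is the sum over independent parts.
    return sum(min_vertex_cover_size(c, edges)
               for c in components(vertices, edges))

# Two disjoint triangles: each triangle needs 2 cover vertices.
E = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
answer = solve_by_decomposition(range(6), E)
```

The project's H-treewidth framework handles the far more general situation where the parts are not fully disjoint but communicate through small separators.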
The last area covered by the project is that of Constraint Satisfaction Problems (CSPs). In the maximization variant of a CSP, the input consists of a set of variables and a set of constraints defined over those variables, each with an associated weight. The goal is to find an assignment to the variables that maximizes the total weight of the satisfied constraints. This paradigm captures a large number of important combinatorial problems. In an early result of the project, we completely characterized the extent to which a preprocessing phase can compress the description of the constraints, in terms of the total number of variables. We developed preprocessing algorithms that compress the information of the constraints into a new set of constraints that leads to the same answers, and we proved that the amount of data compression is optimal. After this early success, the investigation into CSPs hit a barrier that could not be overcome. Consequently, the focus shifted away from the third pillar and onto the first two areas of investigation.
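The Max-CSP objective described above can be made concrete with a small brute-force sketch: each constraint is a weighted predicate over some of the variables, and we search for the assignment with the largest satisfied weight. This is only an illustration of the problem definition, not of the project's compression algorithms, and the function names are illustrative.

```python
# Illustration of the Max-CSP objective: maximize the total weight of
# satisfied constraints over all assignments (exhaustive search).
from itertools import product

def max_csp(num_vars, constraints):
    """Each constraint is (weight, predicate, variable_indices)."""
    best = 0
    for assignment in product([False, True], repeat=num_vars):
        total = sum(w for w, pred, idx in constraints
                    if pred(*(assignment[i] for i in idx)))
        best = max(best, total)
    return best

# Max-Cut on a triangle phrased as a Max-CSP: one 'not equal' constraint
# of weight 1 per edge; at most 2 of the 3 can be satisfied at once.
neq = lambda a, b: a != b
constraints = [(1, neq, (0, 1)), (1, neq, (1, 2)), (1, neq, (0, 2))]
best_weight = max_csp(3, constraints)
```

The project's compression results concern shrinking the set of constraints itself, before any such search, while provably preserving the optimal weight for every assignment budget.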
The project has pioneered a new theory of preprocessing for NP-hard problems targeted at search-space reduction. Its results have been disseminated via publications in high-standing scientific conferences and journals, in addition to invited presentations at several workshops. A survey reflecting on the project and its results is currently in preparation. The project has succeeded in creating a new theory of preprocessing, thereby paving the way for a follow-up project aimed at implementation and experimentation.