Periodic Reporting for period 1 - LASTING (LArge STructures IN random Graphs)
Reporting period: 2021-07-01 to 2023-06-30
The systematic study of random graphs was launched in 1959 by Erdős and Rényi, and independently by Gilbert. Since then, random graphs have become one of the most central notions in combinatorics; they also have a tremendous number of applications in fields such as networks, algorithms, physics, and the life sciences.
A random graph is a graph sampled from a collection of graphs according to some probability distribution. Random graphs are known for their nice properties. Besides being interesting in their own right, they are often used to understand properties of general graphs, as in many cases their behaviour can shed light on the behaviour of graphs in general.
Combinatorics, going back to its origins, is an area of mathematics primarily concerned with counting. Counting subgraphs in (random) graphs is therefore a well-studied problem: how many copies of a given graph does a (random) graph (typically) contain? More generally, for a family of graphs F, one asks for the (typical) behaviour of the total number of appearances of members of F in a (random) graph. In this project I focus on families of graphs whose members vary widely in size, and on their weighted versions, all having a specific structure.
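As a small illustration of subgraph counting in random graphs, the sketch below counts triangles in the binomial random graph G(n,p) by brute force and compares the empirical average with the standard first-moment calculation: each of the C(n,3) vertex triples spans a triangle with probability p^3, so the expected number of triangles is C(n,3)·p^3. The function names and parameter choices are illustrative, not taken from the project.

```python
import itertools
import random
from math import comb

def sample_gnp(n, p, rng):
    """Sample G(n, p): each of the C(n,2) possible edges kept independently."""
    return {(u, v) for u, v in itertools.combinations(range(n), 2)
            if rng.random() < p}

def count_triangles(n, edges):
    """Brute-force count of triangle copies (feasible only for small n)."""
    return sum(1 for a, b, c in itertools.combinations(range(n), 3)
               if (a, b) in edges and (a, c) in edges and (b, c) in edges)

# First moment: each of the C(n,3) triples is a triangle with probability p^3.
n, p = 12, 0.5
expected = comb(n, 3) * p ** 3   # = 27.5 for these (illustrative) parameters

rng = random.Random(0)
trials = [count_triangles(n, sample_gnp(n, p, rng)) for _ in range(200)]
empirical = sum(trials) / len(trials)
```

Averaged over a few hundred samples, the empirical count concentrates around the expectation, which is the typical first step in analysing subgraph counts.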
Another well-studied example is graph decomposition. The area of graph decomposition has a long history and can be traced back to Kirkman's famous schoolgirl problem from 1850. The goal in these types of problems is to split a graph into pieces, all having a prescribed structure. This notion is strongly connected to edge colouring, where each colour class is identified with a subgraph of the decomposition. In its most basic form, the colouring problem asks for the minimum number of colours needed to split the edges of the graph into matchings (where a matching is a collection of edges no two of which share a vertex).
Independent sets in random subgraphs of the hypercube (Gal Kronenberg and Yinon Spinka, 44 pages, https://arxiv.org/abs/2201.06127)
In this work I studied the problem of counting subgraphs belonging to a certain family.
An independent set in a graph G is a subset of vertices of G containing no edges. The problem of computing the total number of independent sets in a graph is known to be hard. This has been studied for various graphs, including the d-dimensional hypercube Q_d, where the problem of counting independent sets is particularly interesting due to its relation to the hardcore model from statistical mechanics. In the early 1980s, Korshunov and Sapozhenko computed the asymptotic number of independent sets in the hypercube. This classical result was recently refined by Jenssen and Perkins who gave a formula and an algorithm for computing the asymptotics of this parameter. In this work, we extend this to the number of independent sets in a random subgraph of the hypercube. Let Q_{d,p} be the random subgraph of the hypercube Q_d obtained by keeping each edge independently with probability p. We study the asymptotic number of independent sets in Q_{d,p} as d → infinity for a wide range of parameters p, including values of p tending to zero (as a function of d), constant values of p, and values of p tending to one. The results extend to the hardcore model on Q_{d,p}, and are obtained by studying the closely related antiferromagnetic Ising model on the hypercube, which can be viewed as a positive-temperature hardcore model on the hypercube.
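The objects in this work can be made concrete in a few lines of code. The sketch below builds the hypercube Q_d (vertices are the d-bit strings, joined when they differ in one bit), samples the random subgraph Q_{d,p} by keeping each edge independently with probability p, and counts independent sets by brute force. This exhaustive count is only feasible for very small d and is meant purely to illustrate the definitions; the asymptotic results of the paper concern d tending to infinity, where such enumeration is hopeless. Function names are illustrative.

```python
import itertools
import random

def hypercube_edges(d):
    """Edges of Q_d: vertices 0..2^d - 1, joined iff they differ in one bit."""
    return [(u, u ^ (1 << i)) for u in range(2 ** d)
            for i in range(d) if u < u ^ (1 << i)]

def random_subgraph(edges, p, rng):
    """The random subgraph Q_{d,p}: keep each edge independently with prob. p."""
    return [e for e in edges if rng.random() < p]

def count_independent_sets(num_vertices, edges):
    """Brute force over all 2^n vertex subsets; only feasible for tiny graphs."""
    count = 0
    for mask in range(2 ** num_vertices):
        # A subset is independent iff no kept edge has both endpoints in it.
        if all(not ((mask >> u) & 1 and (mask >> v) & 1) for u, v in edges):
            count += 1
    return count
```

For example, Q_2 is a 4-cycle with 7 independent sets, and Q_3 has 35; deleting edges (smaller p) can only increase the count, since every independent set of Q_d remains independent in any subgraph.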
Decomposing cubic graphs into isomorphic linear forests (Gal Kronenberg, Shoham Letzter, Alexey Pokrovskiy, Liana Yepremyan, 45 pages, https://arxiv.org/abs/2210.11458)
In this work I studied the problem of decomposing graphs.
In graph theory, we aim to understand the behaviour of graphs of different types. It would be useful if we could analyse complex graphs using other graphs with much simpler properties. One way to do so is to decompose the graph into smaller pieces, all having a common structure. The most classic example in this direction is graph colouring, one of the most studied topics in graph theory. The edge colouring problem seeks to partition the edges of a graph into matchings (that is, collections of isolated edges), where each matching is called a colour class. The minimum number of matchings needed is called the chromatic index of the graph. Vizing proved already in the 1960s that the chromatic index is always either ∆ or ∆ + 1, where ∆ denotes the maximum degree of the graph (the maximum number of edges touching one vertex). This parameter has been considered for many types of graphs, including random graphs, multigraphs (graphs in which an edge may appear multiple times), hypergraphs (graphs in which an edge may consist of more than 2 vertices), and more. However, determining the exact chromatic index is known to be very difficult, and it may require as many as ∆ + 1 colours, even for sparse graphs. Thus, it is natural to ask what can be achieved with fewer colours.
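Vizing's theorem can be illustrated on tiny examples by brute force. The sketch below checks whether an edge colouring is proper (every colour class is a matching) and finds the chromatic index by exhaustive search; this exponential search is for illustration only, since determining the chromatic index is hard in general. For cycles, ∆ = 2, and the code confirms that even cycles need 2 colours while odd cycles need ∆ + 1 = 3, exhibiting both cases of Vizing's theorem. Function names are illustrative.

```python
import itertools

def is_proper(edges, colouring):
    """Proper edge colouring: no two same-coloured edges share a vertex."""
    for (e1, c1), (e2, c2) in itertools.combinations(zip(edges, colouring), 2):
        if c1 == c2 and set(e1) & set(e2):
            return False
    return True

def chromatic_index(edges):
    """Smallest k admitting a proper edge k-colouring (exponential brute force)."""
    for k in range(1, len(edges) + 1):
        if any(is_proper(edges, c)
               for c in itertools.product(range(k), repeat=len(edges))):
            return k

def cycle(n):
    """Edge list of the cycle C_n (maximum degree 2)."""
    return [(i, (i + 1) % n) for i in range(n)]
```

By Vizing's theorem the answer for any graph is ∆ or ∆ + 1; the brute force merely decides which of the two cases occurs.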
Here I consider decompositions of the edges of graphs into slightly more complicated subgraphs, which allow the use of fewer colours. Perhaps the most natural candidate is a collection of disjoint paths (instead of disjoint edges). The linear arboricity of a graph G, denoted la(G), is the minimum number of edge-disjoint linear forests (i.e. collections of disjoint paths) in G whose union is all the edges of G. This notion was introduced by Harary in 1970 as one of the covering invariants of graphs, and has since been studied quite extensively for many classes of graphs. The linear arboricity conjecture of Akiyama, Exoo and Harary states that la(G) is at most (∆+1)/2 (rounded up), and they proved it for ∆ = 3. The conjecture is still wide open, although there has been some progress over the past three decades. For cubic graphs (∆ = 3), Wormald conjectured in 1987 that not only can the graph be coloured with 2 colours so that each colour class is a collection of paths, but the colour classes can also be made isomorphic (assuming the necessary divisibility conditions). In this work we essentially prove his conjecture, showing that it holds for large connected cubic graphs. The proof method relies heavily on random graphs: we split the graph into two edge-disjoint random graphs in a clever way that guarantees that each has maximum degree at most 2 and behaves similarly to the binomial random graph G(n,1/2) (up to small dependencies). Using properties of the random graph, we show that a small number of edge swaps between the two graphs yields the required isomorphic linear forests.
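The certificate at the heart of such a decomposition is easy to verify: a colour class is a linear forest exactly when every vertex has degree at most 2 in it and it contains no cycle. The sketch below checks this (via a small union-find) and verifies it on K_4, the smallest cubic graph: its six edges split into two 3-edge paths, two isomorphic linear forests, matching the conjectured bound of ⌈(∆+1)/2⌉ = 2 colours. The function names and the particular split of K_4 are illustrative, not taken from the paper.

```python
def is_linear_forest(edges):
    """A linear forest is a disjoint union of paths: max degree <= 2, no cycle."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
        if max(degree[u], degree[v]) > 2:
            return False                   # a vertex of degree 3 ends no path
        ru, rv = find(u), find(v)
        if ru == rv:
            return False                   # this edge would close a cycle
        parent[ru] = rv
    return True

# K_4 is cubic; one illustrative split of its 6 edges into two isomorphic
# linear forests (both are paths on four vertices), as in Wormald's conjecture:
forest_a = [(0, 1), (1, 2), (2, 3)]   # path 0-1-2-3
forest_b = [(1, 3), (3, 0), (0, 2)]   # path 1-3-0-2
```

Together the two classes use every edge of K_4 exactly once, so la(K_4) = 2; for large cubic graphs the difficulty is of course not verifying such a split but proving one exists.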