The Power of Randomization in Uncertain Environments

Periodic Reporting for period 3 - UncertainENV (The Power of Randomization in Uncertain Environments)

Reporting period: 2022-10-01 to 2024-03-31

Much of the research on the foundations of graph algorithms is carried out under the assumption that the algorithm has full knowledge of the input data.
In spite of the theoretical appeal and simplicity of this setting, the assumption of full knowledge does not always hold.
Indeed, uncertainty and partial knowledge arise in many settings.
One example is when the data is very large, in which case even reading the entire input once is infeasible and sampling is required.
Another example is when the data changes over time (e.g. social networks, where information is fluid).
A third example is when processing of the data is distributed over computation nodes, each of which has only local information.

Randomization is a powerful tool in the classic setting of graph algorithms with full knowledge, where it is often used to simplify an algorithm and to speed up its running time.
However, physical computers are deterministic machines, and obtaining true randomness can be hard.
Therefore, a central line of research focuses on the derandomization of algorithms that rely on randomness.
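As a concrete illustration (a textbook example, not a result of this project), consider Luby's randomized algorithm for computing a maximal independent set (MIS): each surviving vertex draws a random priority and joins the MIS if it beats all of its surviving neighbors. A minimal Python sketch, with the graph represented as a dict of neighbor sets (our illustrative choice):

```python
import random

def luby_mis(adj):
    """Luby's randomized maximal independent set (MIS).

    adj: dict mapping each vertex to the set of its neighbors.
    Terminates in O(log n) rounds with high probability.
    """
    alive = set(adj)  # vertices not yet decided
    mis = set()
    while alive:
        # Each surviving vertex draws an independent random priority.
        priority = {v: random.random() for v in alive}
        # A vertex joins the MIS if its priority beats all surviving neighbors.
        winners = {v for v in alive
                   if all(priority[v] < priority[u]
                          for u in adj[v] if u in alive)}
        mis |= winners
        # Winners and their neighbors leave the graph.
        alive -= winners | {u for v in winners for u in adj[v]}
    return mis

# Example: a path 0-1-2-3; possible outputs include {0, 2}, {1, 3}, {0, 3}.
print(luby_mis({0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}))
```

The algorithm is short precisely because random priorities break the symmetry between neighbors; deterministic algorithms with comparable guarantees tend to be far more involved.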

The challenge of derandomization also arises in settings where the algorithm operates under some degree of uncertainty.
In fact, in many settings with uncertainty the challenge and the motivation for derandomization are even stronger.
Randomization by itself adds another layer of uncertainty, because different runs of the algorithm may produce different results.
In addition, in such settings randomization often comes with extra assumptions on the model itself, which weakens the guarantees of the algorithm.
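To illustrate what derandomization looks like in the classic full-knowledge setting (again a textbook technique, not a construction from the project), the method of conditional expectations converts the expected guarantee of a random cut for MAX-CUT into a deterministic one by fixing vertices one at a time:

```python
def derandomized_max_cut(adj):
    """Method of conditional expectations for MAX-CUT.

    A uniformly random side for each vertex cuts half of the edges in
    expectation; fixing the vertices one by one, each time choosing the
    side that keeps the conditional expectation at least as large,
    yields a deterministic cut of at least m/2 edges.

    adj: dict mapping each vertex to the set of its neighbors (undirected).
    """
    side = {}
    for v in adj:
        # Edges to unplaced neighbors still contribute 1/2 in expectation
        # whatever we choose, so only edges to placed neighbors matter.
        cut_if_a = sum(1 for u in adj[v] if side.get(u) == "B")
        cut_if_b = sum(1 for u in adj[v] if side.get(u) == "A")
        side[v] = "A" if cut_if_a >= cut_if_b else "B"
    part_a = {v for v, s in side.items() if s == "A"}
    return part_a, set(adj) - part_a
```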

The goal of this project is to investigate the power of randomization in uncertain environments, focusing mainly on two fundamental areas of graph algorithms with uncertainty: dynamic algorithms and distributed graph algorithms.
We have been working on understanding the power of randomization in fault-tolerant, dynamic, and distributed settings, obtaining both randomized and deterministic solutions and trying to close the gap between them.
In particular, we developed new randomized algorithms for computing spanners and maximal independent sets (MIS) in the congested clique model, together with a new neighborhood-collection technique that allows each node to learn a neighborhood around itself in a constant number of rounds.
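The report does not spell the technique out, so as an illustration only: a standard way for every node to learn a large ball around itself in few rounds is graph exponentiation, in which the radius a node knows doubles each round. The sketch below (function name and representation are ours) shows the doubling idea in a centralized simulation; in the congested clique the real difficulty lies in bounding message sizes, which this sketch ignores:

```python
def collect_neighborhoods(adj, rounds):
    """Graph-exponentiation sketch: after r rounds every node knows the
    set of vertices within distance 2**r of it.

    adj: dict mapping each vertex to the set of its neighbors.
    """
    # Round 0: each node knows itself and its direct neighbors (radius 1).
    known = {v: {v} | adj[v] for v in adj}
    for _ in range(rounds):
        # Each node merges the knowledge of every node it already knows,
        # doubling the radius of its known ball.
        known = {v: set().union(*(known[u] for u in known[v])) for v in adj}
    return known
```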
We also made substantial progress on fault-tolerant distance oracles: for some problems we developed new and simple deterministic solutions that match the state of the art for randomized solutions, and for other problems we developed faster randomized algorithms.
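For context (this is the naive baseline, not the project's construction): a fault-tolerant distance oracle answers queries of the form "what is the distance from s to t if a given edge fails?". The trivial answer reruns a shortest-path computation per query; the point of an oracle is to invest preprocessing so that such queries are answered much faster. A Python sketch of the baseline:

```python
import heapq

def dijkstra_avoiding(adj, source, banned_edge=None):
    """Distances from source, optionally avoiding one undirected edge.

    adj: dict mapping v to a list of (neighbor, weight) pairs.
    banned_edge: frozenset of the edge's two endpoints, or None.
    """
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist.get(v, float("inf")):
            continue  # stale heap entry
        for u, w in adj[v]:
            if banned_edge is not None and frozenset((v, u)) == banned_edge:
                continue  # skip the failed edge
            if d + w < dist.get(u, float("inf")):
                dist[u] = d + w
                heapq.heappush(heap, (d + w, u))
    return dist

def ft_distance(adj, s, t, failed_edge):
    """Naive fault-tolerant distance query: recompute from scratch."""
    return dijkstra_avoiding(adj, s, frozenset(failed_edge)).get(t, float("inf"))
```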
We further developed new and improved randomized algorithms for classical problems such as computing the girth of a directed graph and constructing distance oracles.
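For orientation, the girth of an unweighted directed graph can be computed exactly with one BFS per vertex in O(nm) time, since the shortest cycle through a vertex src has length dist(src, v) + 1 for some edge (v, src). This simple baseline (our illustration, not the project's algorithm) is the bar that faster randomized algorithms aim to beat:

```python
from collections import deque

def directed_girth(adj):
    """Girth of an unweighted directed graph via one BFS per vertex.

    adj: dict mapping each vertex to an iterable of out-neighbors.
    Returns float('inf') if the graph is acyclic.
    """
    girth = float("inf")
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            v = queue.popleft()
            if dist[v] + 1 >= girth:
                continue  # cannot yield a shorter cycle
            for u in adj[v]:
                if u == src:
                    # An edge back to the BFS root closes a cycle.
                    girth = min(girth, dist[v] + 1)
                elif u not in dist:
                    dist[u] = dist[v] + 1
                    queue.append(u)
    return girth

# Example: 0 -> 1 -> 2 -> 0 plus a dangling edge; the girth is 3.
print(directed_girth({0: [1], 1: [2], 2: [0, 3], 3: []}))
```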
We anticipate progress in all of the areas described in the proposal: distributed computing, dynamic graph algorithms, and fault-tolerant structures.
Until the end of the project, we plan to continue developing both randomized and deterministic solutions for algorithms that handle some form of uncertainty, and to deepen our understanding of the gap between randomized and deterministic solutions.