Community Research and Development Information Service - CORDIS


AOC Report Summary

Project ID: 339539
Funded under: FP7-IDEAS-ERC
Country: Switzerland

Periodic Report Summary 3 - AOC (Adversary-Oriented Computing)

The goal of the AOC (Adversary-Oriented Computing) project is to contribute to building high-assurance distributed programs by introducing a new dimension for separating and isolating their concerns, as well as a new scheme for modularly composing and reusing them.

We have so far conducted research on Adversary-Oriented Computing in several directions.

- The first direction considers a classical distributed system model where a static set of nodes seek to achieve a common goal despite failures. In a recent ACM PODS 2016 paper, we closed the fundamental question of the complexity of atomic commit under an asynchronous adversary, a question that had been open since the first results on the problem under a synchronous adversary in 1983. We considered an asynchronous adversary but focused on runs governed by a weaker adversary, in which the system is failure-free and synchronous; this setting (with various trade-offs between the main properties defining the problem) had never been addressed. To do so, we introduced new ways of measuring the causality of messages in a distributed system, which may be of interest in their own right.
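As background, the atomic commit problem asks a set of nodes to agree on committing a transaction if and only if every participant votes to commit. A minimal sketch of this classical decision rule (illustrative only; this is not the algorithm or the complexity analysis from the PODS 2016 paper):

```python
# Minimal sketch of the atomic-commit decision rule (illustrative only;
# not the algorithm from the PODS 2016 paper). A coordinator collects
# one vote per participant and decides COMMIT iff every vote is YES.

def atomic_commit_decision(votes):
    """Return 'COMMIT' iff all participants voted 'YES', else 'ABORT'."""
    return "COMMIT" if all(v == "YES" for v in votes) else "ABORT"

# A failure-free run: every participant votes YES, so the outcome commits.
print(atomic_commit_decision(["YES", "YES", "YES"]))  # COMMIT
print(atomic_commit_decision(["YES", "NO", "YES"]))   # ABORT
```

The hard part, and the subject of the paper, is the cost of reaching this decision when the adversary controls failures and message delays; the rule itself is the easy part.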

- From a more practical perspective, we recently developed new programming language support for building reliable distributed systems under different kinds of adversaries (our USENIX OSDI 2016 paper). We generalized a very old idea, that of futures (or promises), in a novel manner: the programmer can dynamically adapt the program to the adversary.
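To illustrate the classical notion of a future that the paper generalizes, here is a sketch using Python's standard `concurrent.futures` module (this is the ordinary library abstraction, not the OSDI 2016 language support): the caller waits optimistically for a result, then adapts when the environment turns out to be slow.

```python
# Sketch of a classical future (Python's standard concurrent.futures;
# not the generalized abstraction from the OSDI 2016 paper). The caller
# adapts to a slow (adversarial) environment: wait briefly, then fall back.
from concurrent.futures import ThreadPoolExecutor, TimeoutError
import time

def slow_remote_call():
    time.sleep(0.2)          # models a slow, adversarial network
    return "fresh value"

with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(slow_remote_call)
    try:
        result = future.result(timeout=0.05)   # optimistic fast path
    except TimeoutError:
        result = "cached value"                # degrade gracefully
print(result)
```

The point of the pattern is that the same program text behaves differently depending on how hostile the environment is, which is the dynamic adaptation the paragraph alludes to.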

- We also looked into multi-processor computing. Here the adversary is the operating system, which may schedule processes in a conflicting manner and thereby introduce contention or delays. We defined what it means for a data structure to scale in an adversarial setting (our ACM SOSP 2013 and ACM ASPLOS 2015 papers) and, more recently, proposed a novel pattern for designing data structures that scale, starting from ones that assume no concurrency (and hence no adversary). In the same vein, we also showed that adversaries have a hard time making lock-free data structures non-wait-free (our ACM SPAA 2016 paper).
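A lock-free operation retries until its compare-and-swap (CAS) wins; the adversarial scheduler can delay any single thread, yet some thread always makes progress. A minimal sketch of the retry-loop pattern (real hardware provides CAS atomically; a tiny lock stands in for that primitive here so the pattern itself stays visible):

```python
# Sketch of a lock-free-style increment built on compare-and-swap (CAS).
# Real hardware provides CAS as one atomic instruction; here a small lock
# stands in for that primitive so the retry-loop pattern stays visible.
import threading

class AtomicCell:
    def __init__(self, value=0):
        self._value = value
        self._cas_lock = threading.Lock()  # stands in for hardware CAS

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        with self._cas_lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def lock_free_increment(cell):
    while True:                       # retry until our CAS wins
        current = cell.load()
        if cell.compare_and_swap(current, current + 1):
            return

cell = AtomicCell()
threads = [threading.Thread(
               target=lambda: [lock_free_increment(cell) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(cell.load())  # 4000: every increment eventually succeeds
```

Wait-freedom is the stronger guarantee that every individual thread finishes in a bounded number of its own steps; the SPAA 2016 result cited above concerns how hard it is for a scheduler to stop such retry loops from behaving wait-free in practice.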

- We recently studied the impact of an adversary that kills basic elements of a neural network (neurons or synapses), as well as of a more severe adversary that can control them (Byzantine neurons and synapses); see our IEEE IPDPS 2016 paper. We believe this result to be fundamental, as it enables an understanding of the robustness of neural networks and could help build better ones. We are also looking into natural algorithms in which an adversary fails some elements (our DISC 2016 paper on synchronising fireflies).
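The kind of experiment this enables can be sketched in a few lines: run a tiny feed-forward network, let the adversary kill a neuron (zero its output), and measure how far the output moves. The weights below are made up for illustration; the IPDPS 2016 analysis is far more general.

```python
# Sketch: how a tiny feed-forward network degrades when an adversary
# kills neurons (zeroes their outputs). Weights are illustrative only.
import math

def forward(x, weights_hidden, weights_out, dead=frozenset()):
    hidden = []
    for j, w_row in enumerate(weights_hidden):
        if j in dead:                  # a killed neuron outputs nothing
            hidden.append(0.0)
        else:
            hidden.append(math.tanh(sum(w * xi for w, xi in zip(w_row, x))))
    return sum(wo * h for wo, h in zip(weights_out, hidden))

W_h = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]]   # 3 hidden neurons, 2 inputs
W_o = [0.7, -0.5, 0.2]
x = [1.0, 2.0]

healthy = forward(x, W_h, W_o)
degraded = forward(x, W_h, W_o, dead={1})       # adversary kills neuron 1
print(abs(healthy - degraded))                  # output deviation
```

A Byzantine neuron is strictly worse than a killed one: instead of contributing zero, it contributes an arbitrary value chosen by the adversary, so the deviation is no longer bounded by the neuron's honest output.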

- Last but not least, we also looked at the "user" dimension. Here we considered recommender systems in a setting where the adversary is curious. We introduced a new form of privacy that extends differential privacy (our VLDB paper): the idea is to make it hard for an adversary to determine not only whether a user liked a given movie, but also whether the user liked any movie within some distance of that movie.
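A minimal sketch of the Laplace mechanism that underlies differential privacy can make the extension concrete: protecting a whole neighborhood of movies, rather than one movie, enlarges the sensitivity of a query and hence the noise. All names and parameters below are illustrative, not from the paper.

```python
# Sketch of the Laplace mechanism underlying differential privacy.
# Protecting every item within some distance of a movie (the extension
# described above) enlarges the set of indistinguishable neighbors, so
# the sensitivity, and hence the noise scale, grows with the size of
# that neighborhood. Names and parameters are illustrative.
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of the Laplace distribution.
    u = rng.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_item_count(true_count, epsilon, neighborhood_size, rng):
    # Hiding a whole neighborhood of items can change the count by up to
    # neighborhood_size, so noise scales with neighborhood_size / epsilon.
    sensitivity = neighborhood_size
    return true_count + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(42)
print(private_item_count(100, epsilon=1.0, neighborhood_size=5, rng=rng))
```

With `neighborhood_size = 1` this reduces to the standard differentially private count; larger neighborhoods trade more noise for the stronger, distance-based guarantee.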

