To date, the design of ethical machine learning (ML) algorithms has been dominated by technology owners and remains broadly criticized for strategically seeking to avoid legally enforceable restrictions. To foster trust in ML technologies, society demands that technology designers deeply engage all relevant stakeholders in ML development.
This ERC project aims to respond to this call with a society-aware approach to ML (SAML). My goal is to enable the collaborative design of ML algorithms, so that they are not driven solely by the economic interests of the technology owners but are agreed upon by all stakeholders and, ultimately, trusted by society. To this end, I aim to develop multi-party ML algorithms that explicitly account for the goals of different stakeholders: owners, the experts who design the algorithm (e.g. technology companies); consumers, those who are affected by the algorithm (e.g. users); and regulators, the experts who set the regulatory framework for its use (e.g. policy makers). The proposed methodology will enable quantifying and jointly optimizing the business goals of the owners (e.g. profit), the benefits to the consumers (e.g. information access), and the risks defined by the regulators (e.g. societal polarization).
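The idea of jointly optimizing stakeholder objectives can be illustrated with a minimal sketch. The sketch below scalarizes three toy objectives into a single multi-party objective and optimizes a design parameter by grid search; all function names, functional forms, and weights are illustrative assumptions for exposition, not part of the project itself.

```python
import numpy as np

# Hypothetical stand-ins for the three stakeholder objectives.
# Their forms are illustrative assumptions, not the project's actual models.
def owner_utility(theta):        # e.g. profit, peaking at theta = 2
    return -(theta - 2.0) ** 2

def consumer_utility(theta):     # e.g. information access, peaking at theta = 1
    return -(theta - 1.0) ** 2

def regulator_risk(theta):       # e.g. societal-polarization risk, grows with theta
    return 0.5 * theta ** 2

def multi_party_objective(theta, weights=(1.0, 1.0, 1.0)):
    """Scalarized multi-party objective: reward owner and consumer
    utilities, penalize regulator-defined risk."""
    w_o, w_c, w_r = weights
    return (w_o * owner_utility(theta)
            + w_c * consumer_utility(theta)
            - w_r * regulator_risk(theta))

# Jointly optimize the design parameter by a simple grid search.
grid = np.linspace(-5.0, 5.0, 1001)
best = grid[np.argmax(multi_party_objective(grid))]
print(round(best, 2))  # compromise between the stakeholders' optima
```

The weights make the stakeholder trade-off explicit: changing them shifts the chosen design point, which is the kind of negotiable, inspectable quantity a multi-party design process would put on the table.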
The SAML project involves a high-risk/high-gain paradigm shift from an owner-centered to a society-centered (multi-party) ML design. On the one hand, it will require significant and challenging methodological innovations at every stage of ML development, from data collection all the way to algorithm learning. On the other hand, it will change how ML technologies are deployed in society by enabling an informed discussion among the different stakeholders and, more broadly, within society about these new technologies. The results of this project will provide the urgently needed methodological foundations to ensure that these technologies are at the service of society.
Fields of science
- HORIZON.1.1 - European Research Council (ERC) Main Programme