Periodic Reporting for period 4 - FairSocialComputing (Foundations for Fair Social Computing)
Reporting period: 2023-01-01 to 2023-06-30
As social computations pervasively shape our social lives, determining what news we see, whom we meet, what goods and services are offered at what price, and how our creditworthiness and welfare benefits are assessed, questions are being raised about their fairness. A fair social computation is one that is perceived as just by the participants in the computation. The case for fair computations in democratic societies is self-evident: when computations are deemed unjust, their outcomes will be rejected and they will eventually lose their participants. Unfairness concerns about current social computations include:
1. Implicit biases in search and recommendations: During the 2016 US elections, Google’s search algorithms were criticized for presenting ideologically biased results for queries on certain political topics, while Facebook’s personalized newsfeed algorithms were accused of enabling widespread dissemination of fake and biased news stories, polarizing the electorate and fragmenting it into ideological “filter bubbles”. These algorithmic biases are frequently implicit, i.e., unintended consequences of algorithms designed to optimize for retrieving stories that users find interesting or relevant.
2. Ethical concerns with algorithmic decision making: A recent study of COMPAS, a commercially used recidivism risk estimation algorithm (i.e., one that predicts the chance of a criminal reoffending in the future), revealed significant disparity in its prediction accuracy for different social groups. More broadly, ethics and legal scholars have raised concerns about the use of machine learning algorithms to replace humans in making life-altering assessments such as recidivism risk prediction, credit assessments, or welfare benefit suitability, given the lack of mechanisms to embed ethical values into the learning models.
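The disparity described above can be made concrete with a small sketch. The code below is not the COMPAS analysis itself; it merely illustrates, on hypothetical labels and predictions, how a group-wise error metric (here, the false positive rate) can differ between two groups even when overall data looks balanced. All names and numbers are illustrative assumptions.

```python
def false_positive_rate(y_true, y_pred):
    """Fraction of actual negatives (did not reoffend) predicted positive (high risk)."""
    negatives = [(t, p) for t, p in zip(y_true, y_pred) if t == 0]
    if not negatives:
        return 0.0
    return sum(p for _, p in negatives) / len(negatives)

# Hypothetical outcomes (1 = reoffended) and predictions (1 = high risk),
# split by a sensitive group attribute.
group_a = {"y_true": [0, 0, 0, 1, 1], "y_pred": [1, 1, 0, 1, 0]}
group_b = {"y_true": [0, 0, 0, 1, 1], "y_pred": [0, 0, 1, 1, 1]}

fpr_a = false_positive_rate(group_a["y_true"], group_a["y_pred"])
fpr_b = false_positive_rate(group_b["y_true"], group_b["y_pred"])

# A large gap signals the disparity of concern: one group's non-reoffenders
# are flagged "high risk" far more often than the other's.
print(f"FPR group A: {fpr_a:.2f}, group B: {fpr_b:.2f}, gap: {abs(fpr_a - fpr_b):.2f}")
```

On this toy data, group A's non-reoffenders are flagged high-risk twice as often as group B's, even though both groups have the same base rate of reoffending.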
3. Lack of trust and transparency in black-box algorithms: Public policy experts fear that the lack of transparency in current social computations is leading to a black-box society, where (a) participants do not know what data about themselves is being used in life-affecting computations; (b) regulators cannot check the compliance of the computations with laws; and (c) even the designers of computations cannot fully comprehend the behavior of algorithms trained using complex learning models that are hard for humans to interpret.
Traditionally, the topic of fairness has been studied in the social, economic, political, and moral sciences and in the law. The General Data Protection Regulation adopted by the European Parliament represents an ambitious legal effort towards making “automated individual decision making, including profiling” more fair. The FairSocialComputing project pursues a similarly ambitious computer-science-centered research agenda to complement these legal efforts towards fair social computations.
In the context of search and recommender systems, we operationalised and synthesised notions of fair representation and fair exposure of diverse voices and information in search rankings and recommendations. Our work offers ways to reduce implicit biases in the search and recommender systems currently used on the Web and on social media sites. Additionally, our studies auditing ad platforms on social media sites revealed numerous potential societal harms, ranging from (a) leakage of private user data to advertisers and (b) incomplete or inaccurate explanations of the data used for targeting ads, to (c) discriminatory targeting of opportunity (job, housing, and financial services) ads and (d) malicious targeting of ads to provoke societal conflicts and influence elections. Our studies have directly led to changes in the way some widely used online advertising platforms operate, and our results have been referenced by policy makers seeking to further investigate and regulate the practices of online ad platforms.
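To give a flavour of the fair-exposure notion mentioned above, the sketch below computes how much attention each group of items captures in a ranking, under the standard position-bias assumption that exposure decays logarithmically with rank. The ranking, group labels, and discount model are illustrative assumptions, not the project's specific formulation.

```python
import math

def exposure(rank):
    # Discounted exposure: top-ranked items get disproportionately more attention.
    return 1.0 / math.log2(rank + 1)

# A hypothetical ranking of items, each belonging to a group
# (e.g. a voice or viewpoint whose representation we want to audit).
ranking = ["A", "A", "B", "A", "B", "B"]

totals = {}
for rank, group in enumerate(ranking, start=1):
    totals[group] = totals.get(group, 0.0) + exposure(rank)

# Each group's share of the total exposure in the ranking.
total = sum(totals.values())
shares = {g: v / total for g, v in totals.items()}
print(shares)
```

Note that although both groups contribute three items each, group A captures a clearly larger exposure share simply because its items sit higher in the ranking; auditing and correcting such gaps is the essence of fair-exposure ranking.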
Finally, in the context of market/match-making algorithms, we conducted some of the earliest studies investigating and addressing fairness concerns that arise from algorithms mediating interactions between multiple stakeholders on online marketplace platforms such as Amazon or Uber. These algorithms must contend with the more complex fairness issues that arise from the multi-sided nature of such platforms: for example, a product recommender system on Amazon would have to consider whether it is being fair to sellers, buyers, and producers of products. Our studies exposed how marketplace platforms (e.g. Amazon) are being exploited, by third parties and by the platforms themselves, to bias user attention towards their preferred information and preferred stakeholders; these studies have attracted attention from both the research community and industry regulators.
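The multi-sided tension described above can be sketched minimally. The greedy re-ranker below (an illustrative assumption, not the project's algorithm) trades off buyer-side relevance against seller-side exposure by penalizing sellers that already occupy earlier slots, so a single seller cannot monopolize the top of the list. All products, sellers, scores, and the penalty weight are hypothetical.

```python
# Hypothetical candidates: (product_id, seller, relevance_to_buyer)
candidates = [
    ("p1", "s1", 0.95),
    ("p2", "s1", 0.93),
    ("p3", "s1", 0.91),
    ("p4", "s2", 0.90),
    ("p5", "s3", 0.88),
]

LAMBDA = 0.2  # weight of the seller-crowding penalty (illustrative choice)

ranking, shown_sellers = [], {}
pool = list(candidates)
while pool:
    # Score = buyer relevance minus a penalty per slot the seller already holds.
    best = max(pool, key=lambda c: c[2] - LAMBDA * shown_sellers.get(c[1], 0))
    ranking.append(best)
    shown_sellers[best[1]] = shown_sellers.get(best[1], 0) + 1
    pool.remove(best)

print([p for p, _, _ in ranking])
```

Without the penalty, seller s1 would take the top three slots by relevance alone; with it, sellers s2 and s3 are surfaced earlier at a small relevance cost, illustrating one way a multi-sided platform can balance buyer and seller interests.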