
Foundations for Fair Social Computing

Periodic Reporting for period 4 - FairSocialComputing (Foundations for Fair Social Computing)

Reporting period: 2023-01-01 to 2023-06-30

Social computing represents a societal-scale symbiosis of humans and computational systems, where humans interact via and with computers, actively providing inputs that influence, and being influenced by, the outputs of the computations. Examples of social computations include (i) machine-learning-based predictive analytics for scoring and classifying human behaviors, ranging from recidivism risk estimation and predictive policing to credit scoring and profiling welfare recipients; (ii) search and recommendations on Web, crowdsourcing, and social media platforms like Google, Facebook, and Twitter; (iii) match-making algorithms connecting producers and consumers of goods and services on e-commerce platforms like Amazon, sharing-economy platforms like Uber and AirBnB, job-search sites like LinkedIn, and freelancer sites like CrowdFlower. The outputs of these social computations are determined by processing inputs about user characteristics, choices, and interactions at a scale unprecedented in human civilization. As such, social computing offers new possibilities for societies to organize themselves and richer ways for humans to cooperate in making decisions in complex situations.

As social computations pervasively impact all aspects of our social lives, from what news we get to see and whom we meet, to what goods and services are offered at what price, and how our creditworthiness and welfare benefits are assessed, questions are being raised about their fairness. A fair social computation is one that is perceived as just by the participants in the computation. The case for fair computations in democratic societies is self-evident: when computations are deemed unjust, their outcomes will be rejected and they will eventually lose their participants. Unfairness concerns about current social computations include:
1. Implicit biases in search and recommendations: In the 2016 US elections, Google’s search algorithms were criticized for presenting ideologically biased results for queries on certain political topics, while Facebook’s personalized newsfeed algorithms were accused of enabling the widespread dissemination of fake and biased news stories, polarizing the electorate and fragmenting it into ideological “filter bubbles”. These algorithmic biases are frequently implicit, i.e. unintended consequences of algorithms designed to optimize for retrieving stories that users find interesting or relevant.
2. Ethical concerns with algorithmic decision making: A recent study of a commercially used recidivism risk estimation (i.e. predicting the chance of a criminal reoffending in the future) algorithm called COMPAS revealed significant disparity in its prediction accuracy for different social groups (a sketch of such a group-wise error-rate audit follows this list). More broadly, ethics and legal scholars have raised concerns about the use of machine learning algorithms to replace humans when making life-altering assessments like recidivism risk prediction, credit assessments, or welfare benefit suitability, given the lack of mechanisms to embed ethical values into the learning models.
3. Lack of trust and transparency in black-box algorithms: Public policy experts fear that the lack of transparency in current social computations is leading to a black-box society, where (a) participants do not know what data about themselves is being used in life-affecting computations; (b) regulators cannot check the compliance of the computations with laws; and (c) even designers of computations cannot fully comprehend the behavior of algorithms trained using complex learning models that are hard for humans to interpret.
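The COMPAS disparity mentioned in point 2 is typically measured by comparing error rates across groups. The minimal sketch below illustrates such an audit on a toy dataset; the groups, labels, and column names are hypothetical placeholders, not the actual COMPAS data.

```python
# Minimal sketch of a group-wise error-rate audit in the spirit of the
# COMPAS study described above. All data here is made up for illustration.
import pandas as pd

df = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B"],
    "y_true": [0, 1, 0, 0, 1, 1],   # did the person actually reoffend?
    "y_pred": [1, 1, 0, 0, 0, 1],   # classifier's high-risk prediction
})

for group, sub in df.groupby("group"):
    neg = sub[sub.y_true == 0]      # people who did not reoffend
    pos = sub[sub.y_true == 1]      # people who did reoffend
    fpr = (neg.y_pred == 1).mean() if len(neg) else float("nan")
    fnr = (pos.y_pred == 0).mean() if len(pos) else float("nan")
    print(f"group {group}: FPR={fpr:.2f}, FNR={fnr:.2f}")
```

A persistent gap between the per-group false positive and false negative rates is exactly the kind of unequal mistreatment reported for COMPAS.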

Traditionally, the topic of fairness has been studied in the social, economic, political, and moral sciences and in the law. The General Data Protection Regulation adopted by the European Parliament represents an ambitious legal effort towards making “automated individual decision making, including profiling” more fair. The FairSocialComputing project attempts a similarly ambitious computer-science-centered research agenda to complement these legal efforts towards fair social computations.
In the context of predictive analytics (e.g. risk assessments for loans, jobs, welfare, or policing), we have operationalised (i.e. provided formal interpretations for) and synthesised (i.e. designed algorithmic mechanisms for) many desired notions of fairness, at the level of both individuals and groups, and analysed the tradeoffs between achieving different fairness desiderata. Specifically, our operationalisation of group fairness notions leads to non-discriminatory predictive analytics, which prevents harm to already disadvantaged or marginalised demographic groups in society and allows for predictive analytics that conform to existing anti-discrimination laws, be they in the context of risk assessments for employment or policing or access to financial services. We also studied how decision support systems based on predictive risk analytics can be better designed to assist human experts in critical decision-making scenarios.
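As a hedged illustration of what operationalising a group fairness notion can look like, the sketch below enforces one common desideratum, demographic parity (equal positive-decision rates across groups), by post-processing a classifier with per-group thresholds. This is a generic textbook construction on synthetic data, not the project's own published mechanisms.

```python
# Sketch: enforcing demographic parity by per-group thresholding of a
# scikit-learn classifier. Data is synthetic; the sensitive attribute,
# features, and target rate are all illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
group = rng.integers(0, 2, n)                  # sensitive attribute (0/1)
X = rng.normal(size=(n, 3)) + group[:, None]   # features correlated with group
y = (X.sum(axis=1) + rng.normal(size=n) > 1).astype(int)

clf = LogisticRegression().fit(X, y)
scores = clf.predict_proba(X)[:, 1]

target_rate = 0.3   # desired positive-decision rate for every group
decisions = np.zeros(n, dtype=int)
for g in (0, 1):
    mask = group == g
    # Each group's threshold is the (1 - target_rate) quantile of its own
    # scores, so both groups receive positive decisions at the same rate.
    thresh = np.quantile(scores[mask], 1 - target_rate)
    decisions[mask] = (scores[mask] >= thresh).astype(int)

for g in (0, 1):
    print(f"group {g}: positive rate = {decisions[group == g].mean():.2f}")
```

Per-group thresholding is only one point in the design space; the tradeoff analyses mentioned above concern precisely how such constraints interact with accuracy and with other fairness notions.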

In the context of search and recommender systems, we operationalised and synthesised notions of fair representation and fair exposure of diverse voices and information in search rankings and recommendations. Our work offers ways to reduce implicit biases in the search and recommender systems currently used on the Web and on social media sites. Additionally, our studies auditing ad platforms on social media sites have revealed numerous potential societal harms, including (a) leakage of private user data to advertisers; (b) incomplete or inaccurate explanations of the data used for targeting ads; (c) discriminatory targeting of opportunity (job, housing, and financial services) ads; and (d) malicious targeting of ads to provoke societal conflicts and influence elections. Our studies have directly led to changes in the ways some widely used online advertising platforms operate, and our results have been referenced by policy makers to further investigate and regulate the practices of online ad platforms.
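One standard way to make "fair exposure" concrete, shown in the sketch below, is to discount attention by rank position and compare each group's share of exposure to its share of relevance. This is a general formulation, not necessarily the project's exact one, and the items, groups, and relevance scores are invented for illustration.

```python
# Sketch: exposure-based fairness in a ranking. Position weights follow
# the common 1/log2(rank+1) discount; items and relevances are made up.
import math

items = [  # (item_id, group, relevance), already sorted by relevance
    ("a", "G1", 0.9), ("b", "G1", 0.8), ("c", "G2", 0.7),
    ("d", "G2", 0.6), ("e", "G1", 0.5),
]

exposure, relevance = {}, {}
for rank, (item, grp, rel) in enumerate(items, start=1):
    exposure[grp] = exposure.get(grp, 0.0) + 1.0 / math.log2(rank + 1)
    relevance[grp] = relevance.get(grp, 0.0) + rel

total_exp = sum(exposure.values())
total_rel = sum(relevance.values())
for grp in exposure:
    print(f"{grp}: exposure share={exposure[grp]/total_exp:.2f}, "
          f"relevance share={relevance[grp]/total_rel:.2f}")
```

A large gap between a group's exposure share and its relevance share is one signal of unfair representation that a fair ranking mechanism would aim to reduce.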

Finally, in the context of market/match-making algorithms, we conducted some of the earliest studies investigating and addressing fairness concerns that arise from algorithms mediating interactions between multiple stakeholders on online marketplace platforms like Amazon or Uber. These algorithms need to contend with the more complex fairness issues that arise from the multi-sided nature of these platforms -- for example, a product recommender system on Amazon would have to consider whether it is being fair to sellers, buyers, and producers of products. Our studies exposed how marketplace platforms (e.g. Amazon) are being exploited by third parties, and by the platforms themselves, to bias user attention towards their preferred information and preferred stakeholders; these studies have attracted attention from both the research community and industry regulators.
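To make the multi-sided tension concrete, here is a hedged toy sketch of one possible balancing idea: filling each user's recommendation slots by relevance while steering ties toward under-exposed sellers, so that no seller is starved of attention. The users, sellers, relevance scores, and tie-breaking rule are all illustrative assumptions, not the project's algorithm.

```python
# Sketch: two-sided recommendation that trades buyer-side relevance
# against seller-side exposure. All names and numbers are invented.
users = ["u1", "u2", "u3"]
sellers = ["s1", "s2", "s3"]
relevance = {  # relevance[user][seller], assumed given by a recommender
    "u1": {"s1": 0.9, "s2": 0.4, "s3": 0.2},
    "u2": {"s1": 0.8, "s2": 0.7, "s3": 0.3},
    "u3": {"s1": 0.6, "s2": 0.5, "s3": 0.4},
}
slots_per_user = 2

exposure = {s: 0 for s in sellers}
recs = {}
for u in users:
    # Prefer under-exposed sellers first, then higher relevance to this
    # user, so exposure spreads out without ignoring relevance entirely.
    ranked = sorted(sellers, key=lambda s: (exposure[s], -relevance[u][s]))
    recs[u] = ranked[:slots_per_user]
    for s in recs[u]:
        exposure[s] += 1

print(recs)       # per-user recommendation lists
print(exposure)   # how evenly seller exposure ended up spread
```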