We are witnessing a massive shift in the way people consume information. In the past, people played an active role in selecting the news they read. More recently, information began to appear in people's social media feeds as a byproduct of their social relations. At present, we see a new shift brought about by the emergence of online advertising platforms, where third parties can pay ad platforms to show specific information to particular groups of people through paid targeted ads. These targeting technologies are powered by AI-driven algorithms. Using these technologies to promote information, rather than the products they were initially designed for, opens the way for self-interested groups to exploit users' personal data to manipulate them. European institutions recognize these risks, and many fear a weaponization of the technology to engineer polarization or manipulate voters.
The goal of this project is to study the risks of AI-driven information targeting at three levels: (1) the human level--under which conditions can targeted information influence an individual's beliefs; (2) the algorithmic level--under which conditions can AI-driven targeting algorithms exploit people's vulnerabilities; and (3) the platform level--do targeting technologies lead to biases in the quality of information that different groups of people receive and assimilate. We will then use this understanding to propose protection mechanisms for platforms, regulators, and users.
This proposal's key asset is the novel measurement methodology I propose, which will allow a rigorous and realistic evaluation of these risks by enabling randomized controlled trials on social media. The methodology builds on advances in multiple disciplines and takes advantage of our recent breakthrough in designing independent auditing systems for social media advertising. Successful execution will provide a solid foundation for sustainable targeting technologies that ensure healthy information targeting.
- HORIZON.1.1 - European Research Council (ERC) Main Programme