CORDIS - EU research results

Close Up: Your ID is Your Face

Periodic Reporting for period 1 - DATAFACE (Close Up: Your ID is Your Face)

Reporting period: 2020-09-01 to 2023-08-31

In the age of artificial intelligence, identifying anyone in public space through facial recognition is becoming faster and easier. However, these technologies have flaws due to their technical limitations and biases. Despite their imperfections, governments worldwide deploy them at a breakneck pace. Strong criticism has been voiced over the lack of appropriate legal frameworks to regulate technologies that can negatively impact civil liberties and personal freedoms. For several years, concerns have grown over their use in China, where they are deployed to support a social credit system that rates citizens’ behaviour. Facial recognition technologies (FRTs) can be very pervasive surveillance tools. The proposed research will identify the threats and risks that the use of facial recognition for surveillance poses to the rights to privacy and data protection as defined at EU level. The project seeks to understand whether legitimate and proportionate uses of the technologies can be defined based on country trends (France, the UK, the USA, and to some extent China) (first research objective), the technical characteristics of the biometric systems (second research objective), and the legal frameworks applicable to the rights to privacy and data protection (third research objective).
Within a few years, the regulation of FRTs in public spaces has become a policy priority in Europe and beyond. The country trends have revealed an evolution towards more biometric surveillance. In the USA, the situation is very disparate at state and local levels, with existing bans, rolled-back moratoria, and specific regulations on police use for investigation purposes. In the UK, police forces have resumed using the technologies without any clear legal framework. In France, the institutions have pushed relentlessly, but without success, for an ‘experimental law’; instead, they obtained the use of algorithmic surveillance tools ahead of the Olympic Games. Finally, in China, these technologies, widely deployed to enforce lockdown restrictions during the COVID-19 pandemic, persist without clear rules.
In that context, and to provide recommendations, DATAFACE investigated the regulatory approach to experiments with FRTs in public spaces and the impact of their deployment for surveillance and policing purposes on the EU fundamental rights to privacy and data protection. Concerning the experiments, it offered several policy options based on risk-assessment mechanisms. Concerning the deployment, it found that public security and crime prevention can constitute legitimate aims in the field of law enforcement. However, the use of these particularly intrusive technologies must be based on law, be strictly necessary (‘necessary in a democratic society’) and be proportionate. The necessity of a measure must be demonstrated using objective (i.e. scientifically verifiable) evidence and show that the measure is the least intrusive option. Once necessity is established, proportionality can be assessed (through the existence of safeguards that limit the risks to individuals’ rights). Applying this analysis to the future AI Act, the project found that the proposed rules on police use of FRTs in public spaces fell short of the ‘necessity’ requirement, even though they included several safeguards.
• With the fellowship, Catherine has become a fully independent researcher and a recognized expert in the niche area of biometrics and law, developed a solid international network, delivered several lectures for legal and technical audiences, attended training sessions to enhance her skills, and has been invited as a guest or keynote speaker to several international events.
• Through her research, she has conducted doctrinal legal research and comparative legal analysis to map out FRT use in public spaces in four countries.
• From that analysis, Catherine wrote a chapter on ‘experiments with facial recognition in public spaces: in search of an EU governance framework’, comparing the situation in the UK and France, and she designed lectures and blog posts on the US company Clearview AI (which scraped billions of online images without a legal basis).
• Through her secondment at NTNU (Norway) and her study visit at the iProBe lab (Michigan State University), she gained technical knowledge that she used for her chapter on ‘Biometric Data, Within and Beyond Data Protection’.
• The last phase of the project was the legal analysis of the impact of FRTs on the EU fundamental rights to privacy and data protection. For this part, Catherine analyzed the conditions to restrict the exercise of the rights to privacy and data protection to determine whether the use of FRTs in public spaces can be legitimate, necessary and proportionate. She used that analysis to assess the rules proposed in the future AI Act. Using the European Courts’ tests on the principle of necessity and the factors identified by the EDPS, she found that the proposal did not provide evidence of the strict necessity to deploy these technologies in our democratic public spaces.
• The progress and findings of the project were reported in scientific and professional publications and discussed at multiple legal and technical conferences. Updates were also communicated via a webpage, her LinkedIn profile and the Twitter account of CiTiP (host research centre). In addition to a video explainer on the project, Catherine presented her findings at her final workshop (June 2023).
DATAFACE went beyond the state of the art:
• First, the research filled two gaps. The first concerned the regulatory approach to experiments with FRTs, for which it proposed different avenues based on risk-assessment mechanisms. The second was an under-explored issue: the necessity of FRTs in public spaces for surveillance and policing purposes. Assessing their necessity is a prerequisite to evaluating their proportionality. Yet the existing literature and debates do not focus on that condition; discussions take their existence and deployment for granted, focusing instead on the adoption of safeguards.
• Second, DATAFACE has contributed to the debate on the legal notion of biometric data, showing its existence beyond data protection and trying to reconcile it with its technical meaning.
• Third, DATAFACE has contributed to the research on the privacy issues raised by large-scale biometric training datasets and models trained with Generative AI.

Potential impacts of the project:
• First, the perspective brought by DATAFACE and the analysis of necessity are crucial in the context of the current discussions on the regulation of AI, but the findings could also apply to any discussions concerning future uses of AI systems beyond biometric systems.
• Second, DATAFACE has addressed a societal issue that concerns all citizens and does not end with the fellowship. Whether the use of FRTs, in real-time or post-event, is necessary in a democratic society is not only a legal question but also a social and ethical one.
• Third, the project has advanced the much-needed ‘translational work’ between the legal and technical communities at the intersection of privacy and biometrics.
• Fourth, seeds for future collaborations and new networks with scientists in biometrics in Europe and the USA have resulted from the project and will be part of future research projects.
• Finally, part of the research (e.g. the video explainer) can be used as teaching material (in high schools).
Image extracted from the DATAFACE webpage