AI applications are on the rise, both in number and in value, coupled with prominent EU investment in the field and in related research. AI-driven innovations promise great benefits, but they also carry clear and concerning risks for data protection that are hard to predict and only partially covered by the GDPR. There is an urgent need to understand whether and how regulators should intervene in this clash between AI innovation and data protection. Regulatory sandboxes are a well-suited tool for doing so. A regulatory sandbox can be defined as an environment for the testing of innovative products, services or processes by selected private actors under the supervision of a regulatory authority. Its findings can be used to improve future regulation and/or support compliance with current regulation. REBOOT.AI aims to design a legal theoretical framework to support Member States and their data protection authorities in setting up regulatory sandboxes dedicated to the development of trustworthy AI applications. This framework will be made easily accessible in the form of recommendation guidelines, based on both theoretical and empirical research. For that purpose, the project studies the only two regulatory sandboxes in the European Economic Area that explicitly address the interaction between AI and data protection: those activated by the Norwegian Data Protection Authority and the French Data Protection Authority. By setting a standard framework for regulatory sandboxes on AI and data protection, REBOOT.AI will foster increased adoption of trustworthy AI applications across Europe, thus significantly contributing to the Horizon Europe Action (HORIZON-CL4-2021-HUMAN-01-02).
Fields of science
- HORIZON.1.2 - Marie Skłodowska-Curie Actions (MSCA) Main Programme
Funding Scheme: HORIZON-AG-UN - HORIZON Unit Grant