Periodic Reporting for period 1 - PROTECT (Protecting Personal Data Amidst Big Data Innovation)
Reporting period: 2019-08-01 to 2021-07-31
The rate of technological innovation, now accelerated by big data and machine learning, increasingly outpaces public policy debate and the development of new regulation for the protection of personal data. This comes as the scale and social impact of data analysis are rapidly increasing. Tech companies, especially SMEs, face complex legal and ethical implications resulting from the collection of personal data from users. The pace of change and its technical complexity overwhelm individuals and enterprises trying to weigh the impact of uses of their personal information, especially when those uses also deliver attractive personalisation of services. PROTECT ESRs will develop new ways of empowering users of digital services, individually and collectively, to understand the risks they take with their rights and interests when they go online.
The technical research work of the PROTECT network is conducted through three multidisciplinary workpackages (WP1, WP2 and WP3). These workpackages combine ESRs researching privacy law, the philosophy sub-discipline of technology ethics, and the computer science sub-discipline of knowledge engineering.
WP1, “Privacy Paradigm”, focuses on how an organisation’s data processing intent and the expectations of its data subjects can be better aligned by building consensus around standard forms for privacy policies, combining human-readable language, technical legal code and machine-readable code. WP2, “Ethics of Personalisation”, focuses on the strategic methodological concerns raised by new digital technology that builds an intimate representation of individuals to better tailor services to them, but which also raises new risks to privacy and personal autonomy. WP3, “Personal Data Governance”, focuses on handling the uncertainty and risk involved in accurately informing and guiding the architectural and technological decisions an organisation must make in response to changes in its business, information and technology context.
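To illustrate the kind of standard form WP1 envisages, a privacy policy could pair a human-readable summary with machine-readable terms that software can evaluate automatically. The sketch below is a minimal, hypothetical encoding in JSON; the field names (`purpose`, `legal_basis`, `data_categories`) and the vocabulary values are illustrative assumptions, not the project's actual schema.

```python
import json

# Hypothetical standard-form privacy policy: a machine-readable clause
# carries a human-readable summary alongside controlled-vocabulary
# terms (field names and values are illustrative only).
policy = {
    "policy_id": "example-policy-001",
    "human_readable": (
        "We use your viewing history to recommend content. "
        "You can withdraw consent at any time."
    ),
    "machine_readable": {
        "purpose": "service-personalisation",
        "legal_basis": "consent",
        "data_categories": ["viewing-history"],
        "retention_days": 365,
        "third_party_sharing": False,
    },
}

def permits(policy, purpose, category):
    """Check whether the policy's machine-readable terms permit
    processing a given data category for a given purpose."""
    mr = policy["machine_readable"]
    return purpose == mr["purpose"] and category in mr["data_categories"]

print(permits(policy, "service-personalisation", "viewing-history"))  # True
print(permits(policy, "advertising", "viewing-history"))              # False
print(json.dumps(policy["machine_read" "able"], indent=2))
```

Because both the organisation's processing intent and the data subjects' expectations can be expressed against the same controlled vocabulary, such a form could in principle be checked for alignment automatically, which is the consensus-building goal the workpackage describes.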
To date, the workpackages have undertaken detailed literature reviews of the problem domain from each discipline, captured use cases and conducted further problem-specific investigations. WP1 identified that sufficient solutions exist for standard forms for privacy policies used by organisations communicating with individuals, but not for privacy policies developed by communities of data subjects using decentralised personal online datastores to which organisations may seek access. It then used citizen engagement activities to explore attitudes to taking more communal control over privacy policies. WP2 initially examined conceptual issues related to the notion of personalisation, produced an overview of generic technical features of digital personalisation technologies, and reviewed the main ethical and legal issues identified in the literature. To address the high level of uncertainty in the evolution of personalisation technologies, it undertook a foresight analysis of four such technologies as part of an anticipatory ethics assessment and introduced a concept-term model for the anticipatory ethics analysis of these technologies. WP3 assessed existing approaches to ethical and privacy risk assessment and, on that basis, identified the need to address structural injustices as well as individual harms. It developed an initial model that maps this range of ethical concerns, as captured by the EU High Level Expert Group, to risk-based management systems that could be adopted by enterprises.
Progress in the ethics of personalisation advances the application of anticipatory technology ethics approaches to this important class of digital technologies. The use of open semantic models to capture the core concerns of such analyses, across variations in methodology, allows for future cross-methodology comparison and refinement, as well as the development of findable, accessible, interoperable and reusable (FAIR) representations of personalisation technology ethics assessments. This may enable more systematic study of ethical and privacy concerns in a way that can keep pace with the accelerating capability and application scope of personalisation technology.
Progress in the governance of personal data advances the use of risk management models to address the ethical and privacy risks of AI technology that must be assessed by enterprises. The use of open semantic models to capture such risk analyses can support future requirements for enterprises to undertake and document AI risk assessments under the EU’s proposed AI Act. These open semantic models also have the potential to contribute to the development of the harmonised standards required by the AI Act. The development of open risk management models also enables further study of structural injustices arising from enterprise use of AI, which may not be immediately addressed under the AI Act proposal.
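A risk management model of the kind described above might, for illustration, record each enterprise AI risk alongside the ethical concern it relates to and the group it affects, so that both individual harms and structural injustices stay visible in the same register. The sketch below is a minimal, hypothetical schema; the concern label loosely echoes the EU High Level Expert Group's trustworthy-AI requirements, but the class, field names and scoring scheme are illustrative assumptions, not the project's actual model.

```python
from dataclasses import dataclass, field, asdict

# Hypothetical AI risk-register entry linking an enterprise risk to an
# ethical concern and an affected group (schema is illustrative only).
@dataclass
class AIRiskRecord:
    risk_id: str
    description: str
    ethical_concern: str   # e.g. an HLEG-style requirement label
    affected_group: str    # keeps structural injustice analysable
    likelihood: int        # 1 (rare) .. 5 (almost certain)
    impact: int            # 1 (negligible) .. 5 (severe)
    mitigations: list = field(default_factory=list)

    @property
    def risk_score(self):
        # Simple likelihood x impact scoring, as in many risk registers.
        return self.likelihood * self.impact

record = AIRiskRecord(
    risk_id="R-001",
    description="Recommender system underserves minority-language users",
    ethical_concern="diversity, non-discrimination and fairness",
    affected_group="minority-language users",
    likelihood=3,
    impact=4,
    mitigations=["bias audit", "representative training data review"],
)

print(record.risk_score)  # 12
print(asdict(record))     # serialisable form for documentation/audit
```

Because each record is plain structured data, a register of such records could be exported for the documented risk assessments the AI Act proposal would require, or aggregated by affected group to surface structural patterns across an enterprise's AI systems.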