Responsible Intelligent Systems

Objective

I propose to develop a formal framework for automating responsibility, liability and risk checking for intelligent systems. The computational checking mechanisms take models of an intelligent system, an environment and a normative system (e.g. a system of law) as inputs; the outputs are answers to decision problems concerning responsibilities, liabilities and risks. The goal is to answer three central questions, corresponding to three sub-projects of the proposal:

(1) What are suitable formal logical representation formalisms for knowledge of agentive responsibility in action, interaction and joint action?
(2) How can we formally reason about the evaluation of grades of responsibility and risks relative to normative systems?
(3) How can we perform computational checks of responsibilities in complex intelligent systems interacting with human agents?

To answer the first two questions, we will design logical specification languages for collective responsibilities and for probability-based graded responsibilities, relative to normative systems. To answer the third question, we will design suitable translations to related logical formalisms, for which optimized model checkers and theorem provers exist. Success of the project will hinge on combining insights from three disciplines: philosophy, legal theory and computer science.
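To make the input/output shape of such a checking mechanism concrete, the following is a minimal illustrative sketch, not the project's actual formalism. It implements a toy responsibility check in the style of stit ("seeing to it that") logics: an agent counts as responsible for an outcome if its chosen action guaranteed that outcome while some available alternative did not, and as liable if it is responsible for an outcome the normative system forbids. All names (`guarantees`, `responsible`, `liable`) and the example scenario are hypothetical.

```python
# Illustrative sketch only: a toy, stit-style responsibility and liability
# check. The "model of the intelligent system" is a map from available
# actions to the possible worlds (sets of facts) each action can lead to;
# the "normative system" is a set of forbidden outcomes.

def guarantees(choice_worlds, outcome):
    """A choice guarantees an outcome if the outcome holds in every
    world compatible with that choice."""
    return all(outcome in world for world in choice_worlds)

def responsible(choices, chosen, outcome):
    """Deliberative-stit-style condition: the chosen action ensures the
    outcome, and some alternative action would not have ensured it."""
    ensures = guarantees(choices[chosen], outcome)
    avoidable = any(
        not guarantees(worlds, outcome)
        for action, worlds in choices.items()
        if action != chosen
    )
    return ensures and avoidable

def liable(choices, chosen, forbidden_outcomes):
    """Liability relative to a normative system: the forbidden outcomes
    the agent is responsible for."""
    return [o for o in forbidden_outcomes if responsible(choices, chosen, o)]

# Hypothetical scenario: a driving agent chooses between two actions.
choices = {
    "brake":      [{"stopped"}, {"stopped", "late"}],
    "accelerate": [{"collision"}, {"collision", "late"}],
}
print(liable(choices, "accelerate", {"collision"}))  # → ['collision']
print(liable(choices, "brake", {"collision"}))       # → []
```

A realistic checker along the lines of the proposal would of course operate on far richer models (joint action, probabilities, grades of responsibility) and would translate queries into formalisms supported by optimized model checkers rather than enumerating worlds directly.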

Call for proposal

ERC-2013-CoG

Host institution

UNIVERSITEIT UTRECHT
Address
Heidelberglaan 8
3584 CS Utrecht
Netherlands
Activity type
Higher or Secondary Education Establishments
EU contribution
€ 1 968 057
Principal investigator
Johannes Maria Broersen (Dr.)
Administrative Contact
Mariette Spilker-Maas (Ms.)

Beneficiaries (1)

UNIVERSITEIT UTRECHT
Netherlands