
Responsible Intelligent Systems

Final Report Summary - REINS (Responsible Intelligent Systems)

One of the most disruptive technologies now beginning to affect our lives is Artificial Intelligence (AI). Many have warned against the prospect of AI taking over and changing (or even ending) our lives for good. We believe these warnings are groundless and vastly overestimate what AI is and will be capable of. Ironically, the danger lies not in AI itself, but in the fact that we overestimate its capabilities; overestimation feeds our tendency to delegate responsibilities to machines that cannot handle them. The REINS project aimed to tackle that problem by studying two sub-problems: (1) how do we formally and computationally check responsibilities in environments involving intelligent systems, and (2) how do we endow intelligent systems with moral/legal awareness and with moral/legal reason-based decision capacity? For both problems the REINS project formulated initial answers.

For responsibility checking we have to focus on the intention-forming mechanisms of intelligent systems and on how these mechanisms allow responsibility for wrong outcomes to be formally traced back to their designers. This presupposes a symbolic approach to AI programming. The REINS project has put forward several stit-based formalisms that can function as symbolic representations of responsibility (a schematic stit clause is given below); the programming part had to be left to future work.

The second problem is better known under the name 'machine ethics'. Here we see a solution in the application of rule-based deontic logic formalisms. Such formalisms enable us to endow systems with defeasible moral rules whose 'weights' are adapted on the basis of 'moral feedback' from their social environment. This approach can be characterised as an 'ethical learning technique' based on symbolic information (defeasible rules) and priorities (weights); a minimal sketch of the idea follows below. Learning, however, need not be limited to the weights: the rules themselves can be learned as well. Here we see a promising connection with work from the 1990s on 'inductive logic programming' (also sketched below), which aims at learning new rules from positive and negative examples through generalisation and specialisation of existing rules.
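
To give a flavour of the stit ('seeing to it that') formalisms referred to above, a deliberative stit clause in the style of the general literature (Belnap, Horty) can be glossed as follows; the notation is assumed from that literature, not taken from the project's own formalisms:

    \[
    [\alpha\ \mathsf{dstit}{:}\ \varphi]
    \quad\Longleftrightarrow\quad
    \text{$\alpha$'s current choice guarantees $\varphi$}
    \;\wedge\;
    \Diamond\neg\varphi
    \]

The second conjunct says that ¬φ was still historically possible, so that φ did not come about regardless of what α chose. Tracing responsibility for a wrong outcome then amounts to asking whose choice, including a designer's choice of an intention-forming mechanism, satisfies such a clause.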
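
The following is a minimal sketch, in Python, of the ethical learning technique just described: defeasible rules carry weights, a decision is taken by aggregating the weighted verdicts of the rules that fire in a situation, and the weights are nudged by moral feedback. All names and the perceptron-style update rule are illustrative assumptions, not the project's implementation.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class Rule:
        name: str
        applies: Callable[[Dict[str, bool]], bool]  # does the rule fire here?
        verdict: int                                # +1 recommends acting, -1 forbids it
        weight: float = 1.0                         # defeasible priority, adapted over time

    def evaluate(rules: List[Rule], situation: Dict[str, bool]) -> float:
        # Aggregate the weighted verdicts of all rules that fire;
        # a positive sum suggests acting, a negative sum suggests refraining.
        return sum(r.weight * r.verdict for r in rules if r.applies(situation))

    def update(rules: List[Rule], situation: Dict[str, bool],
               feedback: int, lr: float = 0.1) -> None:
        # Moral feedback from the social environment: +1 approval, -1 disapproval
        # of the action taken. Rules that agreed with the feedback gain weight,
        # rules that disagreed lose weight (clipped at zero).
        for r in rules:
            if r.applies(situation):
                r.weight = max(0.0, r.weight + lr * feedback * r.verdict)

    # Usage: two conflicting defeasible rules in a dilemma situation.
    rules = [
        Rule("keep_promise", lambda s: s.get("promised", False), verdict=+1),
        Rule("avoid_harm", lambda s: s.get("harmful", False), verdict=-1),
    ]
    situation = {"promised": True, "harmful": True}
    evaluate(rules, situation)             # 0.0: the two rules are initially tied
    update(rules, situation, feedback=-1)  # the environment disapproves of acting
    evaluate(rules, situation)             # -0.2: 'avoid_harm' now outweighs 'keep_promise'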
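
In the same hedged spirit, here is a sketch of the inductive-logic-programming idea: a rule is represented as a set of conditions, and it is refined against labelled examples by specialisation (adding a condition so that a wrongly covered negative example is excluded) and generalisation (dropping conditions so that a missed positive example is covered). Real ILP systems are considerably more sophisticated; every name below is illustrative.

    from typing import FrozenSet, Set

    Example = FrozenSet[str]  # a situation, described by the features that hold in it
    # A rule is a set of conditions that must all hold for the rule to fire.

    def covers(rule: Set[str], example: Example) -> bool:
        return rule <= example

    def specialise(rule: Set[str], negative: Example,
                   vocabulary: Set[str]) -> Set[str]:
        # Add one condition that the negative example lacks,
        # so that the rule no longer covers it.
        candidates = sorted(vocabulary - negative)
        return rule | {candidates[0]} if candidates else set(rule)

    def generalise(rule: Set[str], positive: Example) -> Set[str]:
        # Drop the conditions the positive example fails,
        # so that the rule comes to cover it.
        return rule & set(positive)

    # Usage: refine the rule 'act if a promise was made' against a negative example.
    vocabulary = {"promised", "harmful", "consented"}
    rule = {"promised"}
    negative = frozenset({"promised", "harmful"})  # acting here was judged wrong
    if covers(rule, negative):
        rule = specialise(rule, negative, vocabulary)  # becomes {'promised', 'consented'}
    assert not covers(rule, negative)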