Content archived on 2024-06-18

Regulating Emerging Robotic Technologies in Europe: Robotics facing Law and Ethics

Final Report Summary - ROBOLAW (Regulating Emerging Robotic Technologies in Europe: Robotics facing Law and Ethics)

Executive Summary:
The investigation carried out within the RoboLaw project has focused on the multi-layered interplay between robotics and regulation. The research starts from a request, coming from the actors who operate in this sector, for a legal framework that can accompany the development of robotics. Researchers and manufacturers who work on robotics at both the experimental and the industrial level claim that they cannot properly appraise the risks and duties entwined in their activities until a clear analysis of this interplay has been made. The research has explored both the formal and the substantive aspects of the relationship between robotics and regulation. On the one hand, it has focused on the different legal tools that can be employed to regulate technology in general, and robotic technologies in particular; on the other hand, the contents of the relevant existing legislation have been examined, with the aim of verifying whether they already provide a systematic legal framework or whether other forms of regulation are needed, notwithstanding the adaptability and flexibility of the available rules.
The ambition of the European Union to promote innovation in the internal market and foster competitiveness makes robotics a strategic sector, to which the European institutions are devoting considerable attention. At the same time, research and industrial production in the domain of robotics have to grow in accordance with the complementary objective enshrined in the European legal system, that of being an area of freedom, security and justice. The competing goals of protecting consumers, and more generally end-users, from harm and of fostering innovation therefore have to become embedded in the regulatory endeavour and in the innovation process itself.
The RoboLaw project has addressed these challenges and tried to provide answers with regard to: the legal tools that best suit the goal of regulating technology; the type of ethical analysis that should be conducted on technological innovation, and the methodology to apply, in order to align the research with the EU policy on Responsible Research and Innovation (RRI); and the contents of regulation, with special attention to the need to protect the fundamental rights of European citizens.
The prospect of regulating robotics has taken as its points of reference the two requirements of ethical acceptability and orientation towards societal needs that constitute the pillars of the concept of RRI. Not only do robotic products and applications have to comply with the core values embraced in the European context by the constitutional traditions of Member States and positively affirmed by the Charter of Fundamental Rights, but particular attention, and possibly a special legal status in some respects, should also be given to those technologies that respond to societal needs and therefore contribute to achieving normative goals such as equality of opportunities, justice and solidarity, and to improving the quality of life of European citizens, especially the more deprived and vulnerable.
The question whether fundamental rights are threatened by the new technical possibilities opened up by robotics has been tackled; but research efforts have also been devoted to investigating whether an efficient and proactive protection of the fundamental rights and liberties proclaimed at the European level requires fostering innovation in robotics by means of specially designed legal rules or inventive interpretation. The latter perspective has led, for instance, to proposing special rules in the context of liability for damages deriving from the use of robotic technologies in the field of healthcare, namely surgical robots, prostheses and care robots. Robotics for healthcare is in fact a domain that more than others requires regulatory intervention and where legislation promoting innovation is called for. A bundle of demographic, social, technological, economic and political factors orients the development of this sector and makes it a special case to be analyzed from an ethical and legal perspective, and one that qualifies as an effective context for the implementation of EU policy action.

Project Context and Objectives:
The main objective of the research is to understand the legal and ethical implications of emerging robotic technologies, and to uncover (1) whether existing legal frameworks are adequate and workable in light of the advent and rapid proliferation of robotics technologies, and (2) in which ways developments in the field of robotics affect norms, values and social processes we hold dear.
The problem of regulating new technologies has been tackled by almost every legal system in Europe; it is therefore possible to rely on a background which includes a large body of studies on the relationship between law and science and between law and technology. Nevertheless, the RoboLaw project is focused on the extreme frontiers of technological advance, confronting the legal “status” of robotics, nanotechnologies, neuroprostheses and brain-computer interfaces, areas in which very little work has been done so far. The radical novelty of these technological applications and instruments requires an original and more complex investigation, characterized by a multidisciplinary method and a comparative analysis of the diverse approaches adopted in different legal systems.
Several research institutes worldwide have investigated aspects of the regulatory and legal consequences of developments in robotics. However, so far the landscape of “robolaw” (in Europe and outside) is still quite fragmentary. As early as the 1980s legal scholars started to investigate whether the development of artificially intelligent machines, such as robots, would require an adaptation or extension of existing legal frameworks, for instance in relation to liability or legal status. Much of this work was related to agent technology in software systems. Since robotics was still more science fiction than actual fact, however, many of these investigations were sketchy in nature. They addressed important legal themes to be covered by (future) law on robotics, but did not provide concrete measures or rules, nor did they apply to actual legal systems. This project is the first in-depth investigation into the requirements and regulatory framework(s) of “robolaw” in the age of the actualization of advanced robotics, and the first study to combine the many different legal themes that have been investigated in isolation before. Moreover, it is the first research to delve into the legal and ethical consequences of developments in robotics within specific legal systems within the EU and to compare these with the US and the Far East, Japan in particular.
The main goal of the RoboLaw project is to achieve a comprehensive study of the various facets of robotics and law and to lay the groundwork for a framework of “robolaw” in Europe. Where there is no specific legislation aimed at regulating these new technologies, the problems they pose need to be confronted within the framework of existing legal systems; an objective of the research is, therefore, to verify the applicability of current rules and to use the present instruments and categories to formulate possible solutions. This preliminary investigation will also point towards areas of regulation that are in need of adjustment or revision in order to accommodate the issues opened up by innovation in the field of robotics.
The research is set to consider the contents of any possible regulation in order to decide which rule best adjusts to the specific features of a given technology, possibly by distinguishing among the various technologies examined, since different characteristics may suggest different regulatory strategies. But it will also try to identify the type of legal tool that is best suited to reach the goals of a certain uniformity of regulation and of spontaneous compliance. The a-territorial nature of technological advance calls for an intervention at the European level (at least), but this can take different forms: soft-law instruments without binding force, although with potential normative effects, or hard law.
After having examined the North American approach to emerging robotic technologies and the Eastern perspective, represented especially by the Japanese and Chinese systems, the RoboLaw project aims at developing a specifically European approach to the topic, characterized by the core “European values” enshrined in the European sources of law, such as the European Charter of Fundamental Rights. The final outcome of the research is a set of regulatory guidelines (D6.2 “Guidelines on Regulating Robotics”) addressed to European policy makers and intended to promote a technically feasible, yet also ethically and legally sound, basis for future robotics developments.
Project Results:
The research activities carried out within the project have led to the formulation of policy considerations and legal recommendations for regulating robotic technologies.
Preliminarily, in order to establish the background of this investigation, a study was devoted to understanding the multilayered interplay between law, science and technology, and to setting the research against this broader context. This analysis was also intended to examine the diverse possible approaches and the sources of law at stake in the perspective of regulating robotic developments, including the alternative between soft-law instruments and mandatory instruments. These results were attained through a workshop on “Regulating Technological Development at the Intersection of Science and Law”, where invited speakers were asked to confront the topic both from a theoretical perspective and in some specific fields (ICT, agro-food technologies, robotic technologies, etc.) taken on as case studies (D2.1). A collective volume, “Law and Technology. The Challenge of Regulating Technological Development”, was published in 2013 in the RoboLaw Series; the book comprises the workshop’s presentations, enriched with further reflections by the same authors, new contributions, an extensive introduction and a concluding chapter that draws lessons for the task of elaborating guidelines for regulating robotics. The volume also initiates the RoboLaw Series, which will accompany the project throughout its duration and serve as a compact and recognizable means of dissemination (D.2).
WP 3, which is devoted to the task of “Roadmapping RoboLaw”, first produced a deliverable outlining a methodology for analyzing existing regulatory provisions regarding emerging robotic technologies. This methodology, if adopted to its full extent, leads to an extensive encyclopedia of norms related to robotic technologies and a detailed comparison between the countries studied (D3.2). Next, a comprehensive state of the art of existing regulation in robotics, and relevant for robotics, in international, European and Member State law was compiled (D3.1). The deliverable provides a brief overview of the kind of robotics on which the analysis focuses. We have adopted a broad definition, including traditional robots or ‘machines’, i.e. constructed systems that display both physical and mental agency but are not alive in the biological sense. Examples are industrial robots, domestic robots, care robots, medical and surgery robots, autonomous vehicles, and humanoids/animoids. The field also includes softbots (autonomous software agents) and hybrid-bionic systems (human beings who are equipped with robotic technologies or biomedically enhanced, e.g. mechatronic and biomedical prostheses, soft tissue modelling, and brain/neural interfaces). Next, given the absence of robot-specific regulation in the countries explored for the deliverable, five common legal themes are discussed: (i) health, safety, consumer, and environmental regulation; (ii) liability (including product liability and liability in certain sectors); (iii) intellectual property rights (both to the robot itself and to works created by the robot); (iv) privacy and data protection; and (v) capacity to perform legal transactions (e.g. whether intelligent agents can enter into contracts).
(i) Health, safety, consumer and environmental regulation: An extensive set of EU-based health and safety requirements is relevant for robots and robotic technologies. The requirements aim at protecting workers in factories against the dangers of (heavy) machinery and equipment. With the development and widespread deployment of industrial robots, specific regulation (for instance the ISO 10218 standard) has been developed. Industrial robots are applied in a controlled and well-structured environment, generally by users who are trained for specific tasks, and safety is directly related to the functioning of the machine. In contrast, service robots are applied in less structured environments by people for a wide range of tasks, often with no specific training, and safety typically depends on the interaction of the robot and humans within less clearly defined spaces. Therefore, as robotic applications move from the structured, professional environments of industry into hospitals, homes, shops, and the street, a new wave of regulation will have to be developed to cope with the specific health and safety issues that emerge in these new environments. Differences in safety risks and levels of user training will affect the nature of safety requirements and hence the design of the robots.
Another relevant finding is that there is a complex interplay between different regulations. Robots are regulated throughout their lifecycle under different regimes, ranging from regulation of hazardous substances and product safety requirements to rules on disposal of waste equipment. Depending on their type, they fall under general regimes of consumer goods and product safety but also potentially under product-specific regimes, such as toys or cars. This complex interplay merits further study for determining which sets of legal requirements obtain for which types of robots and robotic devices, in order to see whether gaps in legal protection or conflicting rules exist for certain specific robotic applications.
(ii) Liability. Robots cannot be held liable themselves for acts or omissions that cause damage to third parties under existing legal regimes. However, manufacturers, owners or users of robotic technologies may be held responsible for damage caused by robots, if the cause of the robot’s behaviour can be traced back to them. These actors may be held liable if they could have foreseen and avoided the robot’s behaviour under the rules of fault liability. More importantly, though, they can be held strictly liable (which does not depend on fault or negligence) for acts or omissions of the robot, for example, if the robot can be qualified as a dangerous object or if it falls under product liability rules.
Still, it is hard to provide evidence of the link between human behaviour and damage caused by robotic technologies, particularly in cases where a person cannot distinctly control the actions of a robot. The damage may also be the result of a multitude of factors, as the functioning of the robot is likely to be complex and may depend not only on hardware and (self-learning) software, but also on sensors and interaction with unpredictable environmental factors. This makes the liability risks more difficult to estimate, which can lead to legal uncertainty that may have to be addressed by the legislature.
The position that the legislature should take, though, is not straightforward. Imposing substantial liability on manufacturers, owners or users of robots for damages caused to third parties may enhance safety and thus support wider social acceptance of robots, but it can also make industry or users risk averse and lead to slowing down the development and roll-out of new robot applications. A low level of liability could, in contrast, stimulate innovation and experimentation with new types of robots, which could ultimately enhance welfare in sectors such as health or transport. Therefore, the law should strike a careful balance between the conflicting interests of manufacturers, users, and third parties, and between risk regulation and stimulation of innovation.
(iii) Intellectual property rights (IPRs). As with all emerging technologies, robotics inventions and products can be protected by intellectual property rights, such as patents, trademarks and copyright. There are no legal provisions that specifically apply to robotics, but a first scan of existing legal regimes and doctrines does not show obvious difficulties or specific ambiguities when these are applied to robotics. The law is reasonably clear on what can and what cannot be protected. It is nevertheless possible that a more in-depth analysis of specific robot applications could uncover legal issues that are less obvious. As there may be public-policy reasons to extend or reduce the protection afforded by IPRs, further research would also be welcomed to determine whether the current application of IPRs sufficiently meets the needs of the robotics industry and society at large.
A second IPR-related question is whether robots themselves are capable of producing copyrightable works. The UK has dedicated legislation with a positive stance to computer-generated or robot-generated works (although it is debated how this should be applied exactly), whereas other countries lack such legislation and seem to deny the possibility of such protection. This is an area where the law as it stands does not come up with clear answers. Issues that need clarification in legal research and practice are, for example, what exactly is a computer-generated work, who is the initial rights holder of such a work, and how the criterion of an “own intellectual creation” can be applied to computer-generated works.
(iv) Privacy and data protection. Many robots will contain information technology and many of those are likely to process sensor data. When these data concern individuals, the processing of these data by robots is subject to data protection regulation, which aims at providing privacy protection through requirements relating to, among other things, transparency, security, and lawful and fair processing.
The data controller who is responsible for personal data processing by a robot (this will often be the owner) has to comply with the data protection requirements. In practice, this should usually not be problematic to achieve. The emerging field of privacy by design can prove useful in making and keeping robots data protection-compliant. Some legal requirements can be implemented in the robot’s software and interface, such as data security through data encryption and data access control. Requirements such as informed consent can be implemented in system design, for example through interaction with users via displays and input devices. Designing in data protection is not only relevant for compliance purposes, but it can also improve social acceptance. However, there are significant differences in data protection frameworks between the EU and other jurisdictions, which could make it difficult for manufacturers catering for the international market to design in specific data protection rules. This is an issue that merits more detailed study.
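To make the idea of designing in data protection more concrete, the following is a minimal, purely illustrative sketch (not taken from any RoboLaw deliverable) of how a care robot's software might encrypt personal sensor data at rest and refuse to process data without recorded consent. The class name, the consent flow and the choice of the Python cryptography library are assumptions made for illustration only.

```python
# Illustrative sketch only: a hypothetical data-protection-by-design layer for a
# service robot. Library choice (cryptography), class names and consent flow are
# assumptions, not part of any RoboLaw deliverable.
from cryptography.fernet import Fernet


class PersonalDataStore:
    """Stores sensor readings about identifiable persons, encrypted at rest,
    and only if the data subject has given (and not withdrawn) consent."""

    def __init__(self) -> None:
        self._key = Fernet.generate_key()      # data security: encryption key
        self._cipher = Fernet(self._key)
        self._consent: dict[str, bool] = {}    # lawful processing: consent register
        self._records: dict[str, list[bytes]] = {}

    def record_consent(self, person_id: str, granted: bool) -> None:
        # Informed consent could be collected via the robot's display/input devices.
        self._consent[person_id] = granted

    def store_reading(self, person_id: str, reading: str) -> bool:
        # Refuse to process personal data without consent (fair and lawful processing).
        if not self._consent.get(person_id, False):
            return False
        token = self._cipher.encrypt(reading.encode("utf-8"))
        self._records.setdefault(person_id, []).append(token)
        return True

    def erase(self, person_id: str) -> None:
        # Support erasure requests by the data subject.
        self._records.pop(person_id, None)
        self._consent.pop(person_id, None)


if __name__ == "__main__":
    store = PersonalDataStore()
    store.record_consent("resident-42", granted=True)
    print(store.store_reading("resident-42", "heart_rate=72"))   # True
    print(store.store_reading("visitor-7", "entered kitchen"))   # False, no consent
```

The sketch merely illustrates how requirements such as data security and consent can be reflected in software architecture; real compliance would of course depend on the applicable data protection framework and the specific deployment context.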
Although robots could be developed and employed in accordance with current legislation, the data protection framework may show a certain disconnect with emerging robotics practices. Sometimes it will not be clear on whose behalf a robot will be processing personal data and for what purposes. This will especially be the case with highly autonomous devices embedded on a large scale in everyday life (e.g. ambient intelligent environments). Particularly when these devices are small or simple and contain limited means for communication with users, it may be difficult to meet all data protection requirements.
(v) Capacity for legal transactions. Software agents are becoming more intelligent and capable of taking over tasks traditionally performed by humans. Physical robots, such as companion and care robots, will also become more sophisticated and may have to be equipped with the capability of rendering basic services beyond purely material care, such as assistance in purchasing food, drugs, newspapers, or bus tickets. For such applications, it could be useful if robots had the capacity to perform legal transactions.
Robots currently do not have legal personality in any of the jurisdictions studied. In the current legal frameworks, robots can only act as ‘mere tools’ to carry out commands that can, directly or indirectly, be attributed to human beings. The legal responsibility for robot actions thus rests with their human ‘master’.
Theoretically, it is possible to attribute legal personality to robots by changing the law. After all, the law defines who or what has legal capacity to act, and under what conditions. Basic requirements for granting legal personality to non-human entities, such as corporations, are that they are registered and have property. Registration requirements could in principle be extended to robots (including requirements on how robots can prove their registered identity); the capability of owning property is less easy to create, although legal constructions could be devised to accommodate this. Another issue to resolve, if robots were to be granted legal personality, is how disputes in which the robot is a party can be settled: how can robots be represented in court?
Should these issues concerning legal personality be resolved at a certain point in time, more practical requirements and rules pertaining to legal acts will come into play. These issues will be smaller in character and probably easier to address. The programmability of robots makes it possible to implement relevant legal conditions into the machine, for instance a clear expression of the robot’s “intent” to enter into a contract. However, the field of artificial intelligence and law has shown the limits of embedding regulation into machine code, for instance, when rules or requirements are open textured or vague. It remains to be determined whether robots will ultimately be able to cope with situations where reasonableness and equity are expected of contracting parties.
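As a purely hypothetical illustration of embedding legal conditions into machine code, and of where such encoding reaches its limits, the sketch below hard-codes an explicit mandate and spending limit for a robot's purchase decisions; all names, rules and thresholds are invented for this example and are not drawn from the project.

```python
# Illustrative sketch only: hard-coding some legal conditions for a robot that
# purchases everyday goods on its owner's behalf. Names and thresholds are
# hypothetical; open-textured norms such as "reasonableness" resist this encoding.
from dataclasses import dataclass


@dataclass
class PurchaseRequest:
    item: str
    price_eur: float
    owner_authorised_items: frozenset[str]
    spending_limit_eur: float


def may_conclude_contract(req: PurchaseRequest) -> tuple[bool, str]:
    """Machine-checkable conditions: express 'intent' only within an explicit mandate."""
    if req.item not in req.owner_authorised_items:
        return False, "item outside the owner's explicit mandate"
    if req.price_eur > req.spending_limit_eur:
        return False, "price exceeds the authorised spending limit"
    # A vague requirement such as "the price must be reasonable" cannot be
    # reduced to a fixed rule here; it would need human review or further guidance.
    return True, "robot may express intent to contract"


if __name__ == "__main__":
    req = PurchaseRequest(
        item="newspaper",
        price_eur=2.50,
        owner_authorised_items=frozenset({"newspaper", "bread", "bus ticket"}),
        spending_limit_eur=20.0,
    )
    print(may_conclude_contract(req))
```

A vague standard such as reasonableness or equity, by contrast, cannot be captured by a fixed rule of this kind, which is precisely the limit noted above.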
In the final part of the deliverable, three case studies are explored, each discussed from the perspective of different jurisdictions. We have aimed to cover the cases, within the limits of feasibility, from the perspective of a selection of relevant jurisdictions: Italy, the Netherlands, the UK, Germany, the US, Japan, and China.
The final deliverable of WP3 consists of papers prepared for a workshop entitled "Opportunities and risks of robotics in relation to human values", held in Tilburg on 23-24 April 2013 (D3.3). The papers address topics such as robots as products enhancing human values, roboticists as responsible for building trust in robots, rethinking the notion of bodily integrity and its role as a limit to the freedom to use one’s body, big data applications in medical diagnosis and their effects on dignity and self-determination, and robots as creators of original works.
Within WP 4, a taxonomy of robotic technologies based on the machines’ typological differences and domain of application was formulated (D4.1). This taxonomy should help clarify the differences in the legal relevance of robotic technologies and the impact they have on existing legislation. A report on case studies from the SSSA laboratories was also produced (D4.2). The human capabilities that may be affected by developments in robotics (both in terms of enhancement or augmentation and in terms of recovery or assistance) have been associated with fundamental rights and freedoms deriving from European law and charters and from the Member States’ constitutions. This part of the research has been developed from several viewpoints and through collaborative and interdisciplinary work. A report has clarified the concept of human capabilities and articulated in which ways they can be affected in a world of robotics and how this is relevant for the purpose of constructing an EU regulatory framework (D4.3). Moreover, a general overview of how fundamental rights are challenged by robotic technologies, with a special focus on three application domains (industrial robotics, assistive technology, and biomedical robotics), has been discussed in an article published in the International Journal of Technoethics (D4.4).
The concept of human enhancement through robotics, the profound consequences for our conception of what it means to be human, and how technologies may be designed in such a way as to safeguard core social values have been studied in depth in the context of several activities carried out by the RoboLaw team. Within WP5, HUB Berlin completed a report (D5.1) on the topic. The report provides an ethical-philosophical analysis of human enhancement (HE). Its main goal is not to formulate a new definition, as there have already been many attempts to define HE. Rather, HUB and LMU researchers found that a conceptual and terminological vagueness about HE persists and represents a challenge for those who want to question it philosophically. The analysis takes readers through the evolution of the debate and considers HE from several viewpoints. It begins with pro-enhancement and anti-enhancement stances and arrives at a more reasoned view of the ethical and social issues that are raised by technology-based visions of the future. More precisely, for years the debate on HE has stalled on unilateral and polarized positions, “transhumanists” versus “bioconservatives” being the most renowned opposition. Over time, other approaches have emerged which help to clear some of the conceptual ground without siding with either camp. These attempts are of significance, and it is important to learn from each rationale and ethical standpoint in order to reach a sounder approach to this theme.
Ultimately, the aim of the analysis developed within the RoboLaw project is to go beyond sterile disputes between supporters and detractors of human enhancement. We advocate a reoriented debate that can complement and inform ongoing work in the science and society field of research. As shown in the deliverable, the cornerstone of this theoretical pattern is the shift from single and divided fields of knowledge to a comprehensive framework. Such a framework can foster a broad cultural and political debate with the aim of encouraging a normative approach that deals with threats, challenges and opportunities. Despite the plurality of approaches and definitions, there are strong indications that more and more effective enhancement technologies will be developed in the near future, and that some existing lines of research and development already have the potential to significantly alter the human body, cognition and social competence. The discrepancy between what can concretely be defined as HE today and the visions associated with emerging technologies and their impact raises fundamental questions concerning our views on human nature and the human condition, as well as on the future of our societies. Consequently, the real challenge is not to find a new definition. It is to connect every possible definition and approach with a democratic process of evaluation and deliberation. Following such a process, it should be possible to reach agreement that goes beyond aggregations of supporters and detractors of HE. Human enhancement technologies feed new hopes and create social expectations, make available new tools both for individuals and society, raise threats and concerns, and present risks that need to be dealt with in a public discussion and not only in academic or expert circles. Experts alone are partial actors in a successful decision-making process. Only a public, democratic debate can develop policies which allow for an ethical use of enhancing technologies that improve the human condition.
This renewed approach should be informed by a broad interpretation of the notion of “human enhancement” in order to include not just the interventions themselves, but the social and cultural dimension as well.
Following this preliminary investigation, the concept of human enhancement through robotics, the profound consequences for our conception of what it means to be human, and how technologies may be designed in such a way as to safeguard core social values have been studied in-depth in the context of a workshop held in Tilburg on 15-16 November 2012 (D5.2). The multidisciplinary contributions to this workshop were published in a volume Beyond Therapy v. Enhancement? Multidisciplinary analyses of a heated debate, edited by Federica Lucivero and Anton Vedder. This volume has been published in the RoboLaw Series.
A final conference on “Investigating the Relationship between Future Technologies, Self and Society” was held in Pisa on 28-29 November (D5.4). This conference was the closing initiative of WP5 and was directed at investigating the impact of emerging technologies on our identity, from both a philosophical and an anthropological point of view. Stretching even further the interdisciplinary methodology that characterizes RoboLaw, and taking into account the convergence and overlapping of emerging technologies, we organized this international conference by combining multiple perspectives and experts in the fields of medicine, neuroscience, engineering, philosophy and ethics, in order to reflect on a general theoretical framework that can accompany the advancement of new technologies and the transformations they bring about. Particular attention was given to understanding in which ways who we are, and how we conceive of ourselves and others, may be altered by the spread of enhancing technologies, and what consequences open up when the more or less conventional gap between natural and artificial, already difficult to establish, begins to shift or fade. At the same time, we tried to better understand how new technologies contribute to understanding human behavior, mind, and society. An overall concern and driving theme of the conference was also to understand whether robotic and other technologies for human enhancement could be developed and applied while safeguarding core social and ethical values and remaining consistent with the principles enshrined in the European Charter of Fundamental Rights. The main purpose of the conference was to investigate, discuss and assess the relationship between new and future technologies and legal, ethical, and moral norms, by focusing on their contribution to the development of self-conception and society. For this reason, the selected speakers were international experts in the fields of medicine, neuroscience, engineering, philosophy and ethics, chosen in order to open broad starting points for discussion of the scientific theme.
All research activities carried out in the various WPs aimed at building the basis for the exercise undertaken in WP 6. The Guidelines on Regulating Robotics (D6.2) aim at offering an in-depth analysis of the ethical and legal issues raised by robotic applications and at providing European and national regulators with guidelines to deal with them. Building on the comprehensive analysis of the current state of the art of regulation pertaining to robotics in different legal systems carried out throughout the research, this document tries to respond to the question whether new regulation is needed or whether the problems posed by robotic technologies can be handled within the framework of existing laws. An introduction to the relationship between regulation and robotics clarifies where the need for a legal appraisal of, and intervention in, robotics comes from and explains how the RoboLaw project has responded to it. The paths explored and the lines of investigation undertaken in the project are synthesized, in order to highlight the driving themes that cut across the entire research.
But the field of robotics is too broad, and the range of legislative domains affected by robotics too wide, to be able to say that robotics by and large can be accommodated within existing legal frameworks or rather requires a lex robotica. For some types of applications and some regulatory domains, it might be useful to consider creating new, fine-grained rules that are specifically tailored to the robotics at issue, while for other types of robotics, and for many regulatory fields, robotics can likely be regulated well by smart adaptation of existing laws. Given this multifaceted quality of robotic applications, a case-by-case approach was adopted in the subsequent analysis and four diverse technological applications were examined in depth.
In order to conduct an ethical analysis of different robotic technologies, a methodology was also developed (D5.5) that started from the following questions:
• What type of ethical analysis is most appropriate to support the main objective of RoboLaw?
• What are the features that such an ethical analysis should have?
• Which elements in the existing ethical reflection on robotics can be usefully implemented in this approach?
• How should such ethical analysis be conducted?

The approach that is embraced in D5.5 is context-sensitive, aware of the mutual shaping of technology, society and morality, and able to situate the ethical analysis in the context of political deliberation. It refuses to offer overall normative frameworks and instead is sensitive to specific situated contexts. Building on the pluralist ideal of democratic deliberation wherein different participants discuss their interests and normative positions, the chosen approach takes into account both the public debate on robotics and the academic literature since they highlight different aspects and perspectives. It seeks to improve the governance of robotics by creating better conditions for a democratic deliberation. The process of governance is improved by including a broad range of stakeholders in the discussion about new technologies and investigating their normative assumptions and positions. Moreover, because of the exploratory character of RoboLaw the ethical analysis should be able to address short- and medium-term challenges posed by emerging technologies. This approach provides a map of the ethical debate on robotic applications, both at the level of public discussion and of academic debate. In order to do so, it acknowledges the importance of design as a relevant aspect of the ethical analysis. Uncovering the values inscribed in technological artefacts is the first step in order to initiate a discussion on their moral and ethical meaning, as well as to reflect on how specific values should be embedded in the design of these artefacts. The ethical analysis that is adopted builds on philosophy of technology as well as on science and technology studies, determining how robots mediate the understanding and knowledge of the world.
The four robotic applications selected were: automated cars, surgical robots, prostheses, and care robots.
TILT undertook the study on automated cars, addressing both ethical and legal questions. The ethical analysis concluded that the value of safety is important, but it is not the only value and, in some cases, may not be the most important one. The aesthetics of driving, comfort, freedom, privacy and equality are important values as well, and in some cases societies and individuals may want to give them priority under certain conditions. The legal analysis concentrated on the question whether liability law (especially product liability law) has a chilling effect on the introduction of automated cars to the market and what can be done to dampen such an effect. It was found that the existence of a chilling effect is a distinct possibility. It was further suggested that insurance can play an important role in reducing a chilling effect without compromising the incentive function of product liability law (spurring manufacturers to produce safe vehicles). It was furthermore found that the Swedish system of first-party insurance deserves serious consideration as an addition to, or even a replacement of, the third-party insurance systems that are prevalent in other EU member states.
SSSA and LMU conducted the study on surgical robots. After a general overview of the several domains of robot-assisted surgery, the investigation zooms in on the paradigmatic da Vinci System to show its most critical legal aspects. Directive 93/42/EEC on medical devices already provides for strict control of the safety and performance of surgical robots, but it completely ignores the assessment of the technical training and capability of the surgeon to operate with the robot. Therefore, a first point to regulate is the professional requirements that surgeons must meet in order to use robots; in particular, specific training should be introduced which allows surgeons to obtain a European certificate of their ability to perform computer-assisted operations, after passing a final exam. Civil liability rules that should be adopted were then considered, together with issues of access to digital data, privacy, informed consent, and telemedicine as a domain where surgical robots may be deployed in the future. Some final reflections were dedicated to the economic perspective and to the lack of effective competition in this market, in order to propose the adoption of policies that favour the spread of this technology.
Furthermore, SSSA developed the two remaining case-studies on prosthetics and care robots.
Prostheses represent one of the most relevant kinds of robotic applications currently being researched, offering a person with a disability that requires the replacement of a limb the chance to come close to, if not equal, the functioning of the missing or functionally impaired natural limb. The desirability of such a technology with respect to the substantial improvement of the living conditions of people with disabilities is self-evident, enabling a much more independent participation in society for a large share of the population. Robotic prostheses raise different legal issues: they prompt new considerations on what the human body is and which parts constitute it, and ultimately even on what being human means; in a not so distant future they may allow us to acquire and develop new capabilities, or to substantially improve existing ones beyond the limits that today appear inherent to human nature; but prostheses are also products, even if of a peculiar kind. This consideration leads to the application of certain liability rules, namely the defective products directive, which provides specific incentives to both the industry and the users of such devices, incentives that may, however, be less than optimal. Arguments have been advanced to justify the adoption of alternative paradigms, addressing both issues of safety and compensation.
Finally, since the analysis largely refers to a type of technology that will only become available in the medium term, the issue of the regulation of human enhancement has been taken into consideration as a horizon that some of the suggested policy choices may influence, creating different path dependencies.
Care robots are to be employed for the assistance and care of elderly and disabled people. Robotic applications can potentially support the care system in providing more effective and personalized services. Robots could assist the elderly and disabled in many ways, for example by administering food or medication, helping them to bathe, or serving as a memory aid. Care robots could also provide benefits to caregivers and family members. However, the role that robots could play in the field of personal care is far from being perceived uniformly. The risk of affecting the dignity of elderly and disabled people instead of improving their quality of life, through isolation or even dependence on technological devices, is just one of the reasons given for opposing the possible use of robots in the care system. Despite differing opinions among experts, academic researchers, care associations and the public, interest in the opportunities offered by care robots is clearly increasing, together with the rise in the number of companies producing robots for care. Thus, several social, ethical and legal issues need to be addressed in order to protect the rights of potential users and to develop a specific market. Ethical issues have been analyzed to identify the human values and the areas of social life which are challenged the most. The issues of acceptance, safety and liability were also explored from a legal point of view, and the analysis led to the formulation of policy recommendations.
Three out of the four technologies examined define a cluster of robotic applications that, for several different reasons, fits very well with the project’s basic aim and rationale. All of them are triggered by policy tendencies and social phenomena observed in this sector, to which efforts in robotics research are trying to respond: improvement of the quality of medical treatment (through high-precision surgery), attempts to increase the independence and social inclusion of vulnerable persons such as the elderly and disabled, and population ageing and demographic change, with an expected shortage of (informal and professional) caregivers. These challenges fall well within the EU’s bundle of competences and sectors of intervention: the protection of health and the right of access to healthcare are fundamental principles established in the EU Charter of Fundamental Rights (art. 35), and art. 152 of the EU Treaty identifies the promotion of public health as a core area for EU action. Therefore the improvement of medical products and procedures, and of safety and efficiency in healthcare delivery, are suitable objectives of EU policies, to be accomplished also by means of technological progress, particularly in robotics. The free flow of goods in the EU market might also be compromised by different regulations in different countries. At the same time, robotics for healthcare is a domain that more than others requires regulatory intervention and where legislation promoting innovation is called for.
In conclusion, the bundle of demographic, social, technological, economic and political factors that orients the development of this sector also makes it a special case to be analyzed from a legal perspective, and one that qualifies as an effective context for the implementation of EU policy action.
The main findings of the research on robotics and regulation have led to the elaboration of a journal article incorporating lessons learned within the project (D6.3). The title of the article is “Regulatory challenges of robotics: some guidelines for addressing legal and ethical issues”. The paper addresses regulatory dilemmas on the basis of the well-known distinction between regulatory modalities originally introduced by Lessig: law, the market, social norms, and code. It discusses dilemmas such as technology neutrality versus legal certainty, using cases studied in the project, and illustrates the co-evolution of law, technology, and normative outlooks. Next, general mechanisms for dealing with the dilemmas are discussed, including enforcing the framework of rights and values, developing soft law, responsible research and innovation, smart regulation and developing procedures. The paper has been accepted for publication in the journal Law, Innovation and Technology (LIT) and will appear in the second issue of 2015. A draft of the submitted article is available in D6.3 Regulatory challenges of robotics.

In the context of WP7 a wide range of dissemination activities were carried out.

The project website has been online since Month 1, as reported in MS 21 Dissemination Plan. The website has been maintained and updated during the course of the project and will not be discontinued after the project lifetime. It is available at http://www.robolaw.eu.
LMU also has developed a website (www.robolaw.philosophie.uni-muenchen.de) where it states the main aims of RoboLaw and informs various interest groups about new developments.

Different categories of RoboLaw stakeholders were identified. They include: Robotics producers (in the fields of assistive, industrial, medical, and logistics robotics); Employers (i.e. owners of factories in which robots are deployed); Insurance companies (in the fields of assistive, industrial, medical and logistics robotics); Trade unions; User associations (e.g. caregivers, disabled people, cyborgs, consumers, etc.); Professional users (at the moment this category consists of practitioners in robotic surgery only); Standards associations (i.e. ISO); Policy makers; Academics and researchers, mainly in the fields of law, robotics engineering and philosophy; and Defense and military. Online questionnaires were formulated and sent to all these stakeholder categories.
Selected categories of stakeholders were involved mainly through invitations to participate in meetings and through requests to fill in online questionnaires. Three public meetings with selected stakeholders were organised during the project lifetime: the 1st stakeholder meeting on June 25th, 2013 in Reading, UK; the 2nd stakeholder meeting on October 29th, 2013 in Munich, Germany, attended by insurance and automotive companies, cyborg associations, trade unions and practitioners in robotic surgery; and the 3rd stakeholder meeting on January 15th, 2014 in Pisa, Italy. The goal of the stakeholder meetings was to discuss the priorities, in terms of technologies, services, or research applications, that would need to be addressed from the legal and ethical standpoints. The inputs gathered during the meetings have been included in the recommendations that the RoboLaw project has prepared for the European Commission.

The RoboLaw project was widely disseminated through different media to a heterogeneous range of stakeholders, including civil society. Among the most remarkable results are:
• 16 articles published in scientific journals
• 4 special issues edited in scientific journals
• 11 articles in edited books
• 26 presentations given at scientific conferences
• 4 books edited in the RoboLaw Series
• 1 PhD dissertation.
As regards dissemination actions targeted at civil society, it is worth pointing out the articles that appeared in The Economist and Wired and the prize won by the project coordinator E. Palmerini at the 2013 World Technology Award in the Law category, which was widely covered in the Italian press.
Among the follow-up results obtained are:
• The special issue of the journal Humana.mente (http://www.humanamente.eu) entitled ‘Reframing the Debate on Human Enhancement’. In March 2013, HUB launched a call for papers and invited several distinguished contributors. In this way, HUB was able to attract well-known specialists from philosophy and engineering who work on human enhancement. HUB then conferred the leadership upon LMU, which finished editing the issue and conducted the peer-review process at month 27. This publication is intended as a follow-up to the main results on human enhancement presented in D5.1. It is open access and can be downloaded for free at http://www.humanamente.eu.
• A conference on “Roboethics in Film” at LMU Munich’s Institute for Theatre Studies and the related proceedings volume Roboethics in Film (edited by Fiorella Battaglia and Nathalie Weidenfeld), published with Pisa University Press.
• The special issue edited by Alberto Pirni and Antonio Carnevale, entitled ‘Investigating the Relationship between Future Technologies, Self and Society’, published in Politica & Società, number 2, May-August 2014.

Potential Impact:
Robotics represents one of the most relevant technological innovations of the current century, a revolution capable of radically modifying existing economies in at least a twofold sense. On the one hand, those countries that invest more than others in such applications, developing a strong industry in the field, will soon acquire a relevant strategic edge over latecomers and other players, who will nonetheless be consuming similar devices. On the other hand, the advent of such technologies will profoundly modify, in every country where the use of robotic applications is allowed, the societal structure, changing the competences and skills required and ultimately reshaping the overall labour market and income distribution.
Different aspects profoundly influence such an outcome in an aggregated fashion that cannot be easily disentangled; yet many such factors can be traced back to the legal system, which most certainly determines the conditions for the development of a market.
Even a substantial investment in research, if not coupled with adequate normative strategies, may prove insufficient, delaying the emergence of some technologies and eventually impairing the development of a corresponding industry. A transparent regulatory environment is a key element for the development of a robotics and autonomous systems market, where products and services can be incubated, tested in real environments and eventually launched. Even worse than the lack of legal clarity could be the effect of actions, or policy choices, aimed at preventing the development or diffusion of a given application considered not sufficiently safe or reliable. Such initiatives may in fact only impair the development of the supply side of the economy for that specific device, at the same time reducing the possibility to effectively influence the way that product is conceived, designed and finally distributed onto the market, and thus the standards it needs to conform to. In the end the robot will be out there, in the market, in the streets, in our homes, but it will adhere to the standards, values and social fabric of those countries, like China or South Korea, that are proactively confronting the economic opportunities offered by the advancements in robotics.
These considerations lead to the claim that Europe should play a relevant political role in the development of these technologies and in the regulatory process pertaining to innovation in robotics. The ambition of the European Union to promote innovation in the internal market and foster competitiveness makes robotics a strategic sector, to which the European institutions are devoting considerable attention. At the same time, research and industrial production in the domain of robotics have to grow in accordance with the complementary objective enshrined in the European legal system, that of being an area of freedom, security and justice. The competing goals of protecting consumers, and more generally end-users, from harm and of fostering innovation therefore have to become embedded in the regulatory endeavour and in the innovation process itself.
The investigation carried out within the RoboLaw project has focused precisely on this multi-layered interplay between robotics and regulation, providing policy-makers with an analysis of the ethical and legal implications of developments in robotics. The research starts from a request, coming from the actors who operate in this sector, for a legal framework that can accompany the development of robotics. Researchers and manufacturers who work on robotics at both the experimental and the industrial level claim that they cannot properly appraise the risks and duties entwined in their activities until a clear analysis of this interplay has been made. The research has explored both the formal and the substantive aspects of the relationship between robotics and regulation. On the one hand, it has focused on the different legal tools that can be employed to regulate technology in general, and robotic technologies in particular; on the other hand, the contents of the relevant existing legislation have been examined, with the aim of verifying whether they already provide a systematic legal framework or whether other forms of regulation are needed, notwithstanding the adaptability and flexibility of the available rules.
Many regulatory issues raise the question of when regulators can or should intervene if they want or ought to regulate. An intrinsic dilemma exists in technology regulation: controlling a technology is difficult in its early stages because not enough is known of its possible or probable effects, and it is also difficult once the technology is well developed because by then intervention is expensive, drastic, or impossible because the technology cannot be reversed. We therefore need ways to regulate in early stages when it is still possible, albeit in the dark, to regulate, which calls for innovative approaches. One such approach that features increasingly on the policy and academic agendas is responsible research and innovation (RRI). The RoboLaw project has had the EU policies on Responsible Research and Innovation as a constant point of reference. The main concerns and meanings entrenched in this formula have been respected and applied from both a methodological and a substantive point of view.
On the one hand, an interdisciplinary approach has been a constant feature of the study since its conception, attained by integrating various disciplines and competences in the project's structure and team. The diverse expertise of the researchers involved in the consortium (lawyers, philosophers, S&T studies experts, engineers) has led to a constant interaction aimed at exchanging information, opinions and perspectives, in order for the suggested rules to be sound from a technical point of view, informed by an appraisal of the ethical issues at stake, and compliant with a general frame of reference given by common fundamental values and more specific principles of the applicable law. On the other hand, multiple stakeholders have been involved in the research process with the goal of including all relevant perspectives, including those of operators in the robotics industry and market, potential or actual users (e.g. persons with disabilities, surgeons, care associations), insurers, and society at large. Dissemination activities throughout the project have also been carried out with the aim of gathering inputs and views from the public, as a way of ensuring public participation and integrating social views into the policy-making and regulatory processes.
Moreover, an ethical appraisal of various robotic technologies in their potential scenarios of deployment has been carried out as a core research exercise within the project. Any legal evaluation should in fact take into account the problems that this perspective highlights, so that it can inform the fashion in which new rules are tailored or existing ones are to be interpreted. In other words, the ethics of technology, and of robotics in particular, has not been considered an autonomous exercise deferred to experts of the field, but an integral part of the analysis leading to distinctive features of the proposed solutions.
From the substantive point of view, RoboLaw has embraced the two requirements of ethical acceptability and orientation towards societal needs that constitute the pillars of the concept of RRI. Not only do robotic products and applications have to comply with the core values embraced in the European context by the constitutional traditions of Member States and positively affirmed by the Charter of Fundamental Rights, but particular attention, and possibly a special legal status in some respects, should also be given to those technologies that respond to societal needs and therefore contribute to achieving normative goals such as equality of opportunities, justice and solidarity, and to improving the quality of life of European citizens, especially the more deprived and vulnerable.
The regulatory challenges posed by advancements in robotics have been addressed comprehensively, and the project has tried to provide answers with regard to: the legal tools that best suit the goal of regulating technology; the type of ethical analysis that should be conducted on technological innovation, and the methodology to apply, in order to align the research with the EU policy on Responsible Research and Innovation (RRI); and the contents of regulation, with special attention to the need to protect the fundamental rights of European citizens.
The main outcome of the research is the Guidelines on Regulating Robotics (D6.2), which provide an instrument for understanding the role that European institutions and national authorities could play in the development of an efficient market for robotic products and services, with special attention to the domain of robotics in healthcare.
List of Websites:
Website address: http://www.robolaw.eu
Relevant contact details:

Prof. Erica Palmerini
Project Coordinator
The DIRPOLIS Institute
Scuola Superiore Sant'Anna, Pisa, Italy
Tel: +39 050 883111
Fax: +39 050 883225
Email: erica.palmerini@sssup.it

Dr Pericle Salvini
Project manager
The BioRobotics Institute
Scuola Superiore Sant'Anna, Pisa, Italy
Tel: +39 050 883422
Fax: +39 050 883497
Email: p.salvini@sssup.it