Robots that run on autonomous code or are powered by Artificial Intelligence could be governed by a “special legal status”, the European Union (EU) council has recommended in a draft paper. Robot owners would need to obtain registration and insurance, and the robots themselves would have to follow basic ‘civil laws’ if they fall under the category of “smart autonomous robots”, should the draft paper on robotics issued by the EU become a reality. The paper notes that in 2014 “sales rose by 29%, the highest year-on-year increase ever, with automotive parts suppliers and the electrical/electronics industry being the main drivers of the growth” and that “annual patent filings for robotics technology have tripled over the last decade”.
Robots could be given the status of “electronic persons” with predefined rights and obligations, the council said. Compensation for potential damages caused by robots, the creation of a “European Agency” to govern robotics, and ethical principles applicable to robotics research are among the recommendations the council makes in the draft paper on robotics.
According to the council, a robot can be classed as “smart” if it:
- acquires autonomy through sensors and/or by exchanging data with its environment (inter-connectivity), and trades and analyses that data;
- is self-learning (an optional criterion);
- has a physical support;
- adapts its behaviour and actions to its environment.
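The draft’s criteria amount to a simple checklist: three mandatory conditions plus one optional one. As a minimal sketch only, the check could be expressed as a predicate like the following; all names here are illustrative assumptions, since the paper defines the criteria in prose, not code.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    # Field names are hypothetical labels for the draft's criteria.
    interconnectivity: bool      # acquires autonomy via sensors / data exchange
    analyses_data: bool          # trades and analyses that data
    self_learning: bool          # optional criterion in the draft
    physical_support: bool       # has a physical support
    adapts_to_environment: bool  # adapts behaviour and actions to its environment

def is_smart_autonomous(robot: Robot) -> bool:
    """Apply the mandatory criteria; self-learning is explicitly optional."""
    return (robot.interconnectivity
            and robot.analyses_data
            and robot.physical_support
            and robot.adapts_to_environment)

# A self-driving delivery robot that senses, analyses, and adapts would qualify
# even without self-learning, since that criterion is optional.
courier = Robot(True, True, False, True, True)
print(is_smart_autonomous(courier))
```

Note that under this reading, a purely virtual AI with no physical support would fall outside the “smart autonomous robot” category, whatever its other capabilities.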
In such cases, the robot could be subject to certain legal responsibilities and liabilities, the council added. It called for further discussion to create an ethical framework based on the EU’s Charter of Fundamental Rights and on privacy and social responsibility laws. The council said that in case future robots are developed to become self-aware, the immediate set of laws governing robotics could be adapted from Asimov’s laws of robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
4. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
The council said in the paper that:
“guiding ethical framework should be based on the principles of beneficence, non-maleficence and autonomy, as well as on the principles enshrined in the EU Charter of Fundamental Rights, such as human dignity and human rights, equality, justice and equity, non-discrimination and non-stigmatisation, autonomy and individual responsibility, informed consent, privacy and social responsibility, and on existing ethical practices and codes”.
The paper also identifies several areas of concern:
- Terminator, rise of the machines: “AI could surpass human intellectual capacity in a manner which, if not prepared for, could pose a challenge to humanity’s capacity to control its own creation and, consequently, perhaps also to its capacity to be in charge of its own destiny and to ensure the survival of the species.”
- Physical safety: “when a robot’s code proves fallible, and the potential consequences of system failure or hacking of connected robots and robotic systems at a time when increasingly autonomous applications come into use or are impending whether it be in relation to cars and drones or to care robots and robots used for maintaining public order and policing”
- Privacy: “aspects of data ownership and the protection of personal data and privacy might still need to be addressed, given that applications and appliances will communicate with each other and with databases without humans intervening or possibly without their even being aware of what is going on”
- Employment: “the future of employment and the viability of social security systems if the current basis of taxation is maintained, creating the potential for increased inequality in the distribution of wealth and influence”
- Dignity: “…if and when robots replace human care and companionship, and whereas questions of human dignity also can arise in the context of ‘repairing’ or enhancing human beings”
Framework of regulation for ‘smart’ robots
The Council recommended the following principles to govern the development of robotics and AI for “civil use”:
- Liabilities of researchers and engineers: Robotics engineers and designers would have to follow certain “robotics protocols” while carrying out research. These protocols would be defined in terms of standards taken from existing ‘ethical codes of conduct’ governing scientific research and from fundamental rights. Robotics engineers would be accountable for the “social, environmental and human health impacts” that robotics may impose on present and future generations.
- Licensing for users: Users would have to register (or license) and obtain insurance for their robots, similar to automobile regulations, so that either the manufacturer or the user could be held liable to compensate for losses the robot causes. Beyond this, the current legal framework could be tweaked to hold users or manufacturers responsible for damages caused by a robot, if it is proven that the damages were caused wilfully or through negligence and can be traced back to the user or manufacturer.
- Civil law liability: Compensation for, and the extent of, damage caused by a robot to a third party would be assessed legally, without any limitations imposed by future laws. Damages caused by robots (other than damage to property) should in no way be treated differently just because they are caused by “non-human agents”.
- Intellectual property, interoperability, access to code: Certain types of works or content produced by smart robots could be regarded as “own intellectual creation” and criteria to determine such creations could be drawn up in the future. Networks connecting multiple ‘smart’ robots should be accessible by humans and other devices as well, while all data generated by the robot and its source code should be easily accessible in case of accidents, system failures, etc.
- Disclosure of robots: Users, researchers, or anyone else who owns a smart robot would be obliged to disclose: i) the number of such robots in use, and ii) any revenue made by businesses, or personal savings made, through the use of robots in place of humans.
- Research Ethics Committee: A committee including experts in the field and industry stakeholders would be responsible for reviewing all research carried out on robotics. The committee would ensure that all research follows pre-defined ethics, while updating the ethical codes as the technology develops. It would also monitor all research organisations and assess any risks arising from particular types of research. For this, the research ethics committee “could be appropriately located within organisational structures” so that all research can be reviewed transparently.
- Differentiating use cases: The manufacturer and a regulatory body would determine what each kind of robot is designed to do, i.e. whether it serves “military” or “civil” purposes.
Download the draft paper here.