Artificial Intelligence

Experts discuss compulsory insurance for robot damage

Developments in artificial intelligence require new provisions for liability and insurance models. At least, that is the view of the majority of experts in the Bundestag (German Parliament) Committee on the Digital Agenda.

29 Mar. 2017

The attendees of the expert meeting that took place on March 22 determined that there is a need to regulate liability for damage caused by AI systems. Data protection and privacy still give them headaches. At the same time, they argued that the opportunities artificial intelligence presents should be appreciated and promoted. Matthias Spielkamp of AlgorithmWatch and Enno Park of Cyborgs e.V. made the question of liability the core of their statements. Park, for example, considers it possible that new compulsory insurance policies will be necessary. Declaring a machine a legal entity, however, went too far in the opinion of mathematician Prof. Raúl Rojas of the Free University Berlin: machines cannot bear responsibility.

The majority of the experts supported regulating AI at the European level. According to Rojas, this task could be taken on by a European agency for impact assessment covering the entire IT sector. According to Spielkamp, national solutions could be useful when coupled with international standards. The EU is already working on robot legislation to clarify ethical as well as liability issues. For example, a kill switch is being discussed so that a machine could be shut down in an emergency.
