The European Commission has decided to withdraw its proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence, known as the AI Liability Directive. This initiative aimed to assist individuals in identifying liable entities and proving claims in cases involving complex AI systems. The withdrawal is part of the Commission's efforts to reduce regulatory burdens on the private sector, prioritizing competitiveness and innovation.
The Centre for Democracy & Technology Europe (CDT Europe) expressed disappointment at the decision, viewing it as a setback for victims seeking justice for harms caused by AI. Laura Lazaro Cabrera, CDT Europe's Counsel and Director of the Equity and Data Programme, stated: "The AI Liability Directive was set to put forward a framework to ease the burden on individuals to pursue justice when wronged by an AI system. Its withdrawal is a departure from European values of transparency and accountability as well as fundamental rights."
Lazaro Cabrera further elaborated on the challenges posed by AI systems: "Harms caused by AI systems and models are notoriously difficult to prove, owing to their complexity and lack of transparency." She emphasized that, under current conditions, individuals have limited options for redress.
While acknowledging that the proposal had room for improvement, CDT Europe stresses the importance of regulation that addresses the barriers faced by individuals harmed by AI, and highlights concerns over the limited remedies available under the existing AI Act.
The European Parliament had recently signalled interest in advancing discussions on the file, and a report by the European Parliamentary Research Service had recommended continuing with the proposal. CDT Europe therefore finds it concerning that the withdrawal occurred while parliamentary discussions were ongoing and before the public consultation initiated by the file's rapporteur had concluded.
CDT Europe remains committed to advocating for the preservation of fundamental rights, including effective redress mechanisms for harms caused by artificial intelligence.