The European Union is on Track to Introduce New Liability Rules on Products and Artificial Intelligence to Protect Consumers and Encourage Innovation
The Proposal for a Directive of the European Parliament and of the Council on Adapting Non-Contractual Civil Liability Rules to Artificial Intelligence ("Proposal") was published on the website of the European Commission ("Commission") on 28 September 2022.
This Proposal aims to address, in particular, legal uncertainty and legal fragmentation, which hinder the development of the internal market and thus amount to significant obstacles to cross-border trade in AI-enabled products and services.
What are the causes of the Proposal?
- Current national liability rules, in particular those based on fault, are not suited to handling claims for damage caused by AI-enabled products and services. In particular, when claiming compensation, victims could incur very high up-front costs and face significantly longer legal proceedings, compared to cases not involving AI.
- The current EU rules on product liability, based on the strict liability of manufacturers, are almost 40 years old. Modern rules on liability are important for green and digital transformation, specifically to adapt to new technologies, like Artificial Intelligence. This is about providing legal certainty for businesses and ensuring consumers are well protected in case something goes wrong.
- If a victim brings a claim, national courts, faced with the specific characteristics of AI, may adapt the way in which they apply existing rules on an ad hoc basis to come to a just result for the victim. This creates legal uncertainty.
- Furthermore, public concern has been expressed as to how legislative action on adapting liability rules taken by individual Member States, and the ensuing fragmentation, would affect the costs for companies, especially SMEs, and hinder the uptake of AI Union-wide.
- In addition, there are concrete signs that a number of Member States are considering unilateral legislative measures to address the specific challenges posed by AI with respect to liability. For example, AI strategies adopted in Czechia[1], Italy[2], Malta[3], Poland[4], and Portugal[5] mention initiatives to clarify liability. Given the large divergence between Member States’ existing civil liability rules, it is likely that any national AI-specific measure on liability would follow existing different national approaches and therefore increase fragmentation.
The open public consultation informing the Impact Assessment of this proposal confirmed the problems explained above. Therefore, the goal of the Proposal is to promote the adoption of trustworthy AI in order to reap its full benefits for the internal market. It accomplishes this by ensuring that victims of damage caused by AI receive the same level of protection as victims of damage caused by products in general. It also prevents the emergence of fragmented AI-specific adaptations of national civil liability laws and reduces the legal uncertainty faced by businesses developing or using AI regarding their potential exposure to liability.
What are the benefits of the Proposal?
- Consistency with existing policy provisions in the policy area
This proposal is part of a package of measures to support the roll-out of AI in Europe by fostering excellence and trust. For the proposal to be effective in this area, it must work in combination with the AI Act[6], the revision of sectoral and horizontal product safety rules, and EU liability rules for AI systems.
- Consistency with other Union policies
The proposal supports the promotion of technology that benefits people, one of the three key pillars of the policy orientation and objectives stated in the Communication "Shaping Europe's Digital Future," and is thus consistent with the Union's overall digital strategy.
In this context, the Proposal aims to increase the uptake of AI by fostering public trust in the technology. It will create synergies with and complement the [Cyber Resilience Act][7], which also seeks to improve user and business protection and promote trust in products with digital components by decreasing cyber vulnerabilities.
- Main economic, social, and environmental impacts
The economic study[8] underpinning the Impact Assessment of this proposal concluded – as a conservative estimate – that targeted harmonization measures on civil liability for AI would have a positive impact of 5 to 7% on the production value of relevant cross-border trade as compared to the baseline scenario. Based on the overall value of the EU AI market affected by the liability-related problems addressed by this Proposal, it is estimated that the Proposal will generate an additional market value of between ca. EUR 500 million and ca. EUR 1.1 billion.
The Proposal's social impacts include increased societal acceptance of AI technologies and improved access to justice. It will help establish an effective civil liability regime that is adapted to the specific characteristics of AI and allows justified damage compensation claims to succeed. All businesses in the AI value chain would benefit from this greater societal trust, as it will accelerate the uptake of AI.
As regards environmental impacts, the Proposal is also expected to contribute to achieving the related Sustainable Development Goals (SDGs) and targets. The uptake of AI applications is beneficial for the environment. For instance, AI systems used in process optimization make processes less wasteful (e.g. by reducing the amount of fertilizer and pesticide needed, decreasing water consumption for the same output, etc.). The Proposal would also have a positive impact on the SDGs because effective legislation on transparency, accountability, and fundamental rights will direct AI's potential to benefit individuals and society towards achieving the SDGs.
What is the legal basis of the Proposal?
The legal basis for the proposal is Article 114 TFEU, which provides for the adoption of measures to ensure the establishment and functioning of the internal market.
Will the Proposal be able to avoid legal uncertainty?
The objectives of this proposal cannot be adequately achieved at a national level because emerging divergent national rules would increase legal uncertainty and fragmentation, creating obstacles to the rollout of AI-enabled products and services across the internal market. Legal uncertainty would particularly affect companies active cross-border by imposing the need for additional legal information/representation, risk management costs, and foregone revenue. At the same time, differing national rules on compensation claims for damage caused by AI would increase transaction costs for businesses, especially for cross-border trade, entailing significant internal market barriers. Further, legal uncertainty and fragmentation disproportionately affect start-ups and SMEs, which account for most companies and the major share of investments in the relevant markets.

In the absence of EU harmonized rules for compensating damage caused by AI systems, providers, operators, and users of AI systems on the one hand and injured persons on the other hand would be faced with 27 different liability regimes, leading to different levels of protection and distorted competition among businesses from the different Member States.

Harmonized measures at the EU level would significantly improve conditions for the rollout and development of AI technologies in the internal market by preventing fragmentation and increasing legal certainty. This added value would be generated notably through reduced fragmentation and increased legal certainty regarding stakeholders’ liability exposure. Moreover, only EU action can consistently achieve the desired effect of promoting consumer trust in AI-enabled products and services by preventing liability gaps linked to the specific characteristics of AI across the internal market. This would ensure a consistent (minimum) level of protection for all victims (individuals and companies) and consistent incentives to prevent damage and ensure accountability.
Will the Proposal be proportional?
The Proposal simplifies the legal process for victims when it comes to proving that someone's fault led to damage, by introducing two main features. First, in circumstances where a relevant fault has been established and a causal link to the AI performance seems reasonably likely, the so-called 'presumption of causality' will address the difficulties experienced by victims in having to explain in detail how harm was caused by a specific fault or omission, which can be particularly hard when trying to understand and navigate complex AI systems. Second, the Proposal gives victims more tools to seek legal redress by introducing a right of access to evidence held by companies and suppliers in cases involving high-risk AI.
Final evaluation results of the Proposal
To guarantee broad engagement of stakeholders throughout the policy cycle of this proposal, a thorough consultation approach was put into place. The method for consultation was based on both open consultations and a number of focused consultations (webinars, bilateral conversations with businesses, and various organizations). In total, 233 responses were received from respondents from 21 Member States, as well as from third countries. Overall, the majority of stakeholders confirmed the problems with the burden of proof, legal uncertainty, and fragmentation and supported action at the EU level. Therefore, the preferred policy option was developed and refined in light of feedback received from stakeholders throughout the impact assessment process to strike a balance between the needs expressed and concerns raised by all relevant stakeholder groups.
Does the Proposal contain provisions on fundamental rights?
One of the most important functions of civil liability rules is to ensure that victims of damage can claim compensation. By guaranteeing effective compensation, these rules contribute to the protection of the right to an effective remedy and a fair trial (Article 47 of the EU Charter of Fundamental Rights, referred to below as 'the Charter') while also giving potentially liable persons an incentive to prevent damage, in order to avoid liability. With this proposal, the Commission aims to ensure that victims of damage caused by AI have an equivalent level of protection under civil liability rules as victims of damage caused without the involvement of AI. The proposal will enable effective private enforcement of fundamental rights and preserve the right to an effective remedy where AI-specific risks have materialized. In particular, the proposal will help protect fundamental rights, such as the right to life (Article 2 of the Charter), the right to physical and mental integrity (Article 3), and the right to property (Article 17). In addition, depending on each Member State’s civil law system and traditions, victims will be able to claim compensation for damage to other legal interests, such as violations of personal dignity (Articles 1 and 4 of the Charter), respect for private and family life (Article 7), the right to equality (Article 20) and non-discrimination (Article 21).
Does the Proposal have implications for the European Union budget?
This proposal will not have implications for the budget of the European Union.
Does the Proposal include monitoring mechanisms?
This proposal puts forward a staged approach. To ensure that sufficient evidence is available for the targeted review in the second stage, the Commission will draw up a monitoring plan, detailing how and how often data and other necessary evidence will be collected. The monitoring mechanism could cover the following types of data and evidence:
- reporting and information sharing by the Member States regarding the application of measures to ease the burden of proof in national judicial or out-of-court settlement procedures;
- information collected by the Commission or market surveillance authorities under the AI Act (in particular Article 62) or other relevant instruments;
- information and analyses supporting the evaluation of the AI Act and the reports to be prepared by the Commission on the implementation of that Act;
- information and analyses supporting the assessment of relevant future policy measures under the ‘old approach’ safety legislation to ensure that products placed on the Union market meet high health, safety, and environmental requirements;
- information and analyses supporting the Commission’s report on the application of the Motor Insurance Directive to technological developments (in particular autonomous and semi-autonomous vehicles) pursuant to its Article 28c(2)(a).
What are the detailed explanations of the specific provisions in the Proposal?
- Article 1- Subject Matter and Scope: Article 1 sets out the subject matter and scope of this Proposal: it applies to non-contractual civil claims for damages caused by an AI system, where such claims are brought under fault-based liability regimes. The measures provided in this Proposal can fit seamlessly into existing civil liability systems, as they reflect an approach that does not touch the definition of fundamental concepts such as "fault" or "harm". Other than in respect of the presumptions it lays down, this Proposal does not affect Union or national rules that determine, for example, which party bears the burden of proof, what degree of certainty is required regarding the standard of proof, or how fault is defined.
- Article 2- Definitions: To ensure consistency, the definitions in Article 2 follow those of the AI Act. Article 2(6)(b) states that claims for damages can be brought not only by the injured person but also by persons who have succeeded to, or have been subrogated to, the rights of the injured person. Subrogation is when a third party (such as an insurance company) assumes another party's legal right to collect a debt or compensation; one person is thus entitled to enforce the rights of another for their own benefit. This also covers the heirs of a deceased victim. In addition, Article 2(6)(c) provides that an action for damages may be brought by someone acting on behalf of one or more injured parties, in accordance with Union or national law.
- Article 3- Disclosure of Evidence: Article 3(1) of the Proposal provides that a court may order the disclosure of relevant evidence about specific high-risk AI systems suspected of having caused damage. Pursuant to Article 3(2), the claimant may request disclosure of evidence from providers or users who are not defendants only if all proportionate attempts to gather the evidence from the defendant have been unsuccessful. For judicial instruments to be effective, Article 3(3) of the Proposal provides that a court may also order that such evidence be preserved. As noted in the first subparagraph of Article 3(4), the court may order such disclosure only to the extent necessary to sustain the claim, given that in the event of damage involving AI, the information may be critical evidence for the injured person's claim. Article 3(5) introduces a rebuttable presumption of non-compliance with a relevant duty of care where the defendant fails to comply with an order to disclose or preserve evidence.
- Article 4- Presumption of Causal Link in the Case of Fault: It may be difficult for claimants to establish a causal link between the defendant's non-compliance with a duty of care (fault) and the output produced by the AI system, or the failure of the AI system to produce an output, that gave rise to the relevant damage. For this reason, Article 4(1) establishes a targeted, rebuttable presumption of causality regarding this causal link. Such a presumption is the least burdensome measure to address the need for fair compensation of the victim. Paragraphs (2) and (3) differentiate between claims brought against the provider of a high-risk AI system, or against a person subject to the provider's obligations under the AI Act, on the one hand, and claims brought against the user of such systems, on the other. In this regard, they follow the relevant provisions and terms of the AI Act. In the case of high-risk AI systems as defined by the AI Act, Article 4(4) creates an exception from the presumption of causality where the defendant demonstrates that sufficient evidence and expertise are reasonably accessible for the claimant to prove the causal link. In the case of AI systems that are not high-risk, Article 4(5) establishes a condition for the applicability of the presumption of causality: it applies only where the court considers it excessively difficult for the claimant to prove the causal link. Where the defendant used the AI system in the course of a personal, non-professional activity, Article 4(6) provides that the presumption of causality should apply only if the defendant materially interfered with the conditions of operation of the AI system, or if the defendant was required and able to determine the conditions of operation of the AI system and failed to do so. Article 4(7) states that the defendant has the right to rebut the presumption of causality based on Article 4(1).
- Article 5- Evaluation and Targeted Review: Article 5 provides for a targeted review, supported by a monitoring programme that will provide the Commission with information on incidents involving AI systems. The targeted review will consider whether additional measures are required, such as a strict liability regime and/or compulsory insurance.
- Article 7- Transposition: the Member States, when informing the Commission of national transposition measures to comply with this Proposal, must provide sufficiently clear and precise explanatory documents and specify, for each provision of this Proposal, the national provision(s) by which it is transposed.
What does the Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Proposal) contain?
Briefly, the points in the Proposal are as follows:
- (1) Artificial Intelligence (‘AI’) is a set of enabling technologies that can contribute to a wide array of benefits across the entire spectrum of the economy and society. It has a large potential for technological progress and allows new business models in many sectors of the digital economy.
- (2) At the same time, depending on the circumstances of its specific application and use, AI can generate risks and harm interests and rights that are protected by Union or national law.
- (3) When an injured person seeks compensation for damage suffered, Member States’ general fault-based liability rules usually require that person to prove a negligent or intentionally damaging act or omission (‘fault’) by the person potentially liable for that damage, as well as a causal link between that fault and the relevant damage.
- (4) Where the specific characteristics of AI make it excessively difficult for injured persons to meet this burden of proof, the level of redress afforded by national civil liability rules may be lower than in cases where technologies other than AI are involved in causing damage. Such compensation gaps may contribute to a lower level of societal acceptance of AI and trust in AI-enabled products and services.
- (5) To reap the economic and societal benefits of AI and promote the transition to the digital economy, it is necessary to adapt in a targeted manner certain national civil liability rules to those specific characteristics of certain AI systems.
- (6) Interested stakeholders – injured persons suffering damage, potentially liable persons, insurers – face legal uncertainty as to how national courts, when confronted with the specific challenges of AI, might apply the existing liability rules in individual cases in order to achieve just results.
- (7) The purpose of this Directive is to contribute to the proper functioning of the internal market by harmonizing certain national non-contractual fault-based liability rules, so as to ensure that persons claiming compensation for damage caused to them by an AI system enjoy a level of protection equivalent to that enjoyed by persons claiming compensation for damage caused without the involvement of an AI system.
- (8) The objective of ensuring legal certainty and preventing compensation gaps in cases where AI systems are involved can thus be better achieved at the Union level. Therefore, the Union may adopt measures in accordance with the principle of subsidiarity as set out in Article 5 TEU.
- (9) It is, therefore, necessary to harmonize in a targeted manner specific aspects of fault-based liability rules at the Union level.
- (10) To ensure proportionality, it is appropriate to harmonize in a targeted manner only those fault-based liability rules that govern the burden of proof for persons claiming compensation for damage caused by AI systems.
- (11) The laws of the Member States concerning the liability of producers for damage caused by the defectiveness of their products are already harmonized at the Union level by Council Directive 85/374/EEC.
- (12) [The Digital Services Act (DSA)] fully harmonizes the rules applicable to providers of intermediary services in the internal market, covering the societal risks stemming from the services offered by those providers, including as regards the AI systems they use.
- (13) Other than in respect of the presumptions it lays down, this Directive does not harmonize national laws regarding which party has the burden of proof or which degree of certainty is required as regards the standard of proof.
- (14) This Directive should follow a minimum harmonization approach.
- (15) Consistency with [the AI Act] should also be ensured. It is therefore appropriate for this Directive to use the same definitions in respect of AI systems, providers, and users.
- (16) Access to information about specific high-risk AI systems that are suspected of having caused damage is an important factor to ascertain whether to claim compensation and to substantiate claims for compensation.
- (17) The large number of people usually involved in the design, development, deployment, and operation of high-risk AI systems, makes it difficult for injured persons to identify the person potentially liable for damage caused and to prove the conditions for a claim for damages.
- (18) The limitation of disclosure of evidence as regards high-risk AI systems is consistent with [the AI Act], which provides certain specific documentation, record keeping and information obligations for operators involved in the design, development and deployment of high-risk AI systems.
- (19) National courts should be able, in the course of civil proceedings, to order the disclosure or preservation of relevant evidence related to the damage caused by high-risk AI systems from persons who are already under an obligation to document or record information pursuant to [the AI Act], be they providers, persons under the same obligations as providers, or users of an AI system, either as defendants or third parties to the claim.
- (20) To maintain the balance between the interests of the parties involved in the claim for damages and of the third parties concerned, the courts should order the disclosure of evidence only where this is necessary and proportionate for supporting the claim or potential claim for damages.
- (21) While national courts have the means of enforcing their orders for disclosure through various measures, any such enforcement measures could delay claims for damages and thus potentially create additional expenses for the litigants.
- (22) In order to address the difficulty of proving that a specific input for which the potentially liable person is responsible caused a specific AI system output that led to the damage at stake, it is appropriate to provide, under certain conditions, for a presumption of causality. While in a fault-based claim the claimant usually has to prove the damage, the human act or omission constituting the fault of the defendant, and the causal link between the two, this Directive does not harmonize the conditions under which national courts establish fault.
- (23) Such a fault can be established in respect of non-compliance with Union rules that specifically regulate high-risk AI systems, such as the requirements introduced for certain high-risk AI systems by [the AI Act], requirements that may be introduced by future sectoral legislation for other high-risk AI systems according to [Article 2(2) of the AI Act], or duties of care which are linked to certain activities and which are applicable irrespective of whether AI is used for that activity.
- (24) In areas not harmonized by Union law, national law continues to apply and the fault is established under the applicable national law.
- (25) Even when a fault consisting of non-compliance with a duty of care directly intended to protect against the damage that occurred is established, not every fault should lead to the application of the rebuttable presumption linking it to the output of the AI.
- (26) This Directive covers the fault constituting non-compliance with certain listed requirements laid down in Chapters 2 and 3 of [the AI Act] for providers and users of high-risk AI systems, the non-compliance with which can lead, under certain conditions, to a presumption of causality.
- (27) While the specific characteristics of certain AI systems, like autonomy and opacity, could make it excessively difficult for the claimant to meet the burden of proof, there could be situations where such difficulties do not exist because there could be sufficient evidence and expertise available to the claimant to prove the causal link.
- (28) The presumption of causality could also apply to AI systems that are not high-risk AI systems because there could be excessive difficulties of proof for the claimant.
- (29) The application of the presumption of causality is meant to ensure for the injured person a similar level of protection as for situations where AI is not involved and where causality may therefore be easier to prove.
- (30) Since this Directive introduces a rebuttable presumption, the defendant should be able to rebut it, in particular by showing that its fault could not have caused the damage.
- (31) It is necessary to provide for a review of this Directive [five years] after the end of the transposition period. In particular, that review should examine whether there is a need to create no-fault liability rules for claims against the operator, as long as these are not already covered by other Union liability rules, in particular Directive 85/374/EEC, combined with mandatory insurance for the operation of certain AI systems, as suggested by the European Parliament.
- (32) Given the need to make adaptations to national civil liability and procedural rules to foster the rolling-out of AI-enabled products and services under beneficial internal market conditions, societal acceptance, and consumer trust in AI technology and the justice system, it is appropriate to set a deadline of not later than [two years after the entry into force] of this Directive for the Member States to adopt the necessary transposition measures.
- (33) In accordance with the Joint Political Declaration of 28 September 2011 of the Member States and the Commission on explanatory documents, Member States have undertaken to accompany, in justified cases, the notification of their transposition measures with one or more documents explaining the relationship between the components of a Directive and the corresponding parts of national transposition instruments.
With regard to this Directive, the legislator considers the transmission of such documents to be justified and has adopted this Directive.
You can find the full text of the Proposal here.
Kind regards,
Zumbul Attorneys at Law
[1] National Artificial Intelligence Strategy of the Czech Republic, 2019: https://www.mpo.cz/assets/en/guidepost/for-the-media/press-releases/2019/5/NAIS_eng_web.pdf; AI Watch, ‘National strategies on Artificial Intelligence – A European perspective’, 2021 edition – a JRC-OECD report: https://op.europa.eu/en/publication-detail/-/publication/619fd0b5-d3ca-11eb-895a01aa75ed71a1, p. 41.
[2] 2025 Strategia per l’innovazione tecnologica e la digitalizzazione del Paese: https://assets.innovazione.gov.it/1610546390-midbook2025.pdf.
[3] Deloitte, Study to support the Commission’s IA on liability for artificial intelligence, 2021, p. 96.
[4] See Polityka Rozwoju Sztucznej Inteligencji w Polsce na lata 2019–2027 (Policy for the Development of Artificial Intelligence in Poland for 2019–2027) (www.gov.pl/attachment/0aa51cd5-b934-4bcb-8660- bfecb20ea2a9), pp. 102-103.
[5] AI Portugal 2030: https://www.incode2030.gov.pt/sites/default/files/julho_incode_brochura.pdf; AI Watch, op. cit., p. 113.
[6] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act), COM(2021) 206 final.
[7] Proposal for a Regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements - COM(2022) 454 final.
[8] Deloitte, Study to support the Commission’s IA on liability for artificial intelligence, 2021 (‘economic study’).