1. Introduction: What is AI liability and why is it important?
When artificial intelligence makes mistakes, various parties may be liable: AI developers, users, manufacturers or service providers. In this guide, you will learn who is liable when, which laws apply and how you can limit liability risks.
Artificial intelligence (AI) plays a greater role in our society than ever before. This technology not only brings many benefits, but also legal challenges, especially in the areas of regulation, liability and ethical considerations. From medical diagnoses to financial decisions, AI systems are increasingly taking over tasks from humans. But what happens if an AI system causes damage? Who is liable for the consequences? In healthcare, for example, the use of AI raises pressing questions about who is responsible for misdiagnoses or incorrect treatments.
This question is becoming increasingly relevant as AI is transforming sectors such as healthcare, transport and finance. While AI offers enormous opportunities, it also brings new liability risks that challenge the current legal framework. The legal context for AI liability is complex and requires clarification of existing legislation. The European Union is working on harmonising liability law for AI technologies, with the European Parliament playing an important role as the initiator of new regulations.
In this article, we discuss the complete legal landscape surrounding AI liability, from contractual liability to product liability, practical examples from Dutch case law, and concrete steps to limit your liability risks. Despite rapid developments, the application of AI in many sectors is still in its infancy, which means that regulations and practical implementation are still evolving.

2. Understanding AI liability: Key concepts and definitions
2.1 Key definitions
AI liability is the legal responsibility for damage caused by the use of AI systems. Artificial intelligence is legally defined as systems that autonomously interpret data, learn from that data and then execute decisions or actions without direct human control. The assessment of 'defectiveness' for AI products must therefore take self-learning capabilities into account, including the possibility that a product only becomes defective after it has been placed on the market. The original Product Liability Directive from 1985 was not designed with such products in mind; its revision, adopted in 2024, explicitly brings software, including AI systems, within its scope.
Synonyms and related terminology:
- Strict liability: liability without proof of fault
- Product liability: liability of manufacturers for defective products
- Contractual liability: damage arising from a contractual relationship
- Qualitative liability (kwalitatieve aansprakelijkheid): liability attached to a particular capacity, such as producer or possessor of a thing, irrespective of personal fault
Contractual agreements largely determine the distribution of responsibilities and liabilities between parties. The possibilities for recovering damages based on contractual liability are highly dependent on the specific role of AI.
Pro tip: Understand what AI means legally before looking at liability. AI systems differ from traditional software in their self-learning capabilities and autonomous decision-making.
2.2 Relationships between concepts
AI liability relates to various legal concepts and legislation:
Simple relationship map:
AI error → damage occurs → causal link → liability is established → compensation follows
AI Act → safety obligations → non-compliance → increased liability
Product liability → defective product → producer liable → compensation without proof of fault
The Civil Code (Article 6:162 BW) regulates tortious acts, while the Product Liability Directive specifically applies to defective products. The new AI Act adds additional obligations for high-risk AI systems. On 28 September 2022, the European Commission presented two proposals aimed at making it easier to hold manufacturers liable for damage caused by AI: a revised Product Liability Directive and an AI Liability Directive. The revised Product Liability Directive has since been adopted, while the proposal for an AI Liability Directive was later withdrawn.
3. Why AI liability is crucial in the digital economy
Clear liability rules for AI are essential for public acceptance and responsible innovation. Without them, victims of AI errors may be left without recourse or compensation, while developers and users face uncertainty about their legal risks and become reluctant to innovate.
Concrete benefits of a clear legal framework:
- Protection for victims of AI errors
- Incentive for safe development of new technologies
- Trust among consumers and businesses
- Level playing field for AI developers
According to research by the European Commission, unclear AI liability can slow down innovation. The number of AI-related damage claims doubled between 2020 and 2022, especially in the financial sector and healthcare. When a party suffers damage because an AI system makes mistakes, this can lead to complex damage claims and legal proceedings.
Statistical data:
- 60% of businesses are hesitant to implement AI due to liability risks
- Medical AI errors account for 40% of all AI-related claims
- An estimated 15% of all AI applications qualify as high-risk systems under the EU AI Act and are subject to its mandatory compliance obligations
Users can expect AI systems to function safely and reliably, comparable to human performance or other technologies. Nevertheless, the risk remains that AI will make mistakes with a major impact, which underlines the importance of clear liability rules.
4. Overview of liable parties and legal instruments
| Liable Party | Type of Liability | Legal Basis | Conditions |
|---|---|---|---|
| AI developer | Product liability | Product Liability Directive | Defective product placed on the market; the AI software qualifies as a 'product' |
| User/Provider | Unlawful act (tort) | Art. 6:162 Civil Code | Attributable unlawful act, damage and a causal link |
| Manufacturer | Strict liability | National legislation | No proof of fault required |
| Service provider | Contractual liability | Contractual provisions | Demonstrable breach of contract; clear agreements on the use of AI determine the scope |
Applicable legislation per situation:
- Medical AI: AI Act + medical liability regulations; sector-specific legislation such as the Network and Information Systems Security Act (Wbni) may also apply
- Autonomous vehicles: Road Traffic Act + product liability
- Financial AI: Wft + algorithm governance rules
- General AI applications: Civil Code + AI Act
When classifying AI systems, the question arises as to whether AI software can be considered movable property, given its intangible nature and complex functionality.
Legal services in the field of AI increasingly fall under the emerging field of artificial intelligence law.
5. AI systems: Types, operation and relevance to liability
AI systems are one of the most influential new technologies of our time and are playing an increasingly important role in a wide range of sectors. These systems range from generative AI, which can independently create text, images or other content, to intangible software that performs complex analyses or makes decisions based on large amounts of data. What characterises these systems is their ability to learn from data and operate autonomously, often without direct human control.
AI systems are highly relevant to liability because they can cause damage in unique ways. The European Commission therefore proposed an AI Liability Directive specifically addressing liability for damage caused by AI systems, but that proposal has since been withdrawn. For the time being, we therefore have to rely on existing national legislation and the Product Liability Directive, which was originally designed for tangible products but is now also applied to intangible software and AI applications.
One of the biggest challenges in AI liability is proving the causal link between the AI system and the damage. Due to the self-learning capabilities and often limited transparency of AI systems, it is not always clear whether an error or defect can be directly attributed to the system. This makes it difficult to determine whether a product is defective within the meaning of the Product Liability Directive, especially in the case of generative AI and other forms of intangible software.
Qualitative liability for defective products remains an important principle. According to the Product Liability Directive, a product must meet the safety standards that can be expected. However, with AI systems, it is not always clear what exactly those expectations are, especially if the system continues to develop after it has been put into service. The way in which the AI system is designed, tested and maintained, the instructions for use and warnings provided, and the extent to which users are aware of the risks are all relevant factors in assessing liability.
The case law of the European Court of Justice and the Supreme Court on product liability offers some guidance, but its application to AI systems has not yet been fully established. Existing directives and national legislation sometimes fall short in addressing the unique risks of AI, creating a need for new legislation and clear case law.
In short, the development and application of AI systems offer enormous opportunities, but also bring new legal challenges. It is essential that the legal framework keeps pace with technological developments so that liability for damage caused by AI systems can be regulated in a fair and effective manner. Until then, it remains important for companies and users of AI to be alert to how they deploy AI systems and to carefully manage the risks.
6. Step-by-step guide to determining AI liability
Step 1: Identify the AI error and damage
Before you begin, determine:
- Which specific AI decision or output caused damage?
- Is there direct financial, physical or immaterial damage?
- When did the damage occur and under what circumstances?
Checklist for determining damage:
□ Document the AI decision or erroneous output
□ Collect evidence of the damage suffered
□ Establish a timeline of events
□ Identify all involved parties
□ Preserve relevant contracts and terms of use
Example scenario: A recruitment AI wrongfully rejects candidates based on discriminatory criteria, causing economic damage to job seekers.
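The checklist above starts with documenting the AI decision and establishing a timeline. As a purely illustrative sketch of what such documentation could look like in practice, the snippet below logs each AI decision as a structured record in an append-only audit file; all field names and the file format are hypothetical choices, not requirements from the AI Act or any other regulation.

```python
# Illustrative sketch only: field names and structure are hypothetical,
# not prescribed by the AI Act or any other regulation.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class AIDecisionRecord:
    """One logged AI decision, kept as potential evidence in liability questions."""
    system_name: str            # which AI system produced the output
    system_version: str         # model/software version in use at the time
    timestamp: str              # when the decision was made (UTC, ISO 8601)
    input_summary: str          # what the system was asked to assess
    output: str                 # the decision or output it produced
    human_reviewer: str | None  # who reviewed it, if anyone (supports the "human in the loop" argument)


def log_decision(record: AIDecisionRecord, path: str = "ai_decision_log.jsonl") -> None:
    """Append the record to a JSON Lines audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Example based on the recruitment scenario above
log_decision(AIDecisionRecord(
    system_name="recruitment-screening-ai",
    system_version="2.3.1",
    timestamp=datetime.now(timezone.utc).isoformat(),
    input_summary="CV of applicant 4821",
    output="rejected",
    human_reviewer=None,
))
```

Keeping such records per decision, including version and reviewer information, makes it far easier later to reconstruct the timeline and to show whether a human made the final call.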
Step 2: Determine the applicable form of liability
Choose contractual liability when:
- There is a contractual relationship between the parties
- The AI supplier has provided specific guarantees
- Use falls within agreed parameters
Choose product liability when:
- The AI system falls under the definition of ‘product’
- The product was defective when it was placed on the market
- Damage was caused by the defective product
Choose tort liability when:
- No contractual relationship exists
- AI user acted negligently
- There was a breach of the duty of care
Recommended legal instruments:
- Consult specialised AI lawyers
- Use AI Act compliance checklists
- Consult with insurance companies about coverage
Step 3: Gather evidence and establish liability
Evidence gathering for AI liability:
- Technical evidence: logs, algorithm documentation, training data
- Process evidence: user instructions, implementation procedures
- Evidence of damage: financial impact, medical reports, expert opinions
Metrics for successful liability claims:
- Completeness of documentation (at least 80% of relevant data)
- Strength of causal link (scientifically substantiated)
- Clarity of extent of damage (quantified impact)
The causal link between the AI error and the damage is crucial. With complex AI systems ("black box AI"), proving it can be technically challenging; the proposed AI Liability Directive sought to ease this burden with a rebuttable presumption of a causal link.
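The metrics above include a documentation-completeness threshold of at least 80% of relevant data. As a minimal, purely illustrative sketch of how a claim file could be checked against such a threshold, the snippet below compares the evidence actually collected with a hypothetical checklist of required items; the item names and the 80% cut-off mirror this guide's rule of thumb and are not a legal standard.

```python
# Illustrative sketch only: checklist items and the 80% threshold are a rule of
# thumb from this guide, not a legal standard.
REQUIRED_EVIDENCE = [
    "decision_log",        # logs of the AI decision or output
    "algorithm_docs",      # algorithm and model documentation
    "training_data_info",  # description of the training data used
    "user_instructions",   # instructions and warnings given to users
    "damage_report",       # quantified damage / expert opinion
    "timeline",            # timeline of events
]


def documentation_completeness(available: set[str]) -> float:
    """Return the fraction of required evidence items that are present."""
    present = [item for item in REQUIRED_EVIDENCE if item in available]
    return len(present) / len(REQUIRED_EVIDENCE)


collected = {"decision_log", "timeline", "damage_report", "user_instructions", "algorithm_docs"}
score = documentation_completeness(collected)
print(f"Completeness: {score:.0%}")  # 83% -> meets the 80% rule of thumb above
```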
7. Common errors in AI liability
Mistake 1: Unclear contractual provisions on AI use
Many agreements do not contain specific clauses on AI liability, creating uncertainty about who is responsible in the event of errors.
Mistake 2: Insufficient documentation of AI decisions
Companies often fail to keep adequate logs and decision-making processes, making it difficult to prove or refute liability.
Mistake 3: Ignoring EU AI Act obligations
Organisations working with high-risk AI systems often overlook the new obligations regarding transparency, documentation, and risk management prescribed by the AI Act.
Pro tip: Avoid these mistakes by establishing clear AI governance in advance, amending contracts with explicit AI clauses, and proactively organising compliance with the AI Act. Invest in good documentation and traceability of AI decisions.
8. Practical example: Medical AI error in Dutch hospital
Case study: “Hospital X avoided liability for diagnostic AI error through appropriate contractual agreements”
Initial situation: A radiology AI system in a Dutch hospital missed an early-stage cancer diagnosis, resulting in delayed treatment for a patient. The patient claimed damages from both the hospital and the AI supplier.
Steps taken:
- Contract analysis: The hospital had explicitly stipulated that the AI was only to be used for support purposes
- Process evidence: Documentation showed that a radiologist made the final decision
- Technical investigation: AI supplier proved that the system functioned within specifications
- Legal strategy: Appeal to standard medical procedures and human responsibility
Final results:
- Liability: Ultimately assigned to the treating radiologist
- Compensation: Covered by medical liability insurance
- Contractual impact: AI supplier exempt from claims
- Process optimisation: Improved protocols for AI support
| Aspect | Before incident | After incident |
|---|---|---|
| AI role | Supportive | Explicitly supportive |
| Responsibility | Unclear | Clear with doctor |
| Documentation | Basic | Comprehensive |
| Staff training | Limited | Intensive |
Legal lessons: This case demonstrates the importance of clear contractual agreements and maintaining human ultimate responsibility for critical AI applications in the medical sector.
9. Frequently asked questions about AI liability
Question 1: Who is liable if self-driving cars cause an accident? This depends on the level of automation and the circumstances. In the case of fully autonomous self-driving cars, which can operate without human intervention, the manufacturer is normally liable, while in the case of semi-autonomous systems, the driver remains ultimately responsible for adequate supervision.
Question 2: Do different rules apply to medical AI than to commercial AI? Yes, medical AI is subject to specific regulations such as the MDR (Medical Device Regulation) and has stricter safety requirements. The AI Act also categorises medical AI as “high risk”, which entails additional obligations for transparency and documentation.
Question 3: What does the EU AI Act mean for liability? The AI Act introduces new duty of care obligations for high-risk AI systems. Violation of these obligations may lead to increased liability. It also increases transparency requirements, which may facilitate the presentation of evidence in the event of damage.
Question 4: How do I prove that AI software is defective? You must demonstrate that the AI system does not meet reasonable safety expectations. This often requires technical expertise and documentation of training data, algorithms and test results. The presumption of causality proposed in the (now withdrawn) AI Liability Directive would have simplified this process.

10. Conclusion: Key points for AI liability
5 crucial points for AI liability in practice:
- Multiple parties may be liable: from developers to end users, depending on their role and control over the AI system
- Contractual agreements are essential: clear provisions on AI use prevent legal ambiguities
- Documentation is crucial: good logging and traceability of AI decisions strengthens your legal position
- AI Act compliance is mandatory: new European regulations create additional care obligations for high-risk systems
- Insurance offers protection: specific AI liability insurance covers risks that fall outside traditional policies
The legal landscape surrounding AI liability is evolving rapidly. With the withdrawal of the AI Liability Directive and the entry into force of the AI Act, it remains important to monitor legal developments and proactively organise compliance and risk management.
Next steps:
- Have your AI contracts legally reviewed and amended
- Implement AI governance procedures in accordance with the AI Act
- Investigate specific AI liability insurance for your organisation
- Consult specialised lawyers for complex AI implementations
By proactively addressing AI liability, you can reap the benefits of artificial intelligence while managing legal risks.
