Let's be clear from the start: under current Dutch and EU law, an algorithm cannot be found criminally responsible for a crime. It's a non-starter. Core legal concepts like criminal intent (mens rea) and legal personhood are reserved for humans and, in certain situations, corporations.
However, that simple answer is just the beginning of a much more complex conversation. The actions of an algorithm are becoming absolutely central to proving the guilt—or innocence—of the people who create, deploy, and oversee them.
Can an Algorithm Be Guilty of a Crime?
When we talk about AI in a criminal law context, the real question is whether an algorithm can end up in the defendant's chair. Legally speaking, the answer today is a firm no. No matter how sophisticated it is, an algorithm simply lacks the fundamental traits required to stand trial. It has no consciousness, no personal assets to seize, and no liberty to take away.
This legal reality forces the spotlight to shift from the tool to the user. It's helpful to think of an advanced AI system as a highly complex but ultimately inanimate instrument—not unlike a self-driving car or an automated factory machine. If the machine causes harm, the law doesn't prosecute the machine; it investigates the humans behind it.
The Hurdles of Legal Personhood and Intent
Criminal law is built on two pillars that AI simply cannot satisfy: legal personhood and criminal intent. For any entity to face prosecution, the law must recognise it as a "person," which means either a natural person (a human) or a legal person (like a company). AI systems don't fit into either category.
Even more critically, most serious crimes require proof of mens rea—a "guilty mind." This is about proving that the defendant acted with a specific mental state, whether it was intention, knowledge, or recklessness. An algorithm runs on code and data; it doesn't form intentions or grasp the moral wrongfulness of its actions.
The central difficulty arises from a system’s capacity to select and act independently, thereby inserting a non-human agent between human intent and the resulting harm. This disrupts the conventional model of attributing responsibility in criminal law.
Put simply, applying centuries-old legal principles to autonomous technology raises significant hurdles. The table below summarises the core problem.
Current Status of Algorithmic Criminal Liability
| Legal Concept | Application to Humans | Application to AI Systems |
|---|---|---|
| Legal Personhood | Humans are "natural persons" with rights and duties under the law. Corporations can be "legal persons." | An AI system is considered property or a tool. It has no independent legal standing. |
| Criminal Intent (Mens Rea) | Prosecutors must prove a "guilty mind," such as intent, recklessness, or knowledge of wrongdoing. | An algorithm operates based on its programming and data inputs. It lacks consciousness, beliefs, or desires. |
| Physical Act (Actus Reus) | A person must have committed a voluntary physical act (or a culpable omission). | An AI's "actions" are outputs of code. They are not voluntary acts in the human sense. |
| Punishment | Sanctions include imprisonment, fines, or community service, aimed at retribution and deterrence. | An AI cannot be imprisoned or fined. "Punishing" the code (e.g., deleting it) doesn't fit legal frameworks. |
As you can see, there's a fundamental mismatch. The entire structure of criminal law is built around human agency, which AI lacks.
Attributed Liability as the Legal Framework
So, because an algorithm can't be found guilty, Dutch law falls back on the concept of attributed liability. This simply means that responsibility for the AI's actions is assigned—or attributed—to a human or corporate actor. In this scenario, the AI’s output becomes a critical piece of evidence that points to the actions or negligence of its human controllers.
This approach isn't revolutionary. It directly mirrors how the law handles crimes committed using other complex tools. For instance, if a company knowingly sells a dangerously defective product that causes injury, the company and its executives are held liable, not the product itself.
The principles guiding this are consistent with established legal doctrines. For legal professionals navigating this space, a solid grasp of existing frameworks is the essential starting point. Our detailed guide on the criminal procedure in the Netherlands offers a great primer on how these cases move from investigation to verdict. The challenge now isn't inventing new laws from scratch, but adapting these proven principles to the unique complexities of autonomous systems.
How Dutch Law Assigns Blame for AI-Facilitated Crimes
Since an algorithm itself cannot be put on trial, the Dutch legal system turns to existing, human-focused doctrines to assign responsibility where it's due. The main legal tool for this task is the doctrine of functional perpetration (functioneel daderschap).
This powerful principle allows a court to hold a person or company criminally liable for an act they didn't physically carry out, as long as they were effectively in control of the situation.
Think of it this way: a construction firm's director doesn't personally operate every crane on site. But if they knowingly order an operator to use a faulty crane and an accident happens, the director is on the hook. The same logic applies when the "crane" is a sophisticated AI system. The focus shifts from what the algorithm did to the human decisions that allowed it to happen.
This is a critical concept for anyone working with AI, as it gives prosecutors a direct path to link an AI's harmful output back to a person or a corporation. It neatly sidesteps the impossible task of proving an algorithm’s “intent” and instead zeroes in on the intent and negligence of its human masters.
The Two Tests of Functional Perpetration
For a prosecutor to successfully argue functional perpetration in court, they have to satisfy two key tests. These criteria are the pillars that determine whether a person or company can be seen as the "functional" author of a crime committed through an AI.
- Power of Control (Beschikkingsmacht): Did the individual or company have the actual power to determine whether the AI's criminal behaviour would take place? This is all about authority and oversight—things like setting the AI’s operating rules, having the ability to shut it down, or defining the parameters that guide its decisions.
- Acceptance (Aanvaarding): Did the individual or company accept the risk that a criminal act might happen? Crucially, this doesn't require direct intent. It can be proven if they knew there was a chance of a harmful outcome but consciously chose not to put sufficient safeguards in place.
These two pillars, control and acceptance, form the bedrock of how Dutch law answers the question "Can an algorithm be partly responsible?" The answer is a clear no, but its human controller can be held wholly responsible.
A Practical Scenario: Autonomous Drone Injury
Let's apply this to a real-world scenario. Imagine a logistics company deploys a fleet of autonomous delivery drones. One drone, guided by an AI navigation system, malfunctions over a crowded public square and causes a serious injury.
A prosecutor building a case against the company would lean heavily on the functional perpetration framework:
- Proving Control: They would demonstrate that the company had total command over the drone fleet. The company set the delivery routes, managed the software updates, and held the "kill switch" to ground the drones at any moment.
- Proving Acceptance: Evidence might come to light showing the company was aware its AI had a 5% error rate in dense urban areas but decided to deploy it anyway to cut costs. By operating the system despite this known risk, the company effectively accepted the possibility of a harmful outcome.
Under this doctrine, the company becomes the perpetrator of the crime (e.g., grievous bodily harm by negligence). The AI is merely the instrument; the company's decisions to deploy and not adequately supervise it constitute the criminal act.
Corporate Liability and Gross Negligence
This concept of functional perpetration extends directly to corporate criminal liability. An organisation can be held accountable if the criminal conduct can be reasonably attributed to it. This often comes into play in cases of gross negligence, where a company’s policies—or lack thereof—created an environment where an AI-driven crime was not just possible, but foreseeable.
While the legal principles are well-established, their application to AI is still taking shape. In the Netherlands, as of 2025, there are no published court rulings specifically on criminal liability for harms caused solely by an AI system's autonomous decision. This shows that the legal field is still playing catch-up with technology.
For now, prosecutors adapt these general doctrines, holding individuals liable if they controlled the AI and accepted its potential for wrongful actions, such as in cases of negligent homicide resulting from reckless AI operation. You can read more about the current state of AI in Dutch law and its implications.
For legal counsel, this reality puts the focus squarely on one thing: demonstrating responsible human oversight and a proactive approach to risk management. Proving a lack of control or arguing that a harmful outcome was genuinely unforeseeable will be central to defending against such charges.
The EU AI Act's Impact on Criminal Liability
While Dutch domestic law like functioneel daderschap provides a framework for attributing blame, the landscape is being dramatically reshaped by a much broader initiative: the European Union's Artificial Intelligence Act. This isn't just another piece of regulation; it's a comprehensive risk-based framework designed to govern how AI systems are developed and deployed across the single market.
For legal professionals and businesses, getting to grips with the AI Act is crucial because it creates new compliance duties that have a direct bearing on criminal liability. A failure to adhere to its strict requirements can be used by prosecutors as powerful evidence of negligence or recklessness, forming the basis for criminal charges when an AI system causes harm. This legislation shifts the conversation from merely reacting to harm to proactively preventing it.
The AI Act establishes a clear hierarchy, categorising AI systems based on their potential to harm safety or fundamental rights. This structure is the key to understanding its connection to criminal law.
Understanding the Risk Categories
The Act’s most significant impact comes from its tiered approach. It doesn't treat all AI the same. Instead, it sorts systems into categories, each with different legal obligations.
- Unacceptable Risk: These are systems considered so threatening to fundamental rights that they are banned outright. Think government-run social scoring systems or real-time biometric identification in public spaces by law enforcement (with narrow exceptions).
- High-Risk: This is the most critical category for criminal law. It covers AI used in sensitive areas like critical infrastructure, medical devices, and, importantly, law enforcement and the administration of justice. Predictive policing tools and AI-driven sentencing software fall squarely into this group.
- Limited Risk: These systems, such as chatbots, face lighter transparency obligations. Users must simply be made aware that they are interacting with an AI.
- Minimal Risk: This category includes most AI applications, like spam filters or AI in video games, which are largely unregulated.
Deploying a system in the "unacceptable risk" category is a direct violation that could easily support a criminal negligence case if it leads to harm. The core legal battleground, however, will be around the high-risk systems.
High-Risk Systems and Criminal Negligence
For high-risk AI, the Act imposes stringent requirements that function as a legal standard of care. These obligations aren't suggestions; they are mandatory duties for developers and deployers.
Key requirements for high-risk systems include robust data governance to prevent bias, complete technical documentation, full transparency for users, ensuring human oversight is possible at all times, and maintaining high levels of accuracy and cybersecurity.
Imagine a company deploys a predictive policing algorithm without properly vetting the training data for racial bias—a clear violation of the Act’s data governance rules. If this biased system leads to a wrongful arrest that results in harm, a prosecutor has a ready-made argument. They can point to the non-compliance with the AI Act as direct evidence of the company’s failure to take reasonable care, making a charge of corporate negligence much easier to prove.
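To make "vetting the training data" concrete, here is a minimal sketch of the kind of pre-deployment bias check the Act's data governance rules point toward. The column names, the fairness metric, and the 5% tolerance are illustrative assumptions, not requirements taken from the Act itself.

```python
# Minimal sketch of a pre-deployment bias check on training data.
# Column names ("flagged", "nationality") and the 5% tolerance are
# hypothetical; real audits use richer fairness metrics plus legal review.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, label_col: str, group_col: str) -> float:
    """Largest difference in positive-label rates between any two groups."""
    rates = df.groupby(group_col)[label_col].mean()
    return float(rates.max() - rates.min())

# Example: a toy dataset where one group is flagged far more often.
data = pd.DataFrame({
    "nationality": ["A", "A", "A", "B", "B", "B"],
    "flagged":     [1,   0,   0,   1,   1,   0],
})

gap = demographic_parity_gap(data, label_col="flagged", group_col="nationality")
if gap > 0.05:  # tolerance chosen purely for illustration
    print(f"Flag rate differs by {gap:.0%} across groups: "
          "investigate and document before deployment.")
```

Documenting that a check like this was run, and what was done about the result, is exactly the kind of evidence of reasonable care a defence would want to point to.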
The EU-wide Artificial Intelligence Act, which became applicable in the Netherlands in February 2025, fundamentally shapes this legal landscape. Non-compliance can result in massive administrative fines of up to €35 million or 7% of total annual turnover. The Dutch government has mandated that organisations identify and phase out any banned systems, reflecting serious concerns over flawed AI seen in wrongful arrests from facial recognition errors. As legal scholars advocate for greater rights for defendants to challenge AI evidence, the Act is paving the way for more rigorous judicial scrutiny. For more detail on these new rules, you can explore the AI Act prohibitions that came into force.
Lessons from the Dutch Childcare Benefits Scandal
While legal theories give us a framework, nothing illustrates the real-world stakes of algorithmic failure quite like the Dutch childcare benefits scandal, or toeslagenaffaire. This national crisis is a harrowing case study of systemic injustice, driven not by a single malicious actor, but by an opaque, automated system that completely spun out of control.
The scandal reveals the devastating human cost when accountability gets lost inside a "black box" algorithm. For legal professionals, it’s a critical lesson in how automated systems, even if not criminally prosecuted themselves, can cause profound harm and shatter public trust in our institutions.
How the Algorithm Falsely Accused Thousands
At its heart, the scandal revolved around a self-learning algorithm used by the Dutch Tax and Customs Administration. Its job was to detect potential fraud in childcare benefit claims. While the goal was sound, the system’s internal logic was deeply flawed and, ultimately, discriminatory.
The algorithm began to wrongly flag thousands of families as fraudsters based on criteria that should have been harmless. A minor administrative slip-up, like a missing signature, was enough to trigger a full-blown fraud investigation. The consequences were catastrophic for over 26,000 families, who were ordered to repay tens of thousands of euros, pushing many into financial ruin.
This situation shows just how powerfully an AI can amplify injustice. The discriminatory patterns in the tax authorities' algorithms unfairly targeted specific groups, leading to severe financial and social damage. In response to the national outcry, the Dutch government published the 'Handbook on Non-discrimination by Design' in 2021 to proactively prevent such biases in future AI systems. You can discover more insights about how Dutch law is adapting to AI on globallegalinsights.com.
The Critical Gaps in Transparency and Accountability
The toeslagenaffaire ripped open several critical gaps in the legal and ethical oversight of automated decision-making. These failures are central to understanding when an algorithm's output might raise questions of criminal responsibility for its human operators.
Three key failures stood out:
- Lack of Transparency: Affected families were never given a clear reason why they were flagged. The system was a black box, making it impossible for them to challenge its conclusions.
- Absence of Human Oversight: The algorithm’s decisions were often treated as gospel. There was a systemic failure of human officials to question or override the automated fraud classifications.
- The Presumption of Guilt: Once the system flagged a family, they were presumed guilty. This reversed the burden of proof, forcing them into an impossible battle to prove their innocence against an invisible accuser.
The scandal was a stark reminder that when an automated system makes a life-altering decision, the "right to an explanation" is not a luxury—it is a fundamental component of justice. Without it, there can be no meaningful appeal.
For anyone facing such accusations, understanding the legal framework is paramount. The Dutch approach to fraud is complex, and the scandal underscores the need for expert guidance. Learn more about the Dutch legal approach to fraud and financial crime in our article.
The Aftermath: A Push for Regulation
While no algorithm was put on trial, the human and political fallout was immense. It led to the resignation of the entire Dutch government in 2021. The scandal became a powerful catalyst for change, directly influencing the development of stricter guidelines for using AI in public administration.
It proved that even without criminal charges against the code itself, recklessly deploying a flawed, biased system can have consequences on par with widespread institutional negligence. This cautionary tale now informs regulatory discussions across Europe, including the EU AI Act, ensuring that transparency, fairness, and human oversight are at the forefront of any future AI deployment.
Defence Strategies When AI Is Involved
When a client is facing criminal charges because of something an AI system did, their legal counsel steps into a challenging new world. The standard legal playbook needs a major rethink. A solid defence has to focus on taking apart the prosecution's case for human intent or negligence, and that often means zeroing in on the algorithm's own autonomous and sometimes unpredictable nature.
The biggest hurdle for any prosecutor is proving a human had a specific criminal intent (mens rea) when the direct cause of the harm was a complex algorithm. This is precisely where the defence has its best opening. The aim is to create reasonable doubt by showing that the human simply didn't have the control or foresight to be held criminally responsible for the AI’s independent decision.
Challenging Intent with the Black Box Defence
One of the strongest arguments available is the "black box" defence. This strategy plays on the fact that many advanced AI systems, especially those built on deep learning or neural networks, are inherently opaque. The argument is straightforward: if the people who created the system can't fully explain how it arrived at a particular conclusion, how can a user possibly be expected to have foreseen and intended a criminal outcome?
This defence goes right to the heart of the intent requirement. Counsel can argue that the AI's harmful action was an unforeseeable, emergent behaviour—a kind of digital fluke, not a planned criminal act. The more complex and autonomous the AI, the more compelling this argument becomes.
To make this defence work, you absolutely need the right experts on your side.
- Digital Forensics Experts: They can dive into the AI's code, data logs, and decision-making trails to find the exact point where it deviated from its expected behaviour.
- AI Ethicists and Computer Scientists: These experts can testify about the built-in unpredictability of certain AI models. They can explain to the court why a "rogue" result was a technical failure, not a product of the defendant's will.
By framing the incident as an unforeseeable malfunction, the defence can effectively argue that the essential "guilty mind" needed for a conviction just isn't there.
Proving a Lack of Control or Culpable Omission
Another effective strategy is to argue a lack of effective control. Under the Dutch legal principle of functioneel daderschap (functional perpetration), liability requires the defendant to have had the power to control the action. The defence can push back on this by demonstrating that, once the AI was up and running, it operated with a degree of autonomy that put its actions beyond the defendant's direct influence.
This could involve showing that the system was designed to learn and adapt in real time, making its behaviour fluid and not entirely predictable. The defence's position becomes that the defendant can't be held responsible for an action they could neither directly command nor reasonably stop.
The core of this defence is to shift the narrative from one of human culpability to one of technological autonomy. It reframes the defendant not as a perpetrator, but as a victim of the system's unpredictable logic.
When an AI's actions could lead to criminal liability, having robust AI agent guardrails in place is not only a crucial preventative step but also a key part of a strong defence. Proving that these kinds of state-of-the-art safety measures were implemented can powerfully support the argument that the defendant did not recklessly accept the risk of a harmful outcome.
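What such a guardrail can look like in code is sketched below: every proposed action is checked against explicit constraints before execution, and every refusal is logged so the record can later support a due-diligence argument. The rule set, action format, and limits are hypothetical, not a reference to any particular product.

```python
# Illustrative guardrail: proposed actions are validated against explicit
# rules before execution, and blocked actions are logged as evidence of
# oversight. The rule set and action format are hypothetical.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("guardrail")

BLOCKED_ACTIONS = {"fly_over_crowd", "exceed_speed_limit"}  # example policy
MAX_ALTITUDE_M = 120  # example hard limit

def guarded_execute(action: dict, execute) -> bool:
    """Run `execute(action)` only if the action passes all safety rules."""
    reasons = []
    if action.get("type") in BLOCKED_ACTIONS:
        reasons.append(f"action type {action['type']!r} is prohibited")
    if action.get("altitude_m", 0) > MAX_ALTITUDE_M:
        reasons.append("altitude exceeds the configured ceiling")

    if reasons:
        log.warning("Blocked action %s: %s (at %s)",
                    json.dumps(action), "; ".join(reasons),
                    datetime.now(timezone.utc).isoformat())
        return False

    execute(action)
    return True

# Usage: the unsafe action is refused and logged; the safe one runs normally.
guarded_execute({"type": "fly_over_crowd", "altitude_m": 30}, execute=print)
guarded_execute({"type": "deliver_package", "altitude_m": 60}, execute=print)
```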
Ultimately, the right to a fair defence is paramount, even in cases that are technically complicated. A defendant has fundamental protections, just as they would in any human-centric crime. To understand these core principles in a broader context, you can learn more about the right to remain silent in criminal matters and how it applies within Dutch law.
A Practical Compliance Roadmap for Businesses Using AI
Knowing the legal theories is one thing, but actually building a solid compliance framework is another challenge entirely. For businesses using AI in the Netherlands and across the EU, the best way to manage the risk of criminal liability is through proactive governance and being able to show you’ve done your homework. A clear roadmap is essential.
This isn’t about stifling innovation. It's about putting smart safeguards in place to protect your company, your customers, and your reputation. By creating a strong internal framework, you're also building a powerful defence against any claims of negligence or recklessness if an AI system ever causes unexpected harm.
Building Your AI Governance Foundation
First things first: you need a clear structure for oversight and accountability. This isn't just an IT problem; it’s a core business responsibility that needs full support from your legal, compliance, and executive teams. Adopting robust AI governance best practices is a crucial step for managing risks and ensuring your AI is deployed legally and ethically.
Your governance model must be built on a few key pillars:
- Human-in-the-Loop Oversight: For any high-stakes decision, a human must have the final say. This person or team needs the authority and the technical know-how to step in, make corrections, or completely override the AI's suggestions (a minimal sketch of this pattern follows the list).
- Clear Accountability Lines: You must define exactly who is responsible for the AI system at every single stage—from development and data sourcing to deployment and ongoing monitoring. Any grey areas here create significant legal risks.
- Regular Algorithmic Audits: Just like you audit your company's finances, you have to regularly audit your AI systems. These audits should be carried out by independent third parties to check for performance, fairness, and compliance with rules like the EU AI Act.
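To make the human-in-the-loop pillar concrete, here is a minimal, hypothetical sketch of a decision gate that escalates high-impact outputs to a named human reviewer. The impact score, the threshold, and the reviewer interface are assumptions for illustration only.

```python
# Hypothetical human-in-the-loop gate: decisions above an impact threshold
# are routed to a human reviewer who can approve, amend, or override.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    subject_id: str
    recommendation: str   # what the model suggests
    impact_score: float   # 0.0 (trivial) to 1.0 (life-altering)

def decide(model_output: Decision,
           human_review: Callable[[Decision], str],
           escalation_threshold: float = 0.3) -> str:
    """Return the final decision, deferring to a human for high-stakes cases."""
    if model_output.impact_score >= escalation_threshold:
        # The reviewer sees the recommendation but has the final say.
        return human_review(model_output)
    return model_output.recommendation

# Usage: a fraud flag is high impact, so it goes to a human case officer.
final = decide(
    Decision(subject_id="case-042",
             recommendation="open fraud investigation",
             impact_score=0.9),
    human_review=lambda d: "request missing signature before any investigation",
)
print(final)
```

The design point is that the escalation path is explicit and auditable, so the organisation can later show exactly which decisions a human actually controlled.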
Emphasising Explainability and Data Integrity
If you can't explain how your system works, you can't defend it in court. The "black box" problem is a massive legal weak spot, which makes designing for transparency absolutely critical.
Explainability by Design should be a non-negotiable principle. Your technical teams must build systems where the decision-making process can be documented, understood, and explained to non-technical people like judges and regulators.
This all starts with the data used to train your models. Meticulous data governance is your best defence against bias—a major source of algorithmic harm. Make sure your data is high-quality, relevant, and properly represents the people it will affect. Document every step of how you source, clean, and process data to create a clear audit trail. This documentation is priceless evidence that you've exercised due diligence.
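One way to make that audit trail tangible inside an AI system is to store, alongside every automated decision, the model version, a fingerprint of the inputs, the output, and the responsible operator. The record format below is a sketch under those assumptions, not a prescribed schema.

```python
# Illustrative decision-provenance record: enough context is stored with
# every output to reconstruct and explain the decision later.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    model_version: str
    input_hash: str       # fingerprint of the exact inputs used
    output: str
    operator: str         # who deployed / supervised this run
    timestamp: str

def log_decision(model_version: str, inputs: dict, output: str,
                 operator: str, sink) -> DecisionRecord:
    record = DecisionRecord(
        model_version=model_version,
        input_hash=hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        output=output,
        operator=operator,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    sink.write(json.dumps(asdict(record)) + "\n")  # append-only audit log
    return record

# Usage: write one record per decision to an append-only file.
with open("decision_audit.log", "a") as audit_file:
    log_decision("risk-model-2.3.1", {"claim_id": 42, "amount": 1800.0},
                 output="approve", operator="ops-team-nl", sink=audit_file)
```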
An EU AI Act Compliance Checklist
The EU AI Act is all about proactive risk management, especially for high-risk systems. Your compliance strategy needs to show a continuous commitment to safety and fairness.
A practical checklist should include:
- Risk Classification: Formally classify every AI system your company uses according to the Act's risk categories (illustrated in the sketch after this list).
- Impact Assessments: Before deploying any high-risk AI, conduct and document Data Protection Impact Assessments (DPIAs) and Fundamental Rights Impact Assessments (FRIAs).
- Technical Documentation: Keep detailed, up-to-date technical documentation ready to provide to regulators whenever they ask for it.
- Continuous Monitoring: Set up processes for post-market monitoring to keep an eye on the AI’s performance and catch any unforeseen risks that show up after it’s been deployed.
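As a rough illustration of the first item, the sketch below maps a system's intended use to an AI Act risk tier and derives follow-up obligations. The tier names mirror the Act's categories, but the keyword matching and obligation lists are simplified assumptions for illustration, not legal advice.

```python
# Simplified illustration of the "risk classification" step: map each AI
# system to an AI Act risk tier and derive follow-up obligations. The
# keyword matching and obligation lists are assumptions, not the Act's text.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy; phase out existing use"],
    RiskTier.HIGH: ["DPIA/FRIA before deployment", "technical documentation",
                    "human oversight plan", "post-market monitoring"],
    RiskTier.LIMITED: ["disclose AI interaction to users"],
    RiskTier.MINIMAL: ["voluntary codes of conduct"],
}

def classify(use_case: str) -> RiskTier:
    text = use_case.lower()
    if "social scoring" in text:
        return RiskTier.UNACCEPTABLE
    if any(k in text for k in ("law enforcement", "medical", "credit", "hiring")):
        return RiskTier.HIGH
    if "chatbot" in text:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

tier = classify("predictive tool supporting law enforcement deployments")
print(tier.value, "->", OBLIGATIONS[tier])
```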
Frequently Asked Questions
The crossover between AI and criminal law understandably brings up a lot of questions. Here, we tackle some of the most common concerns for legal professionals, developers, and business owners wondering if an algorithm can really be partly to blame for a crime.
Can a Company Be Held Criminally Liable If Its AI Discriminates?
Yes, it absolutely can. While you won't see an AI system itself in the dock, the company that put it to use can certainly face criminal charges for discriminatory outcomes under Dutch corporate criminal liability principles.
If a company's leadership knew about the AI's potential for bias and did nothing, or if they were grossly negligent in their oversight, criminal charges are a very real possibility. The EU AI Act also sets strict anti-bias rules for high-risk systems. Failing to meet those standards would be powerful evidence of negligence in any criminal case. The legal spotlight will always shine brightest on the human decisions made around the AI’s creation, training, and deployment.
What Is the Black Box Problem in AI?
The "black box" problem is a term for complex AI models where even the people who built them can't fully trace how a specific output was reached. This is a massive issue when AI and criminal law collide.
In court, this can actually become the cornerstone of a defence. A lawyer could argue that a harmful outcome was completely unforeseeable, meaning the defendant lacked the required criminal intent (mens rea). The argument is simple: how could they have intended a result they couldn't possibly predict?
But prosecutors have a strong comeback. They can argue that deploying a powerful, unpredictable system without proper safeguards is, in and of itself, an act of recklessness or gross negligence. And that can be enough to satisfy the mental element needed for criminal liability.
This sets the stage for a high-stakes legal fight over foreseeability and the duty of care.
What Is the Best Way for Developers to Limit Legal Risk?
The single most effective thing developers can do to shield themselves from legal risk is to keep meticulous, transparent documentation through every stage of the AI's life. Think of it as creating a detailed "audit trail" that can become your most crucial piece of evidence.
This documentation really needs to cover everything from start to finish:
- Data Sources: Where did the training data come from, and how was it checked for quality and bias?
- Bias Mitigation: What specific steps were taken to find and remove biases from the datasets?
- Design Rationale: What was the logic behind the key architectural choices and algorithms?
- Testing Results: A full record of every test run, including failures and how you fixed them.
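One lightweight way to capture those four items is a version-controlled "model card" that lives alongside the code and is updated with every release. The fields below are an assumed minimum for illustration, not a standard schema.

```python
# Illustrative "model card" kept under version control next to the model.
# Field names are an assumed minimum, not a prescribed format.
from dataclasses import dataclass

@dataclass
class ModelCard:
    model_name: str
    version: str
    data_sources: list[str]          # where training data came from
    bias_mitigation: list[str]       # concrete steps taken, with dates
    design_rationale: str            # why this architecture / these features
    test_results: dict[str, float]   # metrics per test suite, incl. fixes
    human_oversight: str = "Named reviewer approves all high-impact outputs."

card = ModelCard(
    model_name="claims-triage",
    version="1.4.0",
    data_sources=["2019-2023 anonymised claims export, reviewed 2024-03"],
    bias_mitigation=["removed nationality field", "reweighted minority classes"],
    design_rationale="Gradient-boosted trees chosen for auditability.",
    test_results={"holdout_accuracy": 0.91, "parity_gap_after_fix": 0.02},
)
print(card)
```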
Putting a clear framework for human oversight in place is just as vital. If an investigation ever happens, this paperwork serves as undeniable proof of due diligence. It helps show that any harm caused was a truly unforeseeable accident, not the result of negligence—and that forms the bedrock of a solid legal defence.