Mens Rea Without a Mind? A Critical Examination of the Impossibility of Attributing Criminal Intent to Artificial Intelligence


[This Article is authored by Akhil Kumar K.S. and Lakshmi K Shanmughan, 3rd Year students at the National University of Advanced Legal Studies, Kochi]

Introduction

Artificial Intelligence (AI) is reshaping criminal law jurisprudence. A person, whether natural or legal, is ordinarily treated as the offender, but what happens when an AI commits a crime? How will mens rea, the most essential element of an offence, be determined in cases involving AI? That question leads to another: can AI be treated as a person at all? Today, AI technologies deployed in self-driving cars, robots, and medical software are taking on tasks that once required human judgment. These systems are capable of making decisions independently and can also cause harm. For instance, in 2018, a self-driving Uber vehicle struck a pedestrian in Arizona, raising critical questions about AI’s liability. A key question that arises when considering whether an AI system has committed a crime is therefore: who is responsible? If an AI system performs a criminal act, the company that created it may well be liable, but can criminal liability attach to an AI system in the absence of some degree of human intent? Cases involving self-driving cars, automated trading algorithms, and the misuse of AI facial recognition, among others, pose exactly this problem, and it is the question this research addresses. Criminal law scholars have pointed out this lacuna, observing that AI is a transformative force reshaping the field and emphasising the need for legal rules that can keep pace with its growing global impact.

The Concept of Mens Rea, Jurisprudential Foundations

A criminal offence has two elements, actus reus and mens rea, as expressed by the maxim actus non facit reum nisi mens sit rea: an act does not make a person guilty unless the mind is also guilty. An act by itself cannot constitute an offence; there must also be a guilty mind. The term mens rea is further used to embrace any mental element that may be an ingredient of a criminal offence, such as recklessness or knowledge. It rests on principles of autonomy, free will, and moral blameworthiness. Jurists such as Immanuel Kant, John Austin, and H.L.A. Hart argue that punishment is justified only when the actor can appreciate the consequences of their actions. In the case of AI, determining the guilty mind becomes arduous, as there is no human mind but an artificial one. A common example is an AI-powered drone that malfunctions and causes harm without any mens rea being present.

Generally, AI cannot form its own mens rea because its actions ultimately trace back to designers, operators, or users. Highly autonomous systems, however, make it increasingly difficult to distinguish between human and machine agency, and they create obstacles in cases where an AI causes harm that no human could have predicted or comprehended.

Criminal Liability & Artificial Entities, from Corporations to Algorithms

While discussing the criminal liability of AI, it is crucial to deconstruct the concept of corporate liability. Where an AI has acted autonomously, it must be held accountable, and not merely the corporate body that created it. The issue becomes complex when decisions are data-driven, self-learning, and unpredictable, as is often the case with algorithms. Injuries caused by autonomous AI may not always be preventable by humans, and the traditional model of imposing liability may not sufficiently address them. This raises the most crucial question: does AI’s lack of mens rea absolve it of responsibility, or should it be regarded, like a corporation, as an entity capable of bearing criminal liability? For instance, during the 2010 “Flash Crash”, an automated trading algorithm triggered a sudden market collapse, sparking debates on algorithmic accountability.

This highlights the need to revisit how liability-imposition policies are structured. Tools such as strict liability, risk-based liability, statutory liability, mandated human oversight, prior certification, regular audits, and transparency requirements will be instrumental in preventing harm. As technology continues to advance, criminal law must also evolve, and hybrid schemes based on control, foreseeability, and risk become necessary.

Can Machines Think? Assessing the Possibility of Mens Rea in Artificial Intelligence

Has a machine ever intended to do something? This question is a real legal dilemma, as AI systems now make decisions that affect safety, privacy, finance, and warfare. As courts and governments struggle with this issue, one question stands out: can AI possess the mens rea required for criminal responsibility? Alan Turing famously reframed the question from “can machines think?” to “can machines behave as if they think?”, and this distinction matters under criminal law: AI can behave in ways that look intelligent, but that behaviour does not prove the existence of a mind. To be guilty, the accused must have intention, knowledge, recklessness, or a legally recognised form of negligence. Even sophisticated AI systems do not understand what they are doing, do not experience consequences, do not appreciate risk, and have no goals beyond their programmed optimisation function. Even when AI behaves rationally, this is a functional imitation rather than actual mental activity, and equating algorithmic optimisation with legal intent would be a category error.

Attribution and Accountability, Human Oversight, Designer Liability and Regulatory Frameworks

A machine lacks mens rea, and the law cannot logically attribute a mental state to it; instead, mens rea must be located in the chain of responsible individuals. Since AI lacks intention or awareness, whose mental state is relevant to the harm caused? If a programmer, operator, or deployer intentionally configures or uses an AI system to cause harm, mens rea is directly attributed to them. In the case of Microsoft’s “Tay” chatbot, human oversight failed when the AI was exploited to produce offensive tweets. In such situations, courts may apply tests such as reasonable foreseeability, proximate causation, control and supervision, and knowledge of risk. The idea is that liability travels upstream from the AI to those who created, trained, or deployed it. The EU AI Act, the OECD AI Principles, and debates on autonomous systems all emphasise that accountability rests with the humans who retain ultimate control. It is the designer’s decisions regarding datasets, model architecture, risk testing, and safety features that directly shape outcomes. Because AI is unpredictable, the law expects designers to embed fail-safes and to ensure transparency and auditability. Where harm flows from systemic flaws rather than individual acts, responsibility shifts from the AI system to the institutions behind it, preventing blame from being offloaded onto a “rogue AI”.

Conclusion

Artificial Intelligence challenges the foundational assumptions of criminal law by exposing a gap between doctrines of culpability and the newer realities of autonomous decision-making. AI cannot form intention or knowledge and lacks moral awareness; it can only imitate these mental states through data-driven behaviour. This makes it almost impossible to attribute criminal culpability directly to AI, yet the harms AI can cause are real, often severe, and increasingly unpredictable. Given the global scale of AI development and deployment, purely national approaches are inadequate. What is needed is a harmonised global regulatory architecture: internationally agreed standards for high-risk autonomous AI, implemented through domestic laws, ensuring uniform safety, accountability, and transparency, while leaving enforcement to national regulators. Accountability should not be diluted in the face of algorithmic processes; an appropriate regulatory framework must be put in place, integrating ex-ante obligations and corporate liability through the prism of high-risk AI systems. In the end, liability for AI should not rest on reifying a guilty party by fashioning artificial systems into legal persons, but on reconceptualising responsibility around those who design and benefit from them. A balanced framework would therefore set shared global standards for transparency and risk management while giving countries the flexibility to enforce these rules through domestic legislation. This approach helps the law keep pace with AI’s rapid growth, ensuring accountability and protecting public trust along the way.
