

June 5, 2025

AI and Legal Responsibility: Rethinking Civil Liability in the Age of Autonomous Systems


By Tiago Matos

Bachelor’s Degree in Solicitorship from ISCET. Currently completing a Master’s in Law with specialization in Legal-Business Sciences at Universidade Lusófona do Porto. Also attending a Postgraduate Program in Real Estate Law at the Faculty of Law of the University of Lisbon (FDUL). Academic interests include Corporate Law, International Law, Real Estate Law, and Arbitration. Find Tiago Matos on LinkedIn.


Abstract

The growing use of artificial intelligence (AI) across both private and public sectors raises complex questions about legal responsibility when harm occurs. This article investigates whether existing civil liability frameworks are adequate to address the harms caused by autonomous systems. It explores proposals to adapt European law to the specificities of AI, focusing on risk allocation, shared responsibility, and the emerging European Artificial Intelligence Act (AI Act).


Keywords: Artificial Intelligence; Civil Liability; European AI Act; Autonomous Systems; Legal Control; Risk Allocation; Shared Responsibility; AI Regulation


1. Introduction

The rapid development of artificial intelligence technologies is reshaping numerous areas of society, from healthcare and transportation to public safety and financial systems. However, this technological revolution also brings new legal challenges, particularly in the realm of civil liability. When an autonomous system makes a decision that results in harm to third parties, a fundamental question arises: who should be held accountable?


AI-based systems operate with varying degrees of autonomy and often make decisions that are unpredictable or not entirely comprehensible to the humans involved. This challenges the foundational principles of civil liability, which typically rely on fault, a clear causal link, and the identification of a responsible human agent. In light of these difficulties, it is necessary to rethink existing legal mechanisms to ensure fairness, protect victims, and foster public trust in emerging technologies.


This article examines the shortcomings of current civil liability regimes in the context of autonomous systems and highlights the legislative proposals put forward by the European Union, in particular the European AI Act and the proposed AI Liability Directive. Based on this analysis, the article discusses alternatives for risk allocation, shared liability models, and the importance of transparency and explainability in algorithmic decision-making.


2. The Challenges of Civil Liability in the Age of AI

Artificial intelligence systems are distinguished by their capacity for autonomous learning, adaptation to new contexts, and decision-making without direct human intervention. While these features are highly advantageous in many applications, they also generate considerable legal uncertainty, particularly concerning the predictability of system behaviour and the attribution of legal responsibility.


Traditional models of liability in Western legal systems can generally be divided into two categories: fault-based liability, which requires proof of negligent or wrongful conduct by a human actor, and strict liability, which applies to those who control inherently dangerous activities regardless of fault. However, both models face serious limitations when applied to AI-driven systems. The absence of human intention, the distributed nature of system development and deployment, and the complexity of the AI supply chain all contribute to a fragmented and often unclear allocation of legal responsibility.

Moreover, the technical opacity of many AI systems, especially those employing deep learning, poses significant challenges in proving causation. Establishing a causal link between a system’s operation and a given harm is often hindered by the “black box” nature of algorithmic processes, which can obscure the rationale behind automated decisions. These factors complicate both the attribution of liability and access to effective legal remedies.


3. The European Approach: From Regulation to Civil Liability


The European Union has adopted a dual approach to addressing the legal implications of artificial intelligence: a preventive regulatory framework and a complementary reform of civil liability rules. The AI Act, proposed by the European Commission in 2021 and formally adopted in 2024, establishes a horizontal framework based on risk classification, imposing strict requirements on “high-risk” systems, particularly regarding transparency, safety, and human oversight. Although primarily focused on risk prevention, the Act indirectly addresses liability by clarifying the roles and obligations of different actors along the AI value chain.


In parallel, the proposed AI Liability Directive seeks to modernise the EU’s civil liability regimes by introducing procedural innovations tailored to the complexity of AI-related harm. Notably, it allows for a rebuttable presumption of causality when damage results from a high-risk system’s non-compliance with legal obligations, thereby easing the evidentiary burden on claimants. It also grants victims improved access to technical information, while safeguarding trade secrets and avoiding excessive disclosure demands. Together, these instruments aim to ensure legal certainty, enhance access to justice, and promote responsible innovation across the European AI ecosystem.


4. Emerging Models of Liability: Proposals and Perspectives


As artificial intelligence systems become increasingly complex and integrated across sectors, traditional liability frameworks are being re-evaluated. One emerging perspective emphasizes the notion of shared responsibility, acknowledging the diverse actors involved throughout the AI life cycle, from developers and data trainers to end-users and operators. This distributed model calls for joint or proportionate liability approaches that better reflect the collaborative nature of AI systems.


Another proposal involves expanding strict liability to cover risks inherent in autonomous technologies, particularly in sensitive domains such as healthcare or transportation. In this context, victims would need to prove only the harm suffered and its causal link to the AI system’s operation, not fault. Additionally, some scholars and policymakers support the creation of guarantee funds or mandatory insurance schemes for high-risk AI applications. Inspired by existing regimes in fields such as nuclear energy and motor vehicles, these mechanisms aim to secure compensation for victims while preserving legal certainty and the incentives to innovate.


5. Transparency and Explainability as Pillars of Liability


The lack of transparency in AI algorithms, especially those based on deep learning, hampers both preventive efforts and post-incident accountability. The requirement for explainability, that is, the ability to clarify how decisions are made, is essential for the effective enforcement of any liability regime.


The AI Act acknowledges this need by imposing obligations related to documentation, record-keeping, and clear communication with users. These measures enhance accountability and facilitate oversight by authorities and courts, thereby contributing to a more robust legal framework for AI governance.


6. Conclusion


The era of autonomous systems necessitates an urgent re-evaluation of traditional civil liability models. Although existing legal frameworks offer some useful tools, they remain inadequate in addressing the complexity and risk dynamics of artificial intelligence.


The European Union has taken significant steps in this direction through the AI Act and the proposed AI Liability Directive. These initiatives represent meaningful progress toward a legal system that is better aligned with contemporary technological realities, striving to balance victim protection, innovation, and legal certainty.


Nevertheless, further action is required. Continued refinement of risk allocation mechanisms, the development of shared responsibility models, and the assurance of transparency and explainability in algorithmic decision-making are essential. Only by addressing these issues can we build a legal system that is just, effective, and capable of confronting the challenges of the AI era.


Disclaimer: The International Platform for Crime, Law, and AI is committed to fostering academic freedom and open discourse. The views and opinions expressed in published articles are solely those of the authors and do not necessarily reflect the views of the journal, its editorial team, or its affiliates. We encourage diverse perspectives and critical discussions while upholding academic integrity and respect for all viewpoints.
