March 28th 2025

Lights and Shadows of the First-Ever Legal Framework on AI Worldwide

By Giulia Bassini

Giulia Bassini holds a single-cycle Master’s Degree in Law from the University of Siena (Italy), where she also spent a study period at the University of Bordeaux (France). Bassini's academic path has been strongly oriented toward the law of digital technologies, with a particular interest in data protection. Her thesis, titled “Internet Service Providers and the Right to Be Forgotten: A Comparative Analysis between Europe and the United States”, explored the evolving landscape of online privacy and liability. Currently, she is developing a PhD proposal focusing on the regulation of post-mortem personal data. Alongside her research activities, she regularly contributes to legal journals. Find Giulia Bassini on LinkedIn.

Data Processing

Regulation (EU) 2024/1689, which lays down harmonised rules on artificial intelligence, entered into force on 1st August 2024. Since then, legal experts around the world have been carefully interpreting this European Union act to understand its legal impact. But what misgivings has legal scholarship raised about it?

The second decade of the 21st century has been characterised by rapid advances in machine learning and the growing integration of artificial intelligence (AI) across various sectors. Whilst these technologies have had a predominantly positive impact, facilitating scientific research and simplifying many aspects of daily life, legal scholars and policymakers have expressed increasing concern about their potential negative effects on individuals' rights and freedoms.

In response to these challenges, several regulatory initiatives have been introduced worldwide, with the European Union playing a pioneering role. These initiatives aim to mitigate the risks posed by AI systems, particularly their impact on fundamental rights such as freedom of expression and information, democratic participation, and equality and non-discrimination, as well as on privacy, personality rights, personal data protection, and intellectual property.

A significant development came with Regulation (EU) 2024/1689, adopted in June 2024 and published in the Official Journal on 12th July 2024, which established a comprehensive set of harmonised rules on artificial intelligence, commonly referred to as the AI Act. The Act is noteworthy for its comprehensiveness, encompassing 68 definitions, a wide scope of application, and an innovative regulatory architecture comprising the European AI Office, the European Artificial Intelligence Board, and national notifying authorities.

A distinctive feature of the AI Act is its risk-based classification of AI systems, built around the notion of "risk", which Article 3(2) defines as "the combination of the probability of an occurrence of harm and the severity of that harm." This framework serves as the foundation for the entire regulation.
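
By way of purely hypothetical illustration, the sketch below combines the two factors named in Article 3(2), the probability of harm and its severity, into a coarse risk label. The numeric scores, the multiplicative combination, and the cut-off thresholds are illustrative assumptions, not values prescribed by the Act.

```python
# Illustrative only: the AI Act prescribes no numeric thresholds.
# This sketch combines the two factors named in Article 3(2),
# probability of harm and severity of harm, into a coarse label.

def risk_level(probability: float, severity: float) -> str:
    """Map hypothetical scores in [0, 1] to a coarse risk label.

    The multiplicative combination and the cut-offs below are
    arbitrary demonstration choices, not legal standards.
    """
    score = probability * severity
    if score >= 0.5:
        return "unacceptable"
    if score >= 0.2:
        return "high"
    if score >= 0.05:
        return "limited"
    return "minimal"

print(risk_level(0.8, 0.9))  # -> unacceptable
print(risk_level(0.3, 0.2))  # -> limited
```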

In accordance with Article 5 of the Act, certain AI practices are prohibited outright, on the basis that they pose an unacceptable risk to individuals and society. These include the exploitation of the vulnerabilities of specific groups, general-purpose social scoring, predictive policing based solely on profiling, the untargeted scraping of facial images to build facial recognition databases, emotion recognition in the workplace or in education, biometric categorisation to infer protected characteristics, and, subject to narrow law-enforcement exceptions, real-time remote biometric identification in publicly accessible spaces. The use of AI for these purposes is categorically proscribed, as such applications are considered to pose a substantial threat to safety, human dignity, and fundamental rights.

Conversely, Article 6 governs high-risk AI systems, defined as those with the potential to have a significant impact on health, safety, or fundamental rights without being categorically proscribed. Examples of such systems include AI components in critical infrastructure, AI systems used in educational settings, remote biometric identification tools, emotion recognition systems, and AI applications in the administration of justice.

Before such high-risk systems can be placed on the market, they are subject to a set of strict compliance obligations. First, providers must conduct comprehensive risk assessments and implement robust mitigation measures to prevent or minimise harm. This requirement assumes particular significance in domains such as healthcare, law enforcement, and public safety, where the consequences of AI malfunction or misuse can be especially dire.

Another crucial obligation concerns the use of high-quality datasets, with the objective of mitigating the risk of discriminatory outcomes. Ensuring that the data used is representative, accurate, and as free from bias as possible is pivotal to promoting algorithmic fairness and preventing the replication of social inequalities.
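
As a minimal sketch of what one such data-quality check might look like in practice, the example below compares the share of each demographic group in a training set against a reference population. The group names, reference shares, and tolerance threshold are all hypothetical; the Act does not prescribe any particular test.

```python
# Hypothetical data-quality check: flag demographic groups whose share
# in the training data deviates from a reference population by more
# than a chosen tolerance. All names and numbers are illustrative.
from collections import Counter

def representation_gaps(samples, reference_shares, tolerance=0.05):
    """Return groups whose observed share deviates from the expected
    share by more than `tolerance`, as {group: (observed, expected)}."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = (observed, expected)
    return gaps

# Hypothetical example: group "B" is under-represented in the data.
data = ["A"] * 700 + ["B"] * 150 + ["C"] * 150
population = {"A": 0.5, "B": 0.3, "C": 0.2}
print(representation_gaps(data, population))
# {'A': (0.7, 0.5), 'B': (0.15, 0.3)}
```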

The AI Act also stipulates the implementation of logging mechanisms to ensure traceability, that is, the ability to reconstruct and understand how specific outputs were generated. This enhances transparency and accountability, enabling both regulators and affected individuals to scrutinise the AI system's reasoning.
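
A minimal sketch of what such a logging mechanism might record for each inference is given below. The field names and the append-only JSON-lines format are assumptions made for illustration, not a schema mandated by the Act.

```python
# Hypothetical audit logging: record enough context per inference to
# reconstruct later how a given output was produced.
import json
import time
import uuid

def log_inference(model_version, input_summary, output, log_path="audit.log"):
    entry = {
        "event_id": str(uuid.uuid4()),    # unique ID so records can be cited
        "timestamp": time.time(),         # when the output was produced
        "model_version": model_version,   # which system version ran
        "input_summary": input_summary,   # what the system was asked
        "output": output,                 # what it produced
    }
    with open(log_path, "a") as f:        # append-only, one JSON record per line
        f.write(json.dumps(entry) + "\n")
    return entry["event_id"]

# Hypothetical usage for a credit-scoring system.
event_id = log_inference("credit-scorer-v2.1",
                         {"applicant_id": "redacted"},
                         {"score": 0.62})
```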

Furthermore, providers are obligated to produce comprehensive technical documentation, meticulously delineating the system's design, intended purpose, and adherence to regulatory stipulations. This documentation is vital for regulatory oversight and plays a central role in enabling effective monitoring and enforcement. The Act also underscores the significance of providing clear and accessible instructions to those who will deploy or use the system. The deployer must be equipped with the information necessary to operate the system in a responsible and safe manner, including a full understanding of its limitations.
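
To make the documentation duty concrete, here is a minimal sketch of technical documentation kept in machine-readable form, touching on design, intended purpose, known limitations, and deployer instructions. Every field name and value is hypothetical; the Act specifies the required content rather than any particular format.

```python
# Hypothetical machine-readable documentation stub. The Act specifies
# what documentation must contain, not a format; this JSON layout and
# every value below are illustrative assumptions.
import json

tech_doc = {
    "system_name": "example-triage-assistant",
    "version": "1.3.0",
    "intended_purpose": "Prioritise incoming support tickets",
    "architecture": "gradient-boosted trees over ticket metadata",
    "training_data": "internal tickets, 2020-2024, anonymised",
    "known_limitations": [
        "degraded accuracy on non-English tickets",
        "not validated for safety-critical triage",
    ],
    "deployer_instructions": "Apply human review to all 'urgent' labels.",
}

with open("technical_documentation.json", "w") as f:
    json.dump(tech_doc, f, indent=2)
```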

High-risk systems must also incorporate suitable human oversight mechanisms. This ensures that human operators remain in control and can intervene when necessary, particularly in contexts where decisions significantly affect individuals' rights or well-being.
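
One common oversight pattern, sketched below under purely illustrative assumptions, routes automated decisions whose confidence falls below a threshold to a human reviewer instead of applying them automatically. The threshold and the review queue are hypothetical design choices, not requirements set out in the Act.

```python
# Hypothetical human-in-the-loop gate: low-confidence decisions are
# escalated to a human reviewer rather than applied automatically.
from queue import Queue

review_queue: Queue = Queue()

def decide(prediction: str, confidence: float, threshold: float = 0.9):
    """Apply the system's decision only when confidence is high;
    otherwise place it in a queue for a human operator."""
    if confidence >= threshold:
        return {"decision": prediction, "decided_by": "system"}
    review_queue.put({"prediction": prediction, "confidence": confidence})
    return {"decision": "pending", "decided_by": "human-review"}

print(decide("approve", 0.97))  # applied automatically
print(decide("reject", 0.55))   # escalated for human review
```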

Finally, the regulation requires these systems to demonstrate a high level of robustness, cybersecurity, and accuracy. The overarching objective is to avert technical malfunctions and to ensure that malicious interference is prevented, whilst also guaranteeing that the outputs produced by the AI system are both reliable and precise.
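
As a toy illustration of how accuracy and robustness might be evidenced in practice, the sketch below measures a classifier's accuracy on a test set and again on slightly perturbed inputs. The model, data, and noise level are hypothetical placeholders, not a testing protocol drawn from the Act.

```python
# Hypothetical robustness check: compare accuracy on clean inputs with
# accuracy on inputs perturbed by small random noise.
import random

def accuracy(model, samples):
    """Fraction of (x, y) pairs the model labels correctly."""
    return sum(model(x) == y for x, y in samples) / len(samples)

def perturb(samples, noise=0.1):
    """Add small uniform noise to each input, keeping the labels."""
    return [(x + random.uniform(-noise, noise), y) for x, y in samples]

# Toy model and data: classify numbers as positive (1) or not (0).
model = lambda x: 1 if x > 0 else 0
xs = [random.uniform(-1, 1) for _ in range(1000)]
test_set = [(x, 1 if x > 0 else 0) for x in xs]

clean = accuracy(model, test_set)
noisy = accuracy(model, perturb(test_set))
print(f"clean accuracy: {clean:.2f}, perturbed accuracy: {noisy:.2f}")
```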

The AI Act, which entered into force recently, has been regarded as a landmark achievement in the European Union's ongoing efforts to regulate artificial intelligence in a manner that is both harmonised and rights-based. Nevertheless, the regulation has not been universally celebrated. Within academic circles, it has given rise to a wide-ranging debate, with scholars expressing concerns regarding conceptual ambiguity, operational complexity, and the challenges of effective enforcement.

A recurrent criticism is the regulation's broad and vague definition of "AI system". The current text risks covering a wide spectrum of technologies, including those with minimal potential for harm, and this breadth generates legal uncertainty, particularly in conjunction with the regulation's risk-based classification. While the division of AI into risk categories may appear pragmatic, its application depends on subjective and, at times, ambiguous thresholds, which impedes legal predictability and consistent enforcement.

Another significant concern is the risk of overregulation. The AI Act imposes extensive procedural duties on providers of high-risk systems, including conformity assessments, data documentation, and human oversight measures. While these provisions are undoubtedly vital, their implementation may prove disproportionately onerous for smaller enterprises, potentially impeding innovation by placing excessive compliance costs on developers with limited resources.

Furthermore, the regulation's extraterritorial scope raises significant questions: it extends the EU's legal authority beyond its borders to providers and deployers of AI systems located outside the Union whose systems nevertheless produce effects within the EU. While the intention is to protect EU residents, this risks causing regulatory friction with third countries and complicating international efforts towards coordinated AI governance.

Concerns have also been raised regarding the regulation's institutional framework. Effective enforcement depends on the capacity of national authorities and the European AI Office to coordinate and apply the rules consistently. Scholars question, however, whether these bodies will be adequately resourced and sufficiently independent to fulfil their mandate. Absent clear standards and robust oversight, there is a risk of significant variation in implementation across Member States.

While the protection of fundamental rights is a central tenet of the AI Act's rationale, doubts have been expressed about its adequacy in certain sensitive areas, such as biometric surveillance, predictive policing, and automated decision-making in employment or credit scoring. Despite the inclusion of formal safeguards, discriminatory or opaque outcomes remain possible, particularly given how difficult many AI systems are to interpret.

Finally, the AI Act has also been described as lacking agility in the face of rapid technological evolution. Its reliance on predefined categories and exhaustive lists risks becoming obsolete as AI technologies advance. In light of these challenges, a more flexible, principle-based regulatory approach has been proposed to better accommodate emerging innovations without necessitating continuous legislative revision.

In conclusion, while the AI Act is a landmark achievement in the global regulation of artificial intelligence, its implementation is not without difficulties. The regulation aims to balance safety, innovation, and the protection of rights, but it must do so in a legal and technological environment that is subject to rapid change. The success of the Act going forward will depend on careful interpretation, consistent enforcement, and an openness to future refinement. It is only through such responsiveness that the Act can achieve its goal of fostering trustworthy and rights-respecting AI within the EU and beyond.

References

European Commission. (2025, February 4). Approval of the content of the draft Communication from the Commission - Commission Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act) (C(2025) 884 final).

European Parliament and Council. (2024, July 12). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (AI Act). Official Journal of the European Union, L, 1-144.

European Commission. (2020, February 19). White paper on artificial intelligence: A European approach to excellence and trust (COM(2020) 65 final).

European Commission. (n.d.). European approach to artificial intelligence. Retrieved from https://digital-strategy.ec.europa.eu

European Commission. (n.d.). Digital strategy. Retrieved from https://digital-strategy.ec.europa.eu

 

e Silva, N. S. (2024). The Artificial Intelligence Act: Critical overview. arXiv. Retrieved from https://arxiv.org/abs/2409.00264

Disclaimer: The International Journal for Crime, Law, and AI is committed to fostering academic freedom and open discourse. The views and opinions expressed in published articles are solely those of the authors and do not necessarily reflect the views of the journal, its editorial team, or its affiliates. We encourage diverse perspectives and critical discussions while upholding academic integrity and respect for all viewpoints.
