May 14th 2025
Governing Algorithms That Govern Us: Addressing Social Safety and Security Risks in Dutch Law Enforcement
By Olivia V. De Rita
Holds a Bachelor's degree in Political Science, with a specialisation in International Relations and a minor in Conflict Studies. Currently completing a Master's degree in Political Science and International Relations at the University of Amsterdam, with a continued focus on issues related to crime and conflict. Find Olivia V. De Rita on LinkedIn.

Institutions today struggle to keep pace with the rapid, at times unregulated, development of AI. Among these institutions are governments. The widespread adoption of AI systems across societal sectors such as healthcare, law enforcement, and education has brought about concerns surrounding ethics, social safety, and security. In fact, a particular dilemma can be observed in the public sector: how a government can ensure citizen protection from algorithmic harms whilst advancing the adoption and normalisation of algorithms (Kuziemski & Misuraca, 2020).
Addressing this dilemma does not necessarily imply that governments should avoid enhancing their efficiency. On the contrary, this article presupposes that governments should be encouraged to adopt AI in their modes of governance, provided that they implement strict and effective regulatory frameworks. If regulated, these systems become non-harmful, productive tools that reduce financial and temporal costs, two common obstacles for governing institutions. If left under-regulated, however, these systems can exacerbate social inequalities, enable discriminatory policing practices, and erode public trust in democratic institutions.
This short article examines the Netherlands as a case study to illustrate the potential risks and limitations of algorithmic governance in practice. In the Netherlands, the Dutch Police Law sets out two main responsibilities: to protect the rule of law and to aid those in need. These responsibilities are accompanied by the legal authority to use violence to enforce the law, when necessary. This power is effective and faces little resistance only if trust in Dutch society is high; Dutch law enforcement therefore works in tandem with (high) public trust. The police must accordingly work on building and retaining public trust, especially when introducing new technologies such as AI (Zardiashvili et al., 2019).
The application of AI in law enforcement takes several forms and proves beneficial when used to maximise the ability to enforce the law and, consequently, to foster a peaceful society. According to Zardiashvili et al. (2019), these algorithm-based systems include predictive policing, automated monitoring, the (pre-)processing of large amounts of data to aid investigation and prosecution, and communication services for civilians such as chatbots and interactive forms.
If these systems are handled responsibly and monitored with effective risk assessment, the side effects are minimal and the benefits considerable. With proper regulation, AI systems can go as far as enhancing democratic institutions by giving a voice to people who would otherwise find it challenging to climb the political ladder (Landemore, 2023). In this case, AI is an optimal, cost-effective means of providing a platform for that voice and a mode of interaction between institutions and individuals. Problems arise when AI systems cause harm or infringe on people’s privacy, issues that can be avoided through effective regulation, risk assessments, and accountability frameworks.
The Netherlands has experienced notable controversies surrounding the application of AI. The country has been working to combat social security fraud in the wake of political pressure to respond to cases of welfare fraud involving individuals with immigration backgrounds (Kempeneer et al., 2024). A combination of understaffed caseworkers and a growing number of fraud claims, as social security expenditure and the number of welfare recipients increased over time, meant that AI systems were adopted as a shortcut to process the large volume of data submitted through voluntary reports of fraud (Kempeneer et al., 2024).
The Dutch Childcare Benefits Scandal, or Toeslagenaffaire, concerns how the Dutch tax administration discriminated against and violated the rights of circa 35,000 welfare recipients using AI algorithms (Hadwick & Lan, 2021). Hadwick and Lan (2021) report that individuals flagged by an algorithm were targeted with false accusations of fraudulent activity. The algorithm disproportionately assigned higher fraud-risk scores to individuals with non-Dutch nationalities (Zardiashvili et al., 2019). Using nationality data as a risk factor meant that the AI system mirrored the biases within the Tax Authority and among those involved in its design and development, consequently reinforcing the assumption that Western nationals were less likely to commit tax fraud.
Questions have been raised about how the Netherlands, despite its technological advancement, could have erred so gravely as to compromise the rights and safety of its people. Kempeneer et al. (2024) argue that the Dutch system did not fail; rather, it was designed to fail. The system did not suffer from a faulty set of conclusions; rather, it had been fed data by the Dutch tax authority that was tainted with institutionalised racism (Kempeneer et al., 2024). The algorithm selected individuals with targeted nationalities as suspects of fraudulent activity because the people in power had "taught" it to "think" in this manner.
Issues of accountability, together with the risks of oversimplification, discrimination, and selection bias, must be addressed. AI is understood here as a set of machine-learning algorithms that process data, ‘memorise’ it, and use their ‘intelligence’ to apply what has been learned to new information (Raaijmakers, 2019). This training is usually done by humans: human input is fed to an algorithm so that it learns to recognise, for example, a word or a picture elsewhere. Deep learning, a subset of machine learning, allows the algorithm to recognise complex patterns in data. During this process, the algorithm may receive specific information and produce a generalisation about a person, a group, or a business. This form of oversimplification poses risks of false accusations and illegitimate investigations, and consequently invasions of personal privacy. Because the training input comes from humans, selection bias may arise when the individuals and institutions involved hold predisposed, conscious or unconscious biases, leading to tunnel vision and faulty predictions.
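To make the selection-bias mechanism concrete, the minimal sketch below simulates how a risk-scoring algorithm trained on historically skewed investigation data reproduces that skew. The group names, rates, and the naive frequency-based "model" are hypothetical assumptions for illustration only; this is not a description of the system actually used by the Dutch authorities.

```python
# Illustrative sketch of selection bias in a risk-scoring algorithm.
# All data, group names, and rates are synthetic and hypothetical.
import random
from collections import defaultdict

random.seed(42)

GROUPS = ["group_a", "group_b"]          # hypothetical nationality groups
TRUE_FRAUD_RATE = 0.02                   # identical for both groups by construction
INVESTIGATION_RATE = {"group_a": 0.05,   # group_b is investigated far more often:
                      "group_b": 0.40}   # this is the biased human input


def simulate_historical_cases(n=100_000):
    """Historical 'training data': fraud is only recorded if a case was
    investigated, so labels reflect who was looked at, not who actually
    committed fraud."""
    cases = []
    for _ in range(n):
        group = random.choice(GROUPS)
        committed_fraud = random.random() < TRUE_FRAUD_RATE
        investigated = random.random() < INVESTIGATION_RATE[group]
        cases.append((group, committed_fraud and investigated))
    return cases


def learn_risk_scores(cases):
    """A naive 'model': score each group by its recorded fraud frequency."""
    totals, frauds = defaultdict(int), defaultdict(int)
    for group, recorded_fraud in cases:
        totals[group] += 1
        frauds[group] += recorded_fraud
    return {g: frauds[g] / totals[g] for g in totals}


scores = learn_risk_scores(simulate_historical_cases())
for group, score in sorted(scores.items()):
    print(f"{group}: learned risk score = {score:.4f}")
```

In this toy setup the underlying fraud rate is identical for both groups, yet the learned risk score for the more heavily investigated group comes out roughly eight times higher, purely because the training labels encode who was investigated rather than who committed fraud. The same feedback loop, on a larger and more consequential scale, is what the literature on the Toeslagenaffaire describes.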
The Dutch Childcare Benefits Scandal points to a neglected issue. Whether due to rapid development or outdated frameworks, governing innovation often leaves policymakers with grey areas. Under-regulation, and national frameworks lacking risk assessments and accountability mechanisms, can prove extremely harmful to society when these grey areas are used as a tool to compromise freedom, equality, and privacy. While technological advancements in law enforcement and governance drive progress and efficiency, they also undermine previously established principles of accountability and transparency, the foundations of our democratic political system. When societal harm occurs, responsible actors can avoid accountability by attributing the fault to algorithmic systems. This creates a dangerous loophole, as it ignores the fact that algorithms are designed, trained, and applied by humans, meaning that bias or discriminatory intent can be embedded in these systems from the outset. As Kempeneer et al. (2024) argue, the EU can use this scandal as both a lesson and a warning.
References
Hadwick, D., & Lan, S. (2021). Lessons to be learned from the Dutch childcare allowance scandal: A comparative review of algorithmic governance by tax administrations in the Netherlands, France and Germany. World Tax Journal, 13(4), 609-645.
Kempeneer, S., Ranchordas, S., & van de Wetering, S. (2024). AI failure, AI success, and AI power dynamics in the public sector. Available at SSRN.
Kuziemski, M., & Misuraca, G. (2020). AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings. Telecommunications Policy, 44(6).
Landemore, H. (2023). Fostering more inclusive democracy with AI. Finance & Development, 60(4), 12-14.
Raaijmakers, S. (2019). Artificial intelligence for law enforcement: Challenges and opportunities. IEEE Security & Privacy, 17(5), 74-77.
N.B. This article draws on an academic report on the use of AI in Dutch law enforcement.
Disclaimer: The International Journal for Crime, Law, and AI is committed to fostering academic freedom and open discourse. The views and opinions expressed in published articles are solely those of the authors and do not necessarily reflect the views of the journal, its editorial team, or its affiliates. We encourage diverse perspectives and critical discussions while upholding academic integrity and respect for all viewpoints.