

July 17th 2025

Lessons Not Learned: AI Policing and Unresolved Criminal Justice Challenges in England and Wales

By Daniel Adyera


Daniel Adyera is a multidisciplinary criminal justice policy consultant and aspiring criminal solicitor based in the UK, with a strong focus on forensic investigations and emerging legal technologies. He holds a Master of Laws in Legal Practice (SQE1) from Nottingham Trent University, where his dissertation critically examined AI-driven policing and the adequacy of existing legal safeguards in upholding defendants’ fair trial rights. Daniel also holds a Master of Laws in Forensics, Criminology and Law from Maastricht University (Netherlands), a Bachelor of Laws from the University of London (UK), and a Bachelor’s degree in Industrial and Organisational Psychology from Makerere University (Uganda). He has taught digital forensics and criminal investigations at the undergraduate level in Uganda. His research interests lie at the intersection of criminal law, forensic science, artificial intelligence, criminal psychology, and security governance. Find Daniel Adyera on LinkedIn.


Abstract


Contemporary debates on AI policing and criminal justice in England and Wales have largely focused on issues concerning the safe, ethical and responsible use of AI. The aim of this paper is to highlight and analyse overt yet seemingly ignored critical criminal justice challenges associated with the use of AI in policing. It discusses three unintended consequences arising from the use of AI algorithms, particularly for predictive policing and automated facial recognition (AFR), namely: the risk of “crime suspect and evidence shopping”, the risk of criminalising “suspicious behaviour” and “predicted crimes”, and the potential for resource wastage. It advocates for urgent remedial solutions aimed at mitigating such challenges in order to harness the further benefits of using AI in policing.

​

Introduction

​

In R (Bridges) v. Chief Constable of South Wales Police and Others (2019), hereinafter Bridges v. SWP (2019), the applicant, a civil liberties campaigner, challenged the legality of South Wales Police’s (SWP) deployment of live Automated Facial Recognition (AFR) technology at a public event. The deployment was part of a series of SWP trials of its AI-driven technology (AFR Locate), which operates by capturing live digital images of individuals and automatically comparing them with those stored on the SWP’s watch list database. The applicant claimed that the technology’s deployment was incompatible with Article 8 (right to respect for private life) of the European Convention on Human Rights (ECHR), violated data protection laws and the Public Sector Equality Duty (PSED), and was thus unlawful (Bridges v. SWP, 2019).

​

At first instance in the High Court, the applicant’s claim was dismissed. The Court ruled that the SWP’s deployment of AFR technology was lawful and that the interference with the applicant’s Article 8 rights was proportionate and therefore justified (Bridges v. SWP, 2019, para. 108). The Court rejected the argument that the deployment of the AFR technology had no legal basis, holding that it was “in accordance with the law”. It held that the deployment rested on a sufficient legal framework grounded in the police’s common law powers, in addition to the available laws (primary and secondary legislation) and the SWP’s operational policy documents (Bridges v. SWP, 2019, para. 69). On appeal in Bridges v. SWP (2020), whereas the Court of Appeal recognised the police’s common law powers to use various information-gathering methods, including during investigations (Bridges v. SWP, 2020, para. 38), it unanimously found that the SWP’s deployment of the AFR technology lacked the necessary quality of law (Bridges v. SWP, 2020, para. 94). The Court of Appeal held that the deployment did not have a sufficient legal framework (Bridges v. SWP, 2020, para. 90). It reasoned that the SWP’s legal framework had two “fundamental deficiencies” in that it did not sufficiently address the “who question” and the “where question” – a gap the Court noted left wide discretionary powers to individual police officers, who could arbitrarily determine who could be placed on the police watch list or decide on deployment locations without any formal criteria or guidelines (Bridges v. SWP, 2020, para. 91).

​

While summing up, the Court of Appeal considered that, to meet the adequate legal framework requirement for the use of AFR technology, the SWP would need to establish clear operational guidelines and frameworks that were not mere descriptions of how the technology worked (Bridges v. SWP, 2020, paras. 93 and 96). On the issue of data protection, the Court held that personal data may be lawfully processed for law enforcement purposes only to the extent that the purpose is “based on law” and either the data subject has consented to their data being processed for that purpose, or the processing is necessary for the execution of an activity for which such data is required by a competent authority (Bridges v. SWP, 2020, para. 101). Finally, on the PSED claim of a violation of section 149 of the Equality Act 2010, which obligates public authorities to have due regard to the need to eliminate discrimination and promote equality, the Court concluded that the SWP had not reasonably fulfilled its duty under the Act since the technology could not be certified as free from bias (Bridges v. SWP, 2020, para. 201). Recognising AFR’s novelty and controversial nature, the Court of Appeal urged all police forces intending to deploy it in future to do everything reasonably possible to ensure that the technology was free from discrimination (Bridges v. SWP, 2020, para. 201).

​

Bridges v. SWP (2019 and 2020) are not the only cases to challenge the use of advanced technologies by UK law enforcement agencies. Big Brother Watch, a UK surveillance watchdog, also challenged the use of covert intelligence and investigatory technologies in Big Brother Watch and Others v. United Kingdom (Applications Nos. 58170/13, 62322/14 and 24960/15), where the Grand Chamber held that the UK’s surveillance and interception laws breached the right to privacy (Article 8) because they lacked sufficient legal safeguards and oversight mechanisms. Consequently, the UK government amended the Investigatory Powers Act 2016 by passing the Investigatory Powers (Amendment) Act 2024, which incorporated additional safeguards and oversight mechanisms. For instance, the amended Act obligates the Investigatory Powers Commissioner to keep external covert surveillance activities under review through the audit, inspection and investigation of UK forces’ activities, and provides for the appointment of Judicial Commissioners as Deputy Investigatory Powers Commissioners as part of its oversight measures (Investigatory Powers (Amendment) Act 2024, s 10(2)(b) and s 7(2)).

​

In November 2022, the House of Lords Justice and Home Affairs Committee (“the Committee”) reported on an investigation into how algorithms and machine learning systems were applied in the justice sector (Brader, 2022). The Committee found that whereas the use of AI in the justice system had some advantages, such as improved efficiency, higher productivity and faster problem solving, it nevertheless raised concerns about its use in the criminal justice system, citing issues such as limited safeguarding rules and standards, AI’s technical opacity, and police officers’ inadequate knowledge of and lack of training in AI, which it contended could jeopardise human rights and civil liberties (Brader, 2022). The Committee also noted that there was a lack of clear legal and institutional frameworks to coordinate and regulate the use of AI in the justice sector, even though the technology posed serious challenges, particularly to human rights (Brader, 2022). It recognised that algorithms have the potential to manipulate evidence, creating grave dangers to defendants’ right to a fair trial (Brader, 2022). In response, the UK government labelled the Committee’s findings a “mischaracterisation” – stressing that humans, and not AI, would make decisions regarding arrests, charge and prosecution (Brader, 2022). However, the government did not specify measures guaranteeing that AI would be a fair, credible and reliable policing and criminal justice tool.

​

The use of Artificial Intelligence (AI) tools and systems in policing has created mixed feelings of excitement, optimism and anxiety (Ezzeddine et al., 2023). AI has been lauded as a formidable and efficient security and law enforcement tool, assisting police with several administrative and field tasks (Mandalapu et al., 2023). However, this has not been without parallel scepticism and criticism of the technology’s legal, moral and ethical use, owing to its susceptibility to error, fabrication, prejudice, opacity and lack of accountability (Ganesan, 2024). Doubts have also been raised about AI’s overall efficacy and effectiveness, especially in upholding and promoting human rights and the rule of law (Fair Trials, 2021). Fair Trials, an international non-governmental organisation, has argued that AI algorithms should not be completely trusted in the administration of criminal justice because they are susceptible to hallucination and can manipulate evidence in ways that negatively affect individual human rights, especially those of minorities and underprivileged members of society, who they argue constitute the bulk of AI algorithm training data (Fair Trials, 2021). Their concern about the police’s use of AI in activities such as predictive analytics stems from the technology’s lack of transparency and accountability, especially when it is used to make important decisions that have a profound impact on individual lives. They also argue that AI has the effect of perpetuating existing social biases, discrimination and other prejudices, and thus contend that it is not fundamentally suitable for the administration of criminal justice. In the US, for instance, algorithm-driven risk assessment tools like COMPAS have already been accused of sending innocent defendants to prison (Hao, 2019). In England and Wales, a similar tool, Durham Police’s Harm Assessment Risk Tool, was heavily criticised as unreliable and biased, and thus incompatible with the dictates of the rule of law and the proper administration of justice, leading to its discontinuation (Big Brother Watch, 2020).

​

The AI revolution is recalibrating and reshaping every aspect of human activity, and policing is no exception: the technology’s influence seems unstoppable, moving at tremendous speed, continuously penetrating new ground, testing barriers and redefining boundaries in England and Wales. However, the rate at which AI has been fast-tracked and integrated within the policing infrastructure has outpaced the development of the legal rules necessary to regulate its life cycle from inception to termination. Such rules would, firstly, help to determine the necessity, nature and extent of using AI to police a democratic society – something the state ought not to dictate arbitrarily, given the technology’s intrusiveness and the potential harms that could arise from its use. Secondly, the rules would not only regulate its design, deployment and implementation but also stipulate adequate accountability and oversight mechanisms (Anny, 2021), thus progressively plugging gaps within the existing laws. Thirdly, and arguably most importantly, the rules could stipulate important matters such as proportionate AI use and when police must cease or terminate its operation. Such a rule would transform the use of AI in policing from a nonstop routine endeavour into a piecemeal activity akin to a project with a specified life cycle. With adequate and relevant regulatory rules, a thorough and meaningful audit of the use of AI in policing becomes possible. This may help with tracking, stock-taking, troubleshooting and revising the use of AI in policing. The aim of this paper is therefore to explore subtle traces of unintended consequences related to the use of AI in policing and the administration of criminal justice in England and Wales.

​

The AI revolution in policing


Historically, policing was a highly human-led and labour-intensive activity in which, for example, police officers had to rely on traditional information and intelligence sources such as informants, tip-offs, eyewitnesses and undercover operations to help detect, prevent and solve crimes (Pereira et al., 2021). Technological advancement through AI, however, has ushered in new tools and methodologies for automating policing, thus improving efficiency and accuracy in law enforcement (Mandalapu et al., 2023). AI currently enjoys widespread use by police and law enforcement agencies in England and Wales and around the world, ranging from simplifying administrative tasks, resource planning and allocation to predicting crime, detecting criminals through facial recognition, and assessing and profiling individuals based on their risk of offending (Kaufmann, 2024). However, the AI optimism and excitement are not universally shared (Ezzeddine et al., 2023; Mandalapu et al., 2023).

​

Fears and concerns about AI’s prominence in policing stem from at least three critical angles. Firstly, whereas AI is at a nascent stage of development and still marred by technical infirmities such as the potential for bias and discrimination and proneness to hallucination and falsity, it has nevertheless been fast-tracked and integrated to become an indispensable law enforcement tool – widely deployed but without prior public knowledge or acquiescence. According to Zilka et al. (2022), this lack of transparency in the deployment of AI tools and systems is prevalent in the UK. Further, concerns have been raised as to whether such tools have undergone elaborate tests, trials and impact assessments to determine their suitability for implementation in the administration of criminal justice (Fair Trials, 2021). For instance, Fair Trials (2022) contended that Durham Police’s Harm Assessment Risk Tool (HART) was unfit for purpose. The use of the tool was subsequently terminated, but without accountability for its failures, even though thousands of individuals had been wrongly predicted by the botched system as being “high risk” and thus missed out on the leniency programme (Fair Trials, 2022).

​

Secondly, there is profound unease about the lack of adequate formal regulatory and accountability frameworks aimed at providing robust safety guarantees and ensuring the responsible use of AI in predictive policing (Blount, 2023). Although attempts have been made to regulate AI use in law enforcement through comprehensive formal regulations, for instance the European Union’s Artificial Intelligence Act, England and Wales lacks such robust regulations. In 2023, the UK government hosted the first international summit on AI safety, attended by world leaders, AI technology developers and policy researchers, to discuss how to harness AI safely and avert its potential threats or harm to humanity (University of Oxford, 2023). The government has crystallised its ideas on the development of safe and responsible AI technologies around five principles, namely: “safety, security and robustness”; “appropriate transparency and explainability”; “fairness”; “accountability and governance”; and “contestability and redress” (GOV.UK, 2023). In a rather remarkable twist, at the February 2025 AI global summit in Paris the UK government declined to sign an international declaration on open, inclusive and ethical AI development, arguing that the declaration did not sufficiently address national security issues (Carroll, 2025). Whereas the UK government does have legitimate security concerns, discussions on establishing rules for ethical and responsible AI development are a critical step towards addressing AI’s unresolved concerns – and, more so, a positive move towards the development of robust local regulations. However, the proposed Artificial Intelligence (Regulation) Bill 2025, sponsored by a private member, represents a renewed interest in formal AI regulation in the UK, as it underscores the urgent need for formal governance of the technology in light of calls for international regulation. Being a private member’s Bill, its progress will be a test of the UK government’s commitment to robust AI regulation through statutory rules, given that the government’s regulatory approach has so far favoured informality.

​

On 28 September 2023, the UK’s National Police Chiefs’ Council (NPCC) adopted an AI covenant outlining a set of six principles and setting out its governance framework for the adoption and use of AI in policing (NPCC, n.d.). The principles are: Lawful (Principle A); Transparent (Principle B); Explainable (Principle C); Responsible (Principle D); Accountable (Principle E); and Robust (Principle F). The Covenant recognises AI’s persistent issues and stresses the importance of transparency and fairness in its integration and deployment in law enforcement work, in order to ensure its responsible use and increase public confidence in its use in policing (NPCC, n.d.). The Covenant alludes to the central role of AI in the NPCC’s policing agenda, but it does not detail how the NPCC will handle persistent issues and challenges related to AI itself as an autonomous agent, or the potential for its misuse and abuse by police personnel, who appear to have wide discretionary deployment powers, as highlighted by the Court of Appeal in Bridges v. SWP (above). In the same case, the Court of Appeal implored police forces to make efforts towards ensuring that the novel technologies they implemented were consistent with the rule of law (Bridges v. SWP, 2020). Reconciling and aligning the police’s AI agenda with the legal standards and requirements in the proposed AI Bill would be an ideal place for the NPCC to start.

​

And thirdly, there is growing concern about what may be termed “AI deployment dictatorship”. This term is used to describe the government’s or public authorities’ wide and largely unchecked discretionary powers to deploy AI tools and systems without sufficient public knowledge, acceptance or participation – an effect of the lack of transparency that Zilka et al. (2022) highlighted. Even where AI deployment has been publicised in advance, as in Bridges v. SWP (cited above), research shows that in many situations the public is usually unaware, unsuspecting and oblivious of the existence of AI policing tools or their deployment (Zilka et al., 2022). Further reports have revealed a more sophisticated use of AI by UK police forces (Milmo, 2024). According to Milmo (2024), at least 13 UK police forces use iVe, an automated covert car data extraction software system developed by the US-based Berla Corporation that can download vast amounts of data for the police. According to experts, modern cars contain about 75 computer systems and can generate approximately 25GB of data per hour – a potential resource for police investigators (Milmo, 2024). However, when pressed for further information and confirmation about the use of iVe under the Freedom of Information rules, two-thirds of UK police forces neither admitted nor denied using such software – only two forces, Derbyshire and Gwent Police, confirmed use of the iVe system since 2018 (Milmo, 2024).

​

Honest and full disclosure of the police’s use of AI – a highly sensitive and potentially intrusive technology – would be a fundamental step towards embracing institutional accountability where data and privacy rights are breached. This is a feature highly valued and necessary in modern democratic societies. Fair Trials has decried clandestine police conduct in which police fail to disclose the use of technology in law enforcement activities, describing it as “unacceptable secrecy” (Fair Trials, 2021). It has also advocated for strict regulation and independent oversight. These issues remain largely unresolved in many countries, including England and Wales – exacerbating the accountability and responsibility gaps in AI policing. Despite its clear benefits to law enforcement, AI is generally feared to have potential negative consequences that could prove costly to the public if left unrestrained by strict and robust legal and accountability frameworks (Big Brother Watch, 2019).

​

A brief overview of AI use in Policing in England and Wales

​

AI is now everywhere, underpinning many human activities (Klie, 2023). Susan Fourtané suggests that the technology is here to stay and that its influence and impact on modern life is unstoppable (Fourtané, 2019). In England and Wales, AI has morphed into a formidable law enforcement companion whose capabilities are improving with continuous refinement. It is a newfound gem for criminal investigations, evidence processing, and crime and offender risk prediction. AI’s efficiency is saving police enormous amounts of time and resources (Muir & O’Connell, 2025). For example, the 2023 Police Productivity Review reported that the use of automated data redaction tools could save approximately 618,000 hours of police staff time spent on manual redaction tasks (Muir & O’Connell, 2025). Bedfordshire Police, a pioneer user of DocDefender, found that this automated redaction tool saved time when sending documents containing sensitive information to the Crown Prosecution Service (Muir & O’Connell, 2025). With administrative time freed up, police can more swiftly discharge their legal obligations under the Criminal Procedure Rules 2020.
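
By way of illustration only, the sketch below shows the simplest form of rule-based redaction that such tools automate; the patterns, function name and sample text are hypothetical and are not drawn from DocDefender or the Police Productivity Review.

```python
import re

# Hypothetical patterns: real redaction tools rely on far richer rule sets
# and trained models than these two regular expressions.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\b0\d{9,10}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive spans with labelled placeholders before disclosure."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Contact the witness at jane.doe@example.com or 07700900123."))
```

The point is not the sophistication of the rules but the scale: applying even simple automated rules across thousands of case files is where the reported time savings come from.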

​

AI imaging and geo-data analytics have also helped police to locate missing persons, identify human trafficking victims, predict and map crime hotspots, and make assessments about an individual’s likelihood of offending or being victimised (Muir & O’Connell, 2025). Kent Police used the algorithmic crime prediction software Geolitica (formerly PredPol), developed by a US company, for five years before abandoning it when it allegedly turned out to be costly (Big Brother Watch, 2019). Nevertheless, the use of AI and other automated systems in policing in England and Wales is only on the rise, with more investment predicted to ensure that the policing vision dubbed “Future Operating Environment 2040” is achieved (College of Policing, 2020).
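
At their simplest, geographic crime-prediction tools of this kind aggregate past incident locations into spatial cells and flag the densest cells for patrol attention. The toy sketch below illustrates only that basic idea; the coordinates, cell size and counting logic are invented and bear no relation to how Geolitica or any named system actually worked.

```python
from collections import Counter

# Toy incident coordinates (longitude, latitude) and grid size: all invented.
incidents = [(0.52, 51.27), (0.53, 51.28), (0.54, 51.26), (0.90, 51.50)]
CELL = 0.05  # cell width in degrees

def cell_of(lon: float, lat: float) -> tuple[int, int]:
    """Map a coordinate onto a coarse grid cell."""
    return (int(lon // CELL), int(lat // CELL))

counts = Counter(cell_of(lon, lat) for lon, lat in incidents)
# The densest cell becomes a candidate "hotspot" for patrol planning.
print(counts.most_common(1))
```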

​

The AI law enforcement industry is expanding rapidly, with growing public-private partnerships to innovate and harness AI for law enforcement purposes. In England and Wales, some police forces have developed their own AI tools and systems, whereas others have partnered with private vendors to procure AI software and related services. Notable examples of AI policing systems developed by local forces include the West Midlands Police’s National Data Analytics Solution (NDAS), Durham Police’s HART, and Avon and Somerset’s risk assessment algorithmic decision-making system (NPCC, n.d.). According to the NPCC, all police forces in England and Wales use data analytics and 15 of them deploy “advanced data analytics” capabilities, though these are mainly centred on administrative efficiency through automated workforce planning and organisation (NPCC, n.d.). South Wales Police (SWP) partnered with NEC Software Solutions UK Ltd and uses its software to operate its Automated Facial Recognition (AFR) technology, known as AFR Locate (NPCC, n.d.), which is expected to be rolled out across the UK once trials are complete.

​

AI system development and why it remains a thorny criminal justice issue

​

According to Steidl et al. (2023), the product life cycle of AI systems is based on four critical pipeline steps in their design and development. In the first phase, system developers should clearly define the problem to be solved and establish the objectives the AI is to achieve (Steidl et al., 2023). This first step helps to ensure that the AI system is designed for a desired and achievable purpose and does not inadvertently create or exacerbate the very problems it is intended to resolve. Secondly, the quantity and quality of the algorithm’s training data must be beyond question (Whang et al., 2023). This helps to ensure that AI learning is based on quality training data, increasing the integrity and reliability of its output (Steidl et al., 2023). Thirdly, elaborate and sufficient trials, for example through regulatory sandboxes, ought to be carried out to test the veracity of the AI as applied in real-life settings (Truby et al., 2021). And fourthly, model validation and iterative refinement for continuous improvement, based on evaluation after real-world trials, are important to maintain the credibility of the AI’s output (Fair Trials, 2021).
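
The four phases can be read as a simple deployment gate: a system should not go live until each phase has produced documented evidence. The sketch below is one hedged way of expressing that idea; the field names, example values and readiness check are assumptions made for illustration, not a standard prescribed by Steidl et al. (2023) or any regulator.

```python
from dataclasses import dataclass, field

@dataclass
class AIPipelineRecord:
    """One record per system: evidence gathered at each of the four phases."""
    problem_statement: str                                   # 1. defined problem and objectives
    data_quality_checks: list = field(default_factory=list)  # 2. provenance, bias and consent audits
    sandbox_trials: list = field(default_factory=list)       # 3. controlled, real-world-style trials
    validation_metrics: dict = field(default_factory=dict)   # 4. evaluation and iterative refinement

    def ready_for_deployment(self) -> bool:
        # A simple gate: refuse deployment until every phase has produced evidence.
        return bool(self.problem_statement and self.data_quality_checks
                    and self.sandbox_trials and self.validation_metrics)

record = AIPipelineRecord(
    problem_statement="Prioritise redaction of sensitive material in case files",
    data_quality_checks=["provenance audit", "demographic bias review"],
    sandbox_trials=["three-month shadow deployment"],
    validation_metrics={"accuracy": 0.91},
)
print(record.ready_for_deployment())  # True only once all four phases are evidenced
```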

​

The above AI system development phases are all equally important because they establish the grounds and yardstick by which an innovation is judged successful or not (Knauf et al., 2000). For instance, without clear objectives, an explicit purpose and a well-defined problem, an AI algorithm may be deemed useless as it lacks purpose. Furthermore, the training data must have been legally collected and must comply with data privacy regulations. Even more importantly, training data must be pure in the sense that it is not “adulterated” during curation in order to make it suit the design purpose (Whang et al., 2023). The need for purposeful AI design and development, together with elaborate trials and tests in realistic scenarios, thus reflects the desire for innovations that serve their design purpose without unnecessarily complicating matters, jeopardising rights, or creating unintended or accidental consequences.

​

Unresolved criminal justice challenges


This section highlights and analyses unresolved criminal justice challenges arising from the use of AI in policing.

​

i) Risk of “crime suspect shopping” and “evidence shopping”

​

AI infrastructural designs tend to incorporate back-end development protocols which allow a back-end developer to perform routine database management and algorithm implementation, and to check the functionality of AI model features and application programming interfaces (APIs) (Agha, 2025). According to Agha (2025), developers can also carry out scalability and performance tests to assess and fast-track machine learning optimisation so that the model fulfils the needs of the application. Essentially, a back-end developer’s duties include troubleshooting to ensure that the developed system functions as intended. For example, ChatGPT, a generative AI (Gen AI) chatbot, was retrospectively “banned” by its developers from predicting the outcome of the July 2024 UK general election (Taaffe-Maguire, 2024).

​

When asked which party would win the election, the AI chatbot predicted that the Labour party would win 467 seats, that the Conservative party would lose with 101 seats, and that 46 seats would go to the Liberal Democrats (Taaffe-Maguire, 2024). However, whereas the chatbot correctly predicted a Labour win and a Conservative loss, it nevertheless fabricated the figures. When asked to predict the outcome of the November 2024 US presidential election, the chatbot predicted a Joe Biden victory, yet in reality Biden stepped down for Kamala Harris, who subsequently lost to Donald Trump. Fearing a possible AI “prediction blunder”, ChatGPT’s developers replaced the chatbot’s source of information (Wikipedia) with more trusted sources such as the UK Electoral Commission (Taaffe-Maguire, 2024). The move demonstrates the scope for back-end functionality amendment, opening avenues for possible algorithmic manipulation to suit developers’ needs.

​

The ability to amend or adjust an AI system’s database and other algorithmic infrastructure or model functionalities through back-end development may create opportunities for adjusting AI output, thus raising subjectivity concerns. For instance, in Bridges v. SWP (2019), it was explained to the court that the AFR technology matched captured images with those on the watch list by generating a “similarity score”, whereby a higher score indicated a greater likelihood of a positive match between the images, and vice versa. To return a match, the software is assigned a predetermined threshold value: if the threshold is set too low, there is a heightened risk of a “false alarm rate”, that is, the percentage of false matches returned by the software. Conversely, if the threshold value is set too high, the risk of a “false reject rate” increases – the percentage of real matches that are ignored or not returned by the software. It was also revealed in the case that the threshold value of the AFR system is principally predetermined by the manufacturer depending on the system’s intended use, and that the end-user could also, if they wished, alter the threshold value to suit their needs.
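
A minimal numerical sketch of this trade-off is set out below; the similarity scores are invented and the calculation is a simplification, not a description of how AFR Locate or NEC’s software computes its rates.

```python
# Invented similarity scores: "genuine" pairs are probes of people genuinely on
# the watch list; "impostor" pairs are probes of people who are not.
genuine_pairs = [0.91, 0.84, 0.77, 0.62]
impostor_pairs = [0.12, 0.33, 0.41, 0.58]

def rates(threshold: float) -> tuple[float, float]:
    """Return (false match rate, false reject rate) at a given threshold."""
    false_match = sum(s >= threshold for s in impostor_pairs) / len(impostor_pairs)
    false_reject = sum(s < threshold for s in genuine_pairs) / len(genuine_pairs)
    return false_match, false_reject

for t in (0.5, 0.6, 0.7, 0.8):
    fmr, frr = rates(t)
    print(f"threshold={t:.1f}  false match rate={fmr:.2f}  false reject rate={frr:.2f}")
```

Lowering the threshold inflates the false match rate, while raising it inflates the false reject rate – precisely the balance described to the court.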

​

The possibility of back-end infrastructural amendment or adjustment of an AI policing tool, such as those used for risk assessments, predictive analytics and automated facial recognition, is therefore not far-fetched. Back-end algorithm manipulation intended to corrupt a system’s function so that it returns a desired output to suit the needs of its operator may lead to “crime suspect shopping” and “evidence shopping”. The phrase “crime suspect shopping” refers to a situation where law enforcement agencies such as the police have no suspect on their radar and have exhausted their investigative leads, and then manipulate the AI system to “shop” for a suitable suspect. This could be as simple as resetting the similarity threshold to increase the chances of a positive match when using a technology like AFR. The phrase “evidence shopping”, on the other hand, denotes a situation where, for instance, the police have a suspect on their radar but lack (sufficient) evidence to proceed against them, and then manipulate the AI system in an attempt to “shop” for the evidence needed to justify further action. The idea underlying “crime suspect shopping” and “evidence shopping” was mooted by the Court of Appeal in Bridges v. SWP (2020), where the Court was particularly critical of the SWP’s deployment of the AFR system without answering the “who” question, which it was concerned handed the police wide powers to “shop” for suspects who could be placed on the watch list and thus generate the grounds to stop and question them. Given the possibility of amending algorithm output, as was done in the ChatGPT election example (discussed above), and the ability to predetermine likelihood ratios in score-based tools like the AFR system and other risk assessment tools, the risk of AI manipulation, or its potential abuse and misuse to return preferential output in line with a developer’s desires or a user’s expected outcomes, cannot be overlooked. Crime suspect and evidence shopping would be an abuse of the criminal investigation process and a perversion of criminal justice. In addition, such practices would breach fundamental principles of criminal justice such as the presumption of innocence and the right to a fair trial.
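
To make the concern concrete, the toy continuation below shows how the number of watch-list “matches” returned for a single probe image grows as an operator lowers the threshold; the names and scores are invented and do not describe any deployed system.

```python
# Invented similarity scores between one probe image and four watch-list entries.
watchlist_scores = {"person_A": 0.33, "person_B": 0.41, "person_C": 0.58, "person_D": 0.87}

def matches(threshold: float) -> list:
    """Watch-list identities returned as 'matches' at the chosen threshold."""
    return [name for name, score in watchlist_scores.items() if score >= threshold]

print(matches(0.8))  # a tight setting returns only the strongest candidate
print(matches(0.5))  # loosening the threshold adds another
print(matches(0.3))  # loosened far enough, every entry becomes a "match"
```

Nothing about the probe image changes between the three calls; only the operator-chosen parameter does, which is why unchecked discretion over such settings matters.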

​

ii) Criminalisation of “suspicious behaviour” and “predicted crimes”

​

Whereas the use of AI is reshaping modern policing and criminal investigations, its ripple effects on the quality and functioning of the criminal justice system merit closer scrutiny, especially as regards how the technology interacts with fundamental social values such as the human and legal rights of suspects and defendants (Min, 2022). Criminal justice is a retrospective process whose objectives include punishing those who have breached the criminal law (Ashworth & Zedner, 2012). It does not aim to punish or reprimand individuals for behaving “suspiciously”. Nor does it have any business punishing individuals who are, for instance, merely thinking about crime (mens rea) without actually committing it (actus reus). As such, individuals ought not to be punished, or even threatened with the risk of punishment, for crimes they have not yet committed, even if the state suspects that they may commit them in future. This principle is well buttressed in the legal maxim nulla poena sine lege – no punishment without law (Husak, 2010).

​

In R v. Looseley [2001] UKHL 53, it was held that courts have a duty to protect citizens from any overreach of policing or prosecutorial conduct they believe to be contrary to the rule of law. However, police use of AI crime prediction and facial recognition tools may lead to the criminalisation and punishment of “suspicious behaviour” and “predicted crimes” through the imposition of (invisible) punishments such as digital surveillance (Fair Trials, 2021). Digital surveillance and other informal punishments, such as frequent stop and searches, may amount to extrajudicial conviction and punishment (Fair Trials, 2021). In the Netherlands, research conducted by Fair Trials (2021) found that the use of offender prediction algorithms such as ProKid and the Top600 list led to the unfair and discriminatory targeting and arrest of mostly poor young Dutch-Moroccans whom the police merely suspected or believed to be offenders.

​

In England and Wales, the use of Durham Constabulary’s machine learning algorithmic tool (HART) for assessing and profiling individuals’ risk of reoffending using AI-generated scores was found to be discriminatory and unfair, having overestimated people’s likelihood of offending (Big Brother Watch, 2019). The tool was used to classify people based on their suspected future criminal risk (Fair Trials, 2022). It was reported that the algorithm’s training data was tainted with biased stereotypes that clustered individuals based on their personal characteristics and heritage (Big Brother Watch, 2019). A Big Brother Watch investigation into the use of HART found that the tool was capable of discriminatory profiling, yet it was used to make important criminal justice decisions – categorising individuals as “low risk”, “medium risk” or “high risk” – which have a profound impact on individuals’ lives (Big Brother Watch, 2019).

​

In a separate investigation, Fair Trials (2022) found that 3,292 individuals were flagged by HART as high risk, increasing their chances of being prosecuted compared with those considered to be low or medium risk. A fundamental concern with such automated, machine-based decision making is that its predictions do not result from actual occurrences of events – that is, they are not “action-based” outputs or the result of some human behaviour, but rather the mathematical output of algorithms built on correlation analysis of historical crime data (Hao, 2019). Since such data may consist of the criminal records of other persons, making a prediction about an individual’s propensity to criminality based on crime data unrelated to them is an imaginative overstretch of their future behaviour. Some may defend this as a cautious and proactive policing approach, but it may also lead to injustice through the criminalisation of suspected behaviour and predicted crimes, perpetuating the over-criminalisation problem.
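
The mechanics behind such categorisation can be illustrated very simply. HART itself used a random-forest model, so the sketch below is only a stylised stand-in: the score bands and example scores are invented, but they show how a correlation-derived number, rather than anything the individual has done, determines the label that follows them through the system.

```python
def risk_band(score: float) -> str:
    """Map a model score in [0, 1] to the kind of category used in custody decisions."""
    if score >= 0.7:
        return "high risk"
    if score >= 0.4:
        return "medium risk"
    return "low risk"

# The score is a statistical correlate drawn from historical data about other
# people; the band, not any act by this individual, drives what happens next.
for score in (0.35, 0.41, 0.69, 0.72):
    print(score, "->", risk_band(score))
```

Note how scores of 0.69 and 0.72 differ only marginally yet fall into different bands with very different consequences for the person concerned.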

​

iii) Potential for resource wastage

​

Criminal justice responses are, without doubt, expensive. The system’s functioning is heavily dependent on the availability of adequate resources to sustain it from the inception of investigations through trial to incarceration (Cohen, 2020). As a resource-intensive process subject to occasional budgetary constraints, wastage ought to be reduced or, where possible, avoided altogether. And as a reactive process, the criminal justice response ought to be invoked sparingly – normally upon actual infractions or real breaches of the criminal law or social order (Ashworth & Zedner, 2012). Purely proactive policing has little to do with the criminal justice response. At best, it is a cautious endeavour aimed at promoting safety and reinforcing security in communities, because proactive policing has limited criminal justice response value if no actual crime is detected. For example, Geolitica, a geographic crime prediction tool used by Kent Police between 2013 and 2018 to assess and foretell potential crime hotspots, was discontinued after it was found to be less helpful in reducing crime than anticipated, despite heavy investment in the technology (BBC, 2018). Other tools, like AFR, have dual value in the sense that they may be used for both proactive (surveillance) and reactive (investigative) policing, and hence have a crucial criminal justice response value.

​

After implementing Geolitica for some time, Kent Police admitted that the software was effective at “proactive” policing but not at tackling crime (BBC, 2018). Chris Carter, the Kent Police Federation Chairman, revealed that police officers did not have enough time to use the system owing to increased crime rates and limited resources (BBC, 2018). Durham Police ceased using HART, citing resource constraints brought about by the need for routine refinement and upgrading of the model to bring it within the confines of ethical compliance and oversight guidelines (Fair Trials, 2022). The argument here is not a denigration of proactive or preventive policing, which has obvious benefits to society, but an economic one for the directed and effective utilisation of limited criminal justice resources – the very reason touted for the continued deployment and implementation of AI in policing.

​

Automated proactive policing should not necessarily lead to an immediate criminal justice response without external contextual information that is both reasonable and justifiable. A contrary practice would not be an effective use of limited policing resources. For example, if an AI policing tool like NDAS predicts that an individual has a high likelihood of committing a serious violent crime, the value of that information lies in proactive police intervention where reasonably necessary. Proactive police actions that do not amount to reasonable and objective suspicion have limited immediate actionable criminal justice response value and cannot alone invoke the machinery of the criminal justice system (Police and Criminal Evidence Act 1984, s 1(3)). As a reactive process governed by strict normative procedural rules, there must be sufficient justification to invoke a criminal justice response. In addition, such action must have a sound legal basis and be reasonably justified.

​

Furthermore, unlike human decisions, which can readily be explained and, in the event of error, immediately reviewed and reversed, saving time and resources, the consequences of technological errors can be colossal and devastating, with huge costs to rectify. For example, statistics revealed by Fair Trials showed that 12,200 individuals were assessed using Durham Police’s botched algorithmic tool (HART) 22,265 times within five years, yet its risk prediction accuracy rate had already dwindled to a paltry 53.8% (Fair Trials, 2022). A senior official at Fair Trials lamented that a large number of individuals had been profiled using a “flawed police AI system” and doubted whether they had been informed that their criminal charge decisions were influenced or determined by an AI algorithm (Fair Trials, 2022). Additionally, there are serious challenges in explaining or reviewing AI system output owing to the largely unfathomable nature of AI decision-making processes – the so-called black box challenge (von Eschenbach, 2021). Worse still, in the (very possible) event that an AI algorithm has fabricated or overestimated its predictions, for instance owing to hallucination, there may be little or no room for review – and any review that was commissioned would itself consume considerable resources, even though justice demands expediency (Criminal Procedure Rules 2020, para. 1.1(2)(f)).
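
Reading the 53.8% figure as a simple hit rate across all assessments gives a rough sense of scale; the calculation below is only a back-of-the-envelope illustration of the figures cited above, not Fair Trials’ own methodology.

```python
assessments = 22_265   # total HART assessments reported by Fair Trials (2022)
accuracy = 0.538       # reported prediction accuracy

incorrect = assessments * (1 - accuracy)
print(f"Roughly {incorrect:,.0f} of {assessments:,} assessments would have been wrong")
```

On that simple reading, something in the order of ten thousand assessments would have been erroneous, each one potentially requiring human time and resources to identify and undo.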

​

Conclusion


The use of AI for crime prediction and facial recognition has become invaluable to police forces in England and Wales, for obvious reasons. Increased globalisation and rapid technological advancement have created convenient avenues for criminals to perpetrate crimes with minimal risk of detection and identification. Investing in AI technologies has enabled the swift detection, investigation and prosecution of such crimes. Unlike criminals, the police have legal duties and moral obligations to fulfil in the execution of their law enforcement activities. As such, their use of AI must be sanctioned by law. This was the Court of Appeal’s position in Bridges v. SWP (2020). The Court also noted the complexity of AI and stressed the need for a robust legal framework to regulate its use in policing. However, whereas regulation is much needed, attention must also be paid to the consequences of its use that might otherwise be overlooked. While regulation may provide much-desired control, a more comprehensive policy review is required to offer sustainable remedies that mitigate the unintended consequences of implementing AI in policing and allow its full potential benefits to be reaped.

​

References

​

Agha, A. S. (2025). Evaluating AI efficiency in backend software development: A comparative analysis across frameworks. (Master’s thesis). Jyväskylä University of Applied Sciences, Finland.


Akpobome, O. (2024). The impact of emerging technologies on legal frameworks: A model for adaptive regulation. International Journal of Research Publication and Reviews, 5(10), 5046–5060.


Anny, D. (2021). Ethical and legal challenges of AI decision-making in government and law enforcement. ResearchGate. https://www.researchgate.net/publication/371764380_Ethical_Decision-
Making_in_Law_Enforcement_A_Scoping_Review


Ashworth, A., & Zedner, L. (2012). Prevention and criminalization. New Criminal Law Review, 15 (4), 542–571.


BBC. (2018, November 26). Kent police stop using crime predicting software. BBC News. https://www.bbc.co.uk/news/uk-england-kent-46345717


BBC. (2025, May 17). Facial recognition: Cameras to be mounted on Croydon street furniture. BBC News. https://www.bbc.com/news/articles/c5y913jpzwyo


Big Brother Watch. (2019). Big Brother Watch’s written evidence on algorithms in the justice system for the Law Society’s Technology and the Law Policy Commission. https://bigbrotherwatch.org.uk/wp-content/uploads/2019/02/Big-Brother-Watch-written-evidence-on-algorithms-in-the-justice-system-for-the-Law-Societys-Technology-and-the- Law-Policy-Commission-Feb-2019.pdf


Big Brother Watch. (2020). Big Brother Watch briefing on algorithmic decision-making in the criminal justice system. https://bigbrotherwatch.org.uk/wp-content/uploads/2020/02/Big-
Brother-Watch-Briefing-on-Algorithmic-Decision-Making-in-the-Criminal-Justice-System-February-2020.pdf


Brader, C. (2022). AI technology and the justice system: Lords Committee report. House of Lords Library. https://lordslibrary.parliament.uk/ai-technology-and-the-justice-system-lords-committee-report/


Brown, D. K. (2014). The perverse effects of efficiency in criminal process. SSRN. https://ssrn.com/abstract=2383504

Buocz, T., Pfotenhauer, S., & Eisenberger, I. (2023). Regulatory sandboxes in the AI Act: Reconciling innovation and safety? Law, Innovation and Technology, 15(2), 357–389.


Button, M., Shepherd, D., Blackbourn, D., et al. (2022). Assessing the seriousness of cybercrime: The case of computer misuse crime in the United Kingdom and the victims’ perspective. Criminology & Criminal Justice, 25(2), 670–691.


Carroll, M. (2025, May 11). Why the UK didn’t sign up to global AI agreement. Sky News. https://news.sky.com/story/why-the-uk-didnt-sign-up-to-global-ai-agreement-13307926


Cohen, M. A. (2020). The costs of crime and justice. Routledge.


College of Policing. (2020). Policing in England and Wales: Future operating environment 2040. https://assets.college.police.uk/s3fs-public/C147I0820_FOE%202040_User
%20guide.pdf


Das, S. (2024, April 21). Sex offender banned from using AI tools in landmark UK case. The Guardian. https://amp.theguardian.com/technology/2024/apr/21/sex-offender-banned-from-using-ai-tools-in-landmark-uk-case


Dearden, L. (2024, November 24). AI increasingly used for sextortion, scams and child abuse, says senior UK police chief. The Guardian. https://www.theguardian.com/technology/2024/nov/24/ai-increasingly-used-for-sextortion-scams-and-child-abuse-says-senior-uk-police-chief


Ezzeddine, Y., Bayerl, P. S., & Gibson, H. (2023). Safety, privacy, or both: Evaluating citizens’ perspectives around artificial intelligence use by police forces. Policing and Society, 33(7), 861–876.


Fair Trials. (2021). Automating injustice: The use of artificial intelligence & automated decision-making systems in criminal justice in Europe https://www.fairtrials.org/app/uploads/2021/11/Automating_Injustice.pdf


Fair Trials. (2022). FOI reveals over 12,000 people profiled by flawed Durham Police
predictive AI tool. Fair Trials. https://www.fairtrials.org/articles/news/foi-reveals-over-
12000-people-profiled-by-flawed-durham-police-predictive-ai-tool/


Fourtané, S. (2019, February 27). The three types of artificial intelligence: Understanding AI. Interesting Engineering. https://interestingengineering.com/innovation/the-three-types-of- artificial-intelligence-understanding-ai


Ganesan, A. (2024). Ethical use of AI in criminal justice system. In Advances in Computational Intelligence and Robotics (pp. 337–366).

Greater Manchester Police. (2024, October). Man who created indecent images using AI-enabled technology sentenced to 24 years. Greater Manchester Police. https://www.gmp.police.uk/news/greater-manchester/news/news/2024/october/man-who-created-indecent-images-using-ai-enabled-technology-sentenced-to-24-years/


Hao, K. (2019, January 21). AI is sending people to jail—and getting it wrong. MIT Technology Review. https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/


Husak, D. N. (2010). Overcriminalization: The limits of the criminal law. Oxford University Press.


Igwe, O. (2024). Artificial intelligence: A twenty first century international regulatory challenge. Athens Journal of Law, 10(4), 737–764.


Kaufmann, M. (2024). AI in policing and law enforcement. In Handbook on Public Policy and Artificial Intelligence (pp. 295–306).


Klie, L. (2023). AI seems to be everywhere. Speech Technology Magazine.


Knauf, R., Philippow, I., & Gonzalez, A. J. (2000). Towards validation and refinement of rule-based systems. Journal of Experimental & Theoretical Artificial Intelligence, 12(4), 421–431.


Mandalapu, V., Elluri, L., Vyas, P., et al. (2023). Crime prediction using machine learning and deep learning: A systematic review and future directions. IEEE Access, 11, 60153–60170.


Milmo, C. (2024, April 20). Your car is spying on you—but police won’t say if they’re using the data. The i Paper. https://inews.co.uk/news/car-spying-police-data-security-3187756


Min, B. (2022). Balancing the need for due process, fair trials and systemic efficacy: The benefits and challenges of technological improvements and greater efficiencies for the criminal justice system. Irish Probation Journal, 19, 7.


Muir, R., & O’Connell, F. (2025). Policing and artificial intelligence. The Police Foundation. https://www.police-foundation.org.uk/wp-content/uploads/2010/10/policing-and-ai.pdf.pdf


National Police Chiefs Council. (n.d.). Covenant for using artificial intelligence (AI) in policing. https://science.police.uk/delivery/resources/covenant-for-using-artificial-
intelligence-ai-in-policing/


Office for National Statistics. (2025). Crime and justice. https://www.ons.gov.uk/peoplepopulationandcommunity/crimeandjustice

Pereira, A. R., Rosado, D. P., & Lopes, H. S. (2021). From the traditional police model to intelligence-led policing model: Comparative study. EAI/Springer Innovations in Communication and Computing, 457–473.


R (Bridges) v. Chief Constable of South Wales Police and Others [2020] EWCA Civ 1058.


R (Bridges) v. Chief Constable of South Wales Police and Others [2019] EWHC 2341.
 

R v. Looseley [2001] UKHL 53.


Steidl, M., Felderer, M., & Ramler, R. (2023). The pipeline for the continuous development of artificial intelligence models—current state of research and practice. Journal of Systems and Software, 199, 111615.


Taaffe-Maguire, S. (2024, May 8). OpenAI’s ChatGPT stops answering election questions after giving wrong answers. Sky News. https://news.sky.com/story/openais-chatgpt-stops-answering-questions-on-election-results-after-wrong-answers-13148929


Truby, J., Brown, R. D., Ibrahim, I. A., et al. (2021). A sandbox approach to regulating high- risk artificial intelligence applications. European Journal of Risk Regulation, 13(2), 270–294.


UK Government Department of Science, Innovation & Technology. (2023). AI regulation: A pro-innovation approach. GOV.UK. https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach


UK Government. (2023). Cyber security breaches survey 2023. GOV.UK. https://www.gov.uk/government/statistics/announcements/cyber-security-breaches-survey-2023


University of Oxford. (2023). Expert comment: Oxford AI experts comment on the outcomes of the UK AI Safety Summit. https://www.ox.ac.uk/news/2023-11-03-expert-comment-oxford-ai-experts-comment-outcomes-uk-ai-safety-summit


von Eschenbach, W. J. (2021). Transparency and the black box problem: Why we do not trust AI. Philosophy & Technology, 34(4), 1607–1622.


Whang, S. E., Roh, Y., Song, H., et al. (2023). Data collection and quality challenges in deep learning: A data-centric AI perspective. The VLDB Journal, 32(4), 791–813.


Zilka, M., Sargeant, H., & Weller, A. (2022). Transparency, governance and regulation of algorithmic tools deployed in the criminal justice system: A UK case study. In Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (pp. 880–889).

​


Disclaimer: The International Platform for Crime, Law, and AI is committed to fostering academic freedom and open discourse. The views and opinions expressed in published articles are solely those of the authors and do not necessarily reflect the views of the journal, its editorial team, or its affiliates. We encourage diverse perspectives and critical discussions while upholding academic integrity and respect for all viewpoints.
