February 21st 2025
Artificial Intelligence in the Courtroom: A Marvel or a Menace?

By Ashwini Singh
Master of Laws in Medical Law from the University of KwaZulu-Natal

“Artificial Intelligence” and “AI” have undoubtedly become international catchphrases of the late 2010s and early 2020s. Although the concept was first conceived in the 1950s (Reiling 2020, p.2), AI is widely understood as the utilisation of technology to automate tasks that traditionally require human intelligence (Surden 2019, p.1307). Dr Dory Reiling (2020, p.2), a former senior judge of the Amsterdam court, defines intelligence as: “…the ability to reason abstractly, logically and consistently, discover, lay and see through correlations, solve problems, discover rules in seemingly disordered material with existing knowledge, solve new tasks, adapt flexibly to new situations, and learn independently, without the need for direct and complete instruction.”
​
In short, AI encompasses the artificial replication of human intelligence. As a result, AI has facilitated the automation of tasks that historically relied upon humans to perform (Acemoglu and Restrepo 2019, p.198). Furthermore, owing to the remarkable proliferation of AI across various industries over the past decade (Huang et al. 2023, p.799), AI has unsurprisingly permeated the legal sectors of certain countries.
​
The United Nations Educational, Scientific and Cultural Organization (commonly referred to as “UNESCO”) has observed that although AI is being explored by judicial systems worldwide, its use during judicial proceedings raises challenges. UNESCO (2025) maintains that AI can introduce bias into decision-making because it tends to draw its information from limited data sets. This problem of AI being trained on specific data – ultimately influencing the accuracy of the information it presents – is a recurring predicament in judiciaries where AI-generated content has been submitted during litigation, with a prominent recent instance emanating from South Africa.
​
In the South African High Court case of ‘Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others’ (2025), the issue of AI-generated research finding its way into the court file was at the forefront of the judgment handed down by the presiding judicial officer, Judge Elsje-Marie Bezuidenhout.
​
During the ‘Mavundla…’ (2025) matter, Judge Bezuidenhout found that the Counsel for the Applicant had filed an incorrect reference to a case. This prompted her to investigate the list of authorities relied upon by the Counsel for the Applicant. After conferring with two legal researchers, Judge Bezuidenhout established that only two of the cases submitted by the Counsel for the Applicant actually existed (‘Mavundla…’ 2025, p.10). Consequently, the Counsel for the Applicant was summoned to the Court to answer for the citation of non-existent case authorities in her court papers.
​
When afforded the opportunity to explain herself, the Counsel for the Applicant conceded that she had not drafted the submission made to the Court on behalf of the Applicant. Instead, a Candidate Attorney had been delegated the task of preparing the list of cases cited as authorities. Judge Bezuidenhout stood the matter down and directed the Candidate Attorney and the Counsel for the Applicant to retrieve copies of the cited cases and bring them to the Court (‘Mavundla…’ 2025, p.11).
​
What ensued thereafter was a court appearance by the Sole Proprietor of the law firm that employed the Candidate Attorney, at which the Sole Proprietor contended that he could not locate the cases cited by the Candidate Attorney (‘Mavundla…’ 2025, p.12). At a further court appearance, the Sole Proprietor admitted that he could locate only some of the cases cited by the Candidate Attorney (‘Mavundla…’ 2025, p.17).
​
At the hearing of the matter, the Counsel for the First Respondent argued that he had struggled to find the cases cited by the Candidate Attorney and concluded that artificial intelligence applications such as ChatGPT and Meta AI had been used, as such applications tend to “make up cases” (‘Mavundla…’ 2025, pp.17-18). Subsequently, the Sole Proprietor confessed to the use of AI applications, but appeared to attempt to justify their use before the Court (‘Mavundla…’ 2025, p.18).
​
After hearing both parties, Judge Bezuidenhout determined that AI is an unreliable source of information for legal research because of its inaccuracy in advising on legal matters. She added that reliance upon AI technologies during legal research is “irresponsible and downright unprofessional” in the context of South African legal practice (‘Mavundla…’ 2025, p.25). Accordingly, Judge Bezuidenhout ruled that the Sole Proprietor’s law firm bear the costs of the court appearances occasioned by the incorrect citation of cases, and further referred the conduct of the Sole Proprietor and the Candidate Attorney to the national Legal Practice Council of South Africa for investigation (‘Mavundla…’ 2025, p.26).
​
Although expressed in the context of legal research, Judge Bezuidenhout’s sentiment regarding the unreliability of AI as a source of valid information is echoed by Dadkhah et al. (2024, p.1), who found that ChatGPT was an unreliable tool for determining the authenticity of academic journals, with the application going so far as to recommend publication in journals classified as “predatory” (Dadkhah et al. 2024, p.2).
​
If an AI tool suggests and retrieves information from sources that are not credible, then the aggregated content it produces cannot be relied upon, because its original sources of information are dubious in the first place. In a field as costly and demanding as the legal industry, there is little to no room for error in court submissions that significantly impact the lives of the parties to those proceedings.
​
On the one hand, it can be acknowledged that AI has the potential to ease the administrative workload of courts, for example by providing transcription services (Verbit 2025). On the other hand, when it comes to crucial court submissions, artificial intelligence cannot yet match human intelligence. If abused by legal practitioners, AI can become a menace to the integrity of the legal profession.
​
References
Acemoglu, D. and Restrepo, P. (2019). Artificial Intelligence, Automation, and Work. In: Agrawal, A., Gans, J. and Goldfarb, A., eds. The Economics of Artificial Intelligence: An Agenda. Chicago: University of Chicago Press, pp.197-236.
Dadkhah, M., David, L., Hegedus, M., Oermann, M. and Raman, R. (2024). Diagnosis Unreliability of ChatGPT for Journal Evaluation. Advanced Pharmaceutical Bulletin, 14(1), pp.1-4.
Huang, C., Mao, B., Yao, X. and Zhang, Z. (2023). An Overview of Artificial Intelligence Ethics. IEEE Transactions on Artificial Intelligence, 4(4), pp.799-819.
‘Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal and Others’ (2025) South African High Court, case 7940/2024P. Southern African Legal Information Institute. Available at: https://www.saflii.org/za/cases/ZAKZPHC/2025/2.pdf [Accessed 12 February 2025].
Reiling, A.D. (2020). Courts and Artificial Intelligence. International Journal for Court Administration, 11(2), pp.1-10.
Surden, H. (2019). Artificial Intelligence and Law: An Overview. Georgia State University Law Review, 35(4), pp.1306-1337.
UNESCO. (2025). AI and the Rule of Law: Capacity Building for Judicial Systems [online]. Available at: https://www.unesco.org/en/artificial-intelligence/rule-law/mooc-judges [Accessed 09 February 2025].
Verbit. (2025). AI Court Reporting: Transforming Legal Transcription [online]. Available at: https://verbit.ai/legal/why-ai-transcription-is-a-court-reporters-secret-weapon [Accessed 17 February 2025].