May 23rd 2025
Precaution or Automation? Evaluating AI-Based Targeting Under the IHL Precautionary Principle
By Tycho De Vriendt
Tycho De Vriendt holds a Master of Laws from Ghent University and an LL.M. in Public International Law from Leiden University. He currently works as a legal advisor at the Federal Public Service for Public Health, Food Chain Safety and Environment in Brussels. His interests lie in public international law, with a particular focus on international humanitarian law. Find Tycho De Vriendt on LinkedIn.

Introduction
In recent years, artificial intelligence (AI) has become increasingly common in households, from ChatGPT to smart thermostats and home assistants. Likewise, AI is gradually making its way to the battlefield. Artificial intelligence-based decision support systems (AI DSS) are tools that utilize AI techniques to analyse data, provide actionable recommendations, and assist decision-makers at various levels of the command hierarchy in handling semi-structured and unstructured decision tasks (Klonowska, 2021).
Unlike Lethal Autonomous Weapon Systems (LAWS), which are under international review through the UN’s Group of Governmental Experts on LAWS, AI DSS are not considered weapons themselves. Instead, they are categorized as “means of warfare” (Mimran and Dahan, 2024). AI DSS have gained less attention than LAWS, largely due to the belief that human decision-making continues to play a major role in attacks involving AI DSS. However, concerns about the use and functionality of AI DSS persist. While these systems offer clear advantages, such as faster data processing and predictive analysis, they also pose significant risks, especially when human lives are involved.
This article analyses the intersection of AI DSS and the precautionary principle under international humanitarian law (IHL). It begins by detailing the real-world use of AI DSS in current conflicts, particularly in Gaza and Ukraine. It then outlines the legal framework underpinning the precautionary principle before assessing how AI DSS challenge this IHL obligation. In response, it proposes legal and operational reforms, including the development of a more precise definition of “Meaningful Human Control.” This concept is advanced as a safeguard to ensure that humanitarian norms, especially the precautionary principle, are upheld amid the growing integration of algorithmic technologies in warfare.
The rise of AI DSS
In 2019, China highlighted “intelligent warfare” as a key element of the People’s Liberation Army’s modernisation in a State Council white paper (China’s National Defense in the New Era, 2019). Similarly, the European Parliament acknowledges the growing role of AI in armed conflict and the emergence of a global AI arms race (European Parliament, 2025). However, AI DSS are no longer a theoretical concept confined to policy documents – contemporary armed conflicts demonstrate their practical use on the battlefield.
An investigation by +972 Magazine reveals that during Israel’s military campaign in Gaza, the Israel Defense Forces (IDF) significantly expanded their targeting parameters using an AI tool referred to as “Habsora” – Hebrew for “The Gospel” (+972 Magazine, 2023). This AI-powered tool marked a significant shift in the conduct of warfare by allowing the IDF to process vast amounts of intelligence data and generate large numbers of potential targets at unprecedented speed. While human analysts and commanders retained final authority, there are serious concerns that the sheer volume and speed of these AI-generated recommendations led to a rubber-stamping dynamic, where oversight became superficial rather than substantive (Renic & Schwarz, 2023).
In Ukraine, a variety of AI systems are being deployed by both Russia and Ukraine. Some experts refer to the armed conflict as a “testing ground” or “living lab” for AI-driven warfare (Bergengruen, 2024). AI DSS like Palantir’s MetaConstellation, Kropyva, and Griselda are helping Ukrainian forces with real-time intelligence, artillery targeting, and battlefield management (Nadibaidze and others, 2024). However, concerns about over-reliance on AI persist. A Ukrainian commander acknowledged AI’s effectiveness in tasks like detection and surveillance, where it can outperform humans, who are subject to factors such as fatigue. Yet he emphasized the need for caution with AI-controlled targeting, recommending that AI should only guide munitions to pre-programmed coordinates, as entrusting human lives to algorithms carries inherent risks (Bendett, 2025).
The precautionary principle
The principle of precaution in attack is a cornerstone of IHL and is comprehensively articulated in Article 57 of Additional Protocol I (AP I) to the Geneva Conventions of 12 August 1949. Although not all states are parties to AP I, the precautionary obligations it establishes are widely recognised as part of customary international law (Rule 15 of customary IHL) and thus binding in both international and non-international armed conflicts.
Under Article 57 of AP I, parties to a conflict are required to take “all feasible precautions” when planning and executing attacks to avoid, or at least minimise, incidental loss of civilian life, injury to civilians, and damage to civilian objects. This obligation entails a proactive and ongoing duty to assess the legality and impact of an attack in light of the available information at the time. The responsibility for compliance lies primarily with those who plan, authorise, or decide upon attacks – typically military commanders.
The precautionary principle requires commanders to do everything feasible to verify that targets are lawful military objectives and to choose means and methods of warfare that minimise risks to civilians. Feasibility, in this context, is understood as what is practicable or possible, taking into account all circumstances prevailing at the time, including humanitarian and military considerations. If it becomes evident that the expected incidental harm would be excessive in relation to the anticipated concrete and direct military advantage, the attack must not be launched. Moreover, this evaluation is not static; it must be continuously reviewed as the operation progresses, with an obligation to suspend or cancel the attack if it becomes disproportionate at any stage (West, 2022).
The principle of precaution is not only technical but also deeply human in nature. AP I emphasises that the implementation of these obligations involves a degree of subjectivity and relies on the “common sense and good faith” of military decision-makers (Quéguiner, 2006). The margin of discretion afforded to commanders – often described as a “fairly broad margin of judgement” – recognises the complex, context-dependent nature of armed conflict while reaffirming their duty to exercise rigorous care in operational decision-making (Commentary to AP I, 1987).
The inherent reliance on human judgment presents significant legal and ethical challenges in the context of AI DSS. Even where AI DSS keep a human “in the loop”, the complexity of the underlying data processing and of the outputs involved makes it difficult to uphold the subjective nature of the precautionary principle.
Speed and human control
As highlighted by numerous scholars and practitioners, speed is a key factor driving the increasing use of AI DSS (Bo and Dorsey, 2024). AI DSS are designed to accelerate decision-making processes, often surpassing the human capacity to observe, orient, decide, and act (OODA loop). While this rapid processing can offer tactical advantages, it raises concerns about the feasibility of maintaining meaningful human control. Human operators may lack comprehensive understanding of the data inputs, training datasets, algorithmic parameters, and the accuracy of AI outputs, making it challenging to critically assess AI-generated recommendations (Bo and Dorsey, 2024).
For instance, the +972 Magazine report cites an interviewee who noted that “during the early stages of the war, only 20 seconds were spent on each target before authorizing a bomb” (+972 Magazine, 2023). As multiple authors note, such a compressed timeframe for deliberation risks reducing humans to “cogs in a mechanized process” (The Guardian, 2023) and diminishes the potential for meaningful human control. It could therefore be argued that the systematic mode of killing facilitated by AI DSS erodes established targeting standards and contributes to the moral devaluation of those subjected to such violence (Renic & Schwarz, 2023). This speed is difficult to reconcile with the measures the precautionary principle requires.
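A back-of-envelope illustration of this point follows, sketched in Python. Only the 20-second figure comes from the reporting cited above; the shift length and target volume are hypothetical assumptions introduced solely to show how quickly per-target review time collapses when targets are generated in volume.

```python
# Purely illustrative arithmetic. The 20-second figure is the one cited above
# from +972 Magazine (2023); the shift length and target volume below are
# hypothetical assumptions, not reported figures.

REPORTED_SECONDS_PER_TARGET = 20   # figure cited by +972 Magazine (2023)
ASSUMED_SHIFT_HOURS = 8            # hypothetical length of a review shift
ASSUMED_TARGETS_PER_SHIFT = 200    # hypothetical AI-generated target volume

available_seconds = ASSUMED_SHIFT_HOURS * 3600
seconds_per_target = available_seconds / ASSUMED_TARGETS_PER_SHIFT

print(f"Seconds available per target at the assumed volume: {seconds_per_target:.0f}")
print(f"Seconds per target reported in the early war stage: {REPORTED_SECONDS_PER_TARGET}")
# Either way, a verification window measured in seconds or a few minutes leaves
# little room for the fact-finding that "feasible precautions" under Article 57
# AP I presuppose.
```

Even under these generous assumptions, the window per target is a couple of minutes at most, which underlines how little deliberation the reported tempo allows.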
Moreover, the relentless stream of AI-generated targets may induce cognitive action bias among human operators, who, under stress, tend to act impulsively rather than assess each situation with due care. This is compounded by automation bias: an over-reliance on AI recommendations, particularly under time constraints (Bo and Dorsey, 2024). As operators prioritize efficiency over thorough analysis, human oversight risks becoming superficial. Even when commanders retain the theoretical authority to override AI-generated outputs, operational pressures and the perceived authority of these systems often lead to uncritical acceptance (Mimran, 2024).
The effectiveness of AI DSS in fulfilling precautionary obligations also hinges on the integrity and accuracy of the data fed into these systems. If inputs are flawed, incomplete, or biased, the resultant recommendations could lead to erroneous targeting and unintended civilian casualties. IHL mandates a continuous reassessment of the legality and proportionality of attacks. Yet, many AI systems are ill-equipped to incorporate dynamic, real-time changes – such as shifts in civilian presence – that are essential for lawful targeting.
Related is the opaque or “black box” nature of many AI algorithms (Mimran & Dahan, 2024). These systems often operate without clear explanations of how specific outputs are derived, complicating efforts to ensure accountability and compliance with IHL. Under Article 57 of Additional Protocol I, commanders are responsible for ensuring that all feasible precautions are taken. However, when the internal workings of an AI DSS are not transparent or intelligible, it becomes nearly impossible to evaluate whether those legal responsibilities have been fulfilled (Mimran, 2024). Commentators thus warn that this lack of explainability may “impede the [human] ability to minimize the risk of recurring mistakes” and could critically undermine the duty to investigate alleged breaches of IHL (Mimran, 2024).
In sum, while AI DSS offer significant operational advantages, their use in military contexts necessitates rigorous safeguards to prevent the erosion of human judgment, mitigate decision-making biases, ensure data integrity, and uphold legal accountability. Without these safeguards, AI-enabled warfare risks violating the fundamental tenets of humanitarian law and ethical conduct in armed conflict.
The way forward
The rise of new technologies in warfare is both inevitable and potentially advantageous, as recognised by key international platforms including the 2024 REAIM Summit and the United Nations General Assembly resolution on AI in the military domain (Blueprint for Action, 2024; A/RES/79/239, 2024). Ignoring these advancements risks rendering existing legal frameworks obsolete and may inadvertently reduce overall compliance with IHL.
Recent analyses have emphasized the strategic benefits of AI DSS. One of the most significant advantages is their potential to reduce the uncertainty inherent in the “fog of war”, which often results from fragmented communication and incomplete situational awareness (Klamberg, 2023). By improving real-time information sharing and enhancing coordination between commanders and front-line units, AI systems can support faster and more informed decision-making. When responsibly integrated, such tools could help minimize harm to civilians and strengthen adherence to IHL principles.
However, these benefits are contingent upon strict compliance with existing legal standards. As the preceding analysis demonstrates, the current use of AI DSS raises serious questions about whether these systems are compatible with the precautionary principle. In response to these concerns, a possible operationalisation of “Meaningful Human Control” (MHC) is presented below to close the gap between technological innovation and legal accountability.
The widespread belief that maintaining humans “in the loop” offers sufficient ethical and legal safeguards must be critically examined. While human involvement is often seen as a bulwark against the risks posed by fully autonomous systems, this assumption can be misleading. The central issue is not merely the physical presence of a human operator but the presence of humanity in decision-making. This includes a principled commitment to core IHL values and the avoidance of overly broad categorizations of legitimate military targets (Renic & Schwarz, 2023).
To meaningfully uphold these values, a nuanced and operationally effective form of human control must be embedded within AI DSS. This form of control must be more than symbolic; it must reinforce legal and ethical norms throughout the life cycle of the system. The concept of MHC, though widely cited, remains under-defined and often misunderstood. Its conceptual ambiguity creates a significant legal and operational gap, especially in lethal decision-making processes influenced by opaque algorithms (Ploughshares, 2023). It has been established that human involvement can improve the precision and quality of decision-making (Goldfarb & Lindsay, 2021). However, MHC entails more than the mere presence of a human operator in the loop. It requires substantive and informed engagement with AI-generated recommendations. This begins with informed decision-making: operators must understand the AI system’s inputs, logic, and limitations. As Roff and Moyes argue, control is not meaningful if the operator “does not know or understand what is being decided or why” (Roff & Moyes, 2016). Transparency, interpretability, and trust are therefore essential prerequisites for informed human oversight.
Equally vital is the opportunity for timely intervention. Operators must be able to challenge or override AI outputs within decision-making windows that realistically allow for ethical deliberation. This is difficult to reconcile with high-speed targeting cycles, such as those reported in recent military operations (ICRC, 2019; +972 Magazine, 2023).
Furthermore, MHC entails context-specific judgment. Under IHL, commanders must evaluate targets for legality, proportionality, and necessity – assessments that are inherently subjective and situation-dependent. If oversight is reduced to a procedural formality, the discretion that IHL mandates is functionally eliminated. Ekelhof (2021) distinguishes between “formal” and “substantive” control; only the latter satisfies legal expectations.
A fourth essential element is accountability. MHC must enable traceable and explainable decision-making processes. If commanders cannot understand or reconstruct how a targeting decision was made, the obligations under Article 57 of Additional Protocol I – such as verification, precautions, and post-strike investigations – become impossible to fulfil. As noted in the NATO Science and Technology Organization’s HFM-322 workshop report, the lack of clear responsibility chains and the potential for “semantic gaps” between human understanding and machine outputs severely undermine effective human oversight (NATO, 2023).
Collectively, these dimensions – understanding, intervention, judgment, and accountability – form the foundation of what should constitute meaningful human control (Roff & Moyes, 2016). Maintaining such control requires the development of metrics, training, and system design protocols that allow MHC to be evaluated, retained, and regained as operational contexts evolve (Miller et al., 2022). Without these safeguards, decision-making in armed conflict risks becoming dehumanised and detached from foundational principles of IHL, such as the precautionary principle. Ensuring that MHC is not only articulated in policy but also operationalised in system design and military training is therefore a legal and ethical imperative.
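To make the idea of operationalisation more concrete, the following is a minimal, purely illustrative Python sketch of how the four dimensions above could be encoded as explicit preconditions and an audit trail before any AI-generated recommendation is acted upon. It is not drawn from any fielded system or from the cited sources; every class name, field, and threshold (for instance the five-minute deliberation floor) is a hypothetical assumption.

```python
# Illustrative sketch only: one hypothetical way to encode the four MHC
# dimensions (understanding, intervention, judgment, accountability) as
# checks and an audit trail. All names and thresholds are assumptions.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Recommendation:
    target_id: str
    rationale: str            # human-readable explanation of the system's output
    confidence: float         # model-reported confidence, 0.0-1.0

@dataclass
class OperatorReview:
    reviewer: str
    understood_rationale: bool    # understanding: inputs, logic, limits grasped
    seconds_deliberated: int      # intervention: a realistic window to override
    legal_assessment_done: bool   # judgment: distinction/proportionality reviewed
    decision: str                 # "approve", "reject", or "escalate"

MIN_DELIBERATION_SECONDS = 300    # hypothetical floor for ethical deliberation

def gate(rec: Recommendation, review: OperatorReview, audit_log: list) -> bool:
    """Return True only if all four illustrative MHC checks are satisfied;
    every decision is logged so it can later be reconstructed (accountability)."""
    checks = {
        "understanding": review.understood_rationale and bool(rec.rationale),
        "intervention": review.seconds_deliberated >= MIN_DELIBERATION_SECONDS,
        "judgment": review.legal_assessment_done,
        "approved": review.decision == "approve",
    }
    audit_log.append({
        "time": datetime.utcnow().isoformat(),
        "target": rec.target_id,
        "reviewer": review.reviewer,
        "checks": checks,
    })
    return all(checks.values())

# Example usage with hypothetical values:
log: list = []
rec = Recommendation("T-001", "pattern match on sensor data (illustrative)", 0.87)
rev = OperatorReview("operator_a", True, 420, True, "approve")
print(gate(rec, rev, log))   # True only if every illustrative check passes
```

The point of the sketch is not the code itself but the design choice it embodies: understanding, intervention time, legal judgment, and a reconstructable record are treated as preconditions for action rather than as optional documentation after the fact.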
Conclusion
AI DSS are transforming warfare, offering speed and efficiency, but at a cost. When used in targeting decisions, these systems risk undermining the precautionary principle of IHL, which relies on subjective human judgment, context, and accountability.
Therefore, it is argued that Meaningful Human Control must be clearly defined and operationalised to ensure AI enhances, rather than erodes, legal and ethical safeguards. Understanding, timely intervention, and traceable accountability are essential. Without them, algorithmic warfare risks dehumanising conflict and weakening the protections that IHL is meant to uphold.
References
Abraham Y., “Mass Assassination Factory: How Israel Uses Calculated Bombing in Gaza”, +972 Magazine, 30 November 2023, https://www.972mag.com/mass-assassination-factory-israel-calculated-bombing-gaza/.
Bendett S. and Kirichenko D., “Ukraine Symposium – The Continuing Autonomous Arms Race”, Just Security, 19 February 2025, https://www.justsecurity.org.
Bergengruen V., “How tech giants turned Ukraine into an AI war lab”, TIME, 8 February 2024, https://time.com/6691662/ai-ukraine-war-palantir/.
Bo M. and Dorsey J., “The Need for Speed: The Cost of Unregulated AI Decision-Support Systems to Civilians”, Opinio Juris, 4 April 2024.
Clapp S., “Defence and Artificial Intelligence”, European Parliament Briefing, April 2025, https://www.europarl.europa.eu/RegData/etudes/BRIE/2025/769580/EPRS_BRI(2025)769580_EN.pdf.
Davies H., McKernan B. and Sabbagh D., “The Gospel: How Israel Uses AI to Select Bombing Targets”, The Guardian, 1 December 2023.
Davidovic J., “On the purpose of meaningful human control of AI”, Frontiers in Big Data 2023, 5, Article 1017677, https://doi.org/10.3389/fdata.2022.1017677.
Goldfarb A. and Lindsay J. R., “Prediction and judgment: Why artificial intelligence increases the importance of humans in war”, International Security 2021, 46(3), 7-50.
Klamberg M., “Regulatory Choices at the Advent of Gig Warfare”, Journal of International Humanitarian Legal Studies (Forthcoming), Faculty of Law, Stockholm University Research Paper No. 121, 31 May 2023.
Klonowska K., “Article 36: Review of AI Decision-Support Systems and Other Emerging Technologies of Warfare”, Yearbook of International Humanitarian Law, Vol. 23 (2020), The Hague: T.M.C. Asser Press, 2021.
Klonowska K., “AI-Based Targeting in Gaza: Surveying Expert Responses, Refining the Debate.” Lieber Institute West Point, 7 June 2024.
Marijan, B., “Meaningful human control and AI-enabled warfare” Project Ploughshares, 9 December 2024, https://www.ploughshares.ca/publications/meaningful-human-control-and-ai-enabled-warfare.
Miller C. and others, “Meaningful Human Control of AI-based Systems Workshop: Technical Evaluation Report, Thematic Perspectives and Associated Scenarios (STO-MP-HFM-322)”, NATO Science and Technology Organization, 2023, https://www.sto.nato.int.
Mimran T., Pacholska M., Dahan G. and Trabucco L., “Beyond the Headlines: Combat Deployment of Military AI-Based Systems by the IDF.” Lieber Institute West Point, 2 February 2024.
Mimran T. and Dahan G., “Artificial Intelligence in the Battlefield: A Perspective from Israel.” Opinio Juris, 20 April 2024.
Ministry of Foreign Affairs of the Republic of Korea, “REAIM Summit 2024”, 2024, https://www.reaim2024.kr/.
Nadibaidze, A., Bode, I., & Zhang, Q., “AI in military decision support systems: A review of developments and debates”, Center for War Studies, University of Southern Denmark, 2024, https://findresearcher.sdu.dk/ws/portalfiles/portal/275893410/AI_DSS_report_WEB.pdf .
Pilloud C. and others, “Commentary on the Additional Protocols of 8 June 1977 to the Geneva Conventions of 12 August 1949”, International Committee of the Red Cross, 1987.
Renic N. and Schwarz E., “Inhuman in the Loop: AI Targeting and the Erosion of Moral Restraint”, Opinio Juris, 19 December 2023, https://opiniojuris.org/2023/12/19/inhuman-in-the-loop-ai-targeting-and-the-erosion-of-moral-restraint/.
Renic N. C., and Schwarz E., “Crimes of dispassion: Autonomous weapons and the moral challenge of systematic killing” Ethics & International Affairs 2023, 37(3), 321-343.
Roberts A. and Venables A., “AI Decision-Support Systems in Armed Conflict: Challenges under IHL.” In Proceedings of the 13th International Conference on Cyber Conflict (CyCon 2021), NATO CCDCOE, 2021. https://ccdcoe.org/uploads/2021/05/CyCon_2021_Roberts_Venables.pdf.
Roff H. M. and Moyes R., “Meaningful Human Control, Artificial Intelligence and Autonomous Weapons”, Briefing paper prepared for the Informal Meeting of Experts on Lethal Autonomous Weapons Systems, UN Convention on Certain Conventional Weapons, April 2016, https://article36.org/wp-content/uploads/2016/04/MHC-AI-and-AWS_FINAL.pdf.
United Nations General Assembly, “Artificial intelligence in the military domain and its implications for international peace and security”, A/RES/79/239, 24 December 2024.
Quéguiner J., “Precautions under the law governing the conduct of hostilities”, International Review of the Red Cross 2006, Vol. 88, Nr. 864.
West L., “Privacy vs. Precaution in Future Armed Conflict”, Lieber Institute West Point, 21 January 2022.
White Paper on China’s National Defense in the New Era. State Council Information Office of the People's Republic of China, 2019. https://english.www.gov.cn/archive/whitepaper/201907/24/content_WS5d3941ddc6d08408f502283d.html.
Disclaimer: The International Platform for Crime, Law, and AI is committed to fostering academic freedom and open discourse. The views and opinions expressed in published articles are solely those of the authors and do not necessarily reflect the views of the journal, its editorial team, or its affiliates. We encourage diverse perspectives and critical discussions while upholding academic integrity and respect for all viewpoints.