

September 3, 2025

Digital Borders: Artificial Intelligence and The Future of Migration Governance


By Giorgi Mkheidze

Giorgi Mkheidze is a fourth-year Bachelor’s student in International Relations and a Young Professional Project Assistant at the Civil Council on Defense and Security. Find Giorgi on LinkedIn.

Image by Chris Yang

Introduction


The growing use of artificial intelligence (AI) in migration governance is bringing major changes to the way states and international organizations manage human mobility. Tools such as algorithms that predict migration flows and biometric systems that monitor borders are increasingly shaping decision-making processes. Supporters of these technologies often stress their potential to make procedures faster, more efficient, and more secure. At the same time, critics warn that AI can lead to discrimination, lack of transparency, and a weakening of fundamental rights. In this article, I argue that AI should not be viewed as a neutral innovation. Instead, it acts as a political instrument that redefines sovereignty, shapes questions of inclusion and exclusion, and reinforces inequalities in today’s international system.


AI in Migration Governance: Practices and Applications


In recent years, artificial intelligence has become an integral part of migration governance, reshaping how states and international organizations manage mobility. At Europe’s external borders, biometric systems such as the Eurodac fingerprint database and the forthcoming Entry/Exit System (EES) allow authorities to track and monitor people moving into the Schengen area in real time (Bellanova & González Fuster, 2019). Alongside this, technologies like facial recognition cameras, predictive risk assessments, and even AI-based lie detectors tested in projects such as iBorderCtrl demonstrate how border management is increasingly delegated to automated systems (Beduschi, 2021).


Within asylum processes, automation has begun to influence decisions that directly affect people’s lives. For example, U.S. Immigration and Customs Enforcement (ICE) has relied on algorithmic tools to recommend whether individuals should be detained or released (Rahman, 2020). Similar experiments have taken place in Canada and the United Kingdom, where automated triage systems have been applied to visa and asylum applications (Molnar, 2019). While these measures are often framed as efficient and objective, they raise serious questions about fairness, bias, and transparency in decision-making.


At a broader level, predictive analytics are being used to anticipate migration flows and possible crises. Agencies such as the European Union Agency for Asylum (EUAA) and Frontex employ AI-assisted forecasting models to prepare for potential “migration pressures” before they materialize (Broeders & Dijstelbloem, 2016). The UNHCR has also adopted biometric registration in refugee camps across Africa and the Middle East, presenting it as a tool to improve aid delivery and reduce fraud (Jacobsen, 2017).


Taken together, these practices point to a new reality that scholars describe as “digital borders.” Migration control is no longer limited to checkpoints or territorial boundaries—it now extends through networks of algorithms and biometric databases that shape mobility long before and long after people cross a border. This shift highlights both the promise of technological efficiency and the risks of embedding opaque systems into deeply political decisions about human mobility and protection.


Legal and Ethical Challenges


While artificial intelligence offers policymakers attractive tools for efficiency and control, its use in migration governance also raises profound legal and ethical concerns. To begin with, automated systems can come into tension with fundamental human rights, particularly the rights to privacy, non-discrimination, and asylum. The European Court of Human Rights (ECtHR) has emphasized that any intrusion into personal privacy must be both necessary and proportionate (ECtHR, 2018). However, biometric surveillance in migration contexts often lacks robust safeguards, fueling concerns about mass surveillance and the risk of “function creep,” where data collected for one purpose is quietly repurposed for another.

A second challenge lies in the reproduction of bias. Because algorithms are trained on historical data, they frequently mirror and reinforce pre-existing patterns of discrimination. Research on U.K. visa decision-making, for example, revealed disproportionately negative impacts on applicants from African and South Asian countries, effectively entrenching racialized hierarchies of mobility (Kemp & Bossong, 2020). Such outcomes undermine the promise of objectivity that policymakers often associate with technology.


Third, the opacity of many AI systems undermines accountability and due process. Migrants are frequently subjected to algorithmic assessments that operate as “black boxes”—their logic is difficult to interpret, and their outcomes hard to contest. This lack of transparency conflicts with basic principles of fairness enshrined in both international refugee law and domestic administrative law (Wachter & Mittelstadt, 2019).


Finally, reliance on predictive analytics risks reinforcing the securitization of migration. By framing mobility as a potential threat to be managed in advance, such systems shift the emphasis away from protection and rights-based governance toward a logic of pre-emptive control. In doing so, they risk obscuring the human realities of migration behind a layer of technological abstraction.

 

Theoretical Framework: IR and Critical Legal Perspectives


The governance of migration through AI cannot be understood merely as a technical matter—it is deeply political. Seen through securitization theory in International Relations, the turn to AI reflects the framing of migration as an existential threat demanding extraordinary measures (Buzan, Wæver, & de Wilde, 1998). By embedding this logic in algorithmic systems, states institutionalize securitization, making it appear objective and neutral.

From the perspective of Critical Legal Studies (CLS), AI functions as a legal-technical dispositif that obscures the political choices underpinning migration law. Law, far from being a neutral arbiter, becomes a vehicle for reproducing inequalities of race, class, and nationality (Kennedy, 2016). AI intensifies this by delegating sovereign power to machines, distancing decision-makers from responsibility, and limiting migrants’ ability to contest exclusionary practices.


Both IR and CLS highlight that AI is not just a tool of governance but a political instrument of control, shaping who belongs, who is excluded, and who exercises authority over global mobility.

 

AI as a Political Instrument


The global deployment of AI in migration governance reflects broader struggles over sovereignty and inequality. Wealthy states in the Global North invest heavily in AI-driven border technologies and export them to neighboring or transit countries, effectively externalizing border control. For example, the EU has funded biometric databases in African states to prevent migration before it reaches Europe (Andersson, 2014). Such practices reinforce asymmetries between states of origin, transit, and destination, raising concerns about neo-colonial dependency.


AI also enables a shift towards “techno-sovereignty,” where control over data and algorithmic systems becomes a new expression of state power (Krasmann, 2020). Migrants’ biometric data are stored indefinitely in centralized databases, granting states unprecedented surveillance capacity while exposing vulnerable populations to risks of misuse, hacking, or political manipulation.


Thus, AI reshapes migration not only at the level of individual cases but also in terms of global power hierarchies, redefining sovereignty in the digital age.

 

Towards Ethical and Accountable Governance


Mitigating the risks of AI in migration governance requires a framework that balances technological innovation with transparency, fairness, and human dignity. Several steps are critical:

Transparency and Explainability: States and institutions must ensure that algorithmic systems are open to scrutiny, with accessible explanations provided to applicants.

Independent Oversight: AI tools should be subject to external audits by independent bodies to detect bias and ensure compliance with human rights standards.

Human-in-the-Loop Decision-Making: Automated systems should never operate without meaningful human oversight, preserving migrants’ rights to appeal and contest decisions.

International Regulation: Beyond domestic safeguards, global governance mechanisms—potentially under UN auspices—are needed to regulate cross-border use of biometric and AI technologies.

Inclusion of Migrant Voices: Policy debates must incorporate the perspectives of migrants and refugees themselves, rather than treating them solely as objects of control.


Conclusion


Artificial intelligence is transforming migration governance, embedding itself in practices of surveillance, asylum processing, and predictive analytics. While proponents highlight efficiency and security, the risks to human rights, accountability, and global equality are profound. As this article has argued, AI is not a neutral tool but a political instrument that reshapes the boundaries of belonging and exclusion in an unequal world. Engaging critically with these dynamics is essential if migration policies are to uphold transparency, fairness, and human dignity. Without such engagement, digital borders risk entrenching injustice at the core of global mobility governance.

References


Andersson, R. (2014). Illegality, Inc.: Clandestine migration and the business of bordering Europe. University of California Press.


Beduschi, A. (2021). Artificial intelligence and migration management: The EU’s new frontier of digital borders. Georgetown Journal of International Affairs, 22(1), 31–38.


Bellanova, R., & González Fuster, G. (2019). Politics of metadata: Datafication of information and digital borders. European Journal of Migration and Law, 21(2), 149–170.


Broeders, D., & Dijstelbloem, H. (2016). The datafication of mobility and migration management. Ethnic and Racial Studies, 39(2), 169–185.


Buzan, B., Wæver, O., & de Wilde, J. (1998). Security: A new framework for analysis. Lynne Rienner Publishers.


ECtHR. (2018). Big Brother Watch and Others v. the United Kingdom. Application no. 58170/13. European Court of Human Rights.


Jacobsen, K. (2017). Experimentation in humanitarian locations: UNHCR and biometric registration of Afghan refugees. Security Dialogue, 48(2), 144–164.


Kemp, S., & Bossong, R. (2020). The politics of data-driven migration governance: The case of the UK. Journal of Ethnic and Migration Studies, 46(11), 2277–2294.


Kennedy, D. (2016). A world of struggle: How power, law, and expertise shape global political economy. Princeton University Press.


Krasmann, S. (2020). The logic of digital borders. European Journal of Social Theory, 23(2), 177–194.


Molnar, P. (2019). Technology on the margins: AI and refugee rights. Refugee Studies Quarterly, 38(4), 452–467.


Rahman, F. (2020). Algorithmic detention: The risks of AI in U.S. immigration enforcement. Yale Journal on Regulation, 37(2), 389–417.


Wachter, S., & Mittelstadt, B. (2019). A right to reasonable inferences: Re-thinking data protection law in the age of Big Data and AI. Columbia Business Law Review, 2019(2), 494–620.


Disclaimer: The International Platform for Crime, Law, and AI is committed to fostering academic freedom and open discourse. The views and opinions expressed in published articles are solely those of the authors and do not necessarily reflect the views of the journal, its editorial team, or its affiliates. We encourage diverse perspectives and critical discussions while upholding academic integrity and respect for all viewpoints.
