AI in the UK asylum system: Innovation or injustice?
Written by: Caroline Echwald

Artificial intelligence (AI) is rapidly becoming part of how governments manage immigration and asylum processes. In the UK, where the asylum system is facing record delays, AI is now being used in new ways to help process claims. According to the Home Office, trials of AI-assisted tools have shown potential for making decisions faster. But with stakes this high, where people’s lives, safety and rights hang in the balance, experts and advocacy groups are urging caution.
This article explains how AI is being used in the asylum system, why it matters now, and the potential benefits and harms of using this technology to decide who is granted protection in the UK.
Why is this an issue now?
At the end of 2024, 91,000 asylum cases were still waiting to be resolved in the UK. While this is a reduction from the 200,000 cases outstanding in 2023, it is still a large backlog by historical standards. The government has made reducing this backlog a priority and is increasingly turning to technology to do so. At the same time, its wider asylum policies have become significantly more restrictive.
Against this political backdrop, it’s vital to ask whether new technologies like AI will be used to improve fairness or simply to speed up refusals.
What kind of AI is being used?
Between May and December 2024, the Home Office tested two new AI tools in the asylum system, aiming to speed up decisions.
The first, AI Case Summarisation (ACS), helps staff navigate lengthy asylum interview transcripts by automatically producing a summary of key points. The second, AI Policy Search (APS), acts as a search assistant to quickly find relevant country policy documents. Both are designed to support, not replace, human decision-makers, following a “human in the loop” approach.
Following what it called a “positive” pilot, the Home Office announced plans to expand the use of AI across more asylum cases, with the aim of improving decision speed and consistency.
However, concerns about AI in immigration go beyond these pilots. Another tool, called IPIC (short for Identify and Prioritise Immigration Cases), has drawn criticism for the secrecy surrounding it. It was uncovered by campaigners after months of legal pressure, and little is known about how it functions. What is clear is that IPIC automatically recommends individuals for immigration decisions or enforcement actions, including deportation and bail conditions, and may influence a wide range of cases across the system.
Worryingly, internal guidance suggests that case workers may be encouraged to accept IPIC’s recommendations because doing so involves less paperwork and oversight than rejecting them. This has raised concerns that such tools, rather than simply helping staff, may steer decisions in ways that are hard to challenge, especially if people are unaware that an algorithm played a role in the outcome of their case.
Can AI improve the asylum system?
AI does offer certain advantages. Properly designed, it can help organise large amounts of data, highlight relevant country evidence, and increase caseworkers’ productivity. A paper by the Helen Bamber Foundation notes that AI could also support real-time language translation and transcription, making communication easier where language barriers exist.
AI can help speed up certain tasks, but it does not (and should not) replace human judgment. Asylum decisions often rely on assessing personal narratives, understanding trauma, and recognising how individuals might behave under extreme stress. These are areas where human empathy and discretion are critical and where technology can easily miss important context.
What are the risks?
AI may seem neutral, but it reflects the data and assumptions it is trained on. This means that if past asylum decisions were biased, consciously or not, the AI could learn and repeat those same patterns. Research by the Helen Bamber Foundation and academics has shown that some AI systems are likely to reproduce racial or nationality-based discrimination when fed biased input data.
Another major concern is transparency. Many AI systems, including those used in immigration, are what experts call “black box” tools, meaning it’s unclear how they reach their conclusions. If an AI tool highlights or downplays part of someone’s claim, how can they appeal or challenge it?
Organisations such as Migrants’ Rights Network have warned that these technologies are being developed and rolled out without sufficient public scrutiny, especially under the wider “digital hostile environment” strategy. In other words, AI is being introduced in a climate where migrants are already facing increasing surveillance and suspicion, rather than one focused on rights and support.
As AI tools continue to be introduced in asylum and immigration processes, the key challenge will be balancing efficiency with fairness, accuracy, and transparency. The stakes are high: decisions made in this system can have life-altering consequences. That’s why stronger oversight and clearer public information on how these technologies are used are needed.
Where’s the evidence?
There is still little independent, public evidence about how AI is affecting real-life asylum decisions. The government’s own evaluation of the AI pilots was positive, but focused on technical performance and productivity, not on outcomes for asylum seekers or potential harm.
Experts and advocacy groups are calling for much more transparency. Before expanding AI across the asylum system, they argue that the government should publish clear data on outcomes, explain how the tools work, and ensure asylum seekers are told when AI has been used in their case.
Conclusion: progress or pitfall?
There is no question that the UK asylum system needs reform, but fast decisions are not the same as fair ones. AI may offer tools to support caseworkers, but only if it is used carefully, with transparency, oversight, and clear limits.
As the use of AI in asylum expands, the public and civil society must be able to ask: who benefits? Without answers, and without accountability, there is a real risk that these systems will entrench injustice rather than solve it.