The Home Office maintains that a human remains responsible for each decision and that the tool delivers efficiencies by prioritising work. However, Privacy International fears the system could lead to officials "rubberstamping" the algorithm's recommendations. The system is also being used in cases of EU nationals seeking to remain in the UK under the EU settlement scheme, and concerns have been raised about the potential for racial bias and invasion of privacy. The Home Office has been using the tool since 2019-20 and has previously refused freedom of information requests about it, citing the risk that disclosure could help people circumvent immigration controls.
Key takeaways:
- The Home Office's AI tool, which proposes enforcement action against adult and child migrants, has been criticised by campaigners who believe it could lead to unjust automated decisions.
- The system, known as Identify and Prioritise Immigration Cases (IPIC), draws on personal information including biometric data, ethnicity, health markers and criminal convictions to generate its recommendations, but critics fear it could lead to officials "rubberstamping" the AI's suggestions.
- Privacy International, which obtained details about the system through a freedom of information request, has called for greater transparency and accountability in the use of AI in immigration decisions.
- The Home Office has defended the system, insisting that a human remains responsible for each decision and that the tool improves efficiency in handling a rising caseload.