The use of AI in the public sector has previously proved controversial, with experts warning that poorly understood algorithms are being used to make life-changing decisions without the people affected even knowing about it. Concerns have also been raised about the abolition of an independent government advisory board that held public sector bodies accountable for how they used AI. The Cabinet Office has launched an “algorithmic transparency reporting standard”, encouraging departments and police authorities to disclose voluntarily where they use AI to help make decisions that could have a material impact on the general public.
Key takeaways:
- Government officials are using artificial intelligence (AI) and complex algorithms to make decisions in areas such as welfare, immigration, and criminal justice. However, there are concerns about the potential for these tools to produce discriminatory results.
- Examples of potentially discriminatory AI use include an algorithm used by the Department for Work and Pensions that may have led to people having their benefits removed, and a facial recognition tool used by the Metropolitan police that has been found to make more mistakes recognising black faces than white ones.
- Experts warn that if the data used to train an AI tool shows evidence of discrimination, the tool is likely to produce discriminatory outcomes. There are concerns that British officials are using poorly understood algorithms to make life-changing decisions without the people affected even knowing about it.
- AI use in the public sector has proved controversial elsewhere too: in the Netherlands, tax authorities used it to spot potential childcare benefits fraud, but were fined €3.7m after repeatedly getting decisions wrong and plunging tens of thousands of families into poverty.