The IDF has claimed that the AI-based system reduces civilian harm by encouraging more accurate targeting, but experts are sceptical of these assertions, noting that there is little empirical evidence to support them. When a strike is authorised, target researchers reportedly know in advance how many civilians are expected to be killed, since each target's file contains a collateral damage score. Critics argue that as humans come to rely on such systems, they lose the ability to consider the risk of civilian harm in a meaningful way.
Key takeaways:
- The Israel Defense Forces (IDF) has been using artificial intelligence (AI) to select targets in its bombing campaign in Gaza, specifically an AI target-creation platform called "the Gospel".
- The Gospel has been used to produce automated recommendations for attacking targets, such as the private homes of individuals suspected of being Hamas or Islamic Jihad operatives, and has played a critical role in building lists of individuals authorised to be assassinated.
- Each target has a file containing a collateral damage score that stipulates how many civilians are likely to be killed in a strike, raising concerns about the risks posed to civilians as advanced militaries expand the use of complex and opaque automated systems on the battlefield.
- Experts are sceptical of claims that AI-based systems reduce civilian harm by encouraging more accurate targeting, with some warning of "automation bias" and of humans losing the ability to consider the risk of civilian harm in a meaningful way as they come to rely on automated recommendations.