
‘The Gospel’: how Israel uses AI to select bombing targets in Gaza

Dec 01, 2023 - theguardian.com
The Israel Defense Forces (IDF) have been using artificial intelligence (AI) to select targets in their bombing campaign in Gaza, according to The Guardian. The IDF has deployed an AI target-creation platform called “the Gospel”, which has accelerated the production of targets. The platform analyses large sets of information from various sources, such as drone footage, intercepted communications, and surveillance data, to produce automated recommendations for attacking targets. However, there are concerns about the risks posed to civilians as advanced militaries expand the use of complex and automated systems on the battlefield.

The IDF has claimed that the AI-based system reduces civilian harm by encouraging more accurate targeting. Experts, however, are sceptical, noting there is little empirical evidence to support such claims. When a strike is authorised, target researchers reportedly know in advance the number of civilians expected to be killed, as each target file includes a collateral damage score. Critics argue that as humans come to rely on these systems, they lose the ability to consider the risk of civilian harm in a meaningful way.

Key takeaways:

  • The IDF has been using AI to select targets in its bombing campaign in Gaza, specifically an AI target-creation platform called "the Gospel".
  • The Gospel has been used to produce automated recommendations for attacking targets, such as the private homes of individuals suspected of being Hamas or Islamic Jihad operatives, and has played a critical role in building lists of individuals authorised to be assassinated.
  • Each target has a file containing a collateral damage score that stipulates how many civilians are likely to be killed in a strike, raising concerns about the risks posed to civilians as advanced militaries expand the use of complex and opaque automated systems on the battlefield.
  • Experts are sceptical of assertions that AI-based systems reduce civilian harm by encouraging more accurate targeting, with some warning of the risk of "automation bias" and the potential for humans to lose the ability to consider the risk of civilian harm in a meaningful way.
