Israel lets AI decide what it should bomb, increasing the number of targets

Dec 01, 2023 - businessinsider.com
The Israel Defense Forces (IDF) has been using an artificial intelligence (AI) system, known as the "target factory," to identify targets in Gaza. The system, which has been operational for several years, has increased the number of potential strike locations by over 70,000 percent. The IDF claims the AI system rapidly produces recommended targets, exceeding what a human might suggest, and ensures precision in attacks on Hamas infrastructure, minimizing harm to non-combatants.

However, a recent investigation by +972 Magazine and Local Call reveals that the AI system, named "Gospel," contributes to extensive targeting that has resulted in the destruction of entire neighborhoods and the death of nearly 15,000 people. The report also suggests that the IDF knows in advance the potential civilian casualties from these strikes. Critics argue that the system creates a "mass assassination factory," focusing on quantity over quality, and there are concerns about the potential for "automation bias" in AI decision-making.

Key takeaways:

  • The Israel Defense Forces (IDF) has been using an artificial intelligence system to help determine where in Gaza it should bomb. This system is part of an operation known as the "target factory," which has increased the number of strike locations available to the military by over 70,000 percent since it first became functional.
  • The AI system absorbs intelligence and rapidly produces recommended targets for human review, aiming to exceed what a human analyst might suggest. The IDF stresses that it maintains a high standard for the targets it produces.
  • Before the AI system was activated, Israel could produce 50 targets in Gaza in a year. Once the AI system was activated, it could generate as many as 100 targets in a single day — half of which would be attacked.
  • Experts on AI and international humanitarian law have raised concerns about the use of AI in warfare, including the risk of "automation bias" and the potential for AI to replace human decision-making altogether.