The use of AI has reportedly resulted in some 37,000 Palestinians being marked for assassination and thousands of women and children being killed as collateral damage. Although the Israeli army denies using AI to select human targets, the high death toll and the reported reliance on machine-generated targeting decisions have sparked international criticism and allegations of genocide now before the International Court of Justice. Critics argue that the use of AI in warfare can foster moral complacency, prompt users toward action over inaction, and prioritize speed over ethical reasoning.
Key takeaways:
- Israel has reportedly been using AI systems, including one called "The Gospel", to guide its war in Gaza, with the AI deciding whom to target for killing.
- The AI systems reportedly work in concert: "The Gospel" marks buildings used by Hamas militants, "Lavender" rates each person's likelihood of being a militant based on surveillance data, and "Where's Daddy?" tracks these targets and alerts the army when they are in their family homes.
- Although the AI reportedly errs in roughly 10 percent of cases, Israeli soldiers treated its output as if it were a human decision, sometimes spending only about 20 seconds reviewing a target before bombing.
- The use of AI in warfare raises ethical questions about moral responsibility: the speed and scale of these systems can encourage moral complacency and crowd out deliberative ethical reasoning.