The article also discusses the potential for AI to be deliberately misused to harm individuals, such as through deepfake pornography. The limited legal protection for victims of such misuse is a growing concern. In response to these issues, seven leading AI companies agreed in July to adopt voluntary safeguards, such as publicly reporting their systems' limitations. The Federal Trade Commission is also investigating whether ChatGPT, the AI chatbot developed by OpenAI, has harmed consumers.
Key takeaways:
- Artificial Intelligence (AI) can sometimes create and spread false information about individuals, harming their reputations and leaving them with little recourse for protection.
- AI's struggles with accuracy have produced a range of problems, from fabricated legal decisions and pseudo-historical images to false claims about specific individuals.
- Legal precedent involving AI is slim, and the few laws governing the technology are mostly new, but some people have begun confronting AI companies in court over false information spread about them.
- AI can also be deliberately abused to attack real people: deepfake technology has been used to insert a person's likeness into sexual situations without their consent, prompting calls for more legislation to protect individuals.