The article also discusses the copyright battles now taking shape, most prominently the New York Times' lawsuit against OpenAI and Microsoft, which argues that OpenAI's models were trained on the Times' content and now offer a competing product. Other lawsuits against AI developers have been brought by artists and authors. Some experts argue for legislative intervention to ensure AI developers pay for the content they use, while others believe existing copyright protections should suffice. Misinformation concerns were also raised: AI-generated misinformation poses a threat to journalism and increases the burden on newsrooms to verify content.
Key takeaways:
- Experts have warned Congress about the threat AI poses to journalism, with concerns about intellectual property issues, the decline of local news due to big tech, and AI-powered misinformation.
- There is a growing global trend of legislation requiring tech companies to pay news outlets for content featured on their platforms, with such laws already in place in Canada and Australia, and proposed in the U.S.
- High-profile copyright cases have been launched against AI developers, including a lawsuit from the New York Times against OpenAI and Microsoft, arguing that their AI models were trained on the Times' work and offer a competing product.
- There are concerns about the dangers of AI-generated misinformation, with the use of AI to manipulate or misappropriate the likenesses of trusted personalities seen as a way to spread misinformation or perpetrate fraud.