In addition, the BBC has blocked web crawlers operated by OpenAI and Common Crawl from accessing its websites, aligning with other major news organizations such as CNN, The New York Times, and Reuters. The move is intended to protect the interests of license fee payers and prevent unauthorized use of BBC content to train AI models. The BBC is also exploring the broader implications of generative AI for the media industry, including potential effects on website traffic patterns and the spread of disinformation.
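For context, this kind of crawler blocking is typically implemented through a site's robots.txt file under the Robots Exclusion Protocol. The sketch below shows how a publisher might disallow OpenAI's crawler (which identifies itself as GPTBot) and Common Crawl's crawler (CCBot); the BBC's actual directives are not quoted in the source and may differ.

```
# Block OpenAI's web crawler from the entire site
User-agent: GPTBot
Disallow: /

# Block Common Crawl's web crawler from the entire site
User-agent: CCBot
Disallow: /
```

Compliant crawlers check this file before fetching pages, so the approach depends on the crawler operator honoring the protocol rather than on any technical enforcement.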
Key takeaways:
- The BBC has outlined its guiding principles for the use of generative AI in areas such as journalism, archiving, and personalized experiences, emphasizing public interest, artists' rights, and transparency.
- The BBC plans to work with tech companies, other media organizations, and regulators to develop generative AI safely and maintain trust in the news industry.
- The broadcaster has blocked web crawlers from OpenAI and Common Crawl from accessing its websites, joining other major news organizations such as CNN, The New York Times, and Reuters in protecting copyrighted content.
- The BBC is planning a series of projects to explore potential applications of generative AI across various domains, while also examining its broader implications for the media industry.