
How journalism should face the unchecked threats of generative AI - Poynter

Sep 12, 2023 - poynter.org
The article discusses the potential dangers of artificial intelligence (AI) in journalism, particularly the risk of AI-generated misinformation or "hallucinations". The author suggests that as AI becomes more advanced, it could produce false information that could have serious consequences, especially in fields like health and medicine. The article also highlights the risk of AI undermining trust in research and news, and exacerbating existing societal divisions.

The author suggests several measures to mitigate these risks, including tempering the use of AI chatbots, hiring more copy editors, emphasizing fact-checking, and creating guidelines for the use of AI. The article also proposes establishing "truth beats," in which reporters focus on debunking false information. The author argues that newsrooms must be equipped to defend facts before they fall prey to AI-generated misinformation.

Key takeaways:

  • Artificial Intelligence (AI) may pose a threat to journalism due to the potential for AI hallucinations, fabrications, and illogical deductions, which could further erode trust in news media.
  • Four types of AI are in use or under development: Reactive AI, Limited Memory AI, Theory of Mind (General Intelligence) AI, and Self-Aware (Superintelligence) AI.
  • Newsrooms should temper their use of AI, hire more copy editors, emphasize fact-checking, establish “truth beats” and create or update guidelines about machine applications to safeguard the sanctity of fact.
  • AI hallucinations, or the generation of false information by AI models, can lead to serious consequences, especially in fields like health and medicine, and can further erode trust in research and news reporting.
