The author criticizes Hoodline for using large language models (LLMs) not to improve journalism but to create a semblance of local journalism, thereby undermining public trust in an already struggling sector. The article acknowledges potential benefits of LLMs in journalism, such as editing, transcription, and data analysis, but questions whether that value proposition holds up against the environmental cost of AI. The author concludes by criticizing companies like Google for failing to maintain quality control over their news and search services, which makes it easier for pseudo-news outlets to succeed.
Key takeaways:
- Media outlets are using AI and large language models (LLMs) to automate journalism, often producing low-quality content and triggering scandals involving fake journalists and misinformation.
- Hoodline, a media outlet founded in 2014, is facing criticism for using AI to produce low-quality aggregated content without adequately disclosing this to its readers.
- The decline of local news has created a vacuum filled by right-wing propaganda rather than genuine local journalism, leaving the public more ignorant and divided and affecting electoral outcomes.
- While AI and LLMs can assist journalists with tasks such as editing, transcription, and data analysis, those benefits must be weighed against AI's environmental impact and its potential for misuse in producing low-quality or misleading content.