The issue extends beyond peer-reviewed journal articles: undisclosed use of ChatGPT has also been found in conference papers and preprints. Experts warn that the rise of AI tools like ChatGPT could exacerbate the existing problem of fake manuscripts by making it harder to distinguish human-written from AI-generated content. The issue underscores a deeper problem in academic publishing, where peer reviewers often lack the time to thoroughly check manuscripts for red flags.
Key takeaways:
- Researchers have been using the AI chatbot ChatGPT to write academic papers without disclosing it, leading to retractions and investigations by publishers.
- Guillaume Cabanac, a computer scientist, has flagged more than a dozen journal articles with telltale ChatGPT phrases, indicating that the actual number of undisclosed AI-assisted papers could be much higher.
- Publishers such as Elsevier and Springer Nature allow the use of AI tools like ChatGPT in preparing manuscripts, but they require authors to declare it.
- The rise of AI tools like ChatGPT could exacerbate the problem of fake manuscripts, as they produce fluent text that is almost impossible to distinguish from human writing.