The rise of AI in academic writing has led to a patchwork of policies among journals, with some requiring disclosure of AI use and others banning AI-generated content without an editor's permission. There are also concerns that paper mills are using AI to churn out scientific papers. Some researchers are calling for tools that screen for undisclosed AI writing, much as plagiarism is detected today. A recent study demonstrated a tool that distinguishes human-written from AI-produced science writing with 99% accuracy, though more work is needed to extend it to writing from a wider range of journals.
Key takeaways:
- The academic journal _Resources Policy_ published a study containing a sentence that suggested AI had been used in its creation, prompting an investigation by Elsevier, the journal's publisher.
- Elsevier does not prohibit the use of AI in writing, but it does require that such use be disclosed; the case has raised questions about the ethics of AI in academic publishing.
- Experts have raised concerns that AI can generate inaccurate or misleading content and have stressed the need for rigorous vetting and disclosure when AI is used in academic work.
- There is currently no foolproof way to detect the use of AI in academic writing, and some researchers are calling for tools to screen for AI-generated content in the same way that plagiarism is detected.