However, the use of AI in writing scientific literature is not without controversy. Some consider it scientific misconduct, since AI models can produce inaccurate text and even fabricate quotations and citations. Both Gray and the Stanford researchers have raised concerns about research integrity and about potential risks to the security and independence of scientific practice. They suggest that authors who use LLM-generated text should either disclose its use or reconsider whether it is appropriate at all.
Key takeaways:
- Scientific articles are increasingly being written by generative AI, with estimates ranging from 1% to 17.5% of papers published in 2023, depending on the discipline.
- Two studies identified this trend by analyzing the frequency of words that large language models (LLMs) habitually overuse, such as 'intricate,' 'pivotal,' and 'meticulously' (see the sketch after this list).
- Computer science and electrical engineering lead in the use of AI-preferred language, while mathematics, physics, and papers published in the journal Nature saw smaller increases.
- There are concerns about the use of AI in writing scientific literature, with some considering it scientific misconduct due to the risk of producing inaccurate text and fabricating quotations and citations.
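As a rough illustration of the word-frequency approach the studies describe, the Python sketch below compares how often a list of LLM-marker words appears in two small corpora. The word list, function names, and sample texts are all assumptions for illustration; the actual studies used larger, data-driven vocabularies and corpus-level statistical comparisons rather than per-document scoring.

```python
import re
from collections import Counter

# Hypothetical marker words; the studies derived broader lists from data.
LLM_MARKER_WORDS = {"intricate", "pivotal", "meticulously", "delve", "showcase"}

def marker_rate(text: str) -> float:
    """Return the fraction of tokens in `text` that are LLM-marker words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[w] for w in LLM_MARKER_WORDS)
    return hits / len(tokens)

# Illustrative comparison: a pre-ChatGPT-era corpus vs. a 2023 corpus.
pre_2023 = ["We study the stability of the system under small perturbations."]
year_2023 = ["This pivotal study meticulously examines intricate dynamics."]

pre_rate = sum(marker_rate(t) for t in pre_2023) / len(pre_2023)
new_rate = sum(marker_rate(t) for t in year_2023) / len(year_2023)
print(f"pre-2023 marker rate: {pre_rate:.4f}")
print(f"2023 marker rate:     {new_rate:.4f}")
```

A rise in the aggregate marker rate across a publication year, rather than any single document's score, is the kind of signal the studies relied on.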