The use of AI language models has been associated with lower-quality peer reviews and a lack of transparency, since scientific journals do not require authors to declare the use of such tools. Librarian Andrew Gray warns of a potential "vicious circle," in which AI tools are trained on articles written by previous versions of the same tools, leading to increasingly "commendable," "intricate," "meticulous," and ultimately insubstantial studies.
Key takeaways:
- Librarian Andrew Gray analyzed five million scientific studies and found a sharp rise in the use of certain words, attributing the trend to researchers using AI tools such as ChatGPT to write or polish their studies.
- At least 60,000 scientific studies, more than 1% of those analyzed in 2023, were written with the help of ChatGPT or similar tools, according to Gray's estimates.
- AI language models disproportionately use words with positive connotations, and their use has been associated with lower-quality peer reviews.
- There is a risk of a "vicious circle" in which subsequent versions of ChatGPT are trained on scientific articles written by older versions, leading to increasingly commendable, intricate, meticulous, and insubstantial studies.