
ChatGPT is transforming peer review — how can we use it responsibly?

Nov 10, 2024 - nature.com
The article discusses the growing use of artificial intelligence (AI), specifically large language models (LLMs), in writing peer reviews of computer-science research papers. The author, James Zou, and his colleagues at Stanford University found that up to 17% of peer reviews at major computer-science venues are now written, at least in part, by AI. These AI-generated reviews are often marked by a formal tone and verbosity, and they tend to be superficial and generalized, lacking specific references and depth of technical critique.

Zou argues that while AI can help with certain tasks such as correcting language and grammar, and identifying relevant information, it cannot replace expert human reviewers. He calls for the scientific community to establish norms for responsible use of AI in the peer-review process. He suggests that AI could be used to assist reviewers and editors, but its outputs should be cross-checked. He also advocates for transparency in the use of AI in reviews, and for more research on how AI can responsibly assist with peer-review tasks.

Key takeaways:

  • Up to 17% of the peer reviews at major computer-science publication venues are now written by artificial intelligence (AI), particularly large language models (LLMs).
  • LLMs often produce superficial and generalized reviews, lacking references and specific mentions of sections in the submitted paper.
  • Despite their capabilities, LLMs cannot replace expert human reviewers as they lack in-depth scientific reasoning and can overlook mistakes in a research paper.
  • The scientific community needs to establish norms on how to use these models responsibly in the academic peer-review process, including transparent disclosure of LLM use and limiting their tasks.
