Zou argues that while AI can help with certain tasks, such as correcting language and grammar or identifying relevant information, it cannot replace expert human reviewers. He calls on the scientific community to establish norms for the responsible use of AI in the peer-review process: AI could assist reviewers and editors, but its outputs should be cross-checked by humans. He also advocates transparency about the use of AI in reviews, and further research into how AI can responsibly assist with peer-review tasks.
Key takeaways:
- Up to 17% of peer reviews at major computer-science venues are now written, at least in part, by artificial intelligence (AI), particularly large language models (LLMs).
- LLM-written reviews are often superficial and generic, lacking citations and specific mentions of sections in the submitted paper (a simple heuristic screen for such reviews is sketched after this list).
- Despite their capabilities, LLMs cannot replace expert human reviewers as they lack in-depth scientific reasoning and can overlook mistakes in a research paper.
- The scientific community needs to establish norms for the responsible use of these models in academic peer review, including transparent disclosure of LLM use and limits on the tasks they are given.
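
To make the superficiality point concrete, here is a minimal, hypothetical Python sketch that flags review text containing few or no specific references to sections, figures, tables, equations, or citations. The patterns and the threshold are illustrative assumptions for this summary, not a method described in Zou's article.

```python
import re

# Patterns indicating that a review engages with specific parts of the paper.
# These patterns and the threshold below are illustrative assumptions,
# not criteria from Zou's article.
SPECIFICITY_PATTERNS = [
    r"\bSection\s+\d+(\.\d+)*\b",    # e.g. "Section 3.2"
    r"\bFigure\s+\d+\b",             # e.g. "Figure 4"
    r"\bTable\s+\d+\b",              # e.g. "Table 1"
    r"\bEq(uation)?\.?\s*\(?\d+\)?", # e.g. "Eq. (5)"
    r"\[\d+\]",                      # numeric citation markers, e.g. "[12]"
]

def looks_generic(review_text: str, min_hits: int = 2) -> bool:
    """Return True if the review makes fewer than `min_hits` specific
    references to sections, figures, tables, equations, or citations."""
    hits = sum(
        len(re.findall(pattern, review_text, flags=re.IGNORECASE))
        for pattern in SPECIFICITY_PATTERNS
    )
    return hits < min_hits

if __name__ == "__main__":
    generic = "This paper is well written and the results are interesting."
    specific = ("Section 3.2 omits the ablation shown in Table 2, and the "
                "claim in Eq. (5) contradicts the baseline in [14].")
    print(looks_generic(generic))   # True: no concrete references
    print(looks_generic(specific))  # False: engages with specifics
```

A screen like this could only triage reviews for editor attention; as the takeaways above note, judging whether a review's substance is correct still requires an expert human reader.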