
AI researchers have started reviewing their peers using AI assistance

Mar 19, 2024 - theregister.com
Researchers from Stanford University, NEC Labs America, and UC Santa Barbara used generative AI to analyze peer reviews of papers submitted to leading AI conferences. They found a small but consistent increase in apparent large language model (LLM) use in reviews submitted three days or less before the deadline. Because human- and machine-written text are difficult to tell apart, the researchers called for methods to evaluate real-world data sets that contain AI-authored content.

The researchers found that LLMs tend to use adjectives like "commendable," "innovative," and "comprehensive" more frequently than human authors. They estimated that between 6.5% and 16.9% of text submitted as peer reviews to these conferences could have been substantially modified by LLMs. The researchers argued for more transparency about the use of LLMs in scientific writing and warned that AI feedback risks a homogenization effect that skews towards AI model biases and away from meaningful insight.

Key takeaways:

  • Researchers used generative AI to analyze peer reviews of machine-learning papers and found a small but consistent increase in apparent LLM usage in reviews submitted three days or less before the deadline.
  • The difficulty of distinguishing between human- and machine-written text has led to an urgent need to develop ways to evaluate real-world data sets that contain some indeterminate amount of AI-authored content.
  • The researchers focused on adjective usage, which they found to be a more reliable signal than other features for distinguishing human- from machine-written content.
  • The researchers argue that the scientific community needs to be more transparent about the use of LLMs, as their usage potentially deprives those whose work is being reviewed of diverse feedback from experts and risks a homogenization effect that skews toward AI model biases.
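The adjective-frequency signal described above can be illustrated with a minimal sketch: count how often the adjectives the study flagged ("commendable," "innovative," "comprehensive") appear per 1,000 tokens in a piece of text. This is not the researchers' actual method (they used a maximum-likelihood estimate over vocabulary distributions); the example texts below are hypothetical, not drawn from the study's data.

```python
from collections import Counter
import re

# Adjectives the study reported as disproportionately frequent in LLM text.
FLAGGED = {"commendable", "innovative", "comprehensive"}

def flagged_rate(text: str) -> float:
    """Occurrences of flagged adjectives per 1,000 tokens (0.0 for empty text)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    if not tokens:
        return 0.0
    counts = Counter(tokens)
    hits = sum(counts[w] for w in FLAGGED)
    return 1000.0 * hits / len(tokens)

# Hypothetical example snippets (not from the study's data).
human_review = "The method is sound but the evaluation is limited to one dataset."
llm_like_review = ("This commendable and innovative paper presents a "
                   "comprehensive evaluation of an innovative method.")

print(flagged_rate(human_review))     # no flagged adjectives
print(flagged_rate(llm_like_review))  # a much higher rate
```

A real detector would compare such rates against a reference corpus of known-human reviews rather than a fixed word list, since individual human reviewers also use these words.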
