The article also highlights that while many journals already permit AI-generated text under certain circumstances, using such tools to create images or data is far less likely to be considered acceptable. Data and images fabricated with generative AI are suspected to be widespread in the literature already, and detecting them is difficult because AI-produced images are often nearly indistinguishable from genuine ones. Companies such as Imagetwin and Proofig, however, are developing AI tools to detect integrity issues in scientific figures and weed out images created by generative AI.
Key takeaways:
- Generative artificial intelligence (AI) is becoming a powerful tool for fraudsters in the scientific community, raising concerns about the integrity of scientific literature.
- AI tools can create text, images, and data that are difficult for humans to distinguish from genuine material, fueling an arms race in which integrity specialists, publishers, and technology companies build AI tools of their own to detect deceptive, AI-generated elements in papers.
- There is a growing suspicion that data and images fabricated using generative AI are already widespread in scientific literature, with paper mills potentially using AI tools to mass-produce manuscripts.
- Companies like Imagetwin and Proofig are developing AI tools to detect integrity issues in scientific figures and weed out images created by generative AI, though the reliability of these tools has yet to be fully established.