It's true, LLMs are better than people – at creating convincing misinformation

Jan 31, 2024 - theregister.com
Researchers Canyu Chen and Kai Shu from the Illinois Institute of Technology have found that misinformation generated by large language models (LLMs) is harder to detect than false claims written by humans. In their study, they started from datasets of human-written misinformation, had LLMs such as ChatGPT, Llama, and Vicuna produce their own versions of those claims, and then ran eight LLM detectors over both the human- and machine-authored samples. They found that LLMs can craft misinformation that preserves the meaning of a source sample while varying its style, making it more difficult to detect.
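
The comparison the researchers describe comes down to running the same detector over paired human-written and LLM-rewritten versions of a claim and seeing how often each is flagged. The sketch below illustrates that setup in outline only; the detect function, sample texts, and threshold are placeholders, not the detectors, prompts, or datasets used in the study.

    from typing import Callable, Sequence

    def detection_rate(detect: Callable[[str], float],
                       samples: Sequence[str],
                       threshold: float = 0.5) -> float:
        # Fraction of samples the detector scores at or above the threshold.
        flagged = sum(1 for text in samples if detect(text) >= threshold)
        return flagged / len(samples)

    def compare_detectability(detect: Callable[[str], float],
                              human_written: Sequence[str],
                              llm_rewritten: Sequence[str]) -> None:
        # A lower rate for the LLM-rewritten samples mirrors the paper's
        # "harder to detect" finding for machine-authored misinformation.
        print(f"human-written : {detection_rate(detect, human_written):.2%}")
        print(f"LLM-rewritten : {detection_rate(detect, llm_rewritten):.2%}")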

Chen suggests that LLMs can have more deceptive styles than human authors due to their strong capacity to follow user instructions. The researchers argue that the difficulty in detecting LLM-authored misinformation means it can cause greater harm, posing serious threats to online safety and public trust. They call for collective efforts from various stakeholders to combat LLM-generated misinformation.

Key takeaways:

  • Misinformation generated by large language models (LLMs) is more difficult to detect than false claims created by humans, according to researchers Canyu Chen and Kai Shu.
  • The researchers examined whether LLM-generated misinformation can cause more harm than human-written misinformation, using eight LLM detectors to evaluate both human- and machine-authored samples.
  • LLMs can be prompted with four types of controllable misinformation-generation strategies that work from a reference source, and can also be instructed to write an arbitrary piece of misinformation without one.
  • The difficulty of detecting LLM-authored misinformation means it can do greater harm, posing serious threats to online safety and public trust.