The study also suggested that ChatGPT could be rewarding plagiarism, pointing to an instance in which the bot cited a website that had plagiarized a New York Times article as the story's source. The researchers argue that OpenAI's technology treats journalism as decontextualized content, with little regard for the circumstances of its original production, and concluded that publishers have little control over what happens to their content once ChatGPT uses it. OpenAI responded to the findings by saying the researchers had run an "atypical test" of its product.
Key takeaways:
- A study by the Tow Center for Digital Journalism found that OpenAI's ChatGPT often misrepresents or invents information when citing publishers' content, regardless of whether the publishers have licensing deals with OpenAI.
- The researchers found numerous instances where ChatGPT cited publishers' content inaccurately, and the chatbot rarely acknowledged when it was unable to produce an answer.
- The study suggests that ChatGPT could be rewarding plagiarism, as it sometimes cited websites that had plagiarized material as the original source.
- The researchers argue that OpenAI's technology treats journalism as decontextualized content, with little regard for the circumstances of its original production, and that publishers have little meaningful agency over what happens to their content when ChatGPT uses it.