The accusations against Perplexity involve two key concepts: the Robots Exclusion Protocol, which websites use to signal that they don't want their content accessed by web crawlers, and fair use in copyright law, which permits the use of copyrighted material without permission in certain circumstances. Perplexity argues that summarizing a URL isn't the same as crawling it, and that it is merely responding to a user's request to visit that URL. Critics counter that this is a distinction without a difference: fetching a URL and extracting its text to produce a summary looks like scraping when done thousands of times a day. The startup is also accused of plagiarizing articles, but it maintains that providing a summary of an article falls within the bounds of fair use.
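For context, the Robots Exclusion Protocol works through a plain-text robots.txt file at a site's root, which well-behaved crawlers consult before fetching pages. Below is a minimal sketch of how a compliant crawler might perform that check, using Python's standard urllib.robotparser module; the site URL and user-agent string are illustrative, not drawn from the dispute.

```python
from urllib import robotparser

# Load the site's robots.txt (hypothetical site, for illustration only).
rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")
rp.read()

# A compliant crawler checks permission before fetching a page.
user_agent = "ExampleBot/1.0"  # illustrative user-agent string
url = "https://www.example.com/articles/some-story"
if rp.can_fetch(user_agent, url):
    print(f"{user_agent} may fetch {url}")
else:
    print(f"{user_agent} is disallowed from fetching {url}")
```

The key point for the dispute is that robots.txt is advisory: nothing technically prevents a client from fetching a page regardless of what the file says, so compliance is a matter of convention rather than enforcement.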
Key takeaways:
- Perplexity AI, a startup that combines a search engine with a large language model, has been accused of unethical practices including plagiarism and illicit web scraping.
- Forbes and Wired have accused Perplexity of plagiarizing their articles, and Wired has additionally accused the company of ignoring the Robots Exclusion Protocol to scrape website content.
- Perplexity maintains that it is operating within the bounds of fair use under copyright law and has done nothing wrong. The company is also working on advertising revenue-sharing deals with publishers.
- The situation highlights the complexities and nuances of fair use and the Robots Exclusion Protocol in the age of AI, with potential implications for the future of content creation and monetization.