OpenAI accused of trying to profit off AI model inspection in court

Nov 15, 2024 - arstechnica.com
The article discusses the ongoing legal battle between OpenAI and The New York Times (NYT) over alleged copyright infringement involving OpenAI's AI model, ChatGPT. The NYT claims that the model was trained on its copyrighted content and is seeking to inspect the model to gather evidence. OpenAI, however, has capped the number of queries the NYT can make through an application programming interface (API) and is charging retail prices for additional queries. The outcome of this case could set a precedent for future cases in which the public seeks to inspect AI models for alleged harms.

The article also highlights the broader issue of AI safety testing. While some companies, including OpenAI, have agreed to voluntary safety testing by the Artificial Intelligence Safety Institute (AISI), not all have done so. The AISI's future is uncertain due to potential funding issues, leaving the public largely reliant on AI companies' internal safety testing. This situation could increase the risk of harmful AI outputs and make it more difficult and expensive for the public to hold companies accountable for irresponsible AI releases.

Key takeaways:

  • OpenAI, the maker of ChatGPT, has been accused of trying to profit from discovery by charging litigants retail prices to inspect AI models alleged to cause harm.
  • The New York Times has filed a lawsuit against OpenAI over copyright concerns and alleges that OpenAI is concealing its infringement by imposing an undue expense on model inspection.
  • The outcome of this court dispute could deter future lawsuits from plaintiffs who cannot afford to pay for model inspection.
  • The AI Safety Institute (AISI) is meant to protect the US from risky AI models by conducting safety testing, but its future is unclear and it may be under-resourced for its broad mandate.