The article also highlights the broader issue of AI safety testing. While some companies, including OpenAI, have agreed to voluntary safety testing by the Artificial Intelligence Safety Institute (AISI), not all have done so. The AISI's future is uncertain due to potential funding issues, leaving the public largely reliant on AI companies' internal safety testing. This situation could increase the risk of harmful AI outputs and make it more difficult and expensive for the public to hold companies accountable for irresponsible AI releases.
Key takeaways:
- OpenAI, the maker of ChatGPT, has been accused of trying to profit from discovery by charging litigants retail prices to inspect AI models alleged to cause harm.
- The New York Times has filed a copyright lawsuit against OpenAI and alleges that OpenAI is obscuring its infringement by charging excessive fees for model inspection.
- The outcome of this court dispute could deter future lawsuits from plaintiffs who cannot afford to pay for model inspection.
- The AI Safety Institute (AISI) is meant to protect the US from risky AI models by conducting safety testing, but its future is uncertain and it may be under-resourced relative to its broad mandate.