This comes as more than 100 researchers and experts, including professors from MIT and Stanford University, published an open letter calling for more effective AI safety research. The letter argues that AI companies' current policies can hinder independent evaluation, and it suggests that AI developers should indemnify good-faith independent AI safety research. It also recommends that companies rely on independent reviewers to assess AI safety experts' evaluation applications.
Key takeaways:
- A Microsoft engineer has raised concerns about the company’s Copilot Designer tool, stating that it can be used to generate harmful images, including depictions of violence and drug use.
- The engineer, Shane Jones, has written to the U.S. Federal Trade Commission and Microsoft’s board of directors detailing his concerns and urging the company to take further steps to address them.
- More than 100 academics, tech executives and other experts have published an open letter calling for better research into the risks posed by advanced AI models and for AI companies to more effectively support independent research into their models’ safety.
- The signatories of the open letter, who include professors from MIT and Stanford University, recommend that AI developers indemnify good-faith independent AI safety, security, and trustworthiness research, provided it is conducted in accordance with well-established vulnerability disclosure rules.