Microsoft engineer flags Copilot Designer concerns as academics call for better AI risk research

Mar 06, 2024 - siliconangle.com
A Microsoft engineer, Shane Jones, has raised concerns about the company's Copilot Designer tool, an image generator powered by OpenAI's DALL-E 3 system, in a letter to the U.S. Federal Trade Commission. Jones claims the tool can generate harmful images, including depictions of violence and drug use, as well as potentially copyright-infringing content. He argues that Microsoft should change the "E for Everyone" rating of the tool's Android app and add disclosures to the interface. Jones also calls for an independent review of Microsoft's responsible AI incident reporting processes.

This comes as more than 150 researchers and experts, including professors from MIT and Stanford University, published an open letter calling for more effective AI safety research. The letter argues that AI companies' current policies can hinder independent evaluation and suggests that AI developers should indemnify good-faith independent AI safety research. It also recommends that companies rely on independent reviewers to assess applications from AI safety experts seeking to evaluate their models.

Key takeaways:

  • A Microsoft engineer has raised concerns about the company’s Copilot Designer tool, stating that it can be used to generate harmful images, including those depicting violence and drugs.
  • The engineer, Shane Jones, has written to the U.S. Federal Trade Commission and Microsoft’s board of directors detailing his concerns and suggesting that the company should take more steps to address the issues.
  • More than 150 academics, tech executives and other experts have published an open letter calling for better research into the risks posed by advanced AI models and for AI companies to more effectively support independent research into their models' safety.
  • The signatories of the open letter, who include professors from MIT and Stanford University, recommend that AI developers indemnify good-faith independent AI safety, security, and trustworthiness research, provided it is conducted in accordance with well-established vulnerability disclosure rules.