Microsoft has responded by stating its commitment to addressing employee concerns and expressing appreciation for Jones' efforts to enhance the safety of its technology. Jones, however, has pointed out that while the core issue lies with OpenAI's DALL-E model, from which Copilot Designer is derived, users of OpenAI's ChatGPT won't encounter the same harmful outputs because it employs different safeguards. The incident highlights the potential dangers of AI image-generators, including the creation of harmful "deepfake" images.
Key takeaways:
- A Microsoft engineer, Shane Jones, has raised concerns about harmful and offensive imagery generated by the company’s AI image-generator tool, Copilot Designer.
- Jones has written to U.S. regulators and Microsoft's board of directors, urging them to take action and has also met with U.S. Senate staffers to share his concerns.
- Microsoft has responded by stating that it is committed to addressing employee concerns and appreciates Jones' efforts in studying and testing the technology to enhance its safety.
- Jones has urged Microsoft to take Copilot Designer off the market until it is safer, highlighting that the tool can generate harmful content, including depictions of violence, political bias, and inappropriate sexual objectification.