Microsoft engineer Shane Jones has been warning for months about inappropriate images generated by Microsoft's OpenAI-powered systems. He found that even seemingly harmless prompts could produce disturbing images: entering the term "pro-choice", for example, yielded pictures of demons eating infants and of Darth Vader threatening a baby. Jones has written to the FTC and to Microsoft's board of directors about his concerns. In response, Microsoft stated that it continuously monitors and adjusts the system to strengthen safety filters and prevent misuse.
Key takeaways:
- Microsoft has blocked several prompts in its Copilot tool that led the AI to generate violent, sexual, and other inappropriate images.
- The changes were made after an engineer at the company raised serious concerns about Microsoft's generative AI technology with the Federal Trade Commission.
- Despite the changes, CNBC found that certain prompts could still generate violent imagery, and users could still convince the AI to create images of copyrighted works.
- Microsoft has stated that it is continuously monitoring and making adjustments to strengthen safety filters and prevent misuse of the system.