The recommendations come after two high-profile cases in which explicit, AI-generated images of public figures posted on Instagram and Facebook landed Meta in controversy; the company acted only after the Oversight Board intervened. The Board also highlighted that many victims of deepfake intimate images are not in the public eye and are forced either to accept the spread of their non-consensual depictions or to report every instance themselves. In response, Meta said it would review the recommendations.
Key takeaways:
- The Oversight Board, Meta's semi-independent policy council, has urged the company to refine its rules on AI-generated explicit images, including changing the terminology it uses and moving the relevant rules to a different section of its Community Standards.
- The Board has also recommended that Meta drop the requirement that imagery be "non-commercial or produced in a private setting" before it removes or bans AI-generated or otherwise manipulated images shared without consent.
- These recommendations follow two high-profile cases where explicit, AI-generated images of public figures posted on Instagram and Facebook caused controversy for Meta.
- Meta has responded by saying it will review the Board's recommendations.