Adobe has issued a statement clarifying that it does not train its Firefly AI models on unpublished user content. However, the company's policy change has not been conclusively shown to protect users from potential privacy invasions. The new Terms of Service do not explicitly mention Firefly or AI training data, but they do state that Adobe may need to access user content to address fraud, security, legal, or technical issues. This has led to concerns that Adobe's AI models may have access to users' private work.
Key takeaways:
- Adobe's new Terms of Service have sparked outrage and confusion among users, who fear their unpublished and in-progress works may be used to train AI models.
- The updated terms mention the use of 'automated systems' and 'machine learning' to improve Adobe's services and software, raising concerns about users' creative work being used as training data for Adobe's AI tools without credit or compensation.
- Adobe has clarified that it does not train its Firefly AI models on unpublished user content, and only uses content stored in the Creative Cloud or content that users make public.
- Despite Adobe's clarification, users remain concerned about potential privacy invasions and the broad language of the new Terms of Service, which invokes concepts that privacy-minded users have grown wary of.