In response, a Salesforce spokesperson said the company will update its privacy principles to clarify that Slack does not use customer data to develop or train generative models, and that customer data never leaves Slack's trust boundary. However, these changes do not address the underlying concern: users never explicitly consented to their chats and other content being used for AI training.
Key takeaways:
- Slack users have raised concerns about the company's policy permitting customer data, including messages and files, to be used to train its AI models.
- Slack engineer Aaron Maurer has stated that the company does not train its large language models (LLMs) on customer data but has acknowledged that the policy may need to be updated for clarity.
- There is a discrepancy between Slack's privacy principles, which state that customer data is used to develop AI models, and the Slack AI page, which claims that customer data is not used to train Slack AI.
- Salesforce, Slack's owner, has agreed to update the privacy principles to clarify how customer data relates to generative AI in Slack: customer data is not used to develop LLMs or other generative models, and it never leaves Slack's trust boundary.