
Slack users horrified to discover messages used for AI training

May 17, 2024 - arstechnica.com
Slack users have expressed concern over the company's policy of using customer data, including messages and files, to train its AI models. Slack's privacy principles state that customer data is used to develop AI/ML models, while the Slack AI page claims that customer data is not used to train Slack AI. This discrepancy has prompted calls for Slack to clarify its privacy principles regarding the use of data for AI training.

In response, a Salesforce spokesperson stated that the company will update its principles to clarify that Slack does not use customer data to develop or train generative models, and that customer data never leaves Slack's trust boundary. However, these changes do not address the underlying issue: users never explicitly consented to their chats and other content being used for AI training.

Key takeaways:

  • Slack users have expressed concern over the company's policy that allows for the use of customer data, including messages and files, to train its AI models.
  • Slack engineer Aaron Maurer has stated that the company does not train its large language models (LLMs) on customer data, but acknowledged that the policy may need to be updated for clarity.
  • There is a discrepancy between Slack's privacy principles, which state that customer data is used to develop AI models, and the Slack AI page, which claims that customer data is not used to train Slack AI.
  • Salesforce, the owner of Slack, has agreed to update the privacy principles to clarify the relationship between customer data and generative AI in Slack, stating that customer data is not used to develop LLMs or other generative models, and that customer data never leaves Slack's trust boundary.