
The AI trust crisis

Dec 14, 2023 - simonwillison.net
The article discusses the recent controversy surrounding Dropbox's new AI features, which some users believe send their private data to OpenAI for training purposes. Despite Dropbox's denial of these claims, and despite its AI principles document stating that customer data will not be used to train AI models without consent, users remain skeptical. The author suggests that this skepticism stems from a broader crisis of trust in AI: many people simply do not believe companies when they say they won't use their data for certain purposes.

The author argues that transparency about how AI models are trained could help improve trust. They also suggest that local models, which run on users' own devices, could be a more privacy-friendly alternative, while cautioning against dismissing the benefits of larger, cloud-hosted models over potentially unfounded privacy concerns. They stress that users need to understand and trust how companies handle their data, and call on companies to earn that trust.

Key takeaways:

  • Dropbox's new AI features, which allegedly send user data to OpenAI for training, have sparked privacy concerns and criticism.
  • Despite assurances from Dropbox and OpenAI that user data is not used for training AI models without consent, many users remain skeptical and fear their private data is being misused.
  • The author suggests that AI companies should be more transparent about their training processes to build trust with users and dispel privacy concerns.
  • Local AI models, which run on users' own devices, are seen as a more trustworthy alternative to cloud-based models, and their quality and efficiency are improving.
