The author argues that transparency about how AI models are trained could help build trust. They also suggest that local models, which run on users' own devices, could be a more privacy-friendly alternative. However, they caution against dismissing the benefits of larger, cloud-hosted models over privacy concerns that may prove unfounded. They stress the importance of understanding and trusting how companies handle user data, and call on companies to earn that trust.
Key takeaways:
- Dropbox's new AI features send user data to OpenAI for processing, which has sparked privacy concerns and criticism amid allegations that the data is used to train models.
- Despite assurances from Dropbox and OpenAI that user data is not used for training AI models without consent, many users remain skeptical and fear their private data is being misused.
- The author suggests that AI companies should be more transparent about their training processes to build trust with users and dispel privacy concerns.
- Local AI models, which run on users' own devices, are seen as a more trustworthy alternative to cloud-based models, and their quality and efficiency are improving (see the sketch below).
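
To make the on-device idea concrete, here is a minimal sketch of running a small language model locally with the Hugging Face `transformers` library. The specific model (`distilgpt2`) and prompt are illustrative assumptions, not anything from the article; the point is that after a one-time weights download, inference happens entirely on the user's machine and no prompt text is sent to a remote API.

```python
# Minimal sketch: on-device text generation with Hugging Face transformers.
# Assumes `transformers` and a backend such as PyTorch are installed;
# "distilgpt2" is an illustrative small model, not the article's choice.
from transformers import pipeline

# Weights are downloaded once, cached, and then run locally;
# after that, prompts never leave this machine.
generator = pipeline("text-generation", model="distilgpt2")

prompt = "Local AI models are attractive for privacy because"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

print(result[0]["generated_text"])
```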