The article also notes that Elon Musk, owner of X, has encouraged users to upload their medical imagery to Grok, acknowledging that the AI model is still in its early stages but asserting it will improve over time. It is unclear, however, who has access to this data; Grok's privacy policy states that it shares some users' personal information with an unspecified number of related companies. The article concludes by reminding readers that once information is uploaded to the internet, it never truly leaves.
Key takeaways:
- People are increasingly turning to AI chatbots like OpenAI’s ChatGPT and Google’s Gemini to understand their health concerns, and some are even encouraged to upload medical data, such as X-rays and MRIs, to these platforms.
- Security and privacy advocates warn that uploaded data can be used to train AI models, potentially exposing private medical information.
- Most consumer apps, including these AI chatbots, are not covered by the U.S. healthcare privacy law HIPAA, so uploaded data receives none of its protections.
- It is not always clear how uploaded data is used or with whom it is shared, and companies can change their policies at any time.