The report also notes that using data to train AI is not new: companies like Netflix and Meta have long used user data to generate recommendations and rank news feeds. However, the scale of data required for today's AI technologies is far larger, creating distinct privacy risks. For instance, AI models can learn intimate details about individuals and sometimes leak personal information. The report further criticizes the lack of transparency from big tech companies about their data usage practices and calls for stricter regulations and greater user control over personal data.
Key takeaways:
- Big Tech companies such as Google, Meta, and Microsoft are using user data, including conversations, photos, and documents, to train their AI systems, often without explicit permission.
- These practices create privacy risks, as AI systems can learn intimate details about individuals and sometimes leak that data.
- Companies are often vague or misleading about when and how they use user data, making it difficult for users to make informed privacy decisions.
- There is a growing call for clearer regulations and laws to protect user data and privacy in the face of AI development.