
Anthropic confirms it suffered a data leak

Jan 26, 2024 - venturebeat.com
AI startup Anthropic, known for its large language models and chatbots, recently experienced a data leak when a contractor accidentally sent a file of non-sensitive customer information, including customer names and open credit balances as of the end of 2023, to a third party. The company has notified affected customers and clarified that the incident was due to human error, not a breach of its systems. The leak coincides with the Federal Trade Commission's announcement of an investigation into Anthropic's strategic partnerships with Amazon and Google.

Beyond the data leak, Anthropic faces scrutiny from the FTC over those partnerships. The agency has issued 6(b) orders to Amazon, Microsoft, OpenAI, Anthropic, and Alphabet, requesting detailed information on their multibillion-dollar relationships and investigating whether such arrangements could undermine fair competition. Anthropic's relationships with AWS and Google have been substantial since its inception, with Amazon investing up to $4 billion and Google providing security services and database support.

Key takeaways:

  • AI startup Anthropic experienced a data leak when a contractor inadvertently sent a file containing non-sensitive customer information to a third party. The company has notified affected customers and stated that the incident was due to human error, not a breach of its systems.
  • The Federal Trade Commission (FTC) is investigating Anthropic's strategic partnerships with Amazon and Google, as well as rival OpenAI's partnership with Microsoft, to assess their impact on market competition.
  • Anthropic has emphasized that the data leak is unrelated to the FTC probe and has advised customers to be alert to any suspicious communications appearing to come from the company.
  • The incident comes at a time when data breaches are at an all-time high, with 95% traced to human error, raising concerns among enterprises that use third-party large language models (LLMs) such as Anthropic's Claude.
