Microsoft AI Researchers Accidentally Exposed Terabytes of Internal Sensitive Data - Slashdot

Sep 18, 2023 - yro.slashdot.org
Microsoft's AI researchers inadvertently exposed tens of terabytes of sensitive data while publishing a bucket of open-source training data on GitHub. The exposed data included the personal backups of two Microsoft employees' computers, which contained private keys, passwords to Microsoft services, secret keys, and more than 30,000 internal Microsoft Teams messages from hundreds of employees.

The exposure was discovered by cloud security startup Wiz during its ongoing investigation into accidental data exposure on cloud-hosted platforms. The GitHub repository, which provided open-source code and AI models for image recognition, instructed users to download the models from an Azure Storage URL. However, the URL was misconfigured to grant permissions on the entire storage account rather than just the intended models, unintentionally exposing additional private data.
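Azure grants this kind of link-based access through a Shared Access Signature (SAS) token appended to the storage URL; a token scoped to the whole account rather than a single container exposes everything the account holds. Below is a minimal sketch, assuming the azure-storage-blob Python SDK, of minting the narrowly scoped alternative: a read-only, short-lived SAS for a single container. The account name, key placeholder, and container name are hypothetical.

```python
# Minimal sketch: container-scoped, read-only, short-lived SAS token.
# All names here are hypothetical placeholders, not the incident's resources.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

ACCOUNT_NAME = "exampleresearchstore"  # hypothetical storage account
ACCOUNT_KEY = "<account-key>"          # placeholder; never commit real keys
CONTAINER = "public-models"            # the only container meant to be public

sas_token = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER,
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True, list=True),  # no write/delete
    expiry=datetime.now(timezone.utc) + timedelta(days=7),     # short lifetime
)

# A README download link built this way exposes only this one container.
print(f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER}?{sas_token}")
```

A container-scoped, time-limited token caps the blast radius of an accidental leak, and rotating the account key invalidates any SAS tokens already signed with it.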

Key takeaways:

  • Microsoft AI researchers accidentally exposed tens of terabytes of sensitive data while publishing a bucket of open-source training data on GitHub.
  • The exposed data included private keys, passwords, and the personal backups of two Microsoft employees' computers.
  • The data was exposed through a misconfigured Azure Storage URL that granted permissions on the entire storage account rather than the intended files.
  • The exposed backups also contained passwords to Microsoft services, secret keys, and more than 30,000 internal Microsoft Teams messages from hundreds of Microsoft employees.