On September 19, research published by a cybersecurity company revealed that Microsoft's artificial intelligence research team accidentally exposed a large amount of private data on the software development platform GitHub, including more than 30,000 internal Microsoft Teams messages exchanged by Microsoft employees. Microsoft quickly deleted the exposed data after receiving the warning.
A team at cloud security company Wiz discovered that the data, which was hosted on a cloud storage platform, was exposed through a misconfigured link. According to Wiz, Microsoft's artificial intelligence research team inadvertently leaked the data when it published open-source training material on GitHub.
According to reports, users of the repository were instructed to download artificial intelligence models from a cloud storage URL. However, according to a Wiz blog post, this link was misconfigured: instead of granting read-only access to the intended files, it granted permissions over the entire storage account, including full control, meaning visitors could delete or overwrite existing files at will. Wiz says the leaked data in the storage account included backups of Microsoft employees' personal computers, which contained passwords to Microsoft services, secret keys, and more than 30,000 internal Microsoft Teams messages from 359 Microsoft employees.
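Wiz's write-up reportedly traced the exposure to an overly permissive Azure Storage sharing link (a Shared Access Signature, or SAS, token) that covered the whole account rather than the intended files. As an illustration only, the following Python sketch uses the azure-storage-blob SDK to mint a container-scoped, read-only, time-limited SAS URL of the kind that would have limited the blast radius; the account name, container name, and key shown here are hypothetical.

```python
# Minimal sketch (hypothetical names) of issuing a container-scoped, read-only,
# time-limited SAS URL with the azure-storage-blob Python SDK, rather than a
# link that hands out full control of an entire storage account.
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

ACCOUNT_NAME = "exampleaccount"   # hypothetical storage account name
ACCOUNT_KEY = "<account-key>"     # never hard-code a real key; load it from a secret store
CONTAINER = "ai-models"           # hypothetical container holding the model files

# Grant only read + list rights, and only on this one container.
sas_token = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER,
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),  # link stops working after a week
)

# A URL scoped this narrowly would be safer to publish in a public README.
download_url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER}?{sas_token}"
print(download_url)
```

Because the token grants only read and list permissions on a single container and expires after a set period, a leaked URL of this form could not be used to modify or delete data, and it would stop working on its own.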
Wiz's researchers say open data sharing is a key component of AI training efforts, but if handled incorrectly, sharing large amounts of data can expose companies to greater risk. Wiz CTO and co-founder Ami Luttwak said Wiz reported the issue to Microsoft in June, and Microsoft acted quickly to remove the exposed data. He added that the incident "could have been much worse".
When asked to comment on the data breach, a Microsoft spokesperson said: "We have confirmed that no customer data was compromised and no other internal services were affected."
In a blog post published on Monday, Microsoft said the incident involved a Microsoft employee sharing a URL to an open-source artificial intelligence learning model in a public GitHub repository, and that the company has investigated and implemented remediation measures. Microsoft said the data exposed in the storage account included backups of the computer configuration files of two former employees, as well as internal Microsoft Teams messages between the two employees and their colleagues.
According to the blog, the Wiz research team discovered the data exposure while scanning the internet for misconfigured storage containers as part of its work on finding accidental exposure of cloud-hosted data. (Chenchen)