- Microsoft’s December 2024 complaint involves 10 anonymous defendants
- A “hacking-as-a-service operation” stole legitimate users’ API keys and circumvented content protection measures.
- A complaint filed in the Eastern District of Virginia led to the removal of a GitHub repository and the seizure of a website.
Microsoft has accused an anonymous collective of developing tools to intentionally bypass safety guardrails in its Azure OpenAI service, Microsoft's managed platform for OpenAI models such as DALL-E.
In December 2024, the tech giant filed a lawsuit in the U.S. District Court for the Eastern District of Virginia against 10 anonymous defendants, whom it accuses of violating the Computer Fraud and Abuse Act, the Digital Millennium Copyright Act, as well as federal racketeering law.
Microsoft claims its servers were accessed to facilitate the creation of “offensive”, “harmful and illegal” content. Although it did not provide further details on the nature of this content, it was clearly sufficient to prompt rapid action: a GitHub repository was taken offline, and Microsoft said in a blog post that the court allowed it to seize a website linked to the operation.
ChatGPT API Keys
In the complaint, Microsoft said it first discovered in July 2024 that API keys for the Azure OpenAI service, the credentials that authenticate customers' requests, were being misused to produce illicit content. An internal investigation then found that the API keys in question had been stolen from legitimate customers.
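To see why stolen keys are so potent, it helps to know that an Azure OpenAI request carries its API key in a plain request header, so anyone holding the key can act as the paying customer. A minimal sketch, assuming a hypothetical resource endpoint and deployment name (the header name `api-key` and URL shape follow Azure OpenAI's REST conventions):

```python
def build_azure_openai_request(endpoint: str, deployment: str, api_key: str) -> dict:
    """Assemble the URL and headers for an Azure OpenAI chat-completions call.

    The API key travels in the "api-key" header; possession of the key is
    the only proof of identity the request carries.
    """
    url = (
        f"{endpoint}/openai/deployments/{deployment}"
        f"/chat/completions?api-version=2024-06-01"
    )
    headers = {
        "api-key": api_key,          # stolen keys slot in here unchanged
        "Content-Type": "application/json",
    }
    return {"url": url, "headers": headers}


# Hypothetical resource, deployment, and placeholder key for illustration only.
req = build_azure_openai_request(
    "https://example-resource.openai.azure.com",
    "example-deployment",
    "PLACEHOLDER-KEY",
)
```

Because the key is a bearer credential, key theft of the kind the complaint describes requires no further break-in: the attacker's requests are indistinguishable from the victim customer's.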
“Precisely how Defendants obtained all of the API keys used to commit the misconduct described in this complaint is unknown, but it appears that Defendants engaged in a systematic theft of API keys that allowed them to steal Microsoft API keys from multiple Microsoft customers,” the complaint reads.
Microsoft claims that, with the ultimate goal of launching a hacking-as-a-service product, the defendants used these stolen API keys to build de3u, a client-side tool, along with additional software that allowed de3u to communicate with Microsoft's servers.
De3u also worked to bypass the content filters built into the Azure OpenAI service and its subsequent revision of user prompts, allowing DALL-E, for example, to generate images that OpenAI would not normally allow.
“These features, combined with Defendants’ illegal programmatic access to the Azure OpenAI service, enabled Defendants to reverse engineer their way around Microsoft’s content and abuse measures,” Microsoft wrote in the complaint.
Via TechCrunch