Anthropic’s most powerful AI model leaked via insecure data cache

Anthropic is testing the most powerful AI model ever built, and the world wasn’t supposed to know it yet.

A data leak reported by Fortune on Thursday revealed that the AI lab behind Claude has trained a new model called “Mythos,” which it describes internally as “by far the most powerful AI model we’ve ever developed.”

The model was discovered in a draft blog post left in an unsecured, publicly viewable data cache, alongside nearly 3,000 other unpublished assets, according to cybersecurity researchers who reviewed the material.

Anthropic confirmed the model’s existence after Fortune’s investigation, calling it a “step change” in AI performance and “the most capable we’ve built to date.” The company said it was being tested by “early access customers” and acknowledged that “human error” in its content management system caused the leak.

The draft blog post introduced a new model tier called “Capybara,” described as larger and more capable than Anthropic’s existing Opus models, previously the company’s most powerful.

“Compared to our previous best model, Claude Opus 4.6, Capybara scores significantly higher on tests of software coding, academic reasoning, and cybersecurity, among others,” the draft states.

It is the cybersecurity dimension that matters most to the crypto industry. The draft blog post says the model “poses unprecedented cybersecurity risks,” a framing with direct implications for blockchain security, smart contract auditing, and the escalating arms race between DeFi attackers and defenders.

Just this week, Ripple announced an AI-powered security overhaul of the XRP Ledger after an AI-assisted red team discovered more than 10 vulnerabilities in its 13-year-old codebase. Ethereum has launched a hub dedicated to post-quantum security, backed by eight years of research.

And the Resolv stablecoin lost its peg after an attacker exploited a minting contract that lacked oracle checks and relied on single-key access control, the kind of infrastructure failure that better AI tools could potentially identify before an attacker does, or exploit faster than defenders can react.

For the AI token market, the leak raises a different question. Bittensor’s decentralized network recently released Covenant-72B, a model that competes with Meta’s Llama 2 70B, sparking a 90% TAO rally and propelling the subnet tokens to a combined market cap of $1.47 billion.

A “step change” from a centralized lab like Anthropic resets the benchmark that decentralized AI projects must match. The competitive distance between what a well-funded corporate lab can build and what a permissionless network can produce has widened further.

Anthropic said it was being “deliberate” about the model’s release given its capabilities. The draft blog noted that the model is expensive to operate and not yet ready for general availability. The company removed public access to the data cache after Fortune contacted it.

The leak itself is its own cautionary tale. A company building what it describes as an AI model with unprecedented cybersecurity capabilities left the model’s announcement in an unsecured, publicly viewable data store due to human error. The irony needs no elaboration.
