
In 2009, centrifuges at Iran’s Natanz nuclear facility began behaving abnormally, damaging equipment and setting back Iran’s uranium enrichment program.
The culprit was a computer worm later dubbed Stuxnet by cybersecurity researchers. Given the sophistication with which the worm infiltrated its target industrial machines, researchers widely believed that only a powerful nation-state could have created it. Fingers were quickly pointed at Israel and the United States, and both nations still officially deny involvement in the cyberattack.
Enter Anthropic’s Mythos
On April 7, artificial intelligence company Anthropic announced the latest version of its Claude large language model, codenamed Mythos Preview. Unlike previous versions, however, this model was not released to the public. Instead, Anthropic created Project Glasswing, an initiative to provide Mythos only to a select group of companies, including Google, NVIDIA, JPMorganChase, Apple, Cisco, and the Linux Foundation.
The reason for this exclusivity? Anthropic believes that Mythos is too dangerous to be given to the public. In their own words: “AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.”
Anthropic claims that Claude Mythos is a significant step up in coding capability and that, in the wrong hands, it could be used to hack systems and wreak havoc worldwide.
Consider this: what if anybody with access to Mythos could create their own Stuxnet and target just about anyone? A capability previously reserved for nation-states would be within everybody’s reach.
According to Anthropic’s internal testing, Mythos has already identified security flaws in numerous operating systems, web browsers, and applications. These include a 27-year-old bug in the popular OpenBSD operating system and a complex vulnerability in the Linux kernel, which runs on millions of internet servers and Android devices, that chained together multiple weaknesses only the most experienced security researchers could have pieced together.
Project Glasswing aims to give these companies time to identify and fix vulnerabilities in their systems before Mythos is released to the public. The alarm has even reached the United States government, leading to a meeting between Treasury Secretary Scott Bessent, Federal Reserve Chair Jerome Powell, and the CEOs of several major banks.
Is this real or just marketing hype?
Given that Anthropic, reasonably, does not share details, the claimed danger of Claude Mythos amounts to a case of “trust me, bro!” And while plenty of industry experts are taking Anthropic seriously, many others remain quite skeptical.
Maybe Anthropic is just capitalizing on its reputation as a “safety first” AI company and is using the improved model to generate hype, secure contracts, and attract more investors.
Maybe Mythos is not actually that much better; after all, we already know that cybercriminals are using earlier models of Claude, GPT, and Gemini to launch countless scams and hacks.
Maybe Anthropic is only ahead of other companies by a few months, and it is just a (very short) matter of time before the likes of OpenAI and DeepSeek roll out their own mythological models. In fact, OpenAI is reportedly contemplating a move comparable to Anthropic’s, but with far less fanfare.
And speaking of OpenAI, some will remember that the company made a similar move back in 2019, withholding the general release of GPT-2 because it was supposedly too dangerous.
Security-first as a philosophy
Whether the Anthropic announcement is a real danger or overblown hype, it reminds us that security should always be taken seriously by any company or organization that deploys systems for public consumption.
Too often, security is viewed as an afterthought, competing with the relentless drive to chase profits, release new features, and retain customers. And regardless of how one may personally feel about AI itself, LLMs, even older models, can be used to augment the capabilities of security researchers in identifying and patching vulnerabilities.
And for the software industry at large, the lack of support given to the many open-source projects and libraries the entire world depends on is embarrassing. Exploits like Heartbleed in 2014 highlighted the fact that trillion-dollar companies use free open-source software without giving back. Changing that means investing in audits, funding critical open-source projects, and designing systems with security as a default, not an afterthought.
Maybe Mythos, despite the hype, can be a real wake-up call.