
Anthropic’s Mythos Just Helped Find a macOS Vulnerability That Could Break Apple’s Security Protections


Anthropic has been explicit about why Mythos isn’t public. The model is too good at finding security flaws, repeatedly, in production systems that some of the best engineers in the world have been maintaining for years.

So instead of a public release, Anthropic built Project Glasswing. Around 40 organizations get controlled access. Anthropic committed $100 million in usage credits to support the effort. The list includes Apple, Google and Microsoft, companies that aren’t exactly short on security talent themselves.

One of those organizations is Calif, a Palo Alto cybersecurity firm. In April their researchers used techniques derived from Mythos to find two previously undocumented vulnerabilities in macOS. They chained them together into a privilege escalation exploit capable of bypassing Apple’s memory integrity enforcement, the part of the system that’s supposed to be completely off-limits to normal processes. Then they flew to Cupertino and handed Apple a 55-page report in person.

Apple is reviewing it. Patches are expected. And Mythos just added macOS to a list that already includes a 27-year-old OpenBSD bug and multiple Linux vulnerabilities nobody had caught before.

Not Mythos alone

Calif CEO Thai Dong was direct with the Wall Street Journal: the attack “couldn’t have been pulled off by Mythos alone and leveraged the very human cybersecurity expertise of some of Calif’s hackers.”

That distinction matters in both directions. This isn’t a story about AI replacing security researchers; the exploit required serious human expertise layered on top of what the model produced. But it’s also not nothing. Mythos narrowed the search space, surfaced the vulnerabilities, and gave researchers a starting point that would have taken significantly longer to reach on their own. The combination found something Apple missed.

An unavoidable track record

Before Calif walked into Apple’s headquarters, Mythos had already surfaced a vulnerability in OpenBSD that had gone undetected for 27 years. It found exploitable weaknesses in Linux that human researchers had walked past for years without noticing.

That’s not a coincidence or a lucky find. That’s a pattern. And it’s exactly why Anthropic won’t release the model publicly: it works consistently enough that putting it in the wrong hands is a genuinely serious risk.

The $100 million in usage credits Anthropic committed to Project Glasswing starts to make more sense in that context. It’s an attempt to extract real defensive value from a capability that exists whether Anthropic monetizes it or not. Better to have Apple and Google finding their own flaws with it than to wonder who else might find those flaws first.


The question this raises for the rest of the industry

Forty organizations have controlled access to Mythos. The rest of the security research industry doesn’t.

That gap is going to widen. If a model under strict access controls is already finding decades-old bugs in the most scrutinized codebases on the planet, the natural question is what happens when similar capabilities become more broadly available, whether through Anthropic eventually loosening access, a competitor releasing something comparable, or less scrupulous actors building toward the same destination through different means.

Calif’s CEO thinks Apple will patch these bugs quickly. That’s probably true. But the more durable question isn’t about these two specific vulnerabilities. It’s about what the security research industry looks like when this kind of capability stops being rare.

The 55-page report is sitting on a desk in Cupertino right now. Full technical details won’t be released until Apple ships patches. When they do, this will become one of the cleaner documented examples of what AI-assisted vulnerability research actually produces in practice: two real bugs in a real operating system that nobody found until a model Anthropic won’t let the public touch helped look for them.
