A Discord Group Just Stole the World's Most Dangerous AI Model. It Took Them One Day.

Two weeks ago, Anthropic built an AI model so powerful they refused to release it to the public.

They said it could hack almost anything. They restricted it to twelve partners: Apple, JPMorgan, Microsoft, Cisco, and CrowdStrike among them. They briefed the US Treasury.

A Discord group got their hands on it in 24 hours.

No zero-day. No nation-state. No sophisticated attack. A leaked URL and a contractor password, and they walked in. They've been using the model freely since April 7th, the day it launched.

This is the biggest AI security story of the year. And almost nobody is telling it correctly.

What Claude Mythos actually does

Forget chatbots. Forget writing assistants. This is a different thing entirely.

Claude Mythos finds security vulnerabilities in software: automatically, at machine speed, in systems humans have been trying to break for decades.

During internal testing, Anthropic pointed it at OpenBSD. That's an operating system famous for being obsessively secure. Mythos found a severe flaw that had gone undetected for 27 years.

It has reportedly found thousands of zero-day vulnerabilities across Windows, Linux, macOS, Chromium, and Safari.

99% of what it's found is still unpatched.

In plain English: Mythos is a master key for the internet. In the right hands, it helps fix everything. In the wrong hands, it breaks everything.

Anthropic knew this. That's why they didn't release it.

How the Discord group got in

This is the part that should make every CISO lose sleep.

Here's the actual sequence:

  1. An AI training company called Mercor, a contractor that does work for Anthropic, had a separate data breach. That breach leaked Anthropic's internal URL patterns.

  2. A Discord group that calls itself "model hunters" collected those leaked patterns. Their hobby is finding unreleased AI models.

  3. One member of the group worked at a different Anthropic contractor. He had active credentials to a developer portal.

  4. They guessed where Mythos was hosted using the leaked patterns. The contractor credentials got them through the door.

  5. They've been using the model for two weeks.

A leaked URL. A contractor password. A guess. That's it. That's the whole attack.
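To make the simplicity concrete, here's a minimal sketch of what that kind of probe looks like in practice. Everything in it is hypothetical: the hosts, paths, model names, and token are placeholders I've invented, not anything from the actual incident. The point is that once you hold a valid credential and a plausible URL pattern, "the attack" is a dozen lines of Python, not an exploit.

```python
# Hypothetical sketch only. The hosts, paths, and token below are invented
# placeholders; nothing here is a real Anthropic URL or credential.
import requests

CONTRACTOR_TOKEN = "valid-but-overprivileged-contractor-token"  # a working login, not an exploit
URL_PATTERNS = [
    "https://dev-portal.example-lab.internal/models/{name}/chat",
    "https://staging.example-lab.internal/api/{name}/v1/complete",
]
CANDIDATE_NAMES = ["mythos", "mythos-preview", "mythos-internal"]

for pattern in URL_PATTERNS:
    for name in CANDIDATE_NAMES:
        url = pattern.format(name=name)
        try:
            resp = requests.get(
                url,
                headers={"Authorization": f"Bearer {CONTRACTOR_TOKEN}"},
                timeout=10,
            )
        except requests.RequestException:
            continue  # host doesn't exist or won't answer; move to the next guess
        # Anything other than a 404 tells the prober the guess was close.
        if resp.status_code != 404:
            print(f"live endpoint: {url} -> {resp.status_code}")
```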

Anthropic, a company with some of the best security engineers in the industry, built the most dangerous AI model ever made. And left it behind a door that didn't lock properly.

My take

Four things I actually believe about this.

One: this wasn't a hack. It was a supply chain failure.

Nobody broke in. A contractor's login worked. A leaked URL told them where to point it. The difference matters, because "hack" implies sophistication and "contractor credential hygiene" implies avoidable negligence. This was avoidable.
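What good credential hygiene looks like differs from shop to shop, but the usual baseline is short-lived, narrowly scoped tokens instead of long-lived contractor logins that work everywhere. Here's a minimal sketch of that idea, assuming a JWT-based setup; the claim names, the 15-minute lifetime, and the PyJWT choice are my illustration, not a description of Anthropic's actual scheme.

```python
# Illustrative only: short-lived, audience-scoped contractor tokens.
# The claim names and 15-minute lifetime are assumptions, not a real scheme.
import datetime
import jwt  # PyJWT

SIGNING_KEY = "replace-with-a-real-secret"

def mint_contractor_token(contractor_id: str, allowed_service: str) -> str:
    now = datetime.datetime.now(datetime.timezone.utc)
    claims = {
        "sub": contractor_id,
        "aud": allowed_service,                       # only valid for one service
        "iat": now,
        "exp": now + datetime.timedelta(minutes=15),  # a leaked token has a short half-life
    }
    return jwt.encode(claims, SIGNING_KEY, algorithm="HS256")

def verify(token: str, expected_service: str) -> dict:
    # Rejects expired tokens and tokens minted for a different service.
    return jwt.decode(token, SIGNING_KEY, algorithms=["HS256"], audience=expected_service)
```

The detail that matters is the audience claim: a token minted for a labeling pipeline should simply fail verification at a model-serving endpoint, even if it leaks.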

Two: the 24-hour timeline tells you everything.

Anthropic announced Mythos on April 7th. The unauthorised access started April 7th. That's not a breach that happened after weeks of probing defences. That's a breach that happened in the time it takes to read the press release.

Either the group had been preparing for this specific moment, or the perimeter around Mythos was so weak that ordinary credentials and a URL guess were enough. Neither option is reassuring.

Three: Anthropic's silence is the real problem.

Their statement so far is standard corporate damage control. "We're investigating. No core systems compromised. Rotating credentials."

That's fine for a mid-sized SaaS company with a leaked customer list. It is not fine for the company that has spent three years asking governments and customers to trust them with frontier AI safety.

We need a real post-mortem. Which contractor. What credentials. What monitoring failed. What the Discord group did with two weeks of unsupervised access. Without that, the trust Anthropic has built starts to erode.
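"What monitoring failed" is the question I most want answered, because the generic version of the relevant detection is not exotic: flag it when an existing credential touches an endpoint it has never touched before. A minimal sketch, assuming simple access logs with a subject and an endpoint per record (the log shape is my assumption):

```python
# Minimal anomaly heuristic: alert when a known credential hits a new endpoint.
# The log record shape here is an assumption for illustration.
from collections import defaultdict

def detect_new_endpoint_access(access_log):
    """access_log: iterable of dicts like {"subject": ..., "endpoint": ...}."""
    seen = defaultdict(set)   # subject -> endpoints it has used before
    alerts = []
    for record in access_log:
        subject, endpoint = record["subject"], record["endpoint"]
        if seen[subject] and endpoint not in seen[subject]:
            # Existing identity, brand-new endpoint: worth a look, especially on launch day.
            alerts.append((subject, endpoint))
        seen[subject].add(endpoint)
    return alerts

log = [
    {"subject": "contractor-42", "endpoint": "/datasets/labeling"},
    {"subject": "contractor-42", "endpoint": "/datasets/labeling"},
    {"subject": "contractor-42", "endpoint": "/models/mythos/chat"},  # never seen before
]
print(detect_new_endpoint_access(log))  # [('contractor-42', '/models/mythos/chat')]
```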

Four: "we're just playing around" is not a security strategy.

The Discord group says they have no malicious intent. Maybe that's true. But here's the uncomfortable part:

If a Discord group of hobbyists got in this easily, a nation-state can too. A ransomware operator can. A criminal syndicate can.

The attackers who got caught are the ones who wanted to be seen. The ones we should be worried about are the ones we're not hearing from.

What this means for you

If you don't work in AI or security, you might be wondering why any of this matters to your life.

Here's why.

Every piece of software you use, from your bank's app to your hospital's systems to your government's services, is now potentially exposed to a model that can find vulnerabilities faster than any human team can patch them.

The "wrong hands" that Anthropic has been warning about for years? Those hands now reportedly have the model. Through a leaked URL and a contractor password.

This is not theoretical. It is happening right now, in production, in the wild.

The bottom line

Anthropic built a weapon. They put it behind a door that didn't lock. A Discord group walked through in 24 hours.

The attackers we know about say they're harmless. The attackers we don't know about don't send press releases.

Anthropic will fix this specific breach. They'll audit, rotate, tighten. I'm not worried about the aftermath.

I'm worried about the next one.

Because there will be a next one. It will come sooner than anyone expects. And the next attackers won't be teenagers on Discord.
