
Claude Got Blacklisted Over Two Words Anthropic Refused to Remove

President Trump ordered all federal agencies to stop using Anthropic's Claude on February 28, 2026, after the company refused Pentagon demands to remove safety restrictions on mass surveillance and autonomous weapons. Anthropic says it will fight the designation in court.


Anthropic just got hit with a designation the US government usually reserves for Chinese tech companies like Huawei.

Not for a data breach. Just for refusing to let the military use its AI for mass domestic surveillance and autonomous weapons without human oversight.

That’s the short version. The longer version is messier, more interesting, and honestly a little hard to believe is happening in 2026.

On Friday, President Trump ordered every federal agency to immediately stop using Claude, according to The Guardian. Defence Secretary Pete Hegseth followed up by labelling Anthropic a “supply chain risk to national security”, a tag that bars any military contractor from doing business with the company.

The same label America uses on Huawei. Applied, for the first time ever, to an American company, India Today reports.

Anthropic’s response was short and direct. “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” the company said in a statement Friday night.

So that’s where we are.

Why Anthropic’s Claude Got Blacklisted

The crisis didn’t come out of nowhere. Anthropic has had a $200 million contract with the Pentagon since July 2025, and Claude has been actively used by the CIA and NSA for intelligence analysis on classified networks — the first frontier AI model ever deployed there, according to India Today.

But earlier this week, Defence Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a hard deadline: Friday afternoon. The demand was simple. Remove the safety guardrails on Claude or lose the contract.

Specifically, the Pentagon wanted what Hegseth called “full, unrestricted access” to Claude’s capabilities. Anthropic had two conditions it wouldn’t budge on — Claude could not be used for mass domestic surveillance of American citizens, and it could not power fully autonomous weapons systems that kill without human input.

The Pentagon publicly claimed it had “no interest” in either. But its own demands told a different story.

Anthropic didn’t blink. The Friday deadline passed. Trump posted on Truth Social within the hour, directing all federal agencies to “IMMEDIATELY CEASE” all use of Anthropic technology, calling the company “woke, radical left.” Hegseth followed minutes later, designating Anthropic a supply chain risk — a legal designation that forces every military contractor, supplier and partner to cut commercial ties with the company.

Former Trump AI adviser Dean Ball called it “attempted corporate murder.”

The Role of OpenAI

Hours after the deadline passed, OpenAI CEO Sam Altman announced his company had struck its own deal with the Pentagon to deploy AI models on classified military networks, according to The Guardian.

On the surface it looked like OpenAI had done what Anthropic refused to. But the details tell a different story.

Altman was clear that OpenAI’s deal kept the same red lines Anthropic had fought for. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote, adding that the Pentagon “agrees with these principles.”

He also called on the Pentagon to offer the same terms to all AI companies as a way to “de-escalate away from legal and governmental actions.”

In other words, OpenAI got a deal with the exact same conditions Anthropic was blacklisted for demanding. The difference, at least publicly, is that OpenAI signed while Anthropic held out.

The solidarity didn’t stop there. According to The Guardian, around 70 OpenAI employees and 175 Google staffers signed an open letter backing Anthropic’s stance. “The Pentagon is trying to divide each company with fear that the other will give in,” the letter read. “We will not be divided.”

Even Altman, whose company directly benefited from Anthropic’s ouster, said publicly he wanted to help “de-escalate things.”

The two words that started it all

At the center of this entire standoff are two conditions Anthropic refused to remove from its terms of service. Just two hard limits on how Claude could be used.

The first was mass domestic surveillance. Anthropic wouldn’t allow Claude to be used for the automated tracking and monitoring of American citizens. The second was fully autonomous weapons, systems that can select targets without a human making the final call.

That’s it. Those were the lines Anthropic drew.

The Pentagon’s position was that it had “no interest” in either, according to The Guardian. But it still demanded what Hegseth called “full, unrestricted access” to Claude’s capabilities — meaning no conditions, no exceptions, no private company telling the US military what it can and can’t do with a tool it’s paying $200 million for.

Anthropic’s argument was simpler. These two restrictions had never blocked a single government mission. Not one. The company said so directly in its Friday night statement — “to the best of our knowledge, these exceptions have not affected a single government mission to date.”

So the Pentagon was willing to blow up a $200 million contract, designate an American company a national security threat, and force every military contractor to cut ties with Anthropic — over restrictions that by their own admission had never actually caused a problem.

Make of that what you will.


What the “supply chain risk” label actually means

This is the part that matters if you use Claude for anything work-related.

The supply chain risk designation sounds technical, but the practical effect is straightforward. Any company that does business with the US military — and that includes Amazon, Google, and Microsoft, all major defense contractors — cannot conduct commercial activity with Anthropic while the designation stands.

That’s a big circle. A lot of businesses sit inside it without realizing it.

But here’s what Anthropic was quick to clarify. The designation, if it holds, only affects Claude being used on Department of War contract work specifically. Individual users, commercial API customers, and developers building on Claude are completely unaffected, the company said Friday night.

So if you’re using Claude through Claude.ai, through the API, or through a third-party app — nothing changes for you today.

The six month transition period also means nothing stops immediately. Anthropic is already planning to challenge the designation in court, calling it “unprecedented” and “legally unsound.” University of Minnesota law professor Alan Rozenshtein told The Guardian the label “clearly was not designed for an American company that has a contract dispute with the government.”

The bigger threat is to Anthropic’s business at scale. The company is valued at $380 billion and was planning an IPO this year. A national security designation hanging over your head is not great timing for going public.

Where this goes next

Anthropic is heading to court. The Pentagon has six months before the full cutoff kicks in. OpenAI has a deal. And Claude — the AI model the CIA was using to analyze classified intelligence just weeks ago — is now officially labeled a national security threat by the same government that paid $200 million for it.

The two words Anthropic refused to remove are still in its terms of service.

Probably staying there.
