
Claude Got Blacklisted Over Two Words Anthropic Refused to Remove

President Trump ordered all federal agencies to stop using Anthropic's Claude on February 28, 2026, after the company refused Pentagon demands to remove safety restrictions on mass surveillance and autonomous weapons. Anthropic says it will fight the designation in court.


Anthropic just got hit with a designation the US government usually reserves for Chinese tech companies like Huawei.

Not for a data breach. Just for refusing to let the military use its AI for mass domestic surveillance and autonomous weapons without human oversight.

That’s the short version. The longer version is messier, more interesting, and honestly a little hard to believe is happening in 2026.

On Friday, President Trump ordered every federal agency to immediately stop using Claude, according to The Guardian. Defence Secretary Pete Hegseth followed up by labelling Anthropic a “supply chain risk to national security”, a tag that bars any military contractor from doing business with the company.

The same label America uses on Huawei. Applied, for the first time ever, to an American company, India Today reports.

Anthropic’s response was short and direct. “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons,” the company said in a statement Friday night.

So that’s where we are.

Why Anthropic’s Claude Got Blacklisted

The crisis didn’t come out of nowhere. Anthropic has had a $200 million contract with the Pentagon since July 2025, and Claude has been actively used by the CIA and NSA for intelligence analysis on classified networks — the first frontier AI model ever deployed there, according to India Today.

But earlier this week, Defence Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a hard deadline: Friday afternoon. The demand was simple. Remove the safety guardrails on Claude or lose the contract.

Specifically, the Pentagon wanted what Hegseth called “full, unrestricted access” to Claude’s capabilities. Anthropic had two conditions it wouldn’t budge on — Claude could not be used for mass domestic surveillance of American citizens, and it could not power fully autonomous weapons systems that kill without human input.

The Pentagon publicly claimed it had “no interest” in either. But its own demands told a different story.

Anthropic didn’t blink. The Friday deadline passed. Trump posted on Truth Social within the hour, directing all federal agencies to “IMMEDIATELY CEASE” all use of Anthropic technology, calling the company “woke, radical left.” Hegseth followed minutes later, designating Anthropic a supply chain risk — a legal designation that forces every military contractor, supplier and partner to cut commercial ties with the company.

Former Trump AI adviser Dean Ball called it “attempted corporate murder.”

The Role of OpenAI

Hours after the deadline passed, OpenAI CEO Sam Altman announced his company had struck its own deal with the Pentagon to deploy AI models on classified military networks, according to The Guardian.

On the surface it looked like OpenAI had done what Anthropic refused to. But the details tell a different story.

Altman was clear that OpenAI’s deal kept the same red lines Anthropic had fought for. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman wrote, adding that the Pentagon “agrees with these principles.”

He also called on the Pentagon to offer the same terms to all AI companies as a way to “de-escalate away from legal and governmental actions.”

In other words, OpenAI got a deal with the exact same conditions Anthropic was blacklisted for demanding. The difference, at least publicly, is that OpenAI signed while Anthropic held out.

The solidarity didn’t stop there. According to The Guardian, around 70 OpenAI employees and 175 Google staffers signed an open letter backing Anthropic’s stance. “The Pentagon is trying to divide each company with fear that the other will give in,” the letter read. “We will not be divided.”

Even Altman, whose company directly benefited from Anthropic’s ouster, said publicly he wanted to help “de-escalate things.”

The two words that started it all

At the center of this entire standoff are two conditions Anthropic refused to remove from its terms of service. Just two hard limits on how Claude could be used.

The first was mass domestic surveillance. Anthropic wouldn’t allow Claude to be used for the automated tracking and monitoring of American citizens. The second was fully autonomous weapons, systems that can select targets without a human making the final call.

That’s it. Those were the lines Anthropic drew.

The Pentagon’s position was that it had “no interest” in either, according to The Guardian. But it still demanded what Hegseth called “full, unrestricted access” to Claude’s capabilities — meaning no conditions, no exceptions, no private company telling the US military what it can and can’t do with a tool it’s paying $200 million for.

Anthropic’s argument was simpler. These two restrictions had never blocked a single government mission. Not one. The company said so directly in its Friday night statement — “to the best of our knowledge, these exceptions have not affected a single government mission to date.”

So the Pentagon was willing to blow up a $200 million contract, designate an American company a national security threat, and force every military contractor to cut ties with Anthropic — over restrictions that by their own admission had never actually caused a problem.

Make of that what you will.


What the “supply chain risk” label actually means

This is the part that matters if you use Claude for anything work-related.

The supply chain risk designation sounds technical, but the practical effect is straightforward. Any company that does business with the US military — and that includes Amazon, Google, and Microsoft, all of which are major defense contractors — cannot conduct commercial activity with Anthropic while the designation stands.

That’s a big circle. A lot of businesses sit inside it without realizing it.

But here’s what Anthropic was quick to clarify. The designation, if it holds, only affects Claude being used on Department of War contract work specifically. Individual users, commercial API customers, and developers building on Claude are completely unaffected, the company said Friday night.

So if you’re using Claude through Claude.ai, through the API, or through a third party app — nothing changes for you today.

The designation also comes with a six-month transition period, so nothing stops immediately. Anthropic is already planning to challenge the designation in court, calling it “unprecedented” and “legally unsound.” University of Minnesota law professor Alan Rozenshtein told The Guardian the label “clearly was not designed for an American company that has a contract dispute with the government.”

The bigger threat is to Anthropic’s business at scale. The company is valued at $380 billion and was planning an IPO this year. A national security designation hanging over your head is not great timing for going public.

Where this goes next

Anthropic is heading to court. The Pentagon has six months before the full cutoff kicks in. OpenAI has a deal. And Claude — the AI model the CIA was using to analyze classified intelligence just weeks ago — is now officially labeled a national security threat by the same government that paid $200 million for it.

The two words Anthropic refused to remove are still in their terms of service.

Probably staying there.
