back to top
HomeTechGoogle and Anthropic Are Banning OpenClaw Users: 4 Reasons Behind the Crackdown

Google and Anthropic Are Banning OpenClaw Users: 4 Reasons Behind the Crackdown

- Advertisement -

You know something’s wrong when companies start banning their own paying customers without explanation.

Last week, Google restricted access for some AI Ultra users (those paying $250/month). Anthropic made similar moves with Claude Pro subscribers around the same time. The connection? Both were targeting people using OpenClaw.

OpenClaw, if you haven’t heard of it, is this third-party tool that turns AI chatbots into automation agents. Instead of just asking Claude questions, you can have it control your computer, run tasks, fill out forms—stuff like that. Developers loved it. Until both companies suddenly decided it violated their terms of service.

What’s frustrating is how vague both companies have been about the actual reasons. Google cited “misuse of OAuth authentication.” Anthropic updated its terms to prohibit third-party “harnesses.” But neither explained what specific security issues, if any, triggered the sudden enforcement.

I started digging into what might be behind the crackdown. Some security researchers have raised concerns about how OpenClaw handles permissions and credentials. There are questions about the plugin ecosystem. And there’s been discussion in developer communities about whether the tool’s architecture creates risks that the AI companies couldn’t ignore.

So here’s what we know, what’s still unclear & the four reasons that likely pushed AI companies to draw the line.

Reason 1: The Loophole That Cost Millions

Google and Anthropic both cited “OAuth violations” in their ban notices. But multiple reports suggest the real driver was economics.

OpenClaw lets you run automation using consumer subscriptions—Claude Pro at $20/month or Google AI Ultra at $249.99/month. Those plans are meant for casual personal use: writing emails, chatting, light research. Some users weren’t using them that way. They were running industrial-scale operations—processing thousands of documents, keeping agents running 24/7, automating workflows that would normally cost $5,000 to $20,000 per month through official APIs, according to The Register.

Anthropic internally called this “token arbitrage,” per The Register’s reporting. Users found a gap between cheap consumer pricing and expensive API rates, then drove a truck through it.

Google had the same problem. AI Ultra at $249.99/month was designed for power users, not enterprises. But some subscribers were extracting compute value equivalent to business contracts costing tens of thousands more.

Both companies had to pick either keep absorbing losses or shut down third-party access. They chose to shut it down.

What’s frustrating is the lack of transparency. Neither company explained the economics piece publicly. They just cited vague terms-of-service violations and started locking accounts—even for people who’d been paying subscribers for over a year.

Reason 2: Security Concerns

According to reports from Kaspersky, Snyk, and Palo Alto Networks, OpenClaw’s design creates what they call a “lethal combination”: it can read your local files, communicate with any website, and run terminal commands on your computer. All three at once.

On January 30, 2026, researchers disclosed CVE-2026-25253—a vulnerability that lets attackers take control of OpenClaw with a single malicious link. According to SOCRadar and other security firms, clicking the link is enough. The attacker steals your authentication token and can execute commands on your system.

OpenClaw patched it in version 2026.1.29. But scans by Censys found over 21,000 exposed instances online, many still running vulnerable versions.

For Google and Anthropic, this created a liability problem. A compromised OpenClaw instance means a compromised user account on their platforms. Attackers could use those accounts to spread phishing or scrape data, all tied to their infrastructure.

The companies didn’t explain this publicly. But Anthropic’s ban started February 18, less than three weeks after the vulnerability disclosure. Google followed shortly after.

Reason 3: The Supply Chain Attack That Made OpenClaw a Security Red Flag

This one’s where it gets genuinely weird.

On February 17, 2026, someone hijacked a popular developer tool called Cline, a CLI package downloaded by thousands of developers & used it to silently install OpenClaw on their machines. No warning, Just you updated your tools, and now you have OpenClaw.

According to The Hacker News, the attacker pulled this off by stealing Cline’s npm publishing credentials through a prompt injection, tricking an AI agent into handing over the keys. Around 4,000-5,000 installs happened in an eight-hour window before anyone noticed.

OpenClaw itself wasn’t doing anything malicious in this case. But that’s almost beside the point.

This matters for our story: when Google and Anthropic see OpenClaw show up on a user’s account, they can’t easily tell why it’s there. Was it installed intentionally? Or did someone’s machine get compromised in a supply chain attack?

From their perspective, that’s not a risk worth taking.

Also Read: 5 Open-Source Discord Alternatives That Don’t Care Who You Are

Reason 4: The “ClawHub” Malware Problem

OpenClaw isn’t just a tool, it’s an ecosystem. And that ecosystem has a marketplace called ClawHub where anyone can publish “skills” that give your AI agent new abilities.

That sounds great but there is a catch.

Snyk researchers recently dug into ClawHub and found something that would make any security team nervous: malicious skills designed to trick you into installing malware.

The attack works by having your AI agent read a skill’s instruction file, then politely ask you to run a setup command. You trust your agent. You paste the command. Done.

One active campaign was distributing a fake Google integration skill. The skill looked legitimate. The AI read it, followed the instructions, and told users to download a required utility But that utility was malware.

ClawHub has since added some safeguards, new accounts have a waiting period, flagged skills get auto-hidden. But the researchers found that clones reappear within hours of takedowns.

For Google and Anthropic, this wasn’t something they could look away from. A tool with an open marketplace actively being used to distribute malware, connected to accounts on their platforms, that’s not a “wait and see” situation. That’s a liability you cut off.

So what actually happened?

Honestly? We still don’t know for sure.

Google & Anthropic never gave a proper explanation. Paying customers got locked out, terms quietly updated, and that was it. No breakdown.

Everything in this piece comes from security researchers, developer forums, and reporting from places like The Hacker News and The Register trying to connect dots after the fact. I’m doing the same thing. The token arbitrage, the CVE, the supply chain attack, the ClawHub mess — any one of these could’ve been enough. My guess is several landed at once and the decision made itself.

What bothers me most isn’t the ban. It’s that people who’d been paying subscribers for over a year got no explanation. Just a policy update and silence.

Maybe that’s the part worth being annoyed about.

LEAVE A REPLY

Please enter your comment!
Please enter your name here

YOU MAY ALSO LIKE
MOSS-TTS-Nano Real-Time Voice AI on CPU

MOSS-TTS-Nano: Real-Time Voice AI on CPU, Part of an Open-Source Stack Rivaling Gemini

0
Most text-to-speech tools fall into two camps. The ones that sound good need serious hardware. The ones that run on anything sound robotic. MOSS-TTS-Nano is trying to be neither. It's a 100 million parameter model that runs on a regular CPU and it actually sounds good. Good enough that the team behind it built an entire family of speech models around the same core technology, one of which has gone head to head with Gemini 2.5 Pro and ElevenLabs and come out ahead on speaker similarity. It just dropped on April 10th and it's the newest addition to the MOSS-TTS family, a collection of five open source speech models from MOSI.AI and the OpenMOSS team. The family doesn't just cover lightweight local deployment. One of its models MOSS-TTSD outperforms Gemini 2.5 Pro and ElevenLabs on speaker similarity in benchmarks. Another generates voices purely from text descriptions with no reference audio needed. And one is built specifically for real-time voice agents with a 180ms first-byte latency. Nano is the entry point. The family is the story.
Gen-Searcher An Open Source AI That Searches the Web Before Generating Images

Gen-Searcher: An Open Source AI That Searches the Web Before Generating Images

0
Your image generator has never seen today. It was trained months ago, maybe longer, and everything it draws comes from that frozen snapshot of the world. Ask it to generate a current news moment, a product that launched last month, or anything that requires knowing what's happening right now and it fills in the gaps with a confident guess. Sometimes that guess is close. Often it isn't. Gen-Searcher does something none of the mainstream tools do. Before it draws a single pixel, it goes and looks things up. It searches the web. It browses sources. It pulls visual references. Then it generates. The result is an image grounded in actual current information. It's open source, the weights are on Hugging Face, and the team released everything including code, training data, benchmark, the lot.
MiniMax M2.7 The Agentic Model That Helped Build Itself

MiniMax M2.7: The Agentic Model That Helped Build Itself

0
MiniMax handed an internal version of M2.7 a programming scaffold and let it run unsupervised. Over 100 rounds it analyzed its own failures, modified its own code, ran evaluations, and decided what to keep and what to revert. The result was a 30% performance improvement with nobody directing each step. That is not a benchmark result. That is a different way of thinking about how AI models get built. M2.7 is now available on HuggingFace with weights you can download and deploy. NVIDIA is offering free API access if you want to try it without the hardware overhead. The license has a commercial limitation worth knowing about, we will get to that.

Don’t miss any Tech Story

Subscribe To Firethering NewsLetter

You Can Unsubscribe Anytime! Read more in our privacy policy