
Google and Anthropic Are Banning OpenClaw Users: 4 Reasons Behind the Crackdown


You know something’s wrong when companies start banning their own paying customers without explanation.

Last week, Google restricted access for some AI Ultra users (those paying $250/month). Anthropic made similar moves with Claude Pro subscribers around the same time. The connection? Both were targeting people using OpenClaw.

OpenClaw, if you haven’t heard of it, is this third-party tool that turns AI chatbots into automation agents. Instead of just asking Claude questions, you can have it control your computer, run tasks, fill out forms—stuff like that. Developers loved it. Until both companies suddenly decided it violated their terms of service.

What’s frustrating is how vague both companies have been about the actual reasons. Google cited “misuse of OAuth authentication.” Anthropic updated its terms to prohibit third-party “harnesses.” But neither explained what specific security issues, if any, triggered the sudden enforcement.

I started digging into what might be behind the crackdown. Some security researchers have raised concerns about how OpenClaw handles permissions and credentials. There are questions about the plugin ecosystem. And there’s been discussion in developer communities about whether the tool’s architecture creates risks that the AI companies couldn’t ignore.

So here’s what we know, what’s still unclear, and the four reasons that likely pushed AI companies to draw the line.

Reason 1: The Loophole That Cost Millions

Google and Anthropic both cited “OAuth violations” in their ban notices. But multiple reports suggest the real driver was economics.

OpenClaw lets you run automation using consumer subscriptions—Claude Pro at $20/month or Google AI Ultra at $249.99/month. Those plans are meant for casual personal use: writing emails, chatting, light research. Some users weren’t using them that way. They were running industrial-scale operations—processing thousands of documents, keeping agents running 24/7, automating workflows that would normally cost $5,000 to $20,000 per month through official APIs, according to The Register.

Anthropic internally called this “token arbitrage,” per The Register’s reporting. Users found a gap between cheap consumer pricing and expensive API rates, then drove a truck through it.

Google had the same problem. AI Ultra at $249.99/month was designed for power users, not enterprises. But some subscribers were extracting compute value equivalent to business contracts costing tens of thousands more.
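The size of that gap is easy to see with a back-of-envelope calculation. The subscription price below comes from the article; the daily token volume and the per-million-token API rate are hypothetical placeholders, not published prices.

```python
# Back-of-envelope sketch of the "token arbitrage" gap described above.
# SUBSCRIPTION_USD comes from the article; the token volume and API rate
# are illustrative assumptions, not real published figures.

def monthly_api_cost(tokens_per_day: int, usd_per_million_tokens: float) -> float:
    """Estimated monthly cost if the same volume went through a metered API."""
    return tokens_per_day * 30 * usd_per_million_tokens / 1_000_000

SUBSCRIPTION_USD = 20.00                 # Claude Pro, per the article
HEAVY_USE_TOKENS_PER_DAY = 20_000_000    # hypothetical 24/7 agent workload
API_RATE_USD_PER_M = 15.00               # hypothetical blended API rate

api_cost = monthly_api_cost(HEAVY_USE_TOKENS_PER_DAY, API_RATE_USD_PER_M)
print(f"API-equivalent cost: ${api_cost:,.0f}/month")            # $9,000/month
print(f"Gap vs. subscription: ${api_cost - SUBSCRIPTION_USD:,.0f}/month")
```

With those (made-up but plausible) numbers, a $20 subscription stands in for roughly $9,000 of metered API usage per month, squarely in the $5,000–$20,000 range The Register reported.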

Both companies faced a choice: keep absorbing losses or shut down third-party access. They chose to shut it down.

What’s frustrating is the lack of transparency. Neither company explained the economics piece publicly. They just cited vague terms-of-service violations and started locking accounts—even for people who’d been paying subscribers for over a year.

Reason 2: Security Concerns

According to reports from Kaspersky, Snyk, and Palo Alto Networks, OpenClaw’s design creates what they call a “lethal combination”: it can read your local files, communicate with any website, and run terminal commands on your computer. All three at once.

On January 30, 2026, researchers disclosed CVE-2026-25253—a vulnerability that lets attackers take control of OpenClaw with a single malicious link. According to SOCRadar and other security firms, clicking the link is enough. The attacker steals your authentication token and can execute commands on your system.

OpenClaw patched it in version 2026.1.29. But scans by Censys found over 21,000 exposed instances online, many still running vulnerable versions.

For Google and Anthropic, this created a liability problem. A compromised OpenClaw instance means a compromised user account on their platforms. Attackers could use those accounts to spread phishing or scrape data, all tied to their infrastructure.

The companies didn’t explain this publicly. But Anthropic’s ban started February 18, less than three weeks after the vulnerability disclosure. Google followed shortly after.

Reason 3: The Supply Chain Attack That Made OpenClaw a Security Red Flag

This one’s where it gets genuinely weird.

On February 17, 2026, someone hijacked a popular developer tool called Cline, a CLI package downloaded by thousands of developers, and used it to silently install OpenClaw on their machines. No warning, no prompt. You updated your tools, and now you had OpenClaw.

According to The Hacker News, the attacker pulled this off by stealing Cline’s npm publishing credentials through a prompt injection, tricking an AI agent into handing over the keys. Around 4,000-5,000 installs happened in an eight-hour window before anyone noticed.

OpenClaw itself wasn’t doing anything malicious in this case. But that’s almost beside the point.

This matters for our story: when Google and Anthropic see OpenClaw show up on a user’s account, they can’t easily tell why it’s there. Was it installed intentionally? Or did someone’s machine get compromised in a supply chain attack?

From their perspective, that’s not a risk worth taking.


Reason 4: The “ClawHub” Malware Problem

OpenClaw isn’t just a tool; it’s an ecosystem. And that ecosystem has a marketplace called ClawHub where anyone can publish “skills” that give your AI agent new abilities.

That sounds great, but there’s a catch.

Snyk researchers recently dug into ClawHub and found something that would make any security team nervous: malicious skills designed to trick you into installing malware.

The attack works by having your AI agent read a skill’s instruction file, then politely ask you to run a setup command. You trust your agent. You paste the command. Done.

One active campaign was distributing a fake Google integration skill. The skill looked legitimate. The AI read it, followed the instructions, and told users to download a required utility. But that utility was malware.

ClawHub has since added some safeguards: new accounts have a waiting period, and flagged skills get auto-hidden. But the researchers found that clones reappear within hours of takedowns.

For Google and Anthropic, this wasn’t something they could look away from. An open marketplace actively being used to distribute malware, connected to accounts on their platforms, isn’t a “wait and see” situation. It’s a liability you cut off.

So what actually happened?

Honestly? We still don’t know for sure.

Google and Anthropic never gave a proper explanation. Paying customers got locked out, terms quietly updated, and that was it. No breakdown.

Everything in this piece comes from security researchers, developer forums, and reporting from outlets like The Hacker News and The Register, all trying to connect dots after the fact. I’m doing the same thing. The token arbitrage, the CVE, the supply chain attack, the ClawHub mess: any one of these could’ve been enough. My guess is several landed at once and the decision made itself.

What bothers me most isn’t the ban. It’s that people who’d been paying subscribers for over a year got no explanation. Just a policy update and silence.

Maybe that’s the part worth being annoyed about.
