back to top
HomeTechGoogle and Anthropic Are Banning OpenClaw Users: 4 Reasons Behind the Crackdown

Google and Anthropic Are Banning OpenClaw Users: 4 Reasons Behind the Crackdown

- Advertisement -

You know something’s wrong when companies start banning their own paying customers without explanation.

Last week, Google restricted access for some AI Ultra users (those paying $250/month). Anthropic made similar moves with Claude Pro subscribers around the same time. The connection? Both were targeting people using OpenClaw.

OpenClaw, if you haven’t heard of it, is this third-party tool that turns AI chatbots into automation agents. Instead of just asking Claude questions, you can have it control your computer, run tasks, fill out forms—stuff like that. Developers loved it. Until both companies suddenly decided it violated their terms of service.

What’s frustrating is how vague both companies have been about the actual reasons. Google cited “misuse of OAuth authentication.” Anthropic updated its terms to prohibit third-party “harnesses.” But neither explained what specific security issues, if any, triggered the sudden enforcement.

I started digging into what might be behind the crackdown. Some security researchers have raised concerns about how OpenClaw handles permissions and credentials. There are questions about the plugin ecosystem. And there’s been discussion in developer communities about whether the tool’s architecture creates risks that the AI companies couldn’t ignore.

So here’s what we know, what’s still unclear & the four reasons that likely pushed AI companies to draw the line.

Reason 1: The Loophole That Cost Millions

Google and Anthropic both cited “OAuth violations” in their ban notices. But multiple reports suggest the real driver was economics.

OpenClaw lets you run automation using consumer subscriptions—Claude Pro at $20/month or Google AI Ultra at $249.99/month. Those plans are meant for casual personal use: writing emails, chatting, light research. Some users weren’t using them that way. They were running industrial-scale operations—processing thousands of documents, keeping agents running 24/7, automating workflows that would normally cost $5,000 to $20,000 per month through official APIs, according to The Register.

Anthropic internally called this “token arbitrage,” per The Register’s reporting. Users found a gap between cheap consumer pricing and expensive API rates, then drove a truck through it.

Google had the same problem. AI Ultra at $249.99/month was designed for power users, not enterprises. But some subscribers were extracting compute value equivalent to business contracts costing tens of thousands more.

Both companies had to pick either keep absorbing losses or shut down third-party access. They chose to shut it down.

What’s frustrating is the lack of transparency. Neither company explained the economics piece publicly. They just cited vague terms-of-service violations and started locking accounts—even for people who’d been paying subscribers for over a year.

Reason 2: Security Concerns

According to reports from Kaspersky, Snyk, and Palo Alto Networks, OpenClaw’s design creates what they call a “lethal combination”: it can read your local files, communicate with any website, and run terminal commands on your computer. All three at once.

On January 30, 2026, researchers disclosed CVE-2026-25253—a vulnerability that lets attackers take control of OpenClaw with a single malicious link. According to SOCRadar and other security firms, clicking the link is enough. The attacker steals your authentication token and can execute commands on your system.

OpenClaw patched it in version 2026.1.29. But scans by Censys found over 21,000 exposed instances online, many still running vulnerable versions.

For Google and Anthropic, this created a liability problem. A compromised OpenClaw instance means a compromised user account on their platforms. Attackers could use those accounts to spread phishing or scrape data, all tied to their infrastructure.

The companies didn’t explain this publicly. But Anthropic’s ban started February 18, less than three weeks after the vulnerability disclosure. Google followed shortly after.

Reason 3: The Supply Chain Attack That Made OpenClaw a Security Red Flag

This one’s where it gets genuinely weird.

On February 17, 2026, someone hijacked a popular developer tool called Cline, a CLI package downloaded by thousands of developers & used it to silently install OpenClaw on their machines. No warning, Just you updated your tools, and now you have OpenClaw.

According to The Hacker News, the attacker pulled this off by stealing Cline’s npm publishing credentials through a prompt injection, tricking an AI agent into handing over the keys. Around 4,000-5,000 installs happened in an eight-hour window before anyone noticed.

OpenClaw itself wasn’t doing anything malicious in this case. But that’s almost beside the point.

This matters for our story: when Google and Anthropic see OpenClaw show up on a user’s account, they can’t easily tell why it’s there. Was it installed intentionally? Or did someone’s machine get compromised in a supply chain attack?

From their perspective, that’s not a risk worth taking.

Also Read: 5 Open-Source Discord Alternatives That Don’t Care Who You Are

Reason 4: The “ClawHub” Malware Problem

OpenClaw isn’t just a tool, it’s an ecosystem. And that ecosystem has a marketplace called ClawHub where anyone can publish “skills” that give your AI agent new abilities.

That sounds great but there is a catch.

Snyk researchers recently dug into ClawHub and found something that would make any security team nervous: malicious skills designed to trick you into installing malware.

The attack works by having your AI agent read a skill’s instruction file, then politely ask you to run a setup command. You trust your agent. You paste the command. Done.

One active campaign was distributing a fake Google integration skill. The skill looked legitimate. The AI read it, followed the instructions, and told users to download a required utility But that utility was malware.

ClawHub has since added some safeguards, new accounts have a waiting period, flagged skills get auto-hidden. But the researchers found that clones reappear within hours of takedowns.

For Google and Anthropic, this wasn’t something they could look away from. A tool with an open marketplace actively being used to distribute malware, connected to accounts on their platforms, that’s not a “wait and see” situation. That’s a liability you cut off.

So what actually happened?

Honestly? We still don’t know for sure.

Google & Anthropic never gave a proper explanation. Paying customers got locked out, terms quietly updated, and that was it. No breakdown.

Everything in this piece comes from security researchers, developer forums, and reporting from places like The Hacker News and The Register trying to connect dots after the fact. I’m doing the same thing. The token arbitrage, the CVE, the supply chain attack, the ClawHub mess — any one of these could’ve been enough. My guess is several landed at once and the decision made itself.

What bothers me most isn’t the ban. It’s that people who’d been paying subscribers for over a year got no explanation. Just a policy update and silence.

Maybe that’s the part worth being annoyed about.

LEAVE A REPLY

Please enter your comment!
Please enter your name here

- Advertisment -
YOU MAY ALSO LIKE
Reka Edge is The 7B Multimodal AI Model That Beats Gemini 3 Pro on Object Detection

Reka Edge: The 7B Multimodal AI Model That Beats Gemini 3 Pro on Object...

0
Most people assume beating a Google model requires another massive frontier model. More parameters. More compute. That is just how the hierarchy usually works. Reka Edge is a 7-billion-parameter model. Yet it manages to outperform Gemini 3 Pro on object detection benchmarks, and with quantization it can even run on devices like the Samsung S25. That combination should not exist. A model small enough to fit on a phone outperforming a frontier AI system from Google on a specific but genuinely useful task is not something you expect to see in 2026. Yet here we are. This is not a model that beats Gemini at everything. It does not. But where it wins it wins convincingly.
Helios 14B AI Model That Generates Minute-Long Videos in Real Time

Helios: The 14B AI Model That Generates Minute-Long Videos in Real Time

0
Most open source video generation models make you wait. You write a prompt, hit generate, and then sit there hoping the output is what you imagined. If it is not you tweak the prompt and wait again. That loop gets old fast. Helios works differently. It generates video in real time at 19.5 frames per second on a single GPU. You can see it being created, interrupt mid generation if something looks off, tweak and continue. Up to a full minute of video without starting over every time something does not look right. With group offloading it runs on around 6GB of VRAM. Consumer GPU territory.
Open Source LLMs That Rival ChatGPT and Claude

7 Open Source LLMs That Rival ChatGPT and Claude

0
Two years ago if you wanted a genuinely capable AI model your options were basically ChatGPT, Claude, Gemini or Grok. Open source existed but the gap was real and everyone knew it. That gap is closing faster than most people expected. In some areas it is already gone. Today open source models do not just compete with closed source. Some of them beat closed source on specific benchmarks that actually matter. And the list of categories where that is true keeps getting longer. If you are curious about what open source AI actually looks like at full power or you are building something serious and evaluating your options this list is for you. One thing worth saying upfront, these are not consumer GPU friendly models. You will need serious hardware to run them at full capacity. Quantized versions exist for most of them but expect performance and quality to reflect that. I went through a lot of options to put this list together. These seven are the ones that actually made me stop and pay attention.

Don’t miss any Tech Story

Subscribe To Firethering NewsLetter

You Can Unsubscribe Anytime! Read more in our privacy policy