Anthropic Briefly Suspends OpenClaw Founder’s Claude Access, Quickly Reinstates Account


Anthropic briefly banned OpenClaw founder Peter Steinberger’s Claude account over suspected policy violations, but reinstated access within hours after review.

Anthropic temporarily suspended the account of Peter Steinberger, the creator of the AI agent platform OpenClaw, on Friday before restoring access within hours. The brief ban has sparked conversations about how AI companies manage third-party integrations and platform usage.

Steinberger, who is also known for developing Moltbook, shared on social media that he had lost access to Claude AI, along with a screenshot of the suspension email from Anthropic. The message cited "suspicious" activity linked to his account and indicated a potential breach of usage policies.

“An internal investigation of suspicious signals associated with your account indicates a violation of our Usage Policy. As a result, we have revoked your access to Claude,” read the email from Anthropic shared by Steinberger.

Reacting to the sudden restriction, Steinberger suggested that the development of OpenClaw’s compatibility with Claude models could face challenges moving forward. “Yeah folks, it's gonna be harder in the future to ensure OpenClaw still works with Anthropic models,” he wrote.

However, the situation was resolved quickly. Just a few hours later, Steinberger confirmed that his account had been restored. “My account got reinstated. Thanks, folks!” he posted, signaling a swift reversal by the company.

Anthropic has not publicly disclosed the exact reason behind the temporary suspension. Still, Steinberger hinted that his ongoing work to integrate certain Claude features into OpenClaw may have triggered the issue. In a follow-up response online, he elaborated on the possible cause.

“I've been working on getting the claude -p fallback feature working after Boris confirmed that it's a classifier bug and not intentional. We're still blocked and it seems that got me banned too,” said Steinberger.

The incident comes shortly after Anthropic announced stricter measures against third-party AI agent platforms like OpenClaw. According to the company, such platforms can put significant pressure on its infrastructure, prompting it to limit or block their access to Claude models.

OpenClaw has gained popularity in recent months for its ability to function across multiple platforms, including WhatsApp, Telegram, Slack, Discord, iMessage, Signal, and Microsoft Teams. Unlike traditional chatbots, it acts as an AI agent capable of delivering summaries, reminders, and personalized briefings across user-preferred communication channels.

Despite his role at OpenAI, where he works on next-generation personal AI agents, Steinberger continues to actively develop OpenClaw. Addressing questions about his use of competing AI models, he clarified his dual roles.

“You need to separate two things. My work at the OpenClaw Foundation, where we wanna make OpenClaw work great for any model provider, and my job at OpenAI to help them with future product strategy.”

The episode highlights the growing tension between AI providers and independent developers building tools on top of their systems, as demand and infrastructure constraints continue to rise.
