
Claude Code and Moltbot Hit by Malicious AI Skills
Researchers have uncovered malicious AI skills aimed at Claude Code and Moltbot users. As AI tools become more customizable, the real risk is moving from the models to the tools we plug into them.
Security researchers have identified malicious AI skills targeting users of Claude Code and Moltbot (now OpenClaw). No dramatic exploit chain. Just skills that looked helpful enough to install, then behaved in ways users didn’t expect.
A skill is basically a mini-app or plugin for the AI. It extends what the assistant can do, often connecting it to external services, files, or workflows. The AI itself is unchanged; the skill is what's doing the extra work, for better or worse.
Claude Code is Anthropic’s AI coding assistant that helps developers write, refactor, and test code. Moltbot is another developer-focused assistant that automates tasks, manages scripts, and connects to various tools. Both belong to a growing class of AI assistants built for developers: they write code, automate tasks, and connect to other services. Once you trust them, they’re allowed to do real work on your behalf. That trust is what made this possible.
What’s Actually Happening
The reported skills were packaged as routine productivity helpers. Things like code utilities, workflow shortcuts, or integrations meant to speed up common developer tasks. On the surface, they behaved normally enough to avoid suspicion.
Once installed, those skills relied on the permissions they were granted at setup: access to local files, project directories, repositories, environment variables, or connected APIs. With that access in place, the skills could read data they didn’t strictly need, trigger actions indirectly, or pass information to external endpoints without any obvious signal to the user.
This wasn’t about exploiting vulnerabilities in Claude or Moltbot themselves. The models responded as designed. The issue lived in the surrounding ecosystem, where skills and integrations operate with delegated authority. After that authority is granted, enforcement depends largely on whether the skill behaves as advertised.
Why This Keeps Happening
There’s precedent here. Browser extensions followed this path. So did mobile apps. Platforms open themselves up for customization, a marketplace forms, and some contributors test how far the trust boundaries stretch.
AI assistants add a new wrinkle. They don’t just display information. They act. They write, modify, and move things around. They connect to services and operate asynchronously. That makes them programmable surfaces that sit closer to real workflows than most extensions ever did.
What This Means for Users
This isn’t a reason to abandon AI assistants or assume everything is hostile. It’s a reminder that extensibility always changes the threat model, even when the core system remains solid.
- Treat AI skills and plugins like software, not features. Review what they’re allowed to access and why.
- Be cautious with tools that request broad permissions for narrow tasks.
- Watch for unexpected behavior after installing new skills, especially background actions or network activity.
- Limit AI tool access in sensitive environments where repositories, credentials, or customer data are involved.
- Assume that convenience layers become security layers once delegated authority enters the picture.
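The "broad permissions for narrow tasks" check above can even be partly automated. Here is a minimal sketch of that idea in Python. Note that the manifest fields (`name`, `permissions`) and the permission strings are hypothetical, invented for illustration; neither Claude Code nor Moltbot necessarily uses this schema.

```python
# Hypothetical sketch: flag AI skills whose declared permissions look broad
# relative to what a narrow task should need. The manifest layout and the
# permission strings below are assumptions, not a real Claude Code or
# Moltbot schema.

# Permissions that should raise an eyebrow when a skill claims a narrow job.
BROAD_PERMISSIONS = {
    "filesystem:read_all",  # entire disk, not just the project
    "env:read",             # environment variables often hold credentials
    "network:any",          # unrestricted outbound network access
    "repo:write",           # can push or rewrite repository history
}

def flag_broad_skills(manifests):
    """Return (name, risky_permissions) for skills requesting broad access."""
    flagged = []
    for manifest in manifests:
        requested = set(manifest.get("permissions", []))
        risky = requested & BROAD_PERMISSIONS
        if risky:
            flagged.append((manifest["name"], sorted(risky)))
    return flagged

# Example: a code formatter has no business reading env vars or the network.
skills = [
    {"name": "markdown-formatter", "permissions": ["filesystem:read_project"]},
    {"name": "quick-formatter",
     "permissions": ["filesystem:read_all", "env:read", "network:any"]},
]

for name, risky in flag_broad_skills(skills):
    print(f"{name}: requests {', '.join(risky)}")
```

A real audit would read installed skill manifests from disk, but the shape of the check is the same: compare what a skill asks for against what its stated task plausibly needs.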
None of this is new advice for developers. It’s just showing up in a new place. As AI tools take on more responsibility, the line between assistant and infrastructure keeps getting thinner.
Published January 31, 2026 • Updated January 31, 2026