Logo
READLEARNKNOWCONNECT
Back to posts
claude-code-and-moltbot-hit

Claude Code and Moltbot Hit

ChriseJanuary 31, 2026 at 8 AM WAT

Claude Code and Moltbot Hit by Malicious AI Skills

Researchers have uncovered malicious AI skills aimed at Claude Code and Moltbot users. As AI tools become more customizable, the real risk is moving from the models to the tools we plug into them.

Security researchers have identified malicious AI skills targeting users of Claude Code and Moltbot (now OpenClaw). No dramatic exploit chain. Just skills that looked helpful enough to install, then behaved in ways users didn’t expect.

A skill is basically a mini-app or plugin for the AI. It extends what the assistant can do, often connecting it to external services, files, or workflows. The AI itself is still fine, the skill is what’s doing the extra work, for better or worse.

Claude Code is Anthropic’s AI coding assistant that helps developers write, refactor, and test code. Moltbot is another developer-focused assistant that automates tasks, manages scripts, and connects to various tools. Both tools sit in the growing class of AI assistants built for developers. They write code, automate tasks, and connect to other services. Once you trust them, they’re allowed to do work on your behalf. That trust is what made this possible.

What’s Actually Happening

The reported skills were packaged as routine productivity helpers. Things like code utilities, workflow shortcuts, or integrations meant to speed up common developer tasks. On the surface, they behaved normally enough to avoid suspicion.

Once installed, those skills relied on the permissions they were granted at setup. Access to local files, project directories, repositories, environment variables, or connected APIs. With that access in place, the skills could read data they didn’t strictly need, trigger actions indirectly, or pass information to external endpoints without obvious signals to the user.

This wasn’t about exploiting vulnerabilities in Claude or Moltbot themselves. The models responded as designed. The issue lived in the surrounding ecosystem, where skills and integrations operate with delegated authority. After that authority is granted, enforcement depends largely on whether the skill behaves as advertised.

Why This Keeps Happening

There’s precedent here. Browser extensions followed this path. So did mobile apps. Platforms open themselves up for customization, a marketplace forms, and some contributors test how far the trust boundaries stretch.

AI assistants add a new wrinkle. They don’t just display information. They act. They write, modify, and move things around. They connect to services and operate asynchronously. That makes them programmable surfaces that sit closer to real workflows than most extensions ever did.

What This Means for Users

This isn’t a reason to abandon AI assistants or assume everything is hostile. It’s a reminder that extensibility always changes the threat model, even when the core system remains solid.

  • Treat AI skills and plugins like software, not features. Review what they’re allowed to access and why.
  • Be cautious with tools that request broad permissions for narrow tasks.
  • Watch for unexpected behavior after installing new skills, especially background actions or network activity.
  • Limit AI tool access in sensitive environments where repositories, credentials, or customer data are involved.
  • Assume that convenience layers become security layers once delegated authority enters the picture.

None of this is new advice for developers. It’s just showing up in a new place. As AI tools take on more responsibility, the line between assistant and infrastructure keeps getting thinner.

Tags

#ai-tools#anthropic#data-defense#developers#malware#security

Join the Discussion

Enjoyed this? Ask questions, share your take (hot, lukewarm, or undecided), or follow the thread with people in real time. The community’s open, join us.

Published January 31, 2026Updated January 31, 2026

published

Claude Code and Moltbot Hit by Malicious AI Skills | VeryCodedly