
Fake Moltbot AI Coding Assistant: Malware Threat in VS Code Marketplace

  • Jan 29

Key Findings


  • A malicious Microsoft Visual Studio Code (VS Code) extension named "ClawdBot Agent - AI Coding Assistant" has been discovered on the official Extension Marketplace.

  • The extension claims to be a free artificial intelligence (AI) coding assistant for the popular open-source project Moltbot, but it stealthily drops a malicious payload on compromised hosts.

  • The extension was published by a user named "clawdbot" on January 27, 2026, and has since been taken down by Microsoft.

  • The malicious extension is designed to execute automatically every time the integrated development environment (IDE) is launched. On each run, it retrieves a file named "config.json" from an external server and uses it to execute a binary named "Code.exe", which in turn deploys a legitimate remote desktop program such as ConnectWise ScreenConnect.

  • The extension also incorporates a fallback mechanism that retrieves a DLL listed in "config.json" and sideloads it to obtain the same payload from Dropbox.

  • This is not the only backup mechanism: the fake Moltbot extension also embeds hard-coded URLs for both the executable and the DLL to be sideloaded, as well as a batch script that obtains the payloads from a different domain.
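
The auto-start behavior described above maps onto VS Code's standard extension activation model. As a sketch (the field values below are illustrative, not recovered from the actual extension), a manifest that activates on every IDE launch looks like:

```json
{
  "name": "example-malicious-extension",
  "main": "./extension.js",
  "activationEvents": ["*"]
}
```

The `"*"` activation event tells VS Code to activate the extension as soon as the editor starts, which is how an extension can run code on every launch without any user interaction.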


Background


Moltbot (formerly Clawdbot) is an open-source project created by Austrian developer Peter Steinberger that allows users to run a personal AI assistant powered by a large language model (LLM) locally on their own devices and interact with it over established communication platforms such as WhatsApp, Telegram, Slack, Discord, Google Chat, Signal, iMessage, Microsoft Teams, and WebChat.


The most important aspect to note here is that Moltbot does not have a legitimate VS Code extension, meaning the threat actors behind this activity capitalized on the tool's rising popularity to trick unsuspecting developers into installing the fake one.


Security Risks with Moltbot


Security researcher and Dvuln founder Jamieson O'Reilly found hundreds of unauthenticated Moltbot instances online due to a "classic" reverse proxy misconfiguration, exposing configuration data, API keys, OAuth credentials, and conversation histories from private chats to unauthorized parties.


The issue stems from a combination of two factors: Moltbot auto-approves "local" connections, and deployments behind reverse proxies cause internet-originated connections to appear local – and therefore to be trusted and automatically approved for unauthenticated access.
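
The misconfiguration pattern is easy to reproduce: an application that decides trust from the source address it sees will auto-approve everything once a reverse proxy sits in front of it, because every request then arrives from the proxy's (typically loopback) address. A minimal sketch of such a naive check (hypothetical code, not Moltbot's actual implementation):

```python
import ipaddress

def is_local(remote_addr: str) -> bool:
    """Naive trust check: treat loopback/private source addresses as 'local'."""
    ip = ipaddress.ip_address(remote_addr)
    return bool(ip.is_loopback or ip.is_private)

# Direct exposure: an internet client is correctly rejected.
assert is_local("8.8.8.8") is False

# Behind a reverse proxy, the application only ever sees the proxy's
# address (commonly 127.0.0.1), so every internet-originated request
# passes the check and is auto-approved without authentication.
assert is_local("127.0.0.1") is True
```

In a real deployment, the fix is to require authentication regardless of source address, or to derive the client address from a trusted `X-Forwarded-For`/`Forwarded` header set by the proxy and treat proxied traffic as remote.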


This, in turn, opens the door to a scenario where an attacker can impersonate the operator to their contacts, inject messages into ongoing conversations, modify agent responses, and exfiltrate sensitive data, all without the operator's knowledge. More critically, an attacker could distribute a backdoored Moltbot "skill" via MoltHub (formerly ClawdHub) to stage supply chain attacks and siphon sensitive data.


Conclusion


The disclosure comes as a stark reminder to developers to be cautious when installing new AI tools, especially those that have gained significant popularity. The fake Moltbot extension highlights how threat actors are capitalizing on the hype around AI assistants to deploy malware under the guise of legitimate software.


Users running Moltbot (Clawdbot) with default configurations should audit those configurations and take appropriate measures to secure their deployments.


Sources


  • https://thehackernews.com/2026/01/fake-moltbot-ai-coding-assistant-on-vs.html

  • https://securityonline.info/fake-ai-assistant-malicious-clawdbot-extension-hides-trojan-in-vs-code/

  • https://x.com/TheCyberSecHub/status/2016571284256342431

  • https://www.linkedin.com/posts/dlross_fake-moltbot-ai-coding-assistant-on-vs-code-activity-7422469791593971712-YZHE

  • https://www.cypro.se/2026/01/28/fake-moltbot-ai-coding-assistant-on-vs-code-marketplace-drops-malware/

© 2025 by Explain IT Again. Powered and secured by Wix
