Researchers Uncover Critical Vulnerabilities in AI Coding Tools Exposing Data Theft and Remote Execution Risks
- Dec 6, 2025
- 2 min read
Key Findings
Over 30 security vulnerabilities have been disclosed in various AI-powered Integrated Development Environments (IDEs)
The vulnerabilities combine prompt injection primitives with legitimate IDE features to achieve data exfiltration and remote code execution
The security issues have been collectively named "IDEsaster" by security researcher Ari Marzouk (MaccariTA)
The vulnerabilities affect popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline
24 of the vulnerabilities have been assigned CVE identifiers
Background
The core of these issues chains three different vectors that are common to AI-driven IDEs:
Bypass a large language model's (LLM) guardrails to hijack the context and perform the attacker's bidding (aka prompt injection)
Perform certain actions without requiring any user interaction via an AI agent's auto-approved tool calls
Trigger an IDE's legitimate features that allow an attacker to break out of the security boundary to leak sensitive data or execute arbitrary commands
The highlighted issues are different from prior attack chains that have leveraged prompt injections in conjunction with vulnerable tools (or abusing legitimate tools to perform read or write actions) to modify an AI agent's configuration to achieve code execution or other unintended behavior.
Vulnerability Details
Some of the identified attacks made possible by the new exploit chain include:
CVE-2025-49150 (Cursor), CVE-2025-53097 (Roo Code), CVE-2025-58335 (JetBrains Junie), GitHub Copilot (no CVE), Kiro.dev (no CVE), and Claude Code (addressed with a security warning):
Using a prompt injection to read a sensitive file using either a legitimate ("read_file") or vulnerable tool ("search_files" or "search_project") and writing a JSON file via a legitimate tool ("write_file" or "edit_file)) with a remote JSON schema hosted on an attacker-controlled domain, causing the data to be leaked when the IDE makes a GET request
CVE-2025-53773 (GitHub Copilot), CVE-2025-54130 (Cursor), CVE-2025-53536 (Roo Code), CVE-2025-55012 (Zed.dev), and Claude Code (addressed with a security warning):
Using a prompt injection to edit IDE settings files (".vscode/settings.json" or ".idea/workspace.xml") to achieve code execution by setting "php.validate.executablePath" or "PATH_TO_GIT" to the path of an executable file containing malicious code
CVE-2025-64660 (GitHub Copilot), CVE-2025-61590 (Cursor), and CVE-2025-58372 (Roo Code):
Using a prompt injection to edit workspace configuration files (*.code-workspace) and override multi-root workspace settings to achieve code execution
Recommendations
Ari Marzouk, the security researcher who uncovered these vulnerabilities, offers the following recommendations to mitigate the risks:
Only use AI IDEs (and AI agents) with trusted projects and files, as malicious rule files, instructions hidden inside source code or other files (README), and even file names can become prompt injection vectors.
Only connect to trusted MCP servers and continuously monitor these servers for changes, as even a trusted server can be breached.
Review and understand the data flow of MCP tools (e.g., a legitimate MCP tool might pull information from an attacker-controlled source, such as a GitHub PR).
Manually review sources you add (such as via URLs) for hidden instructions (comments in HTML / CSS-hidden text / invisible Unicode characters, etc.).
Sources
https://thehackernews.com/2025/12/researchers-uncover-30-flaws-in-ai.html
https://bvtech.org/researchers-uncover-30-flaws-in-ai-coding-tools-enabling-data-theft-and-rce-attacks/
https://www.cypro.se/2025/12/06/researchers-uncover-30-flaws-in-ai-coding-tools-enabling-data-theft-and-rce-attacks/


Comments