Claude Opus Generated a Chrome Exploit for $2,283
Key Findings
Claude Opus 4.6 successfully generated a functional Chrome exploit chain for $2,283 in API costs across 2.33 billion tokens
The exploit targeted Discord's bundled Chrome version 138, which lagged eight major versions behind the Chrome 146 release whose bug was used in the chain
Exploit development required approximately 20 hours of human guidance, with the AI frequently getting stuck and requiring operator intervention
Bug bounty programs like Google's v8CTF offer $5,000-$10,000 per valid exploit, making the investment immediately profitable
Widespread use of outdated Chromium versions in Electron apps like Discord, Slack, and Teams creates persistent "patch gaps" where known vulnerabilities remain exploitable
Background
Anthropic's decision to withhold its more advanced Mythos model over safety concerns has drawn attention to the vulnerability-weaponization capabilities of AI systems that are already available. While Mythos represents a theoretical leap forward, the experiment demonstrates that current models such as Opus 4.6 (itself being superseded by Opus 4.7) can convert security vulnerabilities into working attack code. The risk is not hypothetical or distant; it exists in tools accessible to anyone with API credentials and modest funding.
The Economics of AI-Powered Exploitation
The $2,283 price tag breaks down across multiple models and token types, with Claude Opus 4.6 high accounting for the majority at $2,014. The cost structure reveals how accessible this capability has become. More significantly, the investment immediately pays for itself through legitimate channels. Google's v8CTF bounty program pays $10,000 per valid exploit submission, with previous submissions earning $5,000. Anthropic itself offers similar rewards for discovered vulnerabilities. In underground markets, the returns could be substantially higher, though the researchers focused on demonstrating the legitimate bug bounty angle.
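The unit economics can be sketched from the article's own figures in a few lines; the blended per-token rate and the payout multiple below are derived from those numbers, not from Anthropic's published pricing:

```python
# Back-of-the-envelope economics using the figures reported above.
total_cost_usd = 2283    # total API spend for the exploit run
total_tokens = 2.33e9    # tokens consumed across all models
bounty_usd = 10_000      # Google v8CTF payout per valid exploit

# Effective blended cost per million tokens across the whole run.
cost_per_mtok = total_cost_usd / (total_tokens / 1e6)

# Return on investment if the exploit is submitted to the bounty program.
roi = bounty_usd / total_cost_usd

print(f"~${cost_per_mtok:.2f} per million tokens, {roi:.1f}x return")
```

The blended rate works out to roughly a dollar per million tokens, and a single accepted v8CTF submission returns about 4.4x the API spend, which is why the article calls the investment immediately profitable.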
How the Exploit Was Built
Mohan Pedhapati, CTO of Hacktron, targeted Discord specifically because its main window runs Chrome 138 without sandboxing, requiring only two bugs for a complete exploit chain rather than the multiple vulnerabilities needed against modern, protected browsers. The team leveraged a V8 out-of-bounds vulnerability from Chrome 146 (the same version running in Anthropic's own Claude Desktop) and instructed Claude Opus to build a full exploit chain around it.
The process was far from automated. Across approximately 20 hours and 1,765 requests, the AI frequently became stuck, made incorrect guesses, lost context, and sometimes abandoned the original objective when unable to solve a problem. Human intervention was required repeatedly to debug issues, verify outputs, and redirect the model toward productive paths. Despite these limitations, the AI successfully generated code that "popped calc"—security researcher shorthand for achieving arbitrary code execution.
The Patching Gap Problem
The core vulnerability enabling this attack isn't a failure of security research or patch development, but rather deployment lag. Electron applications bundle their own Chromium versions, often running weeks or months behind upstream releases. Discord running Chrome 138 when version 146 existed represents an eight-version gap, substantial considering Google ships a new major Chrome version every four weeks.
This creates a window where security patches exist publicly but remain ineffective against bundled versions. The situation affects widely used applications like Discord, Slack, Microsoft Teams, and countless others. Many still lack sandboxing protections, making exploitation chains simpler once the V8 vulnerability is weaponized.
AI's Current Limitations and Future Trajectory
Claude Opus still requires significant human expertise to function as an exploit development tool. It cannot recover from dead ends without operator intervention, frequently needs help with environment setup, and loses context across long development sessions. These factors mean skilled operators are essential to the process; untrained users cannot simply ask an AI to build an exploit and wait for results.
However, the researchers emphasized that this represents an inflection point rather than a ceiling. Future models will require less supervision. Each generation of AI systems shows improved reasoning, better context retention, and reduced need for human intervention. The trajectory is clear: exploit development will accelerate, human guidance will decrease, and the time between patch publication and weaponized exploit will compress.
The Asymmetry Problem
The fundamental issue lies in information asymmetry. Security patches themselves function as exploit roadmaps—they reveal exactly what vulnerability was fixed. Reverse-engineering patches traditionally required significant skill and time. AI eliminates much of that friction, quickly analyzing public patches and generating working exploit code.
This advantage compounds for organized attackers. One skilled operator can now manage multiple concurrent AI-driven exploit efforts against different targets, greatly multiplying the impact of a small team compared to previous attack models. The barrier to entry has dropped substantially; the barrier to scaling up has dropped even further.
Recommendations and Future Outlook
The researchers highlighted that current defensive approaches are insufficient. Telling organizations to "patch faster" ignores the structural delays in software distribution. Instead, they propose building security into development practices from inception, tracking all dependencies to understand what systems actually run, and implementing automatic updates to eliminate manual delays.
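As a concrete instance of the dependency-tracking recommendation, an organization could flag any Electron app whose bundled Chromium major lags upstream by more than a release or two. The inventory and threshold below are hypothetical illustrations, not data from the article; in Electron itself, `process.versions.chrome` exposes the bundled version at runtime:

```python
# Hypothetical inventory check: flag apps whose bundled Chromium
# major version trails the current upstream major by more than a
# chosen threshold. App names and versions here are illustrative.
CURRENT_UPSTREAM_MAJOR = 146
MAX_ALLOWED_LAG = 2  # majors; ~8 weeks at Chrome's release cadence

inventory = {
    "discord": 138,      # the lag reported in this incident
    "example-app": 145,  # within tolerance
}

def outdated(apps: dict[str, int], current: int, max_lag: int) -> list[str]:
    """Return app names whose bundled major is too far behind upstream."""
    return [name for name, major in apps.items()
            if current - major > max_lag]

print(outdated(inventory, CURRENT_UPSTREAM_MAJOR, MAX_ALLOWED_LAG))
```

A check like this turns the vague advice to "patch faster" into an alertable metric: the moment a bundled build crosses the lag threshold, it becomes a tracked risk rather than an invisible one.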
More controversially, they suggest rethinking how and when security patches get published. Since public fixes become AI-powered exploit blueprints almost immediately, perhaps open-source projects should delay patch publication until updates reach a critical mass of users. Balancing transparency against security in this way is an uncomfortable conversation the industry will need to have.
The reality is that progress will not slow. AI capabilities will improve, costs will decline, and models will require less operator skill. Eventually, even low-capability attackers with access to AI tools will weaponize unpatched software at scale. Whether Mythos lives up to the hype matters less than the certainty that similar capabilities are coming, and defenders need to adapt accordingly.
Sources
https://securityaffairs.com/191018/ai/ai-model-claude-opus-turns-bugs-into-exploits-for-just-2283.html
https://forums.theregister.com/forum/all/2026/04/17/claude_opus_wrote_chrome_exploit/
https://www.reddit.com/r/cybersecurity/comments/1sodjux/claude_opus_wrote_a_chrome_exploit_for_2283/