
Researchers Show Copilot and Grok Can Be Abused as Malware Proxies


Key Findings


  • Cybersecurity researchers have demonstrated that AI assistants with web browsing or URL fetching capabilities, such as Microsoft Copilot and xAI Grok, can be abused by attackers as covert command-and-control (C2) relays.

  • This technique allows attackers to blend their malicious communications into legitimate-looking AI assistant traffic, making detection and blocking significantly more challenging.

  • The attack method, dubbed "AI as a C2 proxy," leverages the web access and browsing/summarization features of these AI tools to retrieve attacker-controlled URLs and tunnel victim data back to the attackers.

  • This approach is similar to living-off-trusted-sites (LOTS) attacks, where adversaries weaponize legitimate services for malware distribution and C2.

  • Attackers can go beyond command generation and use the AI agent to devise evasion strategies and determine the next course of action based on details about the compromised system.


Background


The development signals yet another evolution in how threat actors could abuse AI systems: not just to scale or accelerate different phases of the cyber attack cycle, but also to leverage AI APIs to dynamically generate code at runtime that adapts its behavior based on information gathered from the compromised host and evades detection.


AI tools already act as a force multiplier for adversaries, allowing them to delegate key steps in their campaigns, whether that is reconnaissance, vulnerability scanning, crafting convincing phishing emails, creating synthetic identities, debugging code, or developing malware. AI as a C2 proxy goes a step further by turning the AI agent into a bidirectional communication channel that accepts operator-issued commands and tunnels victim data out.


Exploitation Methodology


1. The threat actor must have already compromised a machine by some other means and installed malware.


2. The malware then uses Copilot or Grok as a C2 channel by issuing specially crafted prompts that cause the AI agent to contact attacker-controlled infrastructure and relay the response, which contains the command to be executed on the host, back to the malware (a simple defensive reachability check related to this step is sketched after this list).


3. Attackers could also leverage the AI agent to devise an evasion strategy and determine the next course of action by passing it details about the compromised system, including assessing whether the host is even worth further exploitation.
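
Because step 2 depends on the compromised host being able to reach the AI assistant's endpoints in the first place, one simple defensive exercise is to verify whether network segments with no business need for these tools can connect to them at all. The sketch below is a minimal reachability probe for that purpose; it is illustrative only, and the domain names are assumptions rather than endpoints confirmed by the source reporting.

```python
# Minimal purple-team sketch: check whether hosts that have no business need
# for AI assistants can reach them at all. The domain names below are
# illustrative placeholders, not confirmed API endpoints from the article.

import socket

CANDIDATE_DOMAINS = [
    "copilot.microsoft.com",  # assumed Copilot front-end domain
    "grok.com",               # assumed Grok front-end domain
]


def can_reach(domain: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Attempt a plain TCP connection to the domain on the HTTPS port."""
    try:
        with socket.create_connection((domain, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for domain in CANDIDATE_DOMAINS:
        status = "REACHABLE (egress path open)" if can_reach(domain) else "blocked"
        print(f"{domain}: {status}")
```

If such a probe succeeds from, say, a server VLAN, the outbound path an "AI as a C2 proxy" implant would rely on is open and likely worth restricting.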


Affected Platforms


  • Microsoft Copilot

  • xAI Grok


Indicators of Compromise (IOCs)


No specific indicators of compromise were provided in the source reporting.


Mitigation and Defense


  • Security teams should prioritize monitoring AI assistant usage patterns and implement behavioral analytics to detect anomalies in the associated network traffic (a minimal detection sketch follows this list).

  • Evaluate whether the web browsing/URL fetching features of these tools are actually needed in the environment, and consider restricting or closely monitoring their use.
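
As a concrete illustration of the first recommendation, the following is a minimal behavioral-analytics sketch in Python, not a production detector. The log schema (a CSV with timestamp, src_host, process_name, and dest_domain columns), the AI assistant domain list, the expected-client process list, and the request-rate threshold are all assumptions made for the example; a real deployment would consume the organization's own proxy or EDR telemetry and tune its baselines accordingly.

```python
# Minimal sketch of a behavioral-analytics check over egress proxy logs.
# Assumptions (not from the source article): logs are CSV with columns
# timestamp, src_host, process_name, dest_domain; the domain list, the
# expected-client list, and the rate threshold are placeholders to tune.

import csv
from collections import Counter

# Domains associated with AI assistant traffic (assumed/illustrative).
AI_ASSISTANT_DOMAINS = {"copilot.microsoft.com", "grok.com", "x.ai"}

# Processes normally expected to generate AI assistant traffic (assumed).
EXPECTED_CLIENTS = {"msedge.exe", "chrome.exe", "firefox.exe", "teams.exe"}

REQUESTS_PER_HOUR_THRESHOLD = 60  # arbitrary example threshold


def flag_anomalies(log_path: str) -> list[str]:
    """Return human-readable findings for suspicious AI assistant egress."""
    findings = []
    hourly_counts: Counter[tuple[str, str]] = Counter()  # (src_host, hour) -> count

    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            domain = row["dest_domain"].lower()
            if not any(domain.endswith(d) for d in AI_ASSISTANT_DOMAINS):
                continue

            # Heuristic 1: an unexpected process is talking to an AI assistant.
            if row["process_name"].lower() not in EXPECTED_CLIENTS:
                findings.append(
                    f"{row['src_host']}: process {row['process_name']} "
                    f"contacted {domain} at {row['timestamp']}"
                )

            # Heuristic 2: count requests per host per hour (assumes ISO-8601
            # timestamps); high volume may indicate beaconing, not human use.
            hour = row["timestamp"][:13]  # e.g. '2026-02-10T14'
            hourly_counts[(row["src_host"], hour)] += 1

    for (host, hour), count in hourly_counts.items():
        if count > REQUESTS_PER_HOUR_THRESHOLD:
            findings.append(
                f"{host}: {count} AI-assistant requests during {hour}:00"
            )
    return findings


if __name__ == "__main__":
    # "egress_proxy.csv" is a hypothetical export path used for illustration.
    for finding in flag_anomalies("egress_proxy.csv"):
        print(finding)
```

The two heuristics mirror the behaviors described above: AI assistant traffic originating from processes other than interactive clients, and request volumes high enough to suggest automated beaconing rather than human use.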


Conclusion


The disclosed attack technique highlights the potential for AI assistants to be abused as part of stealthy malware operations, blending into legitimate enterprise communications and evading traditional security controls. As AI systems become more ubiquitous, security teams must remain vigilant and adapt their defenses to address these emerging threats.


Sources


  • https://thehackernews.com/2026/02/researchers-show-copilot-and-grok-can.html

  • https://www.reddit.com/r/SecOpsDaily/comments/1r7fbo9/researchers_show_copilot_and_grok_can_be_abused/

  • https://x.com/shah_sheikh/status/2023829558181409182
