"Tech Giant Warns of Evolving AI Threats: The Perils of Self-Modifying Malware"

  • Nov 7, 2025
  • 2 min read

Background


  • Google's Threat Intelligence Group (GTIG) has identified a new generation of malware that uses AI during execution to mutate, adapt, and collect data in real time, helping it evade detection more effectively.

  • Cybercriminals are increasingly using AI to build malware, plan attacks, and craft phishing lures.

  • Recent research shows that AI-driven ransomware such as PROMPTLOCK can adapt during execution.


Malware with Novel AI Capabilities


  • GTIG has identified malware families, such as PROMPTFLUX and PROMPTSTEAL, that use Large Language Models (LLMs) during execution.

  • These families dynamically generate malicious scripts, obfuscate their own code to evade detection, and call AI models to create malicious functions on demand rather than hard-coding them into the malware.
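
Because these families call out to commercial LLM APIs while running, egress monitoring is one practical detection angle. Below is a minimal Python sketch, assuming DNS query logs are available as plain text with one queried domain per line; the log path is illustrative, and legitimate AI tooling on the host will also match, so hits are leads rather than verdicts.

# llm_egress_check.py - flag DNS lookups of model-serving endpoints.
# Sketch only: assumes a plain-text log with one queried domain per line.

# Endpoints used by the families described in the report (Gemini and Hugging Face).
LLM_API_DOMAINS = {
    "generativelanguage.googleapis.com",   # Google Gemini API
    "api-inference.huggingface.co",        # Hugging Face Inference API
}

def flag_llm_lookups(log_path: str) -> list[str]:
    """Return queried domains from the log that match known LLM API endpoints."""
    hits = []
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            domain = line.strip().lower()
            if domain in LLM_API_DOMAINS:
                hits.append(domain)
    return hits

if __name__ == "__main__":
    for domain in flag_llm_lookups("dns_queries.log"):
        print(f"LLM API lookup observed: {domain}")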


PROMPTFLUX


  • Dropper written in VBScript that decodes and executes an embedded decoy installer to mask its activity.

  • Its primary capability is regeneration: it uses the Google Gemini API to prompt the LLM to rewrite its own source code, then saves the new, obfuscated version to the Startup folder to establish persistence.

  • PROMPTFLUX also attempts to spread by copying itself to removable drives and mapped network shares.
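
Since the rewritten copies are dropped into the Windows Startup folder, listing script files in that location is a quick triage step. The sketch below is a minimal, hypothetical check assuming a standard per-user profile; the file-extension filter is illustrative, and any result still needs manual review because legitimate scripts can live there too.

# startup_script_check.py - list script files in the current user's Startup folder.
# Triage sketch only; review any results manually.
import os
from pathlib import Path

def startup_scripts() -> list[Path]:
    """Return script files found in the per-user Startup folder on Windows."""
    appdata = os.environ.get("APPDATA")
    if not appdata:
        return []  # not a Windows session with a standard profile
    startup = Path(appdata, "Microsoft", "Windows", "Start Menu", "Programs", "Startup")
    if not startup.is_dir():
        return []
    return [p for p in startup.iterdir() if p.suffix.lower() in {".vbs", ".js", ".wsf"}]

if __name__ == "__main__":
    for script in startup_scripts():
        print(f"Script in Startup folder: {script}")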


PROMPTLOCK


  • Cross-platform ransomware written in Go, identified as a proof of concept.

  • It leverages an LLM to dynamically generate and execute malicious Lua scripts at runtime.

  • Its capabilities include filesystem reconnaissance, data exfiltration, and file encryption on both Windows and Linux systems.
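
Encrypted files have near-uniform byte distributions, so entropy checks are a common, if coarse, way to spot mass encryption. The sketch below is a generic heuristic, not something taken from the GTIG report: it samples each file under a directory and flags entropy close to 8 bits per byte, which also matches compressed archives and media, so treat it only as a triage signal.

# entropy_triage.py - flag files whose bytes look uniformly random (possible encryption).
# Generic heuristic sketch; compressed and media files will also score high.
import math
from pathlib import Path

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def flag_high_entropy(root: str, threshold: float = 7.5, sample_bytes: int = 65536) -> list[Path]:
    """Return files under root whose first sample_bytes exceed the entropy threshold."""
    flagged = []
    for path in Path(root).rglob("*"):
        try:
            if not path.is_file():
                continue
            sample = path.read_bytes()[:sample_bytes]
        except OSError:
            continue
        if shannon_entropy(sample) >= threshold:
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    for path in flag_high_entropy("."):
        print(f"High-entropy file: {path}")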


PROMPTSTEAL


  • Data miner written in Python and packaged with PyInstaller.

  • It contains a compiled script that uses the Hugging Face API to query the Qwen2.5-Coder-32B-Instruct LLM and generate one-line Windows commands.

  • The prompts used to generate the commands indicate that it aims to collect system information and documents in specific folders.

  • PROMPTSTEAL then executes the commands and sends the collected data to an adversary-controlled server.
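
Because the model name, the Hugging Face endpoint, and the command-generation prompts ship inside the packaged Python, simple string sweeps over unpacked samples can surface them. The sketch below uses example indicator strings drawn from the descriptions in this post, not a published IOC list, and the "extracted_sample" directory name is hypothetical.

# indicator_sweep.py - look for embedded LLM-related strings in unpacked sample files.
# Example indicators only; not an official IOC list.
from pathlib import Path

INDICATORS = [
    b"api-inference.huggingface.co",        # Hugging Face Inference API endpoint
    b"Qwen2.5-Coder-32B-Instruct",          # model PROMPTSTEAL queries per the report
    b"generativelanguage.googleapis.com",   # Gemini API endpoint used by PROMPTFLUX
]

def sweep(root: str) -> dict[Path, list[bytes]]:
    """Map each file under root to the indicator strings found inside it."""
    findings = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            data = path.read_bytes()
        except OSError:
            continue
        hits = [ind for ind in INDICATORS if ind in data]
        if hits:
            findings[path] = hits
    return findings

if __name__ == "__main__":
    for path, hits in sweep("extracted_sample").items():
        print(path, [h.decode() for h in hits])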


QUIETVAULT


  • Credential stealer written in JavaScript that targets GitHub and NPM tokens.

  • Captured credentials are exfiltrated by creating a publicly accessible GitHub repository.

  • In addition to these tokens, QUIETVAULT uses an AI prompt and AI CLI tools installed on the host to search for other potential secrets on the infected system and exfiltrates those files to GitHub as well.
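
The token formats QUIETVAULT hunts for can also be hunted defensively: sweeping your own repositories and home directories shows what an attacker with host access would find. The sketch below approximates current public prefixes such as ghp_, github_pat_, and npm_; the patterns are illustrative, will produce false positives, and the scan root is simply the current directory.

# token_sweep.py - defensively scan files for GitHub and npm token patterns.
# Approximate patterns for current public token prefixes; expect false positives.
import re
from pathlib import Path

TOKEN_PATTERNS = [
    re.compile(r"gh[pousr]_[A-Za-z0-9]{36,}"),    # classic GitHub tokens (ghp_, gho_, ghu_, ghs_, ghr_)
    re.compile(r"github_pat_[A-Za-z0-9_]{20,}"),  # fine-grained GitHub personal access tokens
    re.compile(r"npm_[A-Za-z0-9]{36,}"),          # npm automation/publish tokens
]

def scan_for_tokens(root: str, max_bytes: int = 1_000_000) -> list[tuple[Path, str]]:
    """Return (file, truncated match) pairs for files under root containing token-like strings."""
    findings = []
    for path in Path(root).rglob("*"):
        try:
            if not path.is_file() or path.stat().st_size > max_bytes:
                continue
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for pattern in TOKEN_PATTERNS:
            for match in pattern.finditer(text):
                # Truncate so the report itself does not leak the secret.
                findings.append((path, match.group()[:12] + "..."))
    return findings

if __name__ == "__main__":
    for path, prefix in scan_for_tokens("."):
        print(f"Possible token in {path}: {prefix}")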


Sources


  • https://securityaffairs.com/184275/malware/google-sounds-alarm-on-self-modifying-ai-malware.html

  • https://www.linkedin.com/posts/pierluigipaganini_google-sounds-alarm-on-self-modifying-ai-activity-7392256097857867776-cldO
