The Whisper Leak: Exposing the Theft of AI Chat Topics from Encrypted Traffic

  • Nov 8, 2025
  • 2 min read

Key Findings


  • Microsoft has uncovered a novel side-channel attack, dubbed "Whisper Leak", that can identify AI chat topics in encrypted traffic

  • An attacker who can passively observe the encrypted TLS traffic between a user and a streaming LLM can use trained classifiers to infer whether the conversation topic matches a sensitive target category

  • This leakage of information about conversations with streaming-mode language models could pose serious risks to user and enterprise privacy


Background


  • Model streaming in large language models (LLMs) lets clients receive a response incrementally, token by token, as the model generates it, rather than waiting for the complete answer

  • Many side-channel attacks have been devised against LLMs in recent years, including the ability to infer the length of individual plaintext tokens from the size of encrypted packets
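The token-length side channel mentioned above rests on a simple observation: common TLS cipher suites add only a fixed per-record overhead, so ciphertext sizes track plaintext sizes. A minimal sketch, assuming one token per TLS record and an illustrative (not exact) overhead constant:

```python
# Hypothetical sketch: recovering plaintext token lengths from observed
# TLS record sizes. Assumes the streaming API emits one token per record
# and that the per-record overhead (header + AEAD tag + framing) is a
# fixed TLS_OVERHEAD bytes -- both are simplifying assumptions.
TLS_OVERHEAD = 5 + 16 + 8  # record header + GCM tag + framing (illustrative)

def token_lengths(record_sizes):
    """Map observed ciphertext record sizes back to plaintext token lengths."""
    return [size - TLS_OVERHEAD for size in record_sizes]
```

In practice providers batch tokens and add protocol framing, so real traces are noisier; this is why Whisper Leak relies on trained classifiers rather than direct subtraction.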


The Whisper Leak Attack


  • The Whisper Leak attack builds on these earlier findings, testing whether the sequence of encrypted packet sizes and inter-arrival times during a streaming response carries enough information to classify the topic of the initial prompt

  • As a proof of concept, Microsoft trained binary classifiers to distinguish prompts on a specific target topic from background traffic (i.e., noise), using three machine learning models: LightGBM, Bi-LSTM, and BERT

  • Classifiers trained against many models from Mistral, xAI, DeepSeek, and OpenAI achieved scores above 98%, allowing an attacker to reliably flag conversations on specific topics
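The classification step above starts from a captured packet trace. As a minimal sketch (the feature set is illustrative, not Microsoft's actual one), each trace of (size, inter-arrival time) pairs can be summarized into a fixed-length vector that a model such as LightGBM could consume:

```python
import statistics

def extract_features(trace):
    """Summarize a packet trace -- a list of (size_bytes, inter_arrival_s)
    pairs -- into a fixed-length feature vector for a topic classifier.
    The specific features here are illustrative assumptions; the real
    models (LightGBM, Bi-LSTM, BERT) consume richer sequence data."""
    sizes = [size for size, _ in trace]
    gaps = [gap for _, gap in trace]
    return [
        len(trace),                 # number of streamed packets
        sum(sizes),                 # total response bytes
        statistics.mean(sizes),     # mean packet size
        statistics.pstdev(sizes),   # packet-size variability
        statistics.mean(gaps),      # mean inter-arrival time
        statistics.pstdev(gaps),    # timing variability
    ]
```

Vectors like these, labeled by whether the prompt was on the target topic, would then train an off-the-shelf binary classifier.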


Mitigations and Recommendations


  • OpenAI, Mistral, Microsoft, and xAI have deployed mitigations to counter the risk, such as adding a "random sequence of text of variable length" to each response

  • Microsoft recommends that privacy-conscious users avoid discussing highly sensitive topics over untrusted networks, use a VPN, prefer non-streaming LLM endpoints, and switch to providers that have implemented mitigations
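The padding mitigation described above works by decoupling ciphertext size from token length. A minimal sketch, assuming a hypothetical chunk format (real providers define their own framing and strip the padding client-side):

```python
import secrets
import string

def pad_chunk(chunk: str, max_pad: int = 32) -> str:
    """Illustrative sketch of the deployed mitigation: append a random
    sequence of text of variable length to each streamed response chunk
    so observed ciphertext sizes no longer track token lengths.
    The NUL delimiter and letter-only padding are assumptions for this
    sketch, not any provider's actual wire format."""
    pad_len = secrets.randbelow(max_pad + 1)
    padding = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    # A real client would strip everything after the delimiter before display.
    return chunk + "\x00" + padding
```

Because the padding length is drawn fresh per chunk, two identical tokens produce different record sizes, which degrades the size-based features the Whisper Leak classifiers rely on.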


Broader Implications


  • The disclosure comes as a new evaluation of eight open-weight LLMs found them highly susceptible to adversarial manipulation, particularly multi-turn attacks

  • The Whisper Leak attack highlights the importance of addressing privacy and security concerns in the rapidly evolving field of large language models and AI-powered chatbots


Sources


  • https://thehackernews.com/2025/11/microsoft-uncovers-whisper-leak-attack.html

  • https://www.youtube.com/watch?v=742vDdf4jQc

  • https://www.reddit.com/r/SecOpsDaily/comments/1orrjw6/microsoft_uncovers_whisper_leak_attack_that/


© 2025 by Explain IT Again