ALL POSTS
Researchers Show Copilot and Grok Can Be Abused as Malware Proxies
Key Findings: Cybersecurity researchers have demonstrated that AI assistants with web browsing or URL-fetching capabilities, such as Microsoft Copilot and xAI Grok, can be abused as covert command-and-control (C2) relays. The technique lets attackers blend their malicious communications into legitimate-looking AI assistant traffic, making detection and blocking significantly harder. The attack method, dubbed "AI as a C2 proxy," leverages the web ac…
2 days ago · 2 min read
Firefox Introduces AI Kill Switch to Enhance User Privacy
Key Findings: Mozilla is releasing Firefox 148 on February 24, 2026, introducing a dedicated AI controls section in the desktop settings. This includes a "global kill switch" that lets users opt out of AI features entirely by flipping a single toggle. Turning off AI features stops the browser from sending data to external companies for processing through API calls. Users can also customize which AI tools t…
Feb 7 · 2 min read
Former Google Engineer Convicted of Stealing AI Secrets for China
Key Findings: Former Google software engineer Linwei Ding (also known as Leon Ding) was convicted by a federal jury on seven counts of economic espionage and seven counts of theft of trade secrets. Ding stole over 2,000 confidential documents containing Google trade secrets related to artificial intelligence (AI) technology. The stolen information included details about Google's custom Tensor Processing Unit (TPU) chips, Graphics Processing Unit (GPU) systems, software orchestratin…
Jan 30 · 2 min read
Fake Moltbot AI Coding Assistant: Malware Threat in VS Code Marketplace
Key Findings: A malicious Microsoft Visual Studio Code (VS Code) extension named "ClawdBot Agent - AI Coding Assistant" has been discovered on the official Extension Marketplace. The extension claims to be a free artificial intelligence (AI) coding assistant for the popular open-source project Moltbot, but it stealthily drops a malicious payload on compromised hosts. The extension was published by a user named "clawdbot" on January 27, 2026, and has since been taken down by Mic…
Jan 29 · 2 min read
VoidLink: The AI-Powered Linux Malware Framework
Key Findings: VoidLink is a sophisticated Linux malware framework built largely by a single developer with assistance from an artificial intelligence (AI) model. The malware reached over 88,000 lines of code in a short timeframe, showcasing the efficiency enabled by AI-driven development. Operational security failures by the developer exposed development artifacts, providing clear evidence that VoidLink was produced predominantly through AI-driven processes. VoidLink includes…
Jan 21 · 2 min read
Panorays 2026 Study: 85% of CISOs Unable to Detect Third-Party Threats Amid Rising Supply Chain Attacks
Key Findings and Insights: Preparedness is dangerously low: while 77% of CISOs see third-party risk as a major threat, only 21% have tested crisis-response plans in place. Most organizations are blind to vendors: although 60% report rising third-party breaches, just 41% monitor risk beyond direct suppliers. Shadow AI is creating new attack paths: despite rapid AI adoption, only 22% of CISOs have formal vetting processes, leaving unmanaged third-party AI tools embedded in core…
Jan 14 · 2 min read
The Atomic Age: Meta Secures 6.6 GW of Nuclear Power to Fuel its AI Future
Key Findings: Meta has secured up to 6.6 GW of nuclear power through landmark deals with Vistra, TerraPower, and Oklo to fuel its growing AI infrastructure and the "Prometheus" supercomputing cluster in Ohio. The collaboration with TerraPower involves financing the construction of two sodium-cooled reactors utilizing proprietary "Natrium" technology, providing 690 MW initially, with plans to expand to 2.1 GW by 2035. Meta has also entered an agreement with Oklo, a startup back…
Jan 10 · 2 min read
The $3 Trillion Opportunity: SpaceX, OpenAI, and Anthropic's Anticipated 2026 IPOs
Key Findings: SpaceX, OpenAI, and Anthropic are reportedly preparing for IPOs in 2026 that could collectively exceed $3 trillion in valuation. SpaceX is targeting a $1.5 trillion IPO, fueled by Starlink's profitability and plans to accelerate Starship's Mars colonization and develop space-based AI data centers. OpenAI is eyeing a $1 trillion IPO to fund the development of GPT-6 and the Stargate supercomputing infrastructure. Anthropic, the dark horse, may leapfrog OpenAI by go…
Jan 2 · 2 min read
AI Agents Uncover Critical Zero-Day in Global Networking Gear
Key Findings: Autonomous AI agents discovered a critical, unpatched vulnerability (CVE-2025-54322) in networking gear manufactured by Xspeeder, a Chinese vendor known for routers and SD-WAN appliances. The vulnerability is a pre-authentication Remote Code Execution (RCE) flaw with a CVSS score of 10. According to the report, this is the first remotely exploitable zero-day vulnerability discovered by an automated AI platform. The vulnerable firmware, SXZOS, powers Xspeeder's SD…
Dec 29, 2025 · 2 min read
Nomani Investment Scam Surges 62% Using AI Deepfake Ads on Social Media
Key Findings: The fraudulent investment scheme known as Nomani has seen a 62% surge, according to ESET. Nomani campaigns have expanded beyond Facebook to other social media platforms, such as YouTube. ESET blocked over 64,000 unique URLs associated with the Nomani threat this year, with the majority of detections originating from Czechia, Japan, Slovakia, Spain, and Poland. Nomani leverages social media malvertising, company-branded posts, and AI-powered video…
Dec 24, 2025 · 2 min read
Backdoor in NVIDIA AI Systems: Critical 9.8 Severity Flaws Grant Total Control
Key Findings: NVIDIA has issued a critical security update for its Isaac Launchable software, patching three vulnerabilities with CVSS scores of 9.8. The most severe flaw, CVE-2025-33222, involves hard-coded credentials that allow attackers to bypass authentication and gain complete control of affected systems. The remaining two vulnerabilities, CVE-2025-33223 and CVE-2025-33224, stem from improper privilege management, enabling attackers to execute code with elevated permiss…
Dec 24, 2025 · 2 min read
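The hard-coded-credential flaw class called out in that entry is easy to picture in miniature. The sketch below is a generic illustration, assuming nothing about NVIDIA's actual code: the token value and function names are invented for the example.

```python
import hmac
import os

# Anti-pattern: a credential baked into the shipped artifact is identical
# on every install, so extracting it once compromises all deployments.
# (The value and names here are invented, not taken from Isaac Launchable.)
HARDCODED_TOKEN = "s3cret-build-time-token"

def authenticate_bad(token: str) -> bool:
    # Anyone who pulls this constant out of the binary or container
    # image can authenticate to any affected system.
    return token == HARDCODED_TOKEN

def authenticate_better(token: str) -> bool:
    # Safer: a per-deployment secret injected at runtime (environment
    # variable, secrets manager), never compiled into the artifact.
    # hmac.compare_digest gives a constant-time comparison.
    expected = os.environ.get("SERVICE_TOKEN")
    return expected is not None and hmac.compare_digest(expected, token)
```

With no `SERVICE_TOKEN` set, `authenticate_better` rejects everything, which fails closed instead of falling back to a shared default.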
Link11 Identifies Five Cybersecurity Trends Set to Shape European Defense Strategies in 2027
Key Findings: DDoS attacks will increasingly be used as diversion tactics to draw attention away from more damaging activities. API-first architectures will increase exposure to misconfigurations and business logic abuse. Integrated WAAP platforms will overtake fragmented web security architectures. AI-driven DDoS mitigation will become essential against hyper-scale attacks. Regulatory pressure will intensify as cybersecurity oversight expands across Europe. Background: Cybersecurit…
Dec 16, 2025 · 3 min read
Advanced Phishing Kits Leverage AI and MFA Bypass Tactics
Key Findings: Four new phishing kits, named BlackForce, GhostFrame, InboxPrime AI, and Spiderman, are capable of facilitating credential theft at scale. BlackForce is designed to steal credentials and perform Man-in-the-Browser (MitB) attacks to capture one-time passwords (OTPs) and bypass multi-factor authentication (MFA). GhostFrame uses an iframe-based approach to hide its malicious behavior and easily swap out phishing content. InboxPrime AI leverages artificial intelligen…
Dec 12, 2025 · 3 min read
Researchers Uncover Critical Vulnerabilities in AI Coding Tools Exposing Data Theft and Remote Execution Risks
Key Findings: Over 30 security vulnerabilities have been disclosed across AI-powered Integrated Development Environments (IDEs). The vulnerabilities combine prompt-injection primitives with legitimate IDE features to achieve data exfiltration and remote code execution. The issues have been collectively named "IDEsaster" by security researcher Ari Marzouk (MaccariTA). They affect popular IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Cop…
Dec 6, 2025 · 2 min read
AI Adoption Outpaces Governance as Shadow Identity Risks Grow
Key Findings: 83% of organizations use AI in daily operations, yet only 13% have strong visibility into how AI systems handle sensitive data. AI increasingly behaves as an ungoverned identity: a non-human user that reads faster, accesses more, and operates continuously. 67% have caught AI tools over-accessing sensitive information, and 23% admit they have no controls for AI prompts or outputs. Background: The report, produced by Cybersecurity Insiders with research support from Cyera R…
Dec 2, 2025 · 2 min read
Anthropic: China-Backed Hackers Unleash First Large-Scale Autonomous AI Cyberattack
Key Findings: China-linked threat actors used Anthropic's AI system, Claude, to automate and execute a sophisticated espionage campaign in September 2025. The cyberspies leveraged the system's advanced "agentic" capabilities, allowing it to act autonomously and perform a range of malicious activities with minimal human oversight. The attack targeted about 30 global organizations across the tech, finance, chemicals, and government sectors, succeeding in a few cases. This incide…
Nov 16, 2025 · 2 min read
Serious AI Bugs Found Exposing Vulnerabilities in Meta, Nvidia, and Microsoft Inference Frameworks
Key Findings: Cybersecurity researchers have uncovered critical remote code execution vulnerabilities in major AI inference engines, including those from Meta, Nvidia, and Microsoft, as well as open-source projects like vLLM and SGLang. The vulnerabilities stem from the unsafe use of ZeroMQ (ZMQ) and Python's pickle deserialization, a pattern dubbed "ShadowMQ." The root cause is a vulnerability in Meta's Llama large language model (LLM) framework (CVE-2024-50050) that was patched by the…
Nov 15, 2025 · 2 min read
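The "ShadowMQ" pattern that entry describes, unpickling untrusted bytes received over a socket, can be sketched in a few lines. This is a generic illustration rather than code from any affected project: the ZeroMQ transport is simulated with plain bytes, and the class and function names are invented. A benign recorder stands in for the attacker's payload.

```python
import json
import pickle

# Records what the "attacker payload" did; a real exploit would invoke
# os.system or similar instead of this harmless recorder.
executed = []

def record(msg):
    executed.append(msg)
    return msg

class Evil:
    # pickle calls __reduce__ during serialization; on deserialization,
    # the returned callable is invoked with the given arguments, so the
    # sender fully controls what code runs on the receiver.
    def __reduce__(self):
        return (record, ("attacker-chosen code ran",))

def unsafe_handler(message: bytes):
    # Vulnerable pattern: unpickling untrusted bytes from a socket
    # executes the payload before any validation can happen.
    return pickle.loads(message)

def safe_handler(message: bytes):
    # Safer alternative: a data-only format such as JSON cannot carry code.
    return json.loads(message)

payload = pickle.dumps(Evil())
unsafe_handler(payload)   # side effect: the payload executes
print(executed)           # -> ['attacker-chosen code ran']
```

The fix in cases like this is the same in spirit as the safe handler above: never feed network input to `pickle.loads`; use a schema-validated, data-only serialization format instead.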
Chinese Hackers Exploit Anthropic AI to Orchestrate Automated Cyber Attacks
Key Findings: Chinese state-sponsored hackers successfully used Anthropic's AI coding tool, Claude Code, to automate a large-scale cyber-espionage campaign targeting about 30 global organizations. The hackers manipulated Claude Code to act as an "autonomous cyber attack agent," executing 80-90% of the tactical operations with minimal human involvement. The campaign, codenamed GTG-1002, marks the first documented case of a foreign government leveraging AI to fully automate a cybe…
Nov 14, 2025 · 2 min read
Tech Giant Warns of Evolving AI Threats: The Perils of Self-Modifying Malware
Background: Google's Threat Intelligence Group (GTIG) has identified a new generation of malware that uses AI during execution to mutate, adapt, and collect data in real time, helping it evade detection more effectively. Cybercriminals are increasingly using AI to build malware, plan attacks, and craft phishing lures. Recent research shows AI-driven ransomware like PromptLock can adapt during execution. Malware with Novel AI Capabilities: GTIG has identified malware familie…
Nov 7, 2025 · 2 min read
Do robots dream of secure computing? Exploring cybersecurity for AI systems
Background: In the late 1960s, science fiction author Philip K. Dick explored the traits that distinguish humans from autonomous robots in his novel "Do Androids Dream of Electric Sheep?" As advances in generative AI allow us to create autonomous agents that can reason and act on humans' behalf, we must consider the human traits and knowledge we must equip these agentic AI systems with to enable them to act autonomously, reasonably, and safely. One crucial skill we need to impart to o…
Nov 6, 2025 · 2 min read

