North Korean Threat Actors Apply AI to Scale Fraudulent IT Worker Schemes

Threat actors linked to North Korea are integrating artificial intelligence into their workflows to bypass hiring verification and maintain unauthorized access. By understanding how these groups use large language models and generated media, organizations can partner across security and human resources teams to strengthen their screening and identity validation processes.

Triage Security Media Team

Threat actors continue to experiment with artificial intelligence (AI) to scale their operations. Recent analysis indicates that North Korean threat actors are applying this technology pragmatically to sustain and enhance fraudulent IT worker schemes.

In a recent report, Microsoft's threat intelligence team detailed how two clusters linked to the Democratic People's Republic of Korea (DPRK), identified as "Jasper Sleet" and "Coral Sleet," apply AI to increase the scale and precision of their fraudulent campaigns. These groups use AI to fabricate digital identities, socially engineer prospective employers, and establish sustained, unauthorized access to organizational networks.

While these tactics, techniques, and procedures (TTPs) represent an evolution of existing methods rather than a fundamental shift, they remain highly effective. Organizations benefit from understanding these methods and strengthening their hiring and identity verification workflows, supplementing broader law enforcement countermeasures.

Fabricating identities and navigating the hiring process

AI technologies factor into nearly every stage of these fraudulent employment schemes. Before submitting applications, threat actors use AI to analyze job postings on freelance platforms like Upwork. They extract specific terminology, required skills, and expected certifications to craft highly tailored profiles that match employer expectations.

To bridge cultural and linguistic gaps, groups like Jasper Sleet use large language models (LLMs) to generate culturally convincing names, email formats, and social media handles. LLMs also assist in drafting resumes and cover letters designed to pass initial screening phases.

These elements are combined to create cohesive digital personas that mimic legitimate IT professionals. These personas are then deployed across multiple job applications. In some instances, the visual components, such as headshots, are entirely AI-generated. In other cases, threat actors use commercial face-swapping applications to superimpose selected faces onto stolen identification documents belonging to real individuals. During remote interviews, they may also use voice-altering software to align their audio with the fabricated persona.

Maintaining the persona and executing tasks

Securing the position is only the initial phase. Once hired, the threat actors must sustain the fabricated persona and perform the expected duties. This requires matching the communication style, tone, and technical output they presented during the interview process.

To meet these expectations, DPRK threat actors use AI similarly to standard enterprise users. They generate code snippets, draft email responses, and complete routine tasks to maintain their employment status. Microsoft researchers have also observed early experimentation with agentic AI, systems designed to support iterative decision-making and task execution. While this has not yet been observed at a wide scale due to operational risks and reliability constraints, it points to a potential shift toward more adaptive methodologies that could complicate detection efforts.

Defending against secondary objectives

The primary objective of these schemes is revenue generation for the DPRK regime. However, misusing insider access to targeted organizations presents a significant secondary risk. Groups like Coral Sleet have used AI, sometimes bypassing safety controls, to rapidly develop web infrastructure, refine unauthorized code, and assist with further social engineering. In some instances, Coral Sleet has attempted to use agentic AI to string together automated workflows, including provisioning remote infrastructure and testing unauthorized software components.

Fortunately, organizations are adapting to these methods. Brian Hussey, senior vice president of Cyber Fusion at Cyderes, observes that heightened awareness among hiring teams is positively impacting defense efforts.

"Many organizations are now incorporating verification questions during remote interviews, such as asking applicants about local landmarks or activities in the city they claim to live in," Hussey notes. Some teams also ask specific cultural or localized questions that covert operators may struggle to answer naturally. While no single method is foolproof, these verification steps indicate a growing vigilance among employers.

Hussey adds that his team has seen fewer investigations related to this specific activity over the past six months. While this may not capture the entire threat situation, it suggests a potential temporary slowdown or a pivot in the threat actors' methodologies. By understanding these AI-assisted techniques, security and human resources teams can work together to implement stronger validation measures and protect their organizations from fraudulent employment risks.