State-sponsored group adopts AI-assisted code generation for malware operations

The Pakistan-linked threat group APT36 is leveraging AI-assisted "vibe-coding" to generate high volumes of malicious software in niche programming languages. While the resulting code is often logically flawed, this automated approach aims to overwhelm standard detection baselines, highlighting the need for foundational network security and active monitoring.

Triage Security Media Team

The Pakistan-linked state-sponsored threat group APT36 has started using AI coding tools to generate large volumes of malicious software. Rather than focusing on technical sophistication, the group appears to rely on sheer volume to overwhelm organizational defenses.

Security vendor Bitdefender observed the threat actor deploying this AI-generated, or "vibe-coded," software in recent operations directed at Indian government entities, global embassies, and other South Asian organizations. Researchers refer to this high-volume methodology as "Distributed Denial of Detection."

The mechanics of distributed denial of detection

Researchers found that the generated software, referred to as "vibeware," often contained fundamental logic errors. In one instance, a credential-harvesting tool contained a placeholder instead of a valid command-and-control (C2) server address, rendering data exfiltration impossible. In another example, a backdoor's status-reporting function continuously reset its own tracking timestamp, causing the host system to always report as online regardless of its actual state.
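The timestamp-reset flaw described above can be sketched in a few lines. This is a hypothetical reconstruction for illustration only, not code from the analyzed samples; the class and method names are invented:

```python
import time

class StatusReporter:
    """Illustrative sketch of the flawed status-reporting logic
    described by researchers; not actual malware code."""

    OFFLINE_THRESHOLD = 300  # seconds without a check-in => "offline"

    def __init__(self):
        self.last_beacon = time.time()

    def record_beacon(self):
        # Intended behavior: only a real check-in refreshes the timestamp.
        self.last_beacon = time.time()

    def report_status(self):
        # Bug: the reporting path ALSO resets the tracking timestamp,
        # so the measured elapsed time is always ~0 and the host
        # reads as "online" no matter how long it has been silent.
        self.last_beacon = time.time()
        elapsed = time.time() - self.last_beacon
        return "online" if elapsed < self.OFFLINE_THRESHOLD else "offline"
```

Because the reset happens inside the reporting function itself, the `offline` branch is unreachable, which matches the "syntactically correct but logically unfinished" pattern the researchers describe.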

Bitdefender researcher Radu Tudorica noted similar patterns across the evaluated samples, observing that components frequently failed when the required logic reached a moderate level of complexity. Tudorica described the code as "syntactically correct but logically unfinished," which is typical of AI-generated outputs lacking thorough human review.

However, organizations should not dismiss the risk posed by this methodology. Even flawed tooling can achieve unauthorized access, particularly when it is written in niche programming languages and masks C2 communications behind legitimate cloud services.

APT36, also known as Transparent Tribe, employs vibe-coding (the process of using conversational, natural-language prompts to direct AI tools) to generate malicious software in less common programming languages such as Nim, Zig, and Crystal. While developing software in multiple languages previously required significant time and expertise, AI tools now enable operators with foundational skills to produce varied code across different languages with minimal effort.

Evaluating the impact on detection baselines

This language diversification presents a continuous challenge for defense systems. Many endpoint detection engines are optimized to identify unauthorized activity in common languages like C++ or C#. As Tudorica noted, when a compiled binary arrives in a language that these engines rarely encounter, it "essentially reset[s] the detection baseline."

APT36 also uses AI tools to integrate legitimate cloud platforms for C2 routing. Researchers identified the group using services like Slack, Discord, Google Sheets, and Supabase to issue commands to compromised environments and receive data. This combination allows operators with otherwise basic tools to bypass standard network defenses.

Deploying parallel operational channels

In the analyzed operations, APT36 deployed multiple simultaneous modules to affected organizations. Each component was developed in a different programming language and utilized a distinct communication protocol. This redundancy is designed to maintain network access even if defenders identify and neutralize one of the channels. Bitdefender estimated that the group produces new variants daily using its automated coding process.

"The real danger for organizations is the industrialization of mediocrity," stated Martin Zugec, technical solutions director at Bitdefender. AI enables threat actors to generate operations at a volume that can be difficult for organizations to manage if they lack foundational security hygiene.

Zugec explained that while the security industry advocates for defense-in-depth strategies, many environments still rely on flat network architectures, maintain over-privileged user accounts, and operate without active Managed Detection and Response (MDR) or Security Operations Center (SOC) monitoring. "Vibeware does not rely on technical brilliance," he noted. "It relies on exploiting the false sense of security in organizations that have simply managed to fly under the radar until now."

Researchers assessed APT36's shift toward vibe-coding as a temporary technical regression for the group. However, the broader operational trend remains a concern for defenders as AI generation models continue to mature.

Not all state-sponsored groups operate with advanced, custom-built capabilities. Many function as bureaucratic departments staffed by junior operators who historically relied on modifying open-source projects or existing frameworks. For these actors, AI-assisted code generation provides a method to scale their current operational tactics.

APT36 has historically focused on India's aerospace and government sectors. The group's portfolio includes an evolving set of tools designed for Windows and Android environments. They frequently employ living-off-the-land binaries (LoLBins) and legitimate cloud services to conceal unauthorized activity. Organizations can strengthen their posture against these high-volume operations by implementing strict network segmentation, adhering to the principle of least privilege, and monitoring for anomalous outbound traffic to external cloud services.
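Monitoring for anomalous outbound traffic to external cloud services can be as simple as comparing connection logs against a watchlist. A minimal sketch follows; the domain list, allowlist, and host names are illustrative assumptions, not a vendor ruleset:

```python
# Cloud services the article notes were abused for C2 routing.
WATCHED_CLOUD_DOMAINS = {
    "slack.com", "discord.com", "sheets.googleapis.com", "supabase.co",
}

# Hypothetical allowlist: hosts with a business reason to reach a service.
ALLOWED = {
    "slack.com": {"workstation-07"},
}

def flag_anomalous(connections):
    """Return (host, domain) pairs where a host contacts a watched
    cloud service it is not approved to use.

    `connections` is an iterable of (host, destination_domain) tuples,
    e.g. parsed from proxy or DNS logs.
    """
    flags = []
    for host, domain in connections:
        for watched in WATCHED_CLOUD_DOMAINS:
            if domain == watched or domain.endswith("." + watched):
                if host not in ALLOWED.get(watched, set()):
                    flags.append((host, domain))
    return flags
```

In practice this logic would live in a SIEM rule rather than a script, but the principle is the same: servers and workstations with no approved use of a collaboration or database API that suddenly beacon to one are worth investigating.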