Mitigating Automated Threats and Physical Compromise in Legacy Infrastructure

Recent data indicates a convergence of physical security risks and AI-scaled automation targeting legacy systems. This analysis reviews 2025 trends—from ATM jackpotting to automated firewall reconnaissance—and outlines the fundamental controls required to protect critical infrastructure.

Triage Security Media Team
3 min read

Early 2025 reporting indicates a shift in security dynamics: legacy infrastructure vulnerabilities are being amplified by modern automation. While cloud security remains a priority, recent incidents point to a resurgence in physical compromises and the automated targeting of configuration gaps. From the unauthorized dispensing of cash at ATMs to AI-scaled firewall access, threat actors are bypassing complex defenses by targeting fundamental weaknesses in physical and perimeter security.

Physical security of banking infrastructure requires renewed attention. New FBI data reports a significant increase in ATM jackpotting, with 700 incidents recorded in 2025 in which machines were forced to dispense cash without authorization. These events resulted in over $20 million in losses. Legal responses include charges against 93 individuals associated with the Tren de Aragua group for conspiring to deploy malware. These operations require physical access: actors use industrial endoscopes to manipulate internal hardware or replace hard drives with pre-infected units.

Technical analysis shows these actors target the eXtensions for Financial Services (XFS) layer, the interface bridging banking software and hardware. By introducing malware variants such as Ploutus, actors issue commands directly to the dispenser, bypassing central authorization. This access often relies on generic ATM keys and legacy operating systems that lack modern endpoint protection. Retail ATMs are frequently targeted because of the relative privacy they afford for physical manipulation, such as access to USB ports.
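The bypass can be illustrated with a minimal model. The class and method names below (`XfsDispenser`, `BankingApplication`, and so on) are hypothetical stand-ins, not the real CEN/XFS API, which is a native Windows interface; the point is only that authorization is enforced in the application layer above XFS, so any code that can reach the dispenser interface directly skips it.

```python
# Hypothetical sketch of the ATM software stack; all names are
# illustrative stand-ins, not the real CEN/XFS interfaces.

class XfsDispenser:
    """Stands in for the XFS hardware-abstraction layer."""
    def __init__(self, cash_loaded):
        self.cash_loaded = cash_loaded

    def dispense(self, amount):
        # The XFS layer trusts its caller: it knows hardware commands,
        # not accounts or host authorization.
        dispensed = min(amount, self.cash_loaded)
        self.cash_loaded -= dispensed
        return dispensed


class BankingApplication:
    """Legitimate path: authorization happens here, above XFS."""
    def __init__(self, dispenser, authorized_sessions):
        self.dispenser = dispenser
        self.authorized_sessions = authorized_sessions

    def withdraw(self, session_id, amount):
        if session_id not in self.authorized_sessions:
            return 0  # host refuses the transaction
        return self.dispenser.dispense(amount)


dispenser = XfsDispenser(cash_loaded=100_000)
atm = BankingApplication(dispenser, authorized_sessions={"s1"})

print(atm.withdraw("attacker", 50_000))  # 0: blocked by the application layer
print(dispenser.dispense(50_000))        # 50000: direct XFS call, no check
```

This is why application allowlisting on the ATM endpoint matters: the dispenser layer itself cannot distinguish a legitimate caller from malware.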

Beyond physical terminals, generative AI is being used to scale network perimeter access. Between January and February 2025, a financially motivated campaign accessed over 600 Fortinet FortiGate devices across 55 countries. The methodology is notable for using AI to automate the identification of basic configuration failures rather than leveraging new vulnerabilities. Actors used GenAI to conduct reconnaissance and generate Python scripts for decrypting configuration files. They targeted exposed management ports and accounts lacking multifactor authentication. Once inside, priority was placed on accessing Veeam Backup & Replication servers, likely to hinder recovery efforts during future ransomware events.
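Defenders can audit for the same two gaps the campaign targeted. The sketch below scans a FortiOS-style configuration dump for interfaces exposing management protocols and administrator accounts with no two-factor directive. The parsing is deliberately simplified and the directive names (`set allowaccess`, `set two-factor`) follow FortiOS CLI syntax as an assumption; verify against your firmware's documentation before relying on it.

```python
# Minimal self-audit sketch: flag exposed management access and admins
# without two-factor in a FortiOS-style config dump. Parsing is
# simplified for illustration; a production audit should use vendor APIs.

MANAGEMENT_PROTOCOLS = {"https", "ssh", "http", "telnet"}

def audit_config(lines):
    findings = []
    section = None
    current = None
    has_two_factor = False
    for raw in lines:
        line = raw.strip()
        if line.startswith("config system interface"):
            section = "interface"
        elif line.startswith("config system admin"):
            section = "admin"
        elif line.startswith('edit "'):
            current = line.split('"')[1]
            has_two_factor = False
        elif section == "interface" and line.startswith("set allowaccess"):
            exposed = MANAGEMENT_PROTOCOLS & set(line.split()[2:])
            if exposed:
                findings.append(
                    f"interface {current}: management exposed via "
                    f"{', '.join(sorted(exposed))}")
        elif section == "admin" and line.startswith("set two-factor"):
            has_two_factor = True
        elif section == "admin" and line == "next":
            if not has_two_factor:
                findings.append(f"admin {current}: no two-factor configured")
    return findings


sample = """
config system interface
    edit "wan1"
        set allowaccess ping https ssh
    next
end
config system admin
    edit "admin"
        set password ENC xxx
    next
end
""".splitlines()

for finding in audit_config(sample):
    print(finding)
```

Run against the sample, this flags both the exposed `wan1` interface and the admin account lacking two-factor, the exact combination the campaign's automation was hunting for.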

AI-augmented operations are also observed in state-sponsored activity. The Iran-linked group MuddyWater (TA450) initiated "Operation Olalampo," affecting energy and marine services in the Middle East and Africa. Researchers identified a new Rust-based tool, "Char," containing artifacts of AI generation, such as debug strings with emojis—common when code segments are generated by large language models. MuddyWater continues to use spear-phishing as a primary entry vector, deploying tools like GhostFetch and HTTP_VIP to perform reconnaissance before installing legitimate remote monitoring and management (RMM) software like AnyDesk for persistent access.

These developments demonstrate that advanced tools are often applied to basic configuration gaps. In the FortiGate campaign, actors abandoned targets with restricted management ports or enforced MFA, moving to more accessible systems. AI scales the volume of attempts but does not necessarily overcome hardened defenses. The priority remains on fundamentals: restricting management interfaces to trusted IP ranges, eliminating shared administrative credentials, and enforcing strict application allowlisting on sensitive endpoints.
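The first of those fundamentals, restricting management interfaces to trusted IP ranges, amounts to a default-deny membership check. A minimal sketch using Python's standard `ipaddress` module follows; the trusted ranges shown are RFC 5737 documentation prefixes, stand-ins for an organization's actual admin networks.

```python
import ipaddress

# Illustrative trusted management ranges (RFC 5737 documentation
# prefixes); substitute your organization's real admin networks.
TRUSTED_RANGES = [ipaddress.ip_network(n)
                  for n in ("203.0.113.0/24", "198.51.100.0/24")]

def management_access_allowed(source_ip):
    """Default-deny: permit management logins only from trusted ranges."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in TRUSTED_RANGES)

print(management_access_allowed("203.0.113.7"))  # True: trusted range
print(management_access_allowed("192.0.2.10"))   # False: denied by default
```

In the FortiGate campaign, devices enforcing exactly this kind of restriction were abandoned by the actors in favor of softer targets.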

The relevance of these fundamentals is a key topic at this year’s RSAC, where security historians compare modern configuration failures to the Enigma machine. The decryption of Enigma resulted from identifying human and engineering vulnerabilities, such as lack of rigorous testing and procedural predictability, rather than purely mathematical breakthroughs. Similarly, whether actors scale operations with AI or use industrial tools on ATMs, security remains a system of engineering and human behavior. Reliance on design without verifying procedural adherence introduces risk.

The integration of AI into threat workflows will likely compress the timeline between initial access and network compromise. As automation parses configuration files and prioritizes targets, defensive strategies must include automated monitoring of VPN logs and RMM tool usage. The use of AI-generated code by groups like MuddyWater suggests an increase in custom, potentially "noisy" malware variants. Understanding these patterns, including physical tampering and AI-assisted scanning, is essential for building a defense-in-depth strategy.

The full extent of data exfiltration from the FortiGate campaign and the specific LLM versions used by MuddyWater remain under investigation. However, the barrier to entry for high-scale operations is lowering, making the hardening of foundational infrastructure essential.