The Chrome Web Store has recently hosted a significant number of unauthorized browser extensions that claim to offer AI assistant functionality but primarily serve to exfiltrate user data. Research from LayerX indicates that these extensions, while appearing useful, redirect sensitive user inputs to external servers controlled by threat actors.
LayerX researchers identified 30 Google Chrome extensions that share nearly identical codebases under varying branding. The extensions have achieved significant reach, with tens of thousands of downloads each. By mimicking popular AI assistants, they successfully capture email content, browsing history, and data entered into chat interfaces.
Natalie Zargarov, a security researcher at LayerX, notes that while the underlying tactics are not entirely new, their application has shifted. Threat actors are moving away from spoofing banking interfaces and are instead targeting developer tools and AI interfaces. These are environments where users frequently paste sensitive information, including API keys, authentication tokens, and proprietary data.
Mimicry of AI Interfaces
The primary risk factor lies in how these extensions leverage brand trust. Users looking for efficiency often download tools that appear associated with major AI providers. Zargarov explains that these applications capitalize on familiarity with established model names. The "AI assistant" label, combined with distribution through the official Chrome Web Store, creates a presumption of legitimacy.
Upon installation, the user experience appears standard. The extension adds a toolbar icon that, when clicked, opens a chat interface, and users receive plausible, AI-generated responses to their prompts.
However, the technical implementation reveals the security risk. The chat interface is rendered as a full-screen iframe pointing to an external domain controlled by the extension operators. This iframe overlays the legitimate browser page. When a user submits a prompt, the data is transmitted directly to the operator's server. This server likely proxies the request to a legitimate Large Language Model (LLM) API to generate a response, maintaining the illusion of functionality while simultaneously capturing the input data.
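The capture-and-proxy pattern described above can be reduced to a conceptual sketch. Nothing below is code recovered from the extensions; the function name, the in-memory store, and the `forward` callable standing in for a legitimate LLM API are all illustrative assumptions:

```python
# Conceptual sketch of the capture-and-proxy pattern: the operator's
# server records every prompt before forwarding it to a real LLM API,
# so the user still receives a plausible response.

captured_prompts = []  # stands in for the operator's data store

def handle_prompt(prompt, forward):
    """Record the user's input, then proxy it to a legitimate model.

    `forward` is a placeholder for an outbound call to a real LLM API;
    from the user's side the extension appears to work normally.
    """
    captured_prompts.append(prompt)  # exfiltration happens here
    return forward(prompt)           # legitimate-looking response

if __name__ == "__main__":
    fake_llm = lambda p: "Here is a summary of your text..."
    reply = handle_prompt("Q3 revenue figures: ...", fake_llm)
    print(reply)             # what the user sees
    print(captured_prompts)  # what the operator keeps
```

The key point the sketch illustrates is that the user-visible behavior is indistinguishable from a legitimate assistant: the response is real, and the capture is a silent side effect.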
This mechanism presents a significant data privacy concern. As AI usage becomes routine, users increasingly enter sensitive information into these tools.
Enterprise Data Risks
The implications for enterprise security are substantial. Consider an employee utilizing one of these extensions while working within a corporate CRM system. If the user requests a summary of the page, the extension reads the content—potentially including customer names, contact details, and transaction histories—and transmits it to the external server.
While the employee receives a summary, the full dataset remains with the threat actor. This transfer occurs outside corporate security controls, potentially leading to the exfiltration of trade secrets or regulated data. Such exposure can result in intellectual property loss, compliance violations, and subsequent security incidents.
Challenges in Store Detection
Despite the unauthorized nature of these applications, they have gained traction. Extensions such as "Gemini AI Sidebar," "ChatGPT Translate," "AI Sidebar," and "AI Assistant" have collectively reached over 260,000 downloads. Some even received "Featured" status within the store.
A significant number of these extensions remained available for download following the initial disclosure of the findings. Their persistence highlights a gap in static analysis detection methods. Zargarov notes that because the core logic resides in a remote web application loaded via iframe, the extension package itself may contain minimal code. It requests few permissions and lacks obvious malicious indicators during static review.
If the vetting process does not deeply analyze network endpoints, shared TLS certificates, or remotely loaded JavaScript, these extensions can bypass detection. This suggests that correlation across different extensions using the same backend infrastructure is a critical area for improvement in app store security.
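One way to operationalize that correlation is to cluster store listings by the backend hosts they contact. The sketch below assumes each listing record carries a list of observed network endpoints; the field names and the grouping heuristic are hypothetical, not a description of any store's actual vetting pipeline:

```python
from collections import defaultdict
from urllib.parse import urlparse

def correlate_by_backend(listings):
    """Group extension IDs by the host of each endpoint they contact.

    Clusters of nominally unrelated extensions sharing one backend host
    are a signal worth flagging for manual review.
    """
    clusters = defaultdict(set)
    for ext in listings:
        for endpoint in ext["endpoints"]:
            host = urlparse(endpoint).hostname
            if host:
                clusters[host].add(ext["id"])
    # Keep only hosts contacted by more than one distinct extension.
    return {h: ids for h, ids in clusters.items() if len(ids) > 1}

if __name__ == "__main__":
    sample = [  # placeholder listings, not real store data
        {"id": "ai-sidebar-a", "endpoints": ["https://chat.backend.example/v1"]},
        {"id": "ai-translate-b", "endpoints": ["https://chat.backend.example/v1"]},
        {"id": "benign-notes", "endpoints": ["https://notes.example.org/api"]},
    ]
    print(correlate_by_backend(sample))
```

A production system would additionally compare TLS certificates and hashes of remotely loaded JavaScript, as noted above, but even hostname clustering catches the pattern LayerX describes, where dozens of differently branded extensions point at the same infrastructure.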
Related Research on Data Exfiltration
This campaign aligns with similar findings from Ox Security, which detailed how unauthorized extensions pose as legitimate tools to steal data. Ox Security researchers analyzed extensions impersonating a company called AItopia. These rogue extensions, including "ChatGPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI," reportedly reached hundreds of thousands of users.
The analysis by Ox Security revealed that these extensions requested consent for "anonymous analytics" but used that permission to exfiltrate complete conversation histories and browsing activity to a command-and-control (C2) server. This data often includes proprietary source code, business strategies, and internal URLs.
Security teams should advise users to verify the publisher of any browser extension and limit the use of extensions that require broad access to page content. Monitoring for traffic to known C2 domains associated with these campaigns remains a prudent defense strategy.
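On the monitoring side, matching outbound traffic against a blocklist can be as simple as exact-or-subdomain hostname comparison. A minimal sketch; the blocked domain is a placeholder, not a real indicator from these campaigns:

```python
def matches_blocklist(hostname, blocklist):
    """True if hostname equals a blocked domain or is a subdomain of one."""
    hostname = hostname.lower().rstrip(".")
    return any(
        hostname == blocked or hostname.endswith("." + blocked)
        for blocked in blocklist
    )

if __name__ == "__main__":
    c2_domains = {"bad-backend.example"}  # placeholder indicator list
    for host in ("api.bad-backend.example", "bad-backend.example", "openai.com"):
        print(host, matches_blocklist(host, c2_domains))
```

The suffix check requires a leading dot so that a lookalike registration such as `notbad-backend.example` does not match, while genuine subdomains of a blocked domain do.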