Google is gearing up for a major shift in how people interact with the web. As its Gemini assistant moves beyond simple suggestions and gains the ability to perform actions directly inside Chrome, the browser is being reshaped to handle a new category of risk.
Letting an AI click, navigate, and complete tasks on a live website introduces threats that traditional browser protections were never built to manage. To prepare for this next phase, Google is rolling out a new safety layer called the User Alignment Critic model, a system designed to keep AI behaviour predictable, supervised, and firmly tied to user intent.
Also read: ChatGPT as a grocery store: From recipe to goods delivery, OpenAI’s chatbot is evolving
Agentic browsing allows an AI assistant to click buttons, fill forms, navigate menus, and complete tasks online without the user handling each step manually. This convenience brings a significant vulnerability known as indirect prompt injection. In simple terms, a malicious site can hide instructions inside text or code that attempts to steer the AI toward unwanted actions. These actions can be as mild as opening more pages or as serious as authorising payments or requesting sensitive data.
Google is treating this threat as a structural challenge rather than a minor glitch. The company understands that as AI grows more autonomous in the browser, attacks will target the assistant instead of the user. Chrome’s answer is a multilayered defence system meant to supervise and restrict AI behaviour.
At the centre of this system is the User Alignment Critic model. It functions as a reviewer that evaluates each action the AI proposes before the browser carries it out. Instead of giving the model full access to a webpage, Google only provides metadata about the action. This reduces the possibility that the model itself could be influenced by harmful content.
Also read: Google wants you to wear AI: The 2026 glasses plan explained
The critic checks whether the action fits the user’s request and whether it stays within allowed boundaries. If the action seems risky, confusing, or unrelated to the task, the model can block it. This creates an internal feedback loop where the AI must justify every step.
Google is adding further control through something it calls Agent Origin Sets. When a user gives Gemini a task, Chrome will restrict the agent to a defined set of domains. This prevents the AI from drifting to unrelated sites or following links that attackers might place deliberately. It keeps the task focused and easier to supervise.
The browser will also require human approval for any high risk operation. Payments, logins, form submissions involving personal data, and interaction with banking or government portals will remain under user control. Action logs will be accessible so users can track what the AI attempted, allowed, or rejected.
Google’s strategy signals a shift toward more autonomous AI features inside Chrome. But the company is aware that trust cannot be assumed. With this new system, Google aims to demonstrate that convenience and safety can coexist. The protections are not meant to slow down agentic AI. They are designed to ensure that every action taken on behalf of the user truly serves the user.
As AI driven browsing becomes mainstream, such checks will likely become a standard part of how browsers are built. Google’s approach suggests that autonomy must always come with accountability, and that the safest AI is the one that knows when to stop.
Also read: Why the microSD Card could make a huge comeback in 2026