When robots learn to reason before moving, the line between artificial intelligence and human-like agency begins to blur. With the launch of Gemini Robotics 1.5, DeepMind is positioning its latest AI systems as a bridge between language models and machines that can physically act in the real world.
The release introduces two complementary models: Gemini Robotics 1.5, a vision-language-action system, and Gemini Robotics-ER 1.5, an embodied reasoning model. Together, they embody DeepMind’s ambition to move from digital chatbots to agents that can think, plan, and manipulate objects in physical environments.
Unlike many robotics models that map instructions directly to motions, Gemini Robotics 1.5 is designed to pause and plan. It generates natural-language reasoning chains before choosing its next move, breaking down complex instructions into smaller, safer substeps. This makes its decision-making process more transparent and, crucially, easier for developers and operators to trust.
That reasoning capability is strengthened by Gemini Robotics-ER 1.5, which focuses on high-level planning. It can map spaces, weigh alternatives, call external tools like search, and orchestrate the movements of the vision-language-action model. In essence, one model acts as the “mind,” while the other executes as the “body.”
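The division of labor described above can be sketched as a simple planner/executor loop: a "mind" that breaks an instruction into substeps with stated rationales, and a "body" that carries each one out behind low-level safety checks. Every name below (`plan_steps`, `execute_step`, `passes_safety_check`) is a hypothetical illustration, not part of the actual Gemini Robotics API:

```python
# Hypothetical "mind"/"body" loop. These names are illustrative only;
# they are not the real Gemini Robotics interface.

from dataclasses import dataclass


@dataclass
class Step:
    description: str  # natural-language substep, e.g. "pick up the red cup"
    rationale: str    # the planner's stated reasoning for this substep


def plan_steps(instruction: str) -> list[Step]:
    """Stand-in for the embodied-reasoning model (the 'mind'):
    decompose a complex instruction into smaller, safer substeps,
    each paired with a natural-language rationale."""
    # A real planner would query the ER model; here we hard-code a toy plan.
    return [
        Step("locate the target object", "must see it before grasping"),
        Step("grasp the object", "a stable grip is needed before moving"),
        Step("place the object at the goal", f"completes: {instruction}"),
    ]


def passes_safety_check(step: Step) -> bool:
    """Placeholder for low-level safeguards such as collision avoidance."""
    return True


def execute_step(step: Step) -> bool:
    """Stand-in for the vision-language-action model (the 'body'):
    turn one substep into motion, guarded by the safety check."""
    if not passes_safety_check(step):
        return False
    print(f"executing: {step.description} (because {step.rationale})")
    return True


def run(instruction: str) -> bool:
    """Plan first, then act: surface the reasoning chain before any
    motion, and stop if a substep fails its safety check."""
    return all(execute_step(step) for step in plan_steps(instruction))
```

Separating the reasoning chain from execution is what makes the process inspectable: each substep carries a human-readable rationale that operators can audit before or after the robot moves.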
A longstanding problem in robotics is that machines differ widely in shape and mechanics – an arm designed for factory assembly is nothing like a humanoid assistant. DeepMind says Gemini Robotics 1.5 can generalize across different robot embodiments, applying what it learns on one platform to another without needing to be retrained from scratch. Demonstrations include transferring skills from research robots like ALOHA 2 to humanoid prototypes and commercial robotic arms.
This adaptability could lower the barrier to deploying AI-driven robots in industries where diversity of hardware has slowed progress.
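One common way to think about cross-embodiment transfer is an abstraction layer: a single policy emits embodiment-agnostic actions, and thin per-robot adapters translate them into hardware-specific commands. The sketch below is purely illustrative and makes no claim about how DeepMind actually implements this; all class and function names are hypothetical:

```python
# Hypothetical sketch: one policy, many embodiments.
# Adapter names are illustrative, not DeepMind's design.

from abc import ABC, abstractmethod


class EmbodimentAdapter(ABC):
    """Translates an embodiment-agnostic action, such as 'move the
    end effector to (x, y, z)', into commands for one robot."""

    @abstractmethod
    def to_joint_commands(self, action: dict) -> list[float]: ...


class DualArmAdapter(EmbodimentAdapter):
    """Toy mapping for a dual-arm research platform (think ALOHA 2)."""

    def to_joint_commands(self, action: dict) -> list[float]:
        target = [action["x"], action["y"], action["z"]]
        return target * 2  # mirror the target to both arms (toy logic)


class SingleArmAdapter(EmbodimentAdapter):
    """Toy mapping for a single commercial robotic arm."""

    def to_joint_commands(self, action: dict) -> list[float]:
        return [action["x"], action["y"], action["z"]]


def act(policy_action: dict, robot: EmbodimentAdapter) -> list[float]:
    """The same high-level action is reused across hardware; only the
    adapter changes, so the policy needs no per-robot retraining."""
    return robot.to_joint_commands(policy_action)
```

The point of the pattern is that the expensive, learned component stays fixed while only the cheap translation layer varies per platform, which is what would lower the deployment barrier across diverse hardware.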
In testing, Gemini Robotics-ER 1.5 achieved state-of-the-art results across 15 embodied reasoning benchmarks covering spatial understanding, planning, and interactive problem-solving. While benchmarks are only a proxy for real-world reliability, they suggest DeepMind’s models are advancing faster than earlier embodied AI systems.
Bringing AI into the physical world raises risks far beyond the digital domain. DeepMind highlights safety as a core feature: Gemini Robotics systems are trained to reason about potential hazards before acting, while also constrained by low-level safeguards such as collision avoidance.
To push this further, DeepMind has expanded its ASIMOV benchmark, a test suite for evaluating semantic safety in robotics. The goal is to ensure that robots not only perform tasks correctly but also align with human values and physical safety standards.
Access to the models will be staged. Gemini Robotics-ER 1.5 is being rolled out via the Gemini API in Google AI Studio, allowing developers to start building embodied AI applications. The action-focused Gemini Robotics 1.5 is initially available to select partners, reflecting both its experimental status and the sensitivity of giving machines the power to act in the world.
The launch of Gemini Robotics 1.5 marks an inflection point in AI research. For years, large language models have dazzled in digital domains – answering questions, generating code, writing stories. DeepMind’s new effort shows the next frontier: merging those reasoning abilities with machines capable of interacting with the messy, unpredictable real world.
If successful, the technology could reshape industries from logistics to home assistance, and bring society closer to robots that can learn, adapt, and help in everyday life. But with greater agency comes greater responsibility, a fact DeepMind seems keenly aware of.
As the company frames it, Gemini Robotics isn’t just about smarter machines. It’s about laying the foundations for robots that can think, plan, and act much as humans do – safely.