Google unveils new AI model HOPE that moves closer to continual learning
Google introduces HOPE, a self-modifying AI model using nested learning to enable continual learning without forgetting past knowledge.
HOPE outperforms current LLMs in memory management and adaptability, showing lower perplexity and higher accuracy on benchmarks.
Researchers see HOPE as a step toward AGI, offering a framework that mimics human-like learning and long-term memory retention.
Google has announced HOPE, a new AI model developed to help machines learn and adapt over time. The company describes HOPE as a “self-modifying” system built on a new learning framework called nested learning, which is designed to overcome one of AI’s biggest challenges: the inability to keep learning without forgetting past information. According to Google’s blog post, HOPE outperforms current LLMs in memory management and adaptability. The researchers also suggest the approach may one day help bridge the gap between existing AI systems and human-like intelligence, often referred to as artificial general intelligence (AGI). The findings were presented in a paper titled “Nested Learning: The Illusion of Deep Learning Architectures” at the NeurIPS 2025 conference.
What is Google’s HOPE?
Google’s research team created HOPE as a proof-of-concept for a new way of training models called nested learning. In this structure, multiple learning processes operate together, allowing the model to retain and build upon what it learns instead of discarding earlier knowledge.
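Google has not released HOPE’s code, and the blog post describes nested learning only at a high level, but the general idea of several learning processes running at different speeds can be sketched in a toy example. Everything below (the fast/slow split, the learning rates, the update schedule) is an invented illustration, not Google’s implementation:

```python
import random

# Hypothetical toy sketch of nested learning loops -- NOT Google's HOPE code.
# A "fast" weight adapts on every example; a "slow" weight consolidates that
# knowledge only every few steps, so older learning changes gradually.

FAST_LR, SLOW_LR, SLOW_EVERY = 0.5, 0.1, 10

fast_w, slow_w = 0.0, 0.0

def predict(x):
    # Prediction combines quickly-adapting and slowly-consolidated knowledge.
    return (fast_w + slow_w) * x

# Toy data stream: the model should learn that y is roughly 2 * x.
for step in range(200):
    x = random.uniform(-1.0, 1.0)
    y = 2.0 * x + random.gauss(0.0, 0.05)

    error = y - predict(x)
    fast_w += FAST_LR * error * x              # inner (fast) learning loop
    if step % SLOW_EVERY == 0:
        slow_w += SLOW_LR * (fast_w - slow_w)  # outer (slow) consolidation loop

print(f"fast weight: {fast_w:.2f}, slow weight: {slow_w:.2f}")
```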
Why researchers believe that Google’s HOPE could change AI’s future
One of the biggest hurdles in developing more advanced AI systems is continual learning, the ability to learn new information without overwriting previous knowledge. Traditional LLMs, while capable of producing text, code, and complex reasoning, still suffer from what researchers call “catastrophic forgetting”. This occurs when new training data causes the system to lose previously acquired knowledge.
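The effect is easy to reproduce even with a toy model: train it on one task, then on a second, and its performance on the first collapses. The short Python sketch below is a made-up illustration of that behaviour (the single-weight “model” and both tasks are invented for this example; nothing here comes from Google’s paper):

```python
import random

# Toy illustration of catastrophic forgetting -- a hypothetical example.
# A single-weight model is trained on task A (y = 3x), then on task B
# (y = -3x), and its fit to task A is lost.

def train(w, target_slope, steps=500, lr=0.1):
    for _ in range(steps):
        x = random.uniform(-1.0, 1.0)
        error = target_slope * x - w * x
        w += lr * error * x          # standard gradient step overwrites w
    return w

def task_a_loss(w, samples=1000):
    # Mean squared error on task A (true slope 3).
    return sum((3 * x - w * x) ** 2
               for x in (random.uniform(-1.0, 1.0) for _ in range(samples))) / samples

w = 0.0
w = train(w, target_slope=3.0)       # learn task A
print("loss on task A after task A:", round(task_a_loss(w), 3))  # near zero

w = train(w, target_slope=-3.0)      # learn task B with the same weight
print("loss on task A after task B:", round(task_a_loss(w), 3))  # large again
```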
AI researcher Andrej Karpathy, a co-founder of OpenAI and former director of AI at Tesla, recently noted that this limitation is one reason AGI may still be a decade away. “They don’t have continual learning. You can’t just tell them something, and they’ll remember it,” he said on a podcast, describing today’s models as “cognitively lacking.”
How HOPE tackles the challenge of continual learning
Google’s new approach attempts to address this issue by redefining how learning itself occurs within a model. The company’s researchers argue that the structure of a model and the way it is trained are two sides of the same coin, both essential to how intelligence can emerge. By linking these processes, nested learning introduces what the team calls “a new, previously invisible dimension” for building more capable AI systems.
Google’s HOPE: early results and performance
Using HOPE, Google has reportedly achieved lower perplexity (a measure of how uncertain a model is about the next word) and higher accuracy than current leading LLMs on a range of standard language and reasoning benchmarks. These early results suggest that nested learning could provide a foundation for AI systems that evolve in a more human-like manner, continuously refining their understanding instead of starting over with each new dataset.
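For readers unfamiliar with the metric, perplexity is the exponential of the average negative log-probability a model assigns to the correct next tokens, so lower values mean the model is less “surprised” by real text. The small Python example below uses made-up probabilities purely to show how the number behaves:

```python
import math

# Perplexity in miniature: the exponential of the average negative
# log-probability a model assigns to the correct next tokens.
# The probabilities below are invented purely for illustration.

def perplexity(token_probs):
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

confident_model = [0.9, 0.8, 0.85, 0.7]   # high probability on each true token
uncertain_model = [0.2, 0.1, 0.25, 0.15]  # probability spread thinly

print(round(perplexity(confident_model), 2))  # ~1.24 -- low perplexity
print(round(perplexity(uncertain_model), 2))  # ~6.0  -- high perplexity
```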
“We believe the nested learning paradigm offers a robust foundation for closing the gap between the limited, forgetting nature of current LLMs and the remarkable continual learning abilities of the human brain,” the researchers wrote. Google’s HOPE is still at the experimental stage; if it is eventually made public, it could become yet another way in which AI is built and used.
Bhaskar Sharma
Bhaskar is a senior copy editor at Digit India, where he simplifies complex tech topics across iOS, Android, macOS, Windows, and emerging consumer tech. His work has appeared in iGeeksBlog, GuidingTech, and other publications, and he previously served as an assistant editor at TechBloat and TechReloaded. A B.Tech graduate and full-time tech writer, he is known for clear, practical guides and explainers.