Google has announced HOPE, a new AI model designed to help machines learn and adapt over time. The firm describes HOPE as a “self-modifying” system built on a new learning framework called nested learning, which aims to overcome one of AI’s biggest challenges: the inability to learn continually without forgetting past information. According to Google’s blog post, HOPE outperforms conventional LLMs in memory management and adaptability. The researchers add that the approach may one day help bridge the gap between existing AI systems and human-like intelligence, sometimes called artificial general intelligence (AGI). The findings were presented in a paper titled “Nested Learning: The Illusion of Deep Learning Architectures” at the NeurIPS 2025 conference.
Google’s research team created HOPE as a proof of concept for nested learning, a new way of training models. In a nested learning structure, multiple learning processes operate together, allowing the model to retain and build upon what it learns instead of discarding earlier knowledge.
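Google has not published implementation details in this article, but the idea of several learning processes operating together can be sketched as learners updating at different timescales. The snippet below is a purely conceptual illustration (not Google’s algorithm): a fast inner weight adapts on every step, while a slow outer weight consolidates the fast weight every few steps, so earlier knowledge is folded in rather than overwritten.

```python
# Conceptual sketch only (not Google's implementation): two "nested"
# learners at different timescales. The fast weight chases the current
# signal every step; the slow weight absorbs it every `period` steps.

def nested_update(stream, period=4, inner_lr=0.5, outer_lr=0.1):
    fast, slow = 0.0, 0.0
    for step, target in enumerate(stream, start=1):
        # Inner loop: fast weight adapts to the latest observation.
        fast += inner_lr * (target - (slow + fast))
        # Outer loop: slow weight consolidates periodically.
        if step % period == 0:
            slow += outer_lr * fast
            fast *= (1 - outer_lr)  # shrink what was just consolidated
    return slow, fast

slow, fast = nested_update([1.0] * 8)  # slow weight drifts toward 1.0
```

The two timescales are the key design choice: rapid adaptation happens in the inner learner, while the outer learner accumulates knowledge gradually instead of being overwritten by each new observation.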
One of the biggest hurdles in developing more advanced AI systems is continual learning, the ability to learn new information without overwriting previous knowledge. Traditional LLMs, while capable of producing text, code, and complex reasoning, still suffer from what researchers call “catastrophic forgetting”. This occurs when new training data causes the system to lose previously acquired knowledge.
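Catastrophic forgetting is easy to reproduce in miniature. In this toy example (unrelated to HOPE itself), a one-parameter linear model is trained with plain SGD first on task A (target rule y = 2x), then on task B (y = -3x); the second training phase overwrites the single weight, and performance on task A collapses.

```python
# Minimal demonstration of catastrophic forgetting: sequential SGD
# training on two tasks, where the second overwrites the first.

def sgd_fit(w, data, lr=0.1, epochs=200):
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of squared error
    return w

task_a = [(1.0, 2.0), (2.0, 4.0)]    # task A: y = 2x
task_b = [(1.0, -3.0), (2.0, -6.0)]  # task B: y = -3x

w = sgd_fit(0.0, task_a)             # learns w ~= 2
err_a_before = abs(w * 1.0 - 2.0)    # near zero: task A mastered
w = sgd_fit(w, task_b)               # learns w ~= -3 ...
err_a_after = abs(w * 1.0 - 2.0)     # ... and task A is forgotten
```

Real networks have millions of parameters rather than one, but the failure mode is the same: gradient updates for new data move shared weights away from configurations that encoded old knowledge.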
AI researcher Andrej Karpathy, a founding member of OpenAI and former director of AI at Tesla, recently noted that this limitation is one reason AGI may still be a decade away. “They don’t have continual learning. You can’t just tell them something, and they’ll remember it,” he said on a podcast, describing today’s models as “cognitively lacking.”
Google’s new approach attempts to address this issue by redefining how learning itself occurs within a model. The company’s researchers argue that the structure of a model and the way it is trained are two sides of the same coin, both essential to how intelligence can emerge. By linking these processes, nested learning introduces what the team calls “a new, previously invisible dimension” for building more capable AI systems.
Using HOPE, Google has reportedly achieved lower perplexity (a measure of how uncertain a model is about the next token) and higher accuracy than current leading LLMs on a range of standard language and reasoning benchmarks. These early results suggest that nested learning could provide a foundation for AI systems that evolve in a more human-like manner, continuously refining their understanding instead of starting over with each new dataset.
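Perplexity, the metric cited above, is the exponential of the average negative log-probability a model assigns to each correct next token; lower values mean the model is less “surprised” by the text. The probabilities below are made-up values for illustration.

```python
import math

def perplexity(token_probs):
    """Perplexity = exp of the mean negative log-likelihood."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

confident = perplexity([0.9, 0.8, 0.95])  # model fairly sure of each token
uncertain = perplexity([0.2, 0.1, 0.25])  # model mostly guessing
# A model that assigns probability 0.5 everywhere has perplexity 2,
# as if choosing uniformly between two options at each step.
```

Intuitively, a perplexity of k means the model is as uncertain as if it were picking uniformly among k equally likely tokens at every step.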
“We believe the nested learning paradigm offers a robust foundation for closing the gap between the limited, forgetting nature of current LLMs and the remarkable continual learning abilities of the human brain,” the researchers wrote. HOPE remains an experimental research project; if the approach matures into publicly available systems, it could change how AI models learn and how we use them.