At the India AI Impact Summit 2026 in New Delhi, one of the “Godfathers of AI,” Yoshua Bengio, delivered what may be the most sobering – and quietly radical – vision for the future of artificial intelligence. While the industry races toward ever-more capable and agentic systems, Bengio’s message was blunt: intelligence without safety isn’t progress, but risk at massive scale.
Across a wide-ranging conversation, the deep learning pioneer laid out a roadmap for safer AI built around three fundamental ideas – separating understanding from action, rebuilding trust in AI systems, and confronting the political and economic realities shaping the technology’s future.
For Bengio, the core architectural mistake of today’s AI systems lies in how they fuse understanding and action into a single entity.
“Right now the systems we build kind of mix together the ability of the system to understand the world and the ability of the system to predict and act in the world,” he said.
The goal, he argued, is to separate out the aspect of learning that is good at explaining and predicting, without giving the system any desire or morality of its own. Morality, Bengio emphasized, is something that “we should be the ones specifying, where society at large should be deciding what is acceptable and what is not.”
Also read: Yoshua Bengio’s new safe AI vision cuts AI’s biggest risks by rewarding truth
This separation, he argues, would create systems capable of deep understanding without autonomous intent – an essential step toward trustworthy AI. His own endeavour in this regard is, of course, Scientist AI.
“Instead of trying to imitate what a human would say next, we’re trying to explain why a human would say that thing,” Bengio said. This kind of AI would analyse each prompt and respond based on that analysis, much as a real scientist would.
“They would try to understand why it is that all these people are saying those things and is there an explanation that is coherent with the other facts. So that is a sort of way we’re designing the learning system so that it would come to that kind of analysis rather than imitate people,” Bengio explained.
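The distinction Bengio draws can be illustrated with a deliberately simple toy – this is an assumption-laden sketch for intuition only, not the actual Scientist AI design. An imitator just reproduces what is most commonly said, while an explainer scores candidate hypotheses by how coherently they account for all the observed facts together:

```python
# Toy contrast (illustrative only, NOT Bengio's actual Scientist AI):
# imitation vs. explanation over a handful of observed statements.
from collections import Counter

observations = ["the street is wet", "people carry umbrellas", "clouds overhead"]

def imitate(obs):
    # Imitation: echo the most frequent statement, with no model of why
    # people say it.
    return Counter(obs).most_common(1)[0][0]

# Hypothetical hypothesis space: each explanation maps to the set of
# observations it accounts for (names invented for this sketch).
hypotheses = {
    "it rained": {"the street is wet", "people carry umbrellas", "clouds overhead"},
    "a pipe burst": {"the street is wet"},
}

def best_explanation(obs):
    # Explanation: prefer the hypothesis coherent with the most facts.
    return max(hypotheses, key=lambda h: sum(o in hypotheses[h] for o in obs))

print(imitate(observations))           # repeats an observation verbatim
print(best_explanation(observations))  # picks the hypothesis covering all facts
```

The point of the toy is the shape of the objective: the second function searches over explanations and rewards coherence with the evidence, rather than mimicking the surface behaviour of the data.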
He also warned that AI safety research is dramatically underfunded compared to capability development.
“The investment in making AIs more capable and smarter is roughly in a ratio of a thousand to one compared to the investment in research in safety,” Bengio warned.
Bengio’s concerns extended beyond technology into global power structures. Asked about India and the Global South, he offered a concrete action plan.
“What you need to do is get together with other countries and tell the governments of the US and China that it is unacceptable that you will be potentially passive victims of things they build,” said Bengio, not mincing words. “If AI capability continues to grow there’s a real possibility that there’ll be a huge discrepancy, and that could give those countries huge economic power but also political and military power.”
Even as he expressed optimism about solving technical alignment challenges, Bengio’s final note was cautionary.
“I’m optimistic that there is a solution to the alignment problem, but that doesn’t solve the political problem because it can still be a tool for domination and I’m much more skeptical about our ability to do politics especially at the global level.”
Also read: AI will take jobs, but not this: AI godfather Yoshua Bengio’s advice for the next generation