The conversation around artificial intelligence safety could not come at a more urgent time for children and adolescents, whose minds are still impressionable and developing. At the India AI Impact Summit 2026, a coalition spanning IEEE, UNESCO, the OECD and India’s own public technology institutions made one thing abundantly clear…
Protecting young users from AI’s unintended consequences is no longer a theoretical exercise. It is an urgent design and governance challenge unfolding in real time.
Amir Banifatemi of AI Commons set the tone with a stark assessment of the structural gap shaping AI safety today. “The problem is that we’re facing a two-speed problem. On one side you have institutions and regulators coming up with frameworks based on principles… but at the same time we see an increasing growth of AI deployment at a very rapid scale. This creates a chasm because we don’t sync them together, and policy may fall behind instead of being preventive about it.”
For Banifatemi, the solution lies in alignment: “So if we really want a framework of trust, we need to sync innovation and policy and build systems that are trustworthy, accountable and transparent.”
That trust deficit becomes sharper when young users enter the frame. Banifatemi pointed to a rapidly evolving threat landscape where misinformation and algorithmic manipulation are no longer edge cases. “When it comes to misinformation, manipulation or deepfakes, there are really three issues. People can misuse systems, models can make mistakes because of poor data or transparency, and we’re entering a third wave where autonomous agents may change objectives.”
The result, he warned, is a deeper epistemic crisis. “So what is left for us humans — how can we trust information anymore? The answer is to build frameworks of trust and accountability that keep pace with innovation.”
For policymakers and parents alike, the risks are already visible. Karine Perset of the OECD brought a personal lens to the debate, underscoring how quickly generative AI has outpaced digital literacy among younger users.
“I see what my teenagers do with their social media and chatbots, and many things that I don’t see. The things I do see are pretty scary because they show that they’re not prepared, they’re not equipped to deal with so much information.” She added that younger users lack the contextual filters adults take for granted. “They have no way to navigate this world, and they’re younger and more vulnerable. So they need environments where the trustworthiness of information is ensured,” Perset said.
Yet the scale of the challenge makes isolated solutions ineffective. “The challenges are unprecedented in scale and complexity, and no single actor can address them alone,” Perset said. “We need collaborative cross-disciplinary efforts that combine policy innovation with technical ingenuity. Policies and technical solutions must move hand in hand.”
At UNESCO, the focus is on the deeper cognitive and societal implications of AI-driven information systems. Mariagrazia Squicciarini captured the disorientation of a synthetic media environment.
“The inherent challenge of the AI era is the difficulty of distinguishing what is real from what is not. It’s like walking in a dark room without being able to see anything — yet we do this every day online.” For younger users navigating hyper-personalised feeds, that confusion compounds quickly. “When this is coupled with the quantity of information and hyper-personalization, the risks multiply. Youth is a priority group for UNESCO because they are already at the center of this ecosystem.”
Crucially, she argued that young users must be participants in shaping AI governance, not merely its subjects. “Young people were born digital and trust technologies because they see them as part of their lives. That is why they must not only be protected but included in shaping solutions.”
The stakes extend beyond literacy alone. “Education, cognitive skills and emotional skills are intertwined with AI literacy. There is nothing more dangerous than taking trustworthiness of information for granted,” Squicciarini summed up.
India’s approach offers a parallel track focused on infrastructure-level trust. Mohammed Misbahuddin of C-DAC India framed AI safety as an extension of existing digital trust frameworks. “Information integrity today is about knowing whether what we see is real or fake. With deepfakes and synthetic media, it is becoming harder to distinguish authenticity.”
Misbahuddin pointed to India’s experience with population-scale digital infrastructure as a template. “India has built trust layers like digital identity, digital signatures and UPI at population scale, showing that trust can be engineered into infrastructure. Exactly the same trust-based framework is required for AI.”
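Misbahuddin did not detail how such a trust layer for AI-era content might work, but the core primitive he invokes, the digital signature, is well understood. As a purely illustrative sketch (not C-DAC’s design; the function names here are hypothetical, and it relies on the open-source Python `cryptography` library), content could be signed at the point of creation and verified before it is displayed, so tampering is detectable:

```python
# Illustrative only: signing content at creation and verifying it before display.
# This sketches the generic digital-signature primitive that population-scale
# trust layers are built on; it is not any specific national framework.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# The publisher (e.g., a newsroom or an AI provider) holds a private key.
publisher_key = Ed25519PrivateKey.generate()

def sign_content(content: bytes) -> bytes:
    """Attach provenance: sign the content with the publisher's private key."""
    return publisher_key.sign(content)

def verify_content(content: bytes, signature: bytes,
                   public_key: Ed25519PublicKey) -> bool:
    """Anyone holding the public key can check the content is untampered."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False

article = b"Genuine report from a verified source."
sig = sign_content(article)

print(verify_content(article, sig, publisher_key.public_key()))   # True
print(verify_content(b"Doctored text.", sig, publisher_key.public_key()))  # False
```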
That framework is now being extended directly into education. “Safe and trusted AI is now a pillar under the IndiaAI Mission. We are introducing AI education for students from class 8 to 12 along with age-appropriate and trusted design principles,” Misbahuddin said. “Youth AI must be built with trust and accountability from the start. That is critical for the next generation.”
But the long-term developmental effects of constant AI exposure remain uncertain. Yuko Harayama of RIKEN warned that the world is effectively running a real-time experiment on children. “We don’t yet know the long-term impact of using AI every day on children’s development. Even very young children are already using these tools, and it changes how they interact and form values.”
Waiting for perfect data is not an option. “We need scientific evidence and collaboration across countries and cultures to understand these impacts. We don’t have time to wait because they are growing up now.”
For standards bodies like IEEE, the gap between technological capability and governance maturity is now the central risk.
Alpesh Shah of the IEEE Standards Association was blunt: “The problem hasn’t been the technology — it’s everything else. Technology has outpaced how we think about governance and safety.” That mismatch demands inclusive frameworks.
“That’s why inclusion is critical, especially including youth who understand these systems better than anyone. Age-appropriate design and global standards must work together to protect them,” highlighted Shah.
The solution, he argued, lies in collective action rather than institutional silos. “No one can do this alone because no single institution has the context for every problem. Multiple governance models and standards are required to address misinformation and protect young users at scale. Partnerships are the only way forward,” summed up Shah.
If there was a unifying message from the summit, it was that safeguarding the next generation in an AI-saturated world will require nothing less than synchronized global cooperation, with standards bodies, governments, educators and industry working together to ensure that trust is not the price of technological progress.