India AI Impact Summit 2026: Trust in the age of synthetic media is turning into infrastructure

At the India AI Impact Summit 2026, a session titled “Building trust in the age of synthetic media” tried to do something unusually concrete for a topic that often collapses into vibes and fear. Rather than arguing about whether AI-generated content is inherently “good” or “bad”, the panellists repeatedly came back to what it would take to make transparency about digital media legible, scalable, and compatible across platforms and jurisdictions.

Convened by the Coalition for Content Provenance and Authenticity (C2PA) and the Information Technology Industry Council (ITI), the panel featured Andy Parsons (Global Head of Content Authenticity at Adobe), John Miller (General Counsel and Senior Vice President of Policy at ITI), Gail Kent (Global Public Policy Director at Google), Sameer Boray (Senior Policy Manager at ITI), and Deepak Goyal from the Ministry of Electronics and Information Technology. Their shared premise: synthetic media is scaling fast, and trust is now a foundational requirement for everything from democratic discourse to consumer safety.

Provenance is being framed as “context”, not policing

Early in the session, John Miller positioned trust as the unifying thread running through modern digital policy, from privacy to cybersecurity to AI governance. His framing was careful: content provenance standards such as C2PA are not meant to be a moderation tool, nor a mechanism for censorship. Instead, the pitch is closer to “verifiable context”, a way to attach tamper-resistant metadata to content so people and platforms can understand how something was made, edited, and shared.

That distinction matters because it tries to defuse a predictable backlash: that any system which labels, traces, or verifies media will become a lever for controlling speech. The panel kept reiterating the inverse idea: that provenance, at least in its ideal form, shifts decision-making to the viewer by providing information, not by making the decision for them.

This is where the “nutrition label” metaphor surfaced, a way to describe provenance as a standardised information panel. The ambition is not to decide truth, but to make questions like “is this a photograph?”, “was this edited?”, “was AI involved?”, and “what tool chain touched it?” answerable through objective signals.
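To make that metaphor concrete, here is a deliberately simplified Python sketch of how such a label could be bound to a file so that later changes are detectable. It is an illustrative analogue only: real C2PA Content Credentials embed a certificate-signed, CBOR-encoded manifest inside the asset itself, whereas this toy version uses a standard-library hash and HMAC, and every field name in it is invented for the example.

    import hashlib
    import hmac
    import json
    from pathlib import Path

    SIGNING_KEY = b"demo-key-not-a-real-certificate"  # real C2PA signing uses X.509 certificates

    def issue_credential(asset_path: str, manifest: dict) -> dict:
        """Bind a manifest to the asset's exact bytes, then sign the pair."""
        asset_hash = hashlib.sha256(Path(asset_path).read_bytes()).hexdigest()
        payload = {"asset_sha256": asset_hash, "manifest": manifest}
        body = json.dumps(payload, sort_keys=True).encode()
        signature = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
        return {"payload": payload, "signature": signature}

    def verify_credential(asset_path: str, credential: dict) -> str:
        """Report whether the manifest still describes this exact file."""
        body = json.dumps(credential["payload"], sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, credential["signature"]):
            return "manifest was altered after signing"
        current = hashlib.sha256(Path(asset_path).read_bytes()).hexdigest()
        if current != credential["payload"]["asset_sha256"]:
            return "asset no longer matches the credential (edited or re-encoded)"
        return "credential intact: manifest describes this exact file"

    if __name__ == "__main__":
        # The manifest answers the "nutrition label" questions as plain fields.
        manifest = {
            "captured_with": "camera",           # is this a photograph?
            "edits": ["crop", "colour_grade"],   # was this edited?
            "ai_involved": False,                # was AI involved?
            "toolchain": ["camera firmware 1.2", "photo editor 7.0"],  # what touched it?
        }
        credential = issue_credential("photo.jpg", manifest)  # any local file works here
        print(verify_credential("photo.jpg", credential))

The point of the exercise is that the label’s answers become checkable claims about one specific file, rather than assertions a platform has to take on faith.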

Why Adobe and Google keep talking about C2PA

As moderator, Andy Parsons outlined the coalition’s origin story: a multi-company effort, nearly five years in the making, intended to create a global standard that is “ready to adopt”. The key selling point is not that it solves every problem, but that it creates a shared, interoperable foundation. In other words, even if provenance does not answer every hard question about deception, it can at least standardise the basics of “where did this come from, and what happened to it”.

Gail Kent echoed that idea from the perspective of a company that sits on both distribution and creation surfaces. She pointed to long-standing product features that revolve around understanding images: reverse image search, “about this image”-style context, and newer multimodal workflows. The core argument was that AI increases creative capability, but also makes manipulation easier, which raises the value of embedding reliable signals into content at the point of creation.

She described two broad approaches. One is a marker that identifies AI-generated content. The other is richer provenance via content credentials that carry information about how a piece of media was created and edited. The subtext is important: labelling “AI-made” is not enough on its own, because the interesting questions are often about what changed, by whom, and whether the context is being misrepresented, not merely whether a model was used at some step.
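The difference between those two approaches is easier to see as data. The sketch below is purely illustrative, with invented field names rather than any real labelling or credential format: a bare marker can only say that AI was involved somewhere, while a provenance record can say which steps were AI-assisted.

    # Two ways to signal provenance, sketched as invented data shapes.

    # Approach 1: a bare marker. It says a model was involved, and nothing else.
    ai_marker = {"ai_generated": True}

    # Approach 2: a richer credential that records what happened, step by step.
    content_credential = {
        "created_by": "smartphone camera",
        "actions": [
            {"action": "captured", "tool": "camera app", "ai": False},
            {"action": "background_replaced", "tool": "generative editor", "ai": True},
            {"action": "resized", "tool": "photo editor", "ai": False},
        ],
    }

    def summarise(record: dict) -> str:
        """What could a viewer actually learn from each approach?"""
        if "actions" in record:
            ai_steps = [a["action"] for a in record["actions"] if a["ai"]]
            return f"AI was used for: {ai_steps}" if ai_steps else "no AI steps recorded"
        return "AI was involved somewhere" if record.get("ai_generated") else "no AI flag"

    print(summarise(ai_marker))           # AI was involved somewhere
    print(summarise(content_credential))  # AI was used for: ['background_replaced']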

Kent also stressed that AI-created does not automatically mean untrustworthy, a point that subtly pushes back against a future where “AI label” becomes a scarlet letter. In a world where AI tools are baked into mainstream apps and devices, the label has to communicate context without implying that the content is necessarily false.

The session’s most candid moment: “It’s not a silver bullet”

The panel’s most repeated phrase, in different ways, was that none of these systems are perfect. Sameer Boray was direct: C2PA is “a solution”, not “the solution”. He placed it in a wider toolbox that includes watermarking, human review, and other provenance methods. This is not just a rhetorical hedge. It is a practical admission that any one mechanism will have failure modes, especially once content starts getting screen-recorded, re-encoded, shared across messaging apps, or remixed in ways that strip metadata.

Even Andy Parsons leaned into this realism: provenance is foundational, not magical. The value is in improving the baseline, making it easier for platforms, governments, and users to make better decisions with more information than they have today.

India’s regulatory moment, and the collision between ambition and deployment

A key undercurrent through the discussion was that India is in a particularly intense moment for digital regulation, with proposals that touch AI governance, privacy, and platform rules. The panel did not frame this as India acting in isolation, but as part of a global wave, with similar conversations happening in Europe, the US, and elsewhere.

Still, India’s scale, linguistic diversity, and mobile-first internet make the implementation question sharper. If policy is written as if every surface can instantly show provenance labels and verify credentials, it risks becoming performative, or worse, unworkable.

Boray raised the obvious concern in the context of a tight implementation window: provenance is not uniformly supported across major platforms, and it is particularly hard to enforce in private, closed, high-velocity sharing environments. He argued for a phased approach that gathers stakeholders and maps what is technically realistic, rather than assuming that a standard can be switched on everywhere at once.

This is where the panel’s tone became more pragmatic than ideological. Nobody was denying responsibility. The debate was about sequencing, feasibility, and the difference between a regulation that looks good on paper and one that can actually be complied with in a heterogeneous ecosystem of devices, apps, and user behaviours.

The government’s centre of gravity: citizens carry the risk

Deepak Goyal gave the clearest articulation of what is at stake from a governance standpoint. In his framing, the “risk bearer” is not primarily the platform. It is the individual whose likeness is cloned, whose voice is synthesised, whose decisions are manipulated, and whose credibility is undermined.

That view reorients the synthetic media debate away from abstract arguments about misinformation and towards more immediate harms: impersonation, fraud, coercion, and reputational damage. It also leads naturally to a rights-based framing: the right to know, the right to protection against impersonation, and the right to remedy when harm occurs.

He also emphasised a familiar regulatory instinct: being technology-agnostic, and ideally purpose-agnostic, while still aiming for citizen empowerment and ease of doing business. The suggestion was that if regulation sets principle-based outcomes rather than prescribing a specific technical approach, it is more likely to scale and more likely to align globally.

There was a revealing aside too: Goyal said he would like to test provenance tooling himself if given access. It sounds small, but it hints at a recurring problem in tech regulation, namely that the people tasked with making rules often do not get hands-on exposure to how the tooling behaves in real workflows.

The hard part the panel kept circling: virality and private sharing

One of the cleanest distinctions raised was between creation and dissemination. A person can create synthetic content and share it privately without triggering public harm. The societal risk appears when the content becomes viral, amplified, and detached from the context of its creation.

That is exactly where provenance struggles today. Metadata can be stripped. Content can be re-shared as screenshots. Audio can be re-recorded. Video can be re-encoded. Messaging apps can be the fastest path, and also the hardest place to attach, preserve, and display context.
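A rough sketch of why that matters for verification, again using the invented shapes from the earlier example rather than the real C2PA tooling: once a screenshot or re-encode strips or breaks the credential, a checker can only report that provenance is missing or no longer matches, which is not the same as proving the content is fake.

    import hashlib
    from typing import Optional

    def inspect(asset_bytes: bytes, credential: Optional[dict]) -> str:
        """What a checker can honestly say about re-shared content."""
        if credential is None:
            # Screenshots, re-recordings and many messaging apps drop the credential
            # entirely; absence tells the viewer nothing either way.
            return "no provenance available (not evidence of manipulation)"
        if hashlib.sha256(asset_bytes).hexdigest() != credential["asset_sha256"]:
            # Re-encoding or editing changes the bytes, so the binding breaks even
            # if the manifest itself survived the journey.
            return "credential present, but it no longer matches these bytes"
        return "credential matches: the manifest describes this exact file"

    original = b"original image bytes"
    credential = {"asset_sha256": hashlib.sha256(original).hexdigest()}

    print(inspect(original, credential))                       # matches
    print(inspect(b"re-encoded image bytes", credential))      # binding broken
    print(inspect(b"screenshot, credential stripped", None))   # nothing to check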

The panel did not claim to have a perfect fix for this. Instead, it leaned into a more layered approach: provenance signals where possible, other markers where needed, and user-facing interfaces that make context understandable in the moment people are deciding whether to trust and share.

The three-way model: companies, government, users

Kent laid out a simple but consequential framework: trust is a shared ecosystem. Companies need to build and ship the tooling. Government needs to set principled goals and create workable rules. Users need media literacy, because no cryptographic system can replace human judgement in every situation.

This is also where a subtle warning surfaced: a trust ecosystem cannot be built solely through compliance. If the labels are confusing, if they stigmatise legitimate content, or if they are inconsistently applied, users will ignore them. In that sense, UX and literacy become as important as cryptography.

Kent’s personal example of a parent forwarding questionable information is familiar, but the point was less about family dynamics and more about scale: a world where every user requires personalised verification help is not a world where trust has been solved.

Can the world avoid a patchwork of incompatible rules?

The panel ended on a question that hangs over nearly every major tech policy conversation today: how to avoid a fragmented ecosystem where each jurisdiction sets slightly different requirements, forcing global products into inconsistent behaviours and reducing the chance of universal adoption.

All three voices who answered converged on principle-based regulation as the most realistic path to compatibility. If laws specify goals such as transparency, security, and privacy preservation, and avoid mandating a single implementation, the industry has room to innovate and standardise. If laws become prescriptive, convergence becomes harder, and fragmentation becomes more likely.

Boray added a policy practitioner’s caution: synthetic media intersects with privacy, cybersecurity, and AI governance, not only “content moderation”. Treating it as a narrow issue risks contradictory rules, even within the same country.

What the panel ultimately agreed on, without calling it agreement

Despite the surface-level disagreement over whether provenance is merely “a solution” or something closer to a foundational one, the session converged on a few practical truths.

First, trust is becoming a prerequisite infrastructure layer for digital life, not a nice-to-have. Second, provenance standards are attractive because they are interoperable and, in theory, verifiable without relying on any single platform’s claims. Third, no single mechanism will solve virality, impersonation, and manipulation, so the future will be layered: provenance plus other markers, plus interfaces, plus education, plus remedies when harm occurs.

Most importantly, the panel treated implementation as the real battleground. Not because policy is irrelevant, but because every promise about trust is only as good as what survives the messy journey from creation tools to phones to feeds to forwards.

If synthetic media is the new normal, then “trust in the age of synthetic media” stops being a slogan and starts looking like systems engineering: standards, incentives, and user comprehension, all moving together, or not moving at all.

Mithun Mohandas
