Iran-Israel-US war: AI images and videos intensifying fog of war in Middle East
As missiles fly across the Middle East, a parallel war over information is being fought on screens and artificial intelligence is making it harder to know what is real. The New York Times has identified a plethora of AI-generated photos and videos misrepresenting battlefield events in the ongoing conflict, adding to evidence that generative AI has become a tool of modern warfare.
Trump’s claims and the complications
U.S. President Donald Trump on Sunday accused Iran of deploying AI as a “disinformation weapon,” citing fabricated images of Iranian kamikaze boats that “do not exist.” But Reuters has verified footage from Iraq’s Basra port showing fuel tankers under attack by Iranian boats. The boats exist, though how much of the specific viral imagery has been manipulated remains contested. Even when the event is real, synthetic imagery can distort its scale, context, and consequence, and in a high-speed information environment, the distortion travels far faster than the correction.

Trump’s disinformation accusation would carry more weight if his administration were not running its own version of information control. While the president calls out Iranian AI propaganda, his FCC chairman has threatened to revoke broadcast licenses over war coverage he deems incorrect. Controlling the narrative and policing disinformation can look identical depending on which side of the border you are standing on.
Mass hysteria in the age of deepfakes
The risks extend well beyond bad headlines. A convincing deepfake of a nuclear facility under attack, a capital city on fire, or a leader assassinated could trigger real-world panic and escalation before a single verification is complete. Markets would move. Governments would face pressure to respond. Populations with no way to assess what they are seeing would fill the gap with fear.
This is not hypothetical: during the Russia-Ukraine war, unverified footage shaped public perception before ground truth could be established. Today’s AI tools are a generation more capable than those of 2022, and this conflict carries nuclear stakes. The margin for a catastrophic misread is thin.
Children are particularly at risk. Younger audiences consuming war content through TikTok and Instagram have grown up in an environment where synthetic imagery is normal, and in geopolitical conflicts the line between entertainment and news thins further. Early research suggests exposure to unverifiable conflict content produces measurable anxiety and distorted threat perception in minors, yet platforms have no specific protections in place, and media literacy education has not kept pace with the technology.
A governance vacuum
If a deepfake triggers a military response or a market crash, no jurisdiction has a clear answer on liability. Synthetic media in conflict arguably should fall under the laws of armed conflict, but no such framework exists. Mandatory watermarking remains the most discussed fix, but open-source models and non-compliant states make enforcement a distant prospect. Major social media platforms like X, Instagram, Reddit and YouTube have no credible wartime protocol: their moderation systems were built for peacetime, and adversarial actors are moving faster. The fog of war has always been a feature of conflict. AI is making it denser, and the systems that might cut through it are not yet ready.
Vyom Ramani
A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.