India’s deepfake crisis: Women are falling prey to the AI menace far more than men

When a morphed video of actor Rashmika Mandanna went viral in late 2023, it forced India to confront an uncomfortable reality – artificial intelligence had quietly become one of the most potent weapons against women. The clip, which superimposed her face onto another woman’s body, reached millions of views before platforms could act. Prime Minister Modi called it a “crisis.” It was, in fact, just the beginning.


Two years on, the numbers tell a damning story. A new report by AI safety firm pi-labs reveals that 93% of deepfake victims globally are women, with a massive 900% rise in non-consensual synthetic content targeting them in recent years. In India, cybercrime complaints involving women have jumped from roughly 50,000 in 2024 to nearly 80,000 by 2026, a 60% hike in just two years.

What makes this crisis distinctly gendered is not just who is targeted, but how. Image morphing and explicit deepfake videos dominate the abuse landscape, with deepfake pornography among the most frequently produced synthetic content online. Victims range from school-aged girls to young professionals, primarily between 18 and 30 years old, with Bengaluru emerging as a hotspot that accounts for nearly 30% of reported cases.

Yet the most chilling statistic is that 62% of deepfake abuse cases involving women go unreported. Victims often remain silent due to stigma. Over a third of Indian women who experience online harassment take no action at all, and many just quietly shrink their digital presence rather than fight back. Around 33% aren’t even aware of the laws that exist to protect them.


The abuse is no longer limited to celebrities. In one widely reported case, an ex-partner used AI tools to create an explicit Instagram profile using a woman’s face, which amassed 1.4 million followers before it was detected. The industrialisation of this abuse, powered by more than 5,000 face-swap tools and 1,000 voice-cloning applications currently accessible online, means ordinary women are now just as vulnerable as public figures.

The Grok controversy in early 2026 laid the scale bare. When users on X exploited the AI image tool to generate sexualised versions of women’s photos at an estimated 6,700 images per hour, India’s IT ministry issued a 72-hour ultimatum to the platform, and MP Priyanka Chaturvedi filed a formal complaint.

India has begun responding. Amended IT Rules now require AI-generated content to carry watermarks, and several celebrities – from Aishwarya Rai Bachchan to NTR Jr. – have won court orders against deepfake creators. But legal remedies remain largely inaccessible to the women who need them most.

The pi-labs report concludes with a sobering prescription: reduce your digital footprint. For a generation that has grown up online, that is less a solution than a surrender.


Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.