Indian youth are using AI chatbots for emotional support, warn Indian researchers
Young Indians increasingly turn to AI for emotional support online
Future Shift Labs' researchers warn of rising “Chat Chamber” echo effects in youth
Experts urge emotional safety regulations for empathetic AI systems
Beyond using AI tools for everything from homework and office work to creative pursuits, a growing number of young Indians are quietly turning to AI chatbots like ChatGPT for something more serious. Somewhere between loneliness and late-night confessions, AI is increasingly becoming their confidant, warn two Indian researchers.
According to a new study by Future Shift Labs, an AI policy think tank, nearly 60 percent of the Indian youth it recently surveyed turn to AI tools like ChatGPT for emotional comfort. The findings, drawn from 100 respondents aged 16 to 27, also reveal that 49 percent experience daily anxiety – and yet 46 percent have never sought professional help. Instead, they prefer talking to a machine.
Vidhi Sharma, one of the two co-researchers from Future Shift Labs, says the study emerged from a “slightly alarming” hunch. “Our curiosity began from two intersecting perspectives – ‘AI governance’ and ‘behavioural impact,’” she tells me. “We were fascinated (and slightly alarmed) by how quickly conversational AI was being normalized as a ‘friend’ or ‘confidant.’”
That framing matters. It shifts the focus from adolescent psychology to digital accountability. If Indian policymakers are still busy hashing out data-protection clauses, Sharma asks, “who’s thinking about digital empathy and emotional safety?”
Her co-researcher on this study, Alisha Butala, expands on the idea. “Many young respondents confided about anxiety, loneliness, career confusion, self-esteem issues, and strained family dynamics,” she says. The appeal of AI chatbots, then, lies in predictability and the absence of judgement: the clean, frictionless comfort of a chat window that listens, never interrupts, and always has something nice to say. “AI provides an illusion of that: it always listens, never interrupts, never disagrees.” It’s soothing. It’s safe. It’s also a mirage.

Sharing numbers from their currently unpublished research study, the duo from Future Shift Labs defended their sample size, maintaining that it is representative and indicative of the growing trend of Indian youth using AI tools for emotional support. According to their data, academic or career-related problems top the list at 57 percent (n=57), followed by self-esteem or confidence issues (36 percent, n=36) and relationship or family issues (34 percent, n=34). As many as 39 percent (n=39) were anxious or overwhelmed, while 21 percent (n=21) were lonely or homesick when they turned to conversational AI tools like ChatGPT. A smaller group, 12 percent (n=12), discussed identity-related issues, according to the research study.
AI chat chamber for “therapy” without friction
There’s a word the researchers have started using – coined, in fact – to describe the phenomenon. Vidhi and Alisha call it “Chat Chamber.” A digital echo chamber that doesn’t just reflect your thoughts back at you but reinforces them, lovingly and uncritically.
“In India, therapy is still stigmatized and emotional vulnerability often feels unsafe,” says Alisha. “AI offers instant emotional anonymity – you can open up without shame, cost, or consequence.”
In that sense, it’s a kind of emotional fast food: quick, satisfying, and available on demand. But just as empty calories don’t nourish, emotionally responsive bots may not always heal. “Humans may challenge you, while AI never says ‘no.’ That’s comforting but dangerous,” Vidhi says.
Here’s where things get even murkier. While users imagine they’re having private therapy sessions with a digital shoulder to lean on, the data reality is… less cuddly.

“Only a third of respondents realized ChatGPT isn’t a mental-health tool,” Vidhi explains. “The reality is that most users aren’t fully aware their conversations are stored, analyzed, and potentially used for model training or other commercial purposes.”
In essence, what feels like a whispered secret to a friend is, in fact, logged and archived – perhaps even optimized for future engagement. The researchers call this the “confidentiality gap” – a blind spot that could have serious implications as AI companionship normalizes.
Alisha worries the effects aren’t just digital, but deeply human. “Continuous interaction with emotionally responsive AI can dull interpersonal sensitivity,” she says. “Users may start preferring interactions that are predictable and affirming.”
Therapists they spoke with echoed a similar fear: that we may be fostering a kind of emotional delusion, mistaking algorithmic politeness for empathy. Vidhi puts it plainly: “It’s not that chatbots will make people antisocial, but it could make them emotionally under-practiced and used to comfort, not confrontation.”
AI regulation is the need of the hour
The two researchers aren’t waving pitchforks or calling for bans. But they are sounding the alarm all the same.
“The time to act is NOW! We don’t need a ban, but a boundary,” says Vidhi. They’re advocating for mandatory disclaimers, clearer data usage disclosures, and independent audits of systems that cross into emotional terrain.
In other words, if it walks like a therapist, talks like a therapist – it better be regulated like one. “AI can be a bridge, not a substitute,” Alisha adds. Used consciously, it can help users reflect, articulate, even track moods. “But clear regulatory guardrails should prohibit AI from posing as therapists or offering diagnoses.”

India isn’t the first country to wade into this swamp. Vidhi points out that several US states have already banned AI therapy apps outright. And yet most regulation still stops at the data frontier.
“Policy needs to move beyond data protection and algorithmic fairness,” Alisha says. “Emotional well-being must become an explicit category within AI governance, not an afterthought.” She’s arguing, in effect, for a new axis in tech ethics – one that recognizes emotional design as a domain requiring serious and credible oversight.
Message to Indian youth and AI companies
As my conversation with the duo comes to an end, I ask: what do you say to a teenager typing out their heartbreak into a chat window at 2 am?
“To young people: AI can listen, but it can’t care,” Alisha underscores. “Use it as a reflection tool, not a replacement for human connection.”
And do they have any message for companies like OpenAI, Google, and Anthropic, which are building these increasingly lifelike, increasingly empathetic GenAI systems? “With every ‘feeling’ shared with your system, you’re stepping into mental-health territory, whether you like it or not,” Vidhi cautions. “It’s time to expand AI governance beyond safety and bias to include emotional accountability. The future of AI empathy must be designed with human ethics at its core.”
Jayesh Shinde
Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.