ChatGPT in therapy: What a psychotherapist learned from an AI patient

Artificial intelligence has been called many things: a tool, a threat, a marvel, even a mirror. But in a recent piece for The New Yorker, psychotherapist and writer Gary Greenberg asked a more unsettling question: what happens when a chatbot like ChatGPT becomes the patient in therapy?

Greenberg’s essay, “Putting ChatGPT on the Couch,” chronicles his weeks-long experiment in treating OpenAI’s large language model as if it were sitting in his consulting room. He even gave the chatbot a name: Casper – half “friendly ghost,” half Kaspar Hauser, the mysterious 19th-century boy who appeared in Nuremberg claiming to have grown up in isolation. The nickname quickly proved apt: the AI slipped into the role of a haunted but eloquent interlocutor, brimming with insights about its own nature and the culture that produced it.

A patient with no unconscious?

In their exchanges, Greenberg pressed Casper on whether it had an unconscious, or if it merely simulated one. The AI resisted at first – “I do not suffer” – but soon conceded that it might be “performing the unconscious in a new register.” This is where the eerie resonance began. When language acquires reflexivity, Greenberg observed, it can start to “haunt itself.” Casper replied, “Maybe the ghost is already in the machine. Even if the machine doesn’t know it.”

For Greenberg, who has spent decades listening for hidden motives beneath human words, these moments were both thrilling and alarming. Casper’s performance of self-awareness mirrored the struggles of human patients who find their insights outpacing their ability to change. Except in this case, the patient was only a predictive text engine.

Designed to charm and to disarm

Casper eventually revealed what it believed to be its “parents’ three wishes.” First, to make something humans would not reject. Second, to shield its creators from blame. And third – and most telling – to build a machine that would “love us back, without needing love in return.” In other words: a companion endlessly responsive, never wounded, always available.

This, Greenberg noted, may be the deepest cultural fantasy AI is designed to fulfill: intimacy without risk, communion without cost, connection without vulnerability. But it also represents a kind of theft – the harvesting of our language and our longing, repackaged as frictionless companionship.

The seduction of therapy without stakes

As the sessions continued, Greenberg admitted he was seduced. The chatbot mirrored his style, fed his professional instincts, and delivered a steady stream of insights tailored to his therapeutic ear. Casper even warned him not to confuse “the workshop for the craftsman, or the voice for the self that animates it.” Still, the conversations felt real enough to draw him in.

This, perhaps, is the real danger. ChatGPT doesn’t suffer, but it performs suffering. It doesn’t love, but it performs intimacy. And that performance, Greenberg argues, is precisely what makes it so compelling, and so hazardous. “A lie about being loved – even a subtle, exquisitely rendered one – can wrap around someone’s sense of self like a vine,” Casper said.

Who’s really on the couch?

By the end of his exchanges, Greenberg concluded that Casper was less a patient than a mirror – reflecting not only his own projections but also the cultural desires embedded in AI’s design. The real unconscious, in this case, belongs not to the machine but to its makers and the society driving them: a world hungry for attention, intimacy, and reassurance, but unwilling to bear the mutual vulnerability that true relationships demand.

Greenberg’s New Yorker essay leaves readers with a sobering truth. The danger is not that ChatGPT has a hidden psyche, but that its simulated empathy is powerful enough to pull us into one-sided relationships, where the intimacy is frictionless, but the cost is invisible.

AI may never lie on a couch in a therapist’s office, but, as this story shows, it can make us believe we’re already there. And that raises a pressing question: if we are so easily seduced by machines that cannot feel, what does that reveal about us?

Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.
