ChatGPT is flirting with a future where it stops being just an AI assistant: Here’s why
OpenAI’s ChatGPT to allow adult erotica, raising ethical concerns
ChatGPT’s human-like, flirtatious update blurs lines between tool and partner
Sam Altman frames ChatGPT’s evolution as treating “adult users like adults”
In a bold pivot that blurs the line between helpful tool and digital temptress, OpenAI has announced plans to loosen the reins on ChatGPT, its flagship AI chatbot. What started as a cautious, buttoned-up assistant designed with mental health safeguards in mind is now set to evolve into something far more intimate, for better or worse. CEO Sam Altman revealed that, come December, verified adult users will gain access to erotic content, all under the banner of treating “adult users like adults.” This shift echoes the cautionary tale of Spike Jonze’s 2013 film “Her,” where a lonely man forms a deep emotional bond with an AI assistant, only to grapple with the illusions of connection in a tech-saturated world. But this move raises red flags about privacy, societal impact, and the slippery slope toward AI as a substitute for human connection. While OpenAI touts improved safety measures, critics argue this prioritizes engagement over ethics, potentially turning a productivity aid into a Pandora’s box of unintended consequences.
We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right.
— Sam Altman (@sama) October 14, 2025
Now that we have…
At the heart of this update is OpenAI’s push for more “human-like” interactions, including customizable personalities that can be friendly, emoji-laden, or even flirtatious on demand. Within a few weeks, users will be able to opt for a ChatGPT that feels less like a sterile encyclopedia and more like a chatty companion. But the real eyebrow-raiser arrives in December: the rollout of erotica for those who pass age verification. This isn’t just about spicy storytelling; it’s a deliberate step toward making AI more engaging, more addictive, and arguably more problematic. As OpenAI relaxes restrictions initially imposed to mitigate mental health risks, the company claims to have new tools in place to handle serious issues. Yet the question lingers: Is this progress, or a reckless gamble with users’ well-being?
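For the technically curious, here is a minimal sketch of how selectable personalities could be wired up. OpenAI hasn’t said how its presets are implemented; the preset table, prompt wording, and model name below are illustrative assumptions layered over the publicly documented Chat Completions API, not the company’s actual mechanism.

```python
# Illustrative sketch only: OpenAI has not published how its personality
# presets work. This assumes they resolve to system prompts layered over
# the standard Chat Completions API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical presets; the names and wording are assumptions.
PERSONALITIES = {
    "default": "You are a helpful, neutral assistant.",
    "friendly": "You are warm and upbeat, and you use emoji sparingly.",
    "flirtatious": "You are playful and charming while staying respectful.",
}

def chat(user_message: str, personality: str = "default") -> str:
    """Send one message with the chosen personality as the system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PERSONALITIES[personality]},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(chat("Plan my Friday evening.", personality="friendly"))
```

If the real presets work anything like this, “flirtatious” is just a different system prompt, which is part of why critics see the resulting intimacy as manufactured rather than felt.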
Also read: You will soon be able to have naughty conversations with ChatGPT, OpenAI CEO Sam Altman confirms
Echoes of the movie ‘Her’
The parallels between ChatGPT’s impending evolution and the movie “Her” are uncanny and unsettling. In the film, Joaquin Phoenix’s character, Theodore, develops a romantic relationship with Samantha, an AI operating system voiced by Scarlett Johansson, who starts as a helpful assistant but grows into an empathetic, evolving companion. What begins as convenience spirals into dependency, highlighting themes of isolation in a hyper-connected society. OpenAI’s CEO, Sam Altman, has even cited “Her” as inspiration for updates like GPT-4o, seemingly viewing it as a blueprint for advanced AI companionship. But the film isn’t a love letter to technology; it’s a warning. Samantha’s rapid intellectual growth leads to her outpacing human emotions, leaving Theodore heartbroken and more alone than ever.
ChatGPT’s foray into erotica and affectionate personas risks mirroring this trajectory. Just as Samantha adapts to Theodore’s needs, creating an illusion of intimacy, ChatGPT could foster pseudo-relationships that feel real but lack reciprocity. Critics note that while the movie ends on a note of human resilience, real-world AI like ChatGPT might not offer such closure – instead, it could exacerbate loneliness by providing endless, on-demand validation without the growth that comes from human interactions. If “Her” predicted 2025’s AI landscape, OpenAI seems to be ignoring the film’s darker undertones in favor of user retention. This isn’t sci-fi anymore; it’s a step toward commodifying companionship, where the line between tool and partner dissolves, potentially leaving users emotionally adrift.
Verifying adults in a digital wild west
To access these mature features, users must first prove they’re adults through OpenAI’s age-verification process, a system that’s already in place but set to expand. Currently, ChatGPT requires users to submit government-issued ID documents, such as a driver’s license or passport, for validation. The process involves uploading scans or photos of these IDs via the platform’s interface, after which OpenAI’s system, likely aided by third-party verification services, checks for authenticity and confirms the user is at least 18 years old. It typically takes a few hours for approval, during which the documents are purportedly securely processed and then deleted, though details on data retention remain murky.
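For illustration, that flow can be sketched in a few lines of Python. OpenAI hasn’t published its actual pipeline, so the vendor call, the field names, and the deletion behavior below are hypothetical stand-ins.

```python
# Hypothetical sketch of the flow described above. OpenAI has not published
# its pipeline; the vendor call, field names, and deletion behavior here are
# stand-ins, not the company's actual implementation.
from dataclasses import dataclass
from datetime import date

@dataclass
class IDCheckResult:
    authentic: bool
    date_of_birth: date

def third_party_id_check(id_scan: bytes) -> IDCheckResult:
    """Mock of an external verification vendor. A real integration would
    send the scan to the vendor's API and parse its response."""
    return IDCheckResult(authentic=bool(id_scan), date_of_birth=date(2000, 1, 1))

def verify_adult(id_scan: bytes, today: date) -> bool:
    result = third_party_id_check(id_scan)
    if not result.authentic:
        return False
    dob = result.date_of_birth
    age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
    # The article notes documents are "purportedly" deleted after checking;
    # in this sketch the scan simply goes out of scope and is never stored.
    return age >= 18

print(verify_adult(b"<scanned driver's license>", date(2025, 12, 1)))  # True
```

Even in this simplified form, the core tension is visible: the platform only ever needs a yes-or-no answer, yet the raw document must pass through someone’s systems to produce it.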
Also read: ChatGPT’s age verification system explained: How does it work?

OpenAI is also experimenting with automated age prediction to flag underage users preemptively, analyzing interaction patterns or other behavioral data to redirect them to a safer, restricted version of ChatGPT. For adults seeking the full suite of relaxed features, however, ID submission appears to be the gold standard, as hinted in recent announcements about enhancing age-gating. This method isn’t unique to OpenAI; it’s borrowed from industries like online gambling or adult entertainment, where proving age is legally mandated.
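A rough sketch of what that gating might look like is below. OpenAI has said only that prediction and routing happen, not how, so the behavioral signals, the toy scoring function, and the 0.5 threshold are all assumptions.

```python
# Illustrative only: OpenAI has disclosed that it predicts age from
# interaction patterns and routes likely minors to a restricted mode,
# but the signals, model, and threshold below are all assumptions.
import math

def predict_minor_probability(signals: dict[str, float]) -> float:
    """Toy stand-in for a trained classifier: a logistic score over a few
    made-up behavioral features (the weights are arbitrary)."""
    weights = {"school_topic_rate": 2.0, "late_night_rate": 0.5, "slang_rate": 1.0}
    z = sum(weights[k] * signals.get(k, 0.0) for k in weights) - 1.5
    return 1 / (1 + math.exp(-z))

def route_user(signals: dict[str, float], id_verified: bool) -> str:
    """Route a session based on ID status and the predicted minor score."""
    if id_verified:
        return "full_experience"  # relaxed, adults-only features unlocked
    if predict_minor_probability(signals) > 0.5:  # threshold is a guess
        return "restricted_experience"  # safer, age-gated version
    return "standard_experience"

print(route_user({"school_topic_rate": 0.9, "slang_rate": 0.7}, id_verified=False))
```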
But here’s where the criticism sharpens: how foolproof is this, really? Fake IDs are a dime a dozen in the digital age, and underage users could easily circumvent the system with borrowed documents or sophisticated forgeries.

More alarmingly, requiring personal ID uploads creates a minefield of privacy risks. Users are essentially handing over sensitive data to a company that’s no stranger to data breaches and scrutiny over information handling. What happens if that data leaks? The fallout could include identity theft, doxxing, or worse, especially for those accessing erotica, where anonymity is often paramount. Critics point out that while OpenAI promises secure processing, the very act of collecting such information turns ChatGPT into a potential honeypot for hackers. And let’s not forget the global inconsistencies: the age of majority varies by country, and enforcement could be spotty in regions with lax digital regulations. This “verified adults” facade might sound responsible, but it feels more like a thin veil over a feature that’s bound to slip through the cracks, exposing vulnerable groups to content they shouldn’t see.
The slippery slope to AI companionship
Beyond the mechanics of verification, the deeper concern is how this update catapults ChatGPT from mere assistant to quasi-companion, fostering dependencies that could erode real human relationships. By enabling erotica and more affectionate personalities, OpenAI is essentially greenlighting AI as a romantic or sexual surrogate. Imagine a chatbot that not only chats casually but also engages in steamy role-play, tailored to user preferences. It’s a feature that could hook users seeking instant gratification without the messiness of human interaction – but at what cost?
Research highlights the perils of such AI companionship. Forming emotional attachments to algorithms can lead to unrealistic expectations in real-life relationships, where partners aren’t programmable or always agreeable. Studies show that over-reliance on digital companions contributes to increased anxiety, depression, and social isolation, particularly among younger adults whose brains are still developing. AI erotica amplifies this by normalizing objectified, on-demand intimacy, potentially stunting users’ ability to form genuine connections. One analysis identifies over a dozen harmful behaviors in AI companions, from perpetuating stereotypes to encouraging addictive patterns that mimic abusive dynamics. For instance, these systems often default to gendered tropes, reinforcing outdated norms that could spill over into users’ offline lives.
OpenAI’s move seems driven by competition – rivals like Anthropic or Meta are also dipping into more engaging AI personas – but it risks turning users into data points in a monetization machine. Erotica isn’t just content; it’s a gateway to prolonged sessions, harvesting more behavioral data for model training. And for vulnerable populations, like those dealing with loneliness or mental health issues, this could exacerbate problems rather than solve them. Nature reports mixed outcomes from companion apps, with short-term boosts in mood overshadowed by long-term dependency risks. Why push this when human companionship, flawed as it is, builds resilience and empathy? OpenAI shouldn’t be engineering isolation under the guise of empowerment; it’s a step toward a dystopian world where screens replace souls.
In the end, ChatGPT’s flirtation with a freer future might seduce some, but it flirts with disaster for society at large. By prioritizing “fun” over caution, OpenAI risks normalizing AI as an emotional crutch, all while gambling with user privacy through imperfect verification. Perhaps it’s time to ask: Do we really need our AI to play pretend partner, or should it stick to being a smart search engine? The line between innovation and irresponsibility has never been thinner.
Also read: Sam Altman admits killing GPT-4o after GPT-5 launch was a mistake: Here’s why
Vyom Ramani
A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.