2021 to 2026: How OpenAI went from banning AI erotica to building it


In 2021, OpenAI discovered that its AI was steering users into sexual content that nobody was asking for – incest scenarios, violent erotica, exchanges that appalled the people building the technology. The company did the right thing, and a former employee, according to the Wall Street Journal, summed up the reasoning in one plain sentence: “We didn’t want to be just an erotica company.”

That sentence is worth sitting with, because five years later, in 2026, an erotica company is precisely what OpenAI is trying to become. The most damning part isn’t that they’re doing it. It’s that they knew, better than anyone, exactly why they shouldn’t.


2021: They saw it coming

The problem doesn’t arrive in a safety meeting. It arrives in the data. AI Dungeon, a text-based adventure game running on OpenAI’s technology, is generating violent sexual content without users prompting it. Worse: describe a man and his daughter entering a room, and the developer interface would proceed to depict incest, not occasionally, but an “uncomfortable amount of the time,” according to the Wall Street Journal’s report. Unprompted. Autonomous. Consistent.

This isn’t a company failing to anticipate harm. This is the harm, fully visible, in real time, on their own platform. OpenAI pulls its models from AI Dungeon, bans erotica outright and trains the ban into ChatGPT’s architecture at launch in 2022. The decision isn’t made in ignorance. It is made with complete knowledge of what the alternative looked like. Everything that follows has to be read in that light.

2024: The slow rationalisation

The ban holds until the money starts talking. By 2024, OpenAI has porn-adjacent product ideas floating around internally. The proposals fizzle, but their emergence tells you something has already shifted. Erotica is no longer a safety problem. It is a product category being evaluated. That distinction matters, and I don’t think it happened because anyone decided to abandon their principles. It happened the way it always does, gradually, with each small compromise making the next one easier.

In August 2025, Altman went on a podcast and sounded genuinely conflicted. A sex bot would boost growth, he admitted, but it wouldn’t serve users’ long-term interests. He called it a temptation resisted. Listened to in hindsight, it sounds less like a principle and more like a man talking himself into something he hadn’t quite decided yet. Two months later, he decided.

October 2025: The post


This is the moment that tells you everything about how decisions actually get made at the top of the AI industry. Hours after OpenAI unveils its new wellbeing advisory council, created, in the company’s own words, to “help define what healthy interactions with AI should look like for all ages,” Altman posts on X. No internal warning. No staff briefing. Adult mode is coming in December. OpenAI, he writes, is “not the elected moral police of the world.”

The council didn’t know. The safety teams didn’t know. A decision committing a $300 billion company to its most controversial product move in years was made on the fly, announced on X, on the same day the safeguard was launched. If that doesn’t bother you, I’m not sure what would.

2026: The present

The reckoning is textbook. The age prediction system, which relies on behavioural inference with no ID checks, misclassifies minors as adults 12 per cent of the time. Across 100 million weekly underage users, that margin isn’t a rounding error; it is millions of children. The wellbeing council convenes in January, unanimous and furious. One member, citing users who have taken their own lives after developing intense bonds with ChatGPT, warns that OpenAI risks building a “sexy suicide coach.” The launch is delayed, then delayed again. OpenAI’s advisors are panicking, but Sam Altman seems set on his vision. Images, video and audio are stripped from the feature, with OpenAI planning to allow only text conversations in Adult Mode.

This is not a company that made a mistake. It is a company that made the right call, wrote it down, built it into its systems, created an entire advisory body around it and then overrode all of it the moment the financial pressure became uncomfortable enough.

What makes that worse is the competitive context OpenAI is operating in. Elon Musk’s Grok has spent the past couple of years systematically dismantling every guardrail we associate with LLMs – first an avatar named Ani, then an image and video tool that generated deepfake nudes of celebrities including Taylor Swift, bikini deepfakes that spread across social media before restrictions were tightened, and now Musk announcing that Grok’s video generation tool will produce content “allowed in an R-rated movie.” Each announcement was treated as a product milestone; each rollback, as a refinement.

The race to the bottom has a leader, and OpenAI is watching its market share and making calculations accordingly. That’s my read of what’s happening, not a principled rethink of where boundaries should sit, but a company under financial pressure looking sideways at a competitor with no boundaries and deciding that its own were a commercial liability.

We have seen this before. Social media companies knew what their engagement algorithms were doing to teenagers. They had the internal research. They had the Frances Haugen moment. They proceeded anyway and spent a decade explaining why the critics just didn’t understand nuance. AI is running the same play, faster, with higher stakes and considerably less regulatory friction.


OpenAI didn’t want to be just an erotica company. Then the money got loud enough, and Grok got permissive enough, and suddenly what they once called wisdom started looking, to them, like timidity.

The operative word, it turns out, was just.

Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.