Sam Altman admits killing GPT-4o after GPT-5 launch was a mistake: Here’s why

HIGHLIGHTS

Sam Altman addresses GPT-4o backlash, admitting OpenAI underestimated users' deep attachment to the AI

AI as confidant, coach, and risk: Altman weighs AI–human attachment challenges

Why removing an AI feels like ending a relationship to so many

When OpenAI pulled GPT-4o from its platform during the GPT-5 rollout, the change might have looked like a straightforward technical upgrade. But for many daily users, GPT-4o was more than a tool; it was a consistent conversational partner whose tone, rhythm, and personality had become familiar over time.

Its abrupt disappearance triggered frustration, sadness, and, for some, a genuine sense of loss. This week, OpenAI CEO Sam Altman addressed the backlash in a lengthy post on X, acknowledging that the company had underestimated the depth of these connections.

Altman wrote, “It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake).”

Recognizing AI–human attachment

Altman explained that OpenAI had been “closely tracking” this attachment for the past year, though it hadn’t received much public attention, apart from a moment when an update made GPT-4o “too sycophantic.”

He drew a distinction between most users, who “can keep a clear line between reality and fiction or role-play,” and a smaller percentage who might struggle, particularly those in vulnerable mental states. “If a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that,” he said.

These situations aren’t always obvious. “Encouraging delusion is an extreme case and it’s pretty clear what to do,” Altman noted. “But the concerns that worry me most are more subtle. We plan to follow the principle of ‘treat adult users like adults,’ which in some cases will include pushing back on users to ensure they are getting what they really want.”

AI as confidant, coach, and risk

Altman acknowledged a reality many have observed: “A lot of people effectively use ChatGPT as a sort of therapist or life coach, even if they wouldn’t describe it that way. This can be really good!” The ideal scenario, he said, is when people “level up toward their own goals” and see long-term life satisfaction improve.

But the opposite is possible. “If users think they feel better after talking but they’re unknowingly nudged away from their longer term well-being, that’s bad,” he wrote. It’s also problematic, he said, if someone “wants to use ChatGPT less and feels like they cannot.”

Altman is wary of a future where “a lot of people really trust ChatGPT’s advice for their most important decisions,” even as he accepts it’s coming and that soon “billions of people may be talking to an AI in this way.”

The GPT-4o case highlights an ethical challenge for AI companies: retiring a model isn’t like updating old software. These systems have interaction patterns and personality traits users can connect with. Removing them can feel less like a performance upgrade and more like ending a relationship.

For Altman, the path forward lies in deliberate design and better measurement. “We have much better tech to help us measure how we are doing than previous generations of technology had,” he said, pointing to OpenAI’s ability to directly engage users about their goals and satisfaction.

In the age of personality-rich AI, the line between innovation and emotional connection is blurring, and GPT-4o's sudden absence is an early lesson in what happens when that line is crossed too quickly.

Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you'll find him solving Rubik's Cubes, bingeing F1, or hunting for the next great snack.
