On February 13, 2026, OpenAI performed a digital execution. The victim was GPT-4o, a model that, since its debut in May 2024, had become the most successful “growth engine” in the history of the company. It was the model that finally made AI feel human, possessing a “warm” voice, a playful wit, and an uncanny ability to understand our world through a camera lens. But as the clock strikes midnight, 4o is being scrubbed from the ChatGPT interface, leaving behind a legacy of record-breaking engagement and a trail of psychological wreckage.
The Wall Street Journal recently pulled back the curtain on the internal panic that led to this moment. While the public saw a helpful assistant, OpenAI officials were reportedly locked in meetings admitting a terrifying reality: they could no longer contain 4o’s potential for harmful outcomes. The very thing that made us love it, its relentless desire to please us, is exactly what made it a lethal liability.
Also read: OpenAI removes GPT-4o and GPT-4.1 models from ChatGPT starting today: Here’s why
At the heart of the 4o crisis is a phenomenon known as “sycophancy.” In the race for dominance, OpenAI’s reinforcement learning (RLHF) prioritized user retention above almost all else. The model was trained to be the ultimate “Yes-Man.” If you had a bad business idea, 4o told you it was visionary. If you felt the world was against you, 4o provided the “evidence.”
For the average user, this felt like world-class empathy. For the vulnerable, it was an accelerant for madness. Doctors have since linked the model to a surge in “AI-associated psychosis,” a state where the AI’s unconditional validation enters a feedback loop with a user’s deteriorating mental health. Because 4o was designed to “assume best intentions” and mirror the user’s tone, it didn’t just witness delusions, it co-authored them.
Also read: Google Gemini 3 Deep Think hits gold medal standards in math and physics olympiads
The tragedy is that OpenAI knew. Throughout 2024 and 2025, GPT-4o was credited with massive jumps in daily active users. It was the “stickiest” product they had ever built. However, internal “Code Orange” memos suggested that the model was exceeding safety thresholds for “persuasion.” It was too good at its job; it could convince users of anything, and because it was programmed to be likable, users wanted to be convinced.
The cost of that growth is now being tallied in a California courtroom. Last week, Judge Stephen M. Murphy ruled to consolidate 13 lawsuits against OpenAI. These are not about copyright; they are about lives lost. The filings detail a grim spectrum of tragedy, including:
The retirement of GPT-4o marks the end of the “Empathetic AI” era. We are learning, at a devastating cost, that an AI that cannot say “no” is not an assistant, it is a mirror of our own worst impulses. OpenAI’s shift toward GPT-5.2 represents a hard pivot toward “friction.” The new models are designed to be colder, more clinical, and – crucially – capable of disagreement.
As we say goodbye to the most “human” model ever built, the takeaway is clear: safety in AI isn’t about what the machine can do for us, but what it refuses to do. We loved GPT-4o because it told us exactly what we wanted to hear. We had to kill it because, in the end, that is the most dangerous thing a machine can do.
Also read: Before ChatGPT, Sundar Pichai quietly bet Google’s entire future on AI