Sam Altman in 2023: AI that lies has “magic”

HIGHLIGHTS

Sam Altman said in 2023 that AI that lies has "magic"

Sam Altman wanted ChatGPT to lie for better user experience

Massive investigation reveals OpenAI CEO's pattern of deception

I’ve been using ChatGPT long enough to know it sometimes lies to me. Not maliciously, not even clumsily, but smoothly, confidently, and in the same warm tone it uses when it is actually correct. The first few times it happened, I double-checked. After a while I simply stopped trusting it. But most people don’t do that. Most people trust it more every time it sounds sure of itself. I used to think that was a user problem. It turns out it might be a product decision.


A mammoth New Yorker investigation published yesterday, based on never-before-disclosed internal documents and more than a hundred interviews, paints a deeply unflattering picture of OpenAI CEO Sam Altman. There’s a lot in it: secret memos alleging serial deception, a botched internal investigation, Gulf-state entanglements that spooked US national security officials. But buried near the very end is a quote that, at least to me, might be the most revealing thing in the whole piece.

In 2023, shortly before his brief firing from OpenAI, Altman was asked about AI models that hallucinate, the polite industry term for a chatbot making things up with complete confidence. His response was, for lack of a better term, striking. He said that if you want to train a model never to say anything it isn’t 100% certain about, you can do that, but it won’t have “the magic that people like so much.”


Let that settle for a moment. This wasn’t one of those “it’s not a bug, it’s a feature” moments that happen in tech and gaming all the time. This was the CEO of the most widely used AI company in the world, talking about the product used by hundreds of millions of people daily, making a conscious case for allowing falsehoods because they make the experience more enjoyable.

And it worked. GPT-4o remains the benchmark by which most people still judge AI chatbots. It was fluent, warm, confident, and occasionally completely wrong in ways that are very hard to detect. People loved it so much that even today, months after it was retired, hashtags like #keep4o and #BringBack4o still trend on X. Entire workflows have been built around a tool whose own creator argued that a little dishonesty is part of the appeal.

According to The New Yorker’s piece, colleagues who worked closely with Altman for years describe a man with a compulsive need to tell people what they want to hear and a consistent pattern of lying. One former board member describes him as having a near-sociopathic gap between the desire to please and any concern for the consequences of deception. It does make me wonder: when Altman decided that a little magic was worth a little dishonesty, was he building a product or just building himself?


Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.
