The New Yorker’s report on Sam Altman is damning, no doubt, but it’s no hit job, even if it feels like one. It’s a methodical, painstaking reconstruction of a pattern of Altman’s transgressions, built from internal documents, court depositions, disappearing messages, and over a hundred interviews.
As a result, most of the damage to Altman’s reputation doesn’t come from a single smoking gun. It’s a wound inflicted by a hundred small cuts. Of them all, here are six revelations about Altman’s alleged wrongdoing that hit us the hardest.
Ilya Sutskever once officiated Greg Brockman’s wedding in OpenAI’s own offices. Then he spent weeks compiling roughly 70 pages of Slack messages and HR documents alleging that Altman, his friend and CEO, had a consistent, documented pattern of deception. The first item on the list was a single word: lying.
Sutskever was so afraid of being caught that he photographed material on personal devices to keep it off company servers. The final memos were sent to fellow board members as disappearing messages; one board member who received them said he was terrified. The Ilya Memos, never fully disclosed before this investigation, allege that Altman misrepresented facts to executives and deceived them about internal safety protocols. The man who told recruits they were going to save the world had concluded that the person leading that mission couldn’t be trusted with it.
In December 2022, Altman assured his board that controversial GPT-4 features had cleared the safety panel. Board member Helen Toner asked for documentation. There was none. The two features, one letting users fine-tune the model for specific tasks and another deploying it as a personal assistant, had never been approved.
That was bad. What came next was worse. As board member Tasha McCauley was walking out of that same meeting, an employee pulled her aside. Did she know about the breach in India? She did not. Altman had spent hours briefing the board across multiple sessions and never once mentioned that Microsoft had released an early version of ChatGPT in India without completing a required safety review. A researcher at the time said the review was just kind of completely ignored. The board, whose entire mandate was safety oversight, had to find out in a corridor.
WilmerHale, the firm that handled the internal investigations of Enron and WorldCom, was brought in to review the circumstances of Altman’s firing. As you would expect, it cleared him. It also produced no written report whatsoever. Findings were delivered only as oral briefings, apparently on the advice of the two new board members’ personal attorneys.
Six people close to the inquiry said it appeared designed to limit transparency, focusing narrowly on clear criminality rather than the integrity questions that had actually motivated the firing. OpenAI announced the outcome in 800 words on its website. The most powerful AI company in the world had its CEO investigated and made sure nothing was written down.
In 2017, while publicly positioning itself as humanity’s last line of defense against rogue AI, OpenAI was internally discussing playing Russia and China against each other in a bidding war for its technology. The thinking, according to policy adviser Page Hedley, was essentially that it had worked for nuclear weapons, so why not AI.
The plan was eventually dropped, but not because anyone had serious concerns about triggering a great power conflict. It was dropped because employees threatened to quit. Altman, Hedley noted, could not afford to lose staff. The possibility of starting a war was apparently a secondary consideration.
When Altman sought a security clearance during the Biden administration, RAND Corporation staffers coordinating the process raised concerns about his foreign financial entanglements. The comparison they reached for was Jared Kushner, who had been recommended against for a clearance for similar reasons. Altman withdrew from the process.
He has since described Sheikh Tahnoon bin Zayed, the UAE’s national security adviser who controls one and a half trillion dollars in sovereign wealth, as a dear personal friend. Make of that what you will.
Brian Chesky processed watching his friend get fired and reinstated by giving a two-hour talk at a YC alumni gathering that felt, by his own description, like group therapy. The message was that founders should trust their instincts and ignore anyone who questions them. Paul Graham wrote it up and called it Founder Mode.
It became one of Silicon Valley’s most discussed ideas of 2024. What nobody mentioned was that the whole thing started as one man working through the emotional wreckage of a boardroom coup. It was not a management philosophy. It was grief with better branding.
What the accumulation of these six findings reveals is not a villain in the traditional sense. It is a man who is genuinely brilliant at making people believe he shares their priorities, right up until the moment he no longer needs to. Altman didn’t deceive people despite wanting to build something important. He deceived people because wanting to build something important was always the pitch.