Sam Altman on AI morality, ethics and finding God in ChatGPT

Updated on 15-Sep-2025
HIGHLIGHTS

Sam Altman denies AI is divine, but admits to personal spiritual uncertainty

ChatGPT reflects collective human morality, guided by evolving alignment frameworks

Deepfakes, biometrics, and AI influence pose growing societal and ethical risks

Look hard enough at an AI chatbot’s output, and it starts to look like scripture. At least, that’s the unsettling undercurrent of Sam Altman’s recent interview with Tucker Carlson – a 57-minute exchange that had everything from deepfakes to divine design, from moral AI frameworks to existential dread, even touching upon the tragic death of an OpenAI whistleblower. To his credit, Sam Altman – the man steering the most influential AI system on the planet, OpenAI’s ChatGPT – wasn’t evasive in his responses. He was honest, vulnerable, even contradictory at times. Which made his answers all the more illuminating.

“Do you believe in God?” Tucker Carlson asked, directly, without mincing words. “I think probably like most other people, I’m somewhat confused about this,” Sam Altman replied. “But I believe there is something bigger going on than… can be explained by physics.”

It’s the kind of answer you might expect from a quantum physicist or a sci-fi writer – not the CEO of a company that shapes how billions of people interact with knowledge. But that’s precisely what makes Altman’s quiet agnosticism so fascinating. He neither claims theistic certainty nor waves the flag of militant atheism. He simply admits he doesn’t know. And yet, he’s helping build the most powerful simulation engine for human cognition we’ve ever known.

Altman on ChatGPT and AI’s moral compass and religion

In another question, Tucker Carlson described ChatGPT’s output as having “the spark of life,” and suggested many users treat it as a kind of oracle.

“There’s something divine about this,” Carlson said. “There’s something bigger than the sum total of the human inputs… it’s a religion.”

Sam Altman didn’t flinch when he said, “No, there’s nothing to me at all that feels divine about it or spiritual in any way. But I am also, like, a tech nerd. And I kind of look at everything through that lens.”

It’s a revealing response. Because what happens when someone who sees the world as a system of probabilities and matrices starts programming “moral” decisions into the machines we consult more often than our friends, therapists, or priests?


Altman does not deny that ChatGPT reflects a moral structure – it has to, to some degree, purely in order to function. But he’s clear that this isn’t morality in the biblical sense.

“We’re training this to be like the collective of all of humanity,” he explains. “If we do our job right… some things we’ll feel really good about, some things that we’ll feel bad about. That’s all in there.”

This idea – that ChatGPT is the average of our moral selves, a statistical mean of our human knowledge pool – is both radical and terrifying. Because when you average out humanity’s ethical behaviour, do you necessarily get what’s true and just? Or something that’s more bland, crowd-sourced, and neither here nor there?

Altman admits this: “We do have to align it to behave one way or another… there are absolute bounds that we draw.” But who decides those bounds? OpenAI? Nation-states? Market forces? A default setting on a server in an obscure datacenter?

As Carlson rightly pressed, “Unless [the AI model] admits what it stands for… it guides us in a kind of stealthy way toward a conclusion we might not even know we’re reaching.” Altman’s answer to this was to front the “model spec” – a living document outlining intended behaviours and moral defaults. “We try to write this all out,” he said. “People do need to know.” It’s a start. But let’s not confuse documentation for philosophy.

Altman on privacy, biometrics, and AI’s war on reality

If AI becomes the mirror in which humanity stares long enough to worship itself, what happens when that mirror is fogged, gamed, or deepfaked?

Altman is clear-eyed about the risks: “These models are getting very good at bio… they could help us design biological weapons.” But his deeper fear is more subtle. “You have enough people talking to the same language model,” he observed, “and it actually does cause a change in societal scale behaviour.”

He gave the example of users adopting the model’s voice – its rhythm, its diction, even its overuse of em dashes. That’s not a glitch. That’s the first sign of culture being rewritten, adapting itself around a newly adopted technology.


On the subject of AI deepfakes, Altman was pragmatic: “We are rapidly heading to a world where… you have to really have some way to verify that you’re not being scammed.” He mentioned cryptographic signatures for political messages. Crisis code words for families. It all sounds like spycraft. But in a world where your child’s voice can be faked to drain your bank account, maybe it has to be.

What he resists, though, is mandatory biometric verification to use AI tools. “You should just be able to use ChatGPT from any computer,” he says.

That tension – between security and surveillance, authenticity and anonymity – will only grow sharper. In an AI-mediated world, proving you’re real might cost you your privacy.

What to make of Altman’s views on AI’s morality?

Watching Altman wrestle with the moral alignment and spiritual implications of (ChatGPT and) AI reminded me of Prometheus – not the Greek god, but the Ridley Scott movie. The one where humanity finally meets its maker only to find the maker just as confused as they were.

Sam Altman isn’t without flaws, no doubt. But while grappling with Tucker Carlson’s questions on AI’s morality, religiosity and ethics, he came across as largely thoughtful, conflicted, and arguably burdened. That doesn’t mean his creation isn’t dangerous.

The question is no longer whether AI will become godlike. The question is whether we’ve already started treating it like a god. And if so, what kind of faith we’re building around it. I don’t know if AI has a soul. But I know it has a style. And as of now, it’s ours. Let’s not give it more than that, shall we?


Jayesh Shinde

Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.
