Can religion help fix AI ethics or make it worse?

HIGHLIGHTS

OpenAI and Anthropic are consulting religious leaders on AI ethics

Faith can deepen AI morality, but risks institutional bias

Religion should contribute to AI ethics, not control it

For decades, tech companies have behaved as if every moral and social problem is secretly an engineering problem waiting to be solved. Don’t believe me? Look up techno-solutionism. AI is taking that to a whole new extreme, answering questions that were previously the exclusive domain of priests, parents, teachers, philosophers and late-night existential crises.

In this scenario, it was only a matter of time before religion entered the chat.

It’s no joke. Recent reporting suggests that OpenAI, Anthropic and others are seeking guidance from religious leaders and faith-based organisations on AI ethics. The most prominent example was the Faith-AI Covenant roundtable in New York, where OpenAI and Anthropic representatives met leaders from multiple religious traditions to discuss how moral perspectives should inform future AI development.

The Associated Press reports that the initiative was organised by the Interfaith Alliance for Safer Communities, and it wasn’t a one-off either, with more roundtables expected in Beijing, Nairobi and Abu Dhabi in the near future.


Anthropic has gone furthest in this direction, from what I could gather. According to The Washington Post, the company hosted around 15 Christian leaders in San Francisco in March 2026 to discuss Claude’s moral and spiritual compass, including grief, suicidal ideation and whether AI could have any morality baked in. This isn’t entirely new territory, either: as far back as 2020, the Vatican-backed Rome Call for AI Ethics had already framed AI around principles such as transparency, inclusion, responsibility, impartiality, reliability, security and privacy.

My instinctive reaction to this development is cautious optimism.

I think this is a good development, because AI ethics cannot be left to engineers, lawyers, safety researchers and for-profit tech companies alone. Religious traditions, for all their faults and contradictions, have spent centuries asking how humans should behave – as individuals and as societies.

Religion has often articulated what reckless power must not do, how the vulnerable should be protected, and why intelligence needs humility in equal measure. In countries like India, where faith is the daily operating system for tens of millions, excluding religious voices from AI ethics frameworks would be culturally illiterate – rendering those frameworks irrelevant to the local context.

But I also think extreme care is needed here. Religious perspectives aren’t simple to distill. Religion offers wisdom and comfort, but also hierarchy, exclusion and power. Which faith gets heard loudest? Which sect gets left out? Which interpretation is the right one? Which inconvenient minority gets politely excluded while “universal values” are drafted for the next AI ethics release? There are no easy answers to these questions.

The right role for religion in AI ethics is one of contribution: one voice among many, not the loudest voice above all. Theology cannot be compressed into corporate policy, and bolting it onto a model won’t make the machine moral. It will remain biased, just with better excuses.


Jayesh Shinde

Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.