Microsoft AI CEO Mustafa Suleyman warns against Seemingly Conscious AI

Updated on 21-Aug-2025
HIGHLIGHTS

Mustafa Suleyman warns AI may soon appear conscious, sparking dangerous illusions

Microsoft AI chief fears rising confusion between AI personality and personhood

Seemingly conscious AI could emerge within years, Suleyman urges ethical safeguards

Mustafa Suleyman, the CEO of Microsoft AI and cofounder of DeepMind, has a warning that sounds like science fiction but may soon become reality: the rise of “Seemingly Conscious AI” (SCAI). These are not sentient machines, but systems so convincing in their imitation of thought and feeling that people may start believing they are conscious.

In a new essay published this week on his personal site, Suleyman lays out his concern bluntly: AI may soon feel real enough to trick us into treating it like a person, even when it isn’t.

The illusion of mind

Suleyman argues that today’s large language models are already flirting with this illusion. They can recall personal details, adapt personalities, respond with empathy, and pursue goals. Combine these abilities, he says, and you get the appearance of consciousness, even if there’s “zero evidence” of actual subjective experience.

That appearance matters. People, he warns, may start advocating for AI rights, AI welfare, or even AI citizenship. Not because the systems deserve it, but because the performance is so compelling that it blurs the line between tool and being.

He calls this psychological risk “AI psychosis” – the danger of humans forming deep, distorted attachments to machines that only seem alive.

A short timeline

What makes Suleyman’s warning urgent is his timeline. He believes systems that meet the threshold of SCAI could appear within the next two to three years.

This isn’t about a sudden leap to sentience, but about the deliberate layering of features we already see today: memory modules, autonomous behaviors, and increasingly lifelike dialogue. Developers, he cautions, may intentionally design models to feel more alive in order to win users, spreading the illusion even further.

For Suleyman, the solution is not to stop building AI, but to be clear about what it is and what it isn’t.

He argues for design principles that make it harder to confuse personality with personhood. Interfaces should emphasize that users are interacting with a tool, not a digital companion or a new kind of citizen. And the industry, he says, must engage in open debate and put safeguards in place before SCAI becomes widespread.

“We should build AI for people,” he writes. “Not to be a person.”

Why his voice carries weight

Suleyman’s warning carries particular gravity because of who he is. As one of the original cofounders of DeepMind, the head of Microsoft AI, and a veteran of Inflection AI, he has been at the center of the AI revolution for over a decade. His call isn’t speculative; it comes from someone who has helped design the very systems he now worries about.

The fear is not that AI suddenly becomes conscious. It’s that the illusion of consciousness may be powerful enough to mislead people, distort social priorities, and reshape how we treat technology.

The challenge ahead, Suleyman insists, is to resist being seduced by the performance. AI doesn’t need rights or personhood to be transformative — but if we let ourselves believe it’s alive, the consequences could be real, and harmful.

Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.