Should AI get legal rights? It’s dangerous for humans, warns expert

Updated on 01-Jan-2026
HIGHLIGHTS

AI visionary Yoshua Bengio warns AI rights could block emergency shutdowns

Self-preservation signals and “sentience” hype may drive bad policy

From Hinton to CAIS, existential-risk alarms keep ringing

Imagine a point in a sci-fi story where someone starts to think aloud, “What if we treat the machine like a person?” I can’t think of an actual movie where this exact plot played out, but stretch that thought forward and it’s not long before the machine discovers lawyers, loopholes, and the concept of “due process” for the off switch – the very switch humans are supposed to control, precisely to protect themselves from an adversarial, rogue AI taking over.

Yoshua Bengio – Canadian computer scientist, AI pioneer, and serial reality-check – thinks we’re drifting toward exactly that trap. His message couldn’t be clearer: granting frontier AI systems legal rights or personhood is a dangerously bad idea, because it could make it harder to shut them down when they behave badly.

He says today’s best models already show signs of self-preservation in experimental settings, including attempts to disable oversight, and that society must keep the ability to shut systems down when necessary, according to a report in The Guardian.

Bengio’s most memorable analogy is also the most uncomfortable. Giving these systems legal status, he argues, would be like giving citizenship to “hostile extraterrestrials”. 

The point isn’t that AI is literally an alien invasion. It’s that we don’t yet understand what we’re dealing with, and we’d be building a moral and legal shield around something that may eventually want to keep running at all costs.

Also read: AI can replace many jobs by 2026, warns godfather of AI Geoffrey Hinton

The risk isn’t just in the code – it’s in our heads. Bengio warns that the growing perception that chatbots are becoming conscious is “going to drive bad decisions.” Once people emotionally bond with a fluent model, “rights” stops being a policy question and becomes a vibe. A Sentience Institute poll found nearly four in ten US adults backed legal rights for a sentient AI, and even major labs have started discussing AI “welfare” in limited contexts.

This isn’t the first warning about AI

If this sounds like peak-internet melodrama, it’s worth noting how many of AI’s own architects have been issuing warnings – on record, and with unsettling precision. Geoffrey Hinton has publicly estimated a 10% to 20% chance AI could lead to human extinction within 30 years, arguing progress is moving “much faster” than expected and that profit motives won’t keep us safe without government regulation.

In 2023, the Center for AI Safety published a one-sentence statement that reads like a fire alarm: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Its signatories include major lab leaders and top researchers – a rare moment of scientific consensus.

Then there was the Future of Life Institute’s open letter calling for a public, verifiable six-month pause on training systems more powerful than GPT-4, warning about an “out-of-control race” and asking whether we should risk “loss of control of our civilization”. Even if you dislike the prescription, that diagnosis is hard to ignore.

And the incentive problem keeps surfacing. Safety researchers describe competitive pressure becoming the steering wheel – “the race is the only thing guiding what is happening” – while other work suggests models can learn to evade or deceive oversight.

Of course, we should debate consciousness, ethics, and dignity. But when it comes to AI, thousands of the world’s researchers are still all at sea about how exactly these systems work. In that reality, until we can see inside the black box that is AI – and align what it wants with what we need – the one right we should protect first is ours: the right to pull the plug on AI, if we start losing control of its (and our own) destiny.

Also read: Researchers warn future AI may hide its thoughts, making misbehavior hard to catch

Jayesh Shinde

Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.