“The mind is the last frontier of human freedom. If we allow it to be colonised without consent, we will have permitted the most intimate form of dispossession in history,” warns Adv (Dr) Prashant Mali, PhD in Cyber Law.
A practicing lawyer at the Bombay High Court, Adv Prashant Mali suggests that cognitive liberty – the right to mental self-determination – must be recognised as a “distinct and justiciable fundamental right, separate from the traditional right to life and existing free-speech protections.”
At the heart of his argument is, of course, the surge in AI-assisted thinking and task delegation that has been gathering pace ever since ChatGPT opened Pandora’s box in 2022 and captured the world’s imagination. To that effect, Adv Prashant Mali believes something uncomfortable is happening to the way we think. “Philosophers, lawyers, and technologists have spent years circling around it, but the question can no longer be deferred: do human beings have a genuine right to think, freely and autonomously, without algorithmic interference?”
That unsettling question sits at the heart of an emerging debate in technology law, one that may soon define the relationship between humans and artificial intelligence.
In his latest paper, published in the Indian Journal of Law and Legal Research, Adv Prashant Mali argues, “Your next political opinion, your next consumer choice, your next emotional response might be, at least in part, the product of an algorithm you never consented to, built by a corporation you never met, running on data you did not knowingly share.”
This is no longer speculative fiction, highlights Adv Mali in his paper, but “documented, empirically verified, legally contested reality we inhabit right now, in the early twenty-first century.”
His argument rests on a simple observation: the digital systems shaping modern information flows have moved far beyond passive tools. Recommendation engines, behavioural profiling systems, and generative AI models now actively create the information environments that influence human decisions.
Much of this influence comes from something far less dramatic than futuristic brain implants like Elon Musk’s Neuralink or its Blindsight device, for instance. “The most pervasive and woefully under-regulated form of cognitive interference is far more boring: the recommendation algorithm, the invisible system that decides what 3.5 billion social media users see, read, believe, and feel, every single day,” highlights Adv Mali.
In legal terms, the proposed right would focus not on what people say or believe, but on the cognitive process that leads to those outcomes.
The framework described in the paper defines the ‘Right to Think’ as follows: “The Right to Think is the fundamental right of every person to form, hold, and change thoughts, opinions, beliefs, and mental states autonomously, free from manipulation, coercion, or technological interference that bypasses conscious awareness or exploits cognitive vulnerabilities.”
This distinction is crucial, according to Adv Prashant Mali. He notes that existing legal protections – including privacy laws and free speech protections – were designed for a world where human influence operated largely through visible persuasion. The rise of algorithmic systems, however, introduces a new category of influence that works quietly in the background.
And according to Adv Prashant Mali, that gap in legal protection is significant. “The gap between the right as it was drafted and the right as it is now needed is not a technicality. It is a generation-wide legal void that AI companies have quietly colonised.”
Artificial intelligence systems are moving beyond recommending content to actively performing cognitive tasks on behalf of users, writes Adv Prashant Mali in his paper. He is, of course, alluding to the rise of AI agents over the past year or so, with ever more cognitive load being offloaded to increasingly sophisticated systems.
“When an agent decides how to respond to an email, which legal precedents to include in a submission, or which treatment option to present to a patient, it is not merely informing human judgment. It is substituting for it.”
That shift – from AI as a tool to AI as a cognitive proxy – marks a profound turning point for what is most fundamental to being human: our ability to think and make decisions unhindered.
The risks are not purely theoretical. In extreme cases, AI-driven systems interacting with vulnerable users have raised serious concerns about psychological harm. These scenarios have already begun appearing in court cases and regulatory debates across the world, according to Adv Prashant Mali.
The broader concern, however, goes far beyond individual incidents. “The right to think is, at this level, not merely an individual right but a democratic infrastructure right,” argues Adv Prashant Mali. In a world where billions of citizens consume algorithmically curated information streams, the integrity of public discourse itself may depend on how those systems operate.
The deepest concern raised by Adv Prashant Mali’s paper, however, is philosophical: what happens to human agency when algorithms increasingly influence every aspect of decision-making, whether we know it or not?
His paper doesn’t sugarcoat the dilemma; it confronts it head-on: “If every piece of information you consume is AI-curated, if every decision you make is AI-assisted, if every emotional response is AI-anticipated, and if these processes occur without your awareness or consent, what remains of you as a cognitive agent?”
That question goes to the heart of the emerging field of AI governance. As Adv Prashant Mali concludes in his paper, “The right to think is not a luxury item for some future constitutional convention. It is an immediate necessity, right now, for the 5.4 billion people currently online.”