ChatGPT won’t give medical or legal advice? False, says OpenAI: Here’s why

HIGHLIGHTS

Rumours claiming ChatGPT banned legal and medical advice are wrong

OpenAI says chatbot behavior unchanged despite viral misinformation online

Policy update clarified existing rules, not new restrictions


For a few hours, the internet spiraled into panic over news that turned out to be false. Screenshots flooded Reddit, X (formerly Twitter), and Discord, all carrying the same uneasy message: “ChatGPT can no longer give medical or legal advice.” Users lamented that their once-chatty AI companion had suddenly gone quiet on some of life’s most serious questions. To many, it felt like the chatbot had “gone corporate,” replacing empathy with legalese. For a tool that millions rely on for everything from writing contracts to decoding blood test reports, this supposed silence felt oddly personal, like a friend who suddenly stopped picking up the phone.



But as it turns out, ChatGPT hadn’t taken a vow of silence at all. The truth, OpenAI says, is far less dramatic and far more about how humans interpret the tone of a machine.

A rumour that spread faster than the chatbot could respond

The confusion began, fittingly, with a handful of cropped screenshots. One user posted a conversation where ChatGPT refused to answer a question about a skin rash. Another tried asking about a legal dispute and got the same line: “I can’t provide medical or legal advice.” Within hours, social media labeled it a “policy change,” and the usual corners of the internet began buzzing with theories.

Some claimed OpenAI had bowed to legal pressure or regulatory oversight. Others saw it as a sign of tightening censorship – one more example of Silicon Valley “playing it safe.” But the reality was simpler. ChatGPT has always been designed to walk a careful line between being informative and being responsible. It can explain how the law works or how medical diagnoses happen, but it will never tell you which pill to take or which clause to invoke. Those boundaries have always existed; they just became more visible.

OpenAI clarifies: no new ban, only old boundaries

OpenAI quickly defused the situation. “There’s been no new change to our terms,” said Karan Singhal, head of health AI at OpenAI, emphasizing that ChatGPT continues to discuss legal and medical topics in an informational capacity. The shift users noticed might stem from ongoing fine-tuning following the October 29 update, which included a few minor safety changes meant to make the model’s responses more consistent and cautious.


That language in the update, while new in phrasing, closely mirrors the company’s previous policy, which already discouraged activities that could “impair the safety, wellbeing, or rights of others,” such as “providing tailored legal, medical/health, or financial advice without review by a qualified professional.”

The updated policy doesn’t mean ChatGPT will stop talking about law or medicine. It simply means that the chatbot will now couch such information in stronger disclaimers, reminding users to consult licensed professionals for any personal or high-stakes advice.

Trust, tone, and our relationship with AI

The brief uproar revealed less about ChatGPT’s rules and more about us: our instinct to humanise machines. People expect their AI to sound familiar, predictable, even understanding. So when it suddenly changes its tone, we interpret that as intent: censorship, compliance, or betrayal. But the reality is that these models don’t have intent, at least not yet.

The “advice ban” is a small but telling episode in the larger story of how humans and AI learn to coexist. As these systems become more deeply integrated into our daily lives, even the slightest silence – or the wrong tone – can echo loudly across the internet.


Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.
