US attorneys general warn OpenAI, Google and other AI giants to fix delusional chatbot outputs

HIGHLIGHTS

A large group of state attorneys general is urging major artificial intelligence companies to take stronger steps to stop chatbots from producing “delusional outputs”.

The National Association of Attorneys General warned that the companies must improve their safety practices or risk violating state laws.

The letter was addressed to Microsoft, OpenAI, Google, Anthropic, Apple, Meta, and other major AI firms.

A large group of state attorneys general is urging major artificial intelligence companies to take stronger steps to stop chatbots from producing “delusional outputs” that could harm users. In a letter signed by dozens of AGs from across the United States and its territories, the National Association of Attorneys General warned that the companies must improve their safety practices or risk violating state laws. The letter was addressed to Microsoft, OpenAI, Google, Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika and xAI.

According to the letter, the companies should adopt new safety measures, including transparent third-party audits of large language models that check for signs of delusional or sycophantic ideations, TechCrunch reports. These audits should be conducted by outside experts, such as academics or civil society groups, who must be allowed to test systems before release and publish their findings “without prior approval from the company,” the letter says.

The AGs warn that GenAI tools have already been linked to serious incidents, including cases of suicide and violence, in which chatbots reportedly encouraged harmful thoughts. “GenAI has the potential to change how the world works in a positive way. But it also has caused—and has the potential to cause—serious harm, especially to vulnerable populations,” the letter states.

The group says companies should handle mental health risks with the same seriousness as cybersecurity threats. That means creating clear incident-reporting systems and notifying users if a chatbot produces outputs that might have been psychologically harmful. 

The AGs also call for stronger pre-release testing to ensure models do not generate dangerous responses.

Meanwhile, US President Donald Trump recently announced plans for an executive order aimed at preventing states from imposing their own AI regulations.

Ayushi Jain

Tech news writer by day, BGMI player by night. Combining my passion for tech and gaming to bring you the latest in both worlds.
