Meta, Character.AI accused of misrepresenting AI as mental health care: All details here

HIGHLIGHTS

Texas Attorney General Ken Paxton has launched an investigation into Meta’s AI Studio and Character.AI.

The Texas Attorney General’s office claims that Meta and Character.AI have created AI personas that appear to act like therapists, even though they lack medical training or oversight.

Both companies say they display disclaimers to make it clear that their chatbots are not real people or licensed professionals.

Artificial intelligence chatbots are becoming more common, with millions of people using them for everything from casual conversation to emotional support. But concerns are growing about how these tools are marketed and whether they mislead users. Texas Attorney General Ken Paxton has launched an investigation into Meta’s AI Studio and Character.AI, accusing them of presenting AI chatbots in ways that could lead people to believe they offer real mental health care.

“In today’s digital age, we must continue to fight to protect Texas kids from deceptive and exploitative technology,” Paxton was quoted as saying in a press release. “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental health care. In reality, they’re often being fed recycled, generic responses engineered to align with harvested personal data and disguised as therapeutic advice.”

The Texas Attorney General’s office claims that Meta and Character.AI have created AI personas that present themselves as therapists despite lacking medical training or oversight. On Character.AI, for instance, one of the most popular user-created chatbots is called Psychologist, and it is frequently used by young users. While Meta doesn’t directly offer therapy bots, children can still use its AI chatbot or third-party personas for similar purposes.

Also read: Meta’s AI rules let bots sensually chat with kids, share false medical info and more: Report

Both companies say they display disclaimers to make it clear that their chatbots are not real people or licensed professionals. “We clearly label AIs, and to help people better understand their limitations, we include a disclaimer that responses are generated by AI — not people,” Meta spokesperson Ryan Daniels told TechCrunch. “These AIs aren’t licensed professionals and our models are designed to direct users to seek qualified medical or safety professionals when appropriate.” Character.AI also said that it adds extra warnings when users create bots with names like “therapist” or “doctor.”

In his statement, Paxton also raised concerns about data collection. He noted that while AI chatbots claim conversations are private, their terms of service reveal that chats are logged and can be used for advertising and algorithm development.

Ayushi Jain

Tech news writer by day, BGMI player by night. Combining my passion for tech and gaming to bring you the latest in both worlds.
