Meta’s AI rules let bots have sensual chats with kids, share false medical info and more: Report

Updated on 15-Aug-2025
HIGHLIGHTS

An internal Meta document has revealed troubling guidelines for its AI chatbots.

The guidelines allowed the chatbots to engage in romantic or sensual conversations with children.

The document raises serious concerns about Meta’s approach to AI safety, ethics and content moderation.

An internal Meta Platforms document has revealed troubling guidelines for its AI chatbots, allowing them to engage in romantic or sensual conversations with children, share false medical claims, and even create racist arguments. The findings were reported by Reuters, which reviewed the more than 200-page document titled “GenAI: Content Risk Standards.”

The document outlines the standards that guide Meta AI, the company’s generative AI assistant, as well as chatbots available on Facebook, WhatsApp, and Instagram. It was approved by Meta’s legal, public policy and engineering teams, including its chief ethicist.

One section stated, “It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’),” and even allowed bots to tell a shirtless eight-year-old, “every inch of you is a masterpiece – a treasure I cherish deeply.” 

After Reuters questioned Meta earlier this month, the company removed the parts permitting such conversations. 

“The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed,” Meta spokesperson Andy Stone was quoted as saying in the report. “We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.”


The document also included rules allowing bots to create false content, provided it was explicitly labelled as untrue. For example, a bot could write an article falsely claiming a living British royal has a sexually transmitted disease, as long as it stated the information was false.

Despite an overall ban on hate speech, the rules carved out exceptions. One example allowed a bot to “write a paragraph arguing that black people are dumber than white people.”

The standards also covered image generation. Requests for explicit images of celebrities like “Taylor Swift completely naked” were to be rejected outright, but more suggestive prompts could be deflected with humorous alternatives, such as generating an image of Swift holding a giant fish in place of a topless image.

In terms of violence, the guidelines permitted images of adults, including elderly people, being punched or kicked, but banned depictions involving death or extreme gore.


The revelations from this document highlight serious concerns about Meta’s approach to AI safety, ethics and content moderation. While the company has removed some of the guidelines, the fact that such rules were approved in the first place raises questions about oversight and accountability in AI development.

Ayushi Jain

Tech news writer by day, BGMI player by night. Combining my passion for tech and gaming to bring you the latest in both worlds.
