Meta faces allegations of suppressing internal warnings on child safety

HIGHLIGHTS

Former and current Meta employees allege the company suppressed research on child safety.

Whistleblowers claim a pattern of discouraging reporting on minors in Meta’s VR apps.

Recently, the company also faced criticism for how other products, such as AI chatbots, may impact children.

Two current and two former Meta employees have shared documents with Congress alleging that the company may have suppressed research concerning children's safety, according to The Washington Post. The employees claim that six weeks after whistleblower Frances Haugen revealed internal documents in 2021 showing that Instagram could harm teen girls' mental health, Meta changed its rules for researching sensitive topics, including politics, children, gender, race, and harassment.

According to the report, Meta suggested two ways for researchers to reduce risk when studying sensitive issues: involving lawyers so that communications would be protected under attorney-client privilege, and writing findings vaguely, avoiding words like "not compliant" or "illegal."

Jason Sattizahn, a former Meta researcher who worked on virtual reality, told The Washington Post that his manager made him delete a recording of a teen saying that his 10-year-old brother had been sexually propositioned on Meta’s VR platform, Horizon Worlds.

In response, a Meta spokesperson told TechCrunch, “Global privacy regulations make clear that if information from minors under 13 years of age is collected without verifiable parental or guardian consent, it has to be deleted.”

The whistleblowers say their documents show a pattern of employees being discouraged from raising concerns about children under 13 using Meta's social virtual reality apps. Meta responded to TechCrunch, saying, "These few examples are being stitched together to fit a predetermined and false narrative; in reality, since the start of 2022, Meta has approved nearly 180 Reality Labs-related studies on social issues, including youth safety and well-being."

While the latest allegations focus on Meta’s VR products, the company has also faced criticism for how other products, such as AI chatbots, may impact children. Reuters reported last month that Meta’s AI rules allowed chatbots to engage in “romantic or sensual” conversations with minors.

Ayushi Jain

Tech news writer by day, BGMI player by night. Combining my passion for tech and gaming to bring you the latest in both worlds.