Meta, the parent company of Facebook and Instagram, is accused of exposing children to harm on its platforms while misleading the public and lawmakers about the risks. Internal research conducted by the company and cited in the court filings of a US lawsuit shows that the apps could contribute to increased anxiety and depression among teenagers and to their exposure to sexual abuse content. Despite these known risks, the filings say, Meta delayed safety measures in order to protect user engagement and profits.
In all, the lawsuit includes over 1,800 plaintiffs, including children, parents, schools, and state authorities. It is part of a broader multidistrict litigation in California that also involves complaints filed against YouTube, Snapchat, and TikTok. The plaintiffs claim the social media companies pursued growth at any cost, putting revenue over the mental and physical safety of kids. The stakes are high, as millions of young users around the world, including in India, may have been exposed to harmful content, adult strangers, and addictive features designed to keep them online longer.
The case relies on internal company documents, executive testimony, and research obtained in the course of the lawsuit. Former executives and safety experts have testified that Meta was aware of how its platforms could harm teenagers yet chose not to act swiftly. The case also underlines growing concern about children’s online safety among Indian parents, as social media usage among teenagers in India has exploded over the past decade.
The court filings also describe how Instagram allowed accounts involved in sex trafficking to stay active until they had accumulated at least 16 violations, whereas accounts committing minor infractions such as spam were removed immediately. The filings quote Vaishnavi Jayakumar, a former Instagram safety executive, as describing this as a ‘very high strike threshold’, under which serious threats to children were tolerated far longer than minor issues.
Meta said it has a zero-tolerance policy for child sexual abuse material and that it continuously updates its reporting tools. The company said it uses both AI systems and human reviewers to detect and remove harmful content.
The internal research reportedly found that anxiety and depression in teenagers began to decrease when they spent less time on social media. Despite this, Meta did not share those findings publicly. When questioned by the US Senate about teen mental health, the company reportedly gave brief or misleading answers that minimised how harmful its platforms could be.
Meta maintains that it shares research and knowledge about teenagers and mental health, and that it has implemented features such as Instagram Teen Accounts, parental controls, and content-limiting tools to reduce the amount of sensitive content teens see. It also denies intentionally deceiving lawmakers.
Filings detail how millions of teens were exposed to inappropriate interactions with adults on the site. Meanwhile, recommendations that teen accounts be made private by default were delayed for years over concerns about the impact on engagement and growth. The launch of Instagram Reels reportedly increased exposure to strangers and heightened the risk.
Meta claims it now sets all teen accounts to private by default, restricts messaging from adults that teens are not connected with, and offers parental supervision features. It says these measures work together to prevent unwanted interactions.
The court filings also alleged that Meta had deliberately targeted kids, including preteens, to increase engagement. Workers reportedly likened the strategy to the way cigarette companies marketed to children, and internal communications suggested enlisting schools to help drive early phone use during class.
Meta says it does not allow users under the age of 13 to register, works to stop underage children from signing up, and has developed protective features and age-appropriate experiences for young people. The company says safety considerations now anchor its product design.
Plaintiffs said executives blocked or downplayed initiatives aimed at shielding teens, such as concealing ‘likes’, limiting beauty filters, and scaling back addictive usage features. Internal research reportedly indicated that such measures would improve teen well-being, but the projects were shelved because they could hurt engagement or revenue.
Meta added that it regularly updates its safety features in response to ongoing research, citing Instagram Teen Accounts and AI monitoring tools as examples of its continued efforts to improve online safety for teenagers.
Even when the AI systems did detect content related to self-harm or eating disorders, that content was not taken down unless the system was highly certain it violated the rules. Plaintiffs argue this left millions of teens regularly exposed to harmful material. Employees reportedly recommended making the risks public, but the company did not. Meta points out that its AI systems are supplemented by human reviewers and that it works to remove violating content as quickly as possible, citing continuous investment in detection and removal technology.
Separately, internal research reportedly found that both Instagram and Facebook could be addictive, with features intended to keep users, especially teens, on the platform for longer. One internal researcher reportedly wrote, ‘IG is a drug’, and compared the company to ‘pushers’, suggesting an awareness of how the platform exploited human psychology. Despite these findings, Meta publicly downplayed the risks, saying only a small fraction of users showed ‘severe’ problematic use while omitting broader evidence of widespread addictive behaviour. Efforts to add features that reduce compulsive use, such as ‘quiet mode’, were delayed or watered down over concerns they might hurt engagement and revenue metrics.
The company states that it studies ‘problematic use’ rather than addiction and continually develops tools, such as parental controls and Instagram Teen Accounts, to help users manage time on the platform. Meta denies that it deliberately exploited users or misled the public about risks.
The case represents a rare window into how social media companies balance growth with safety. The allegations paint a picture of revenue and engagement being put ahead of the mental health and safety of millions of children. For Indian families, where social media use among young people is growing at a rapid clip, the case presents a warning about the potential risks and ways to mitigate them through the use of parental oversight tools and privacy settings.
A Meta spokesperson said, ‘We take the safety of teens seriously. We’ve introduced Instagram Teen Accounts, AI systems to detect harmful content, and parental tools. Our platforms continue to evolve to provide experiences that are not only safe but also age-appropriate. Allegations that we deliberately harm teens are false, and we will continue our work to keep young users safe online.’ With the case now unfolding, courts will review whether Meta knowingly exposed children to harm for business benefits, possibly shaping the future of social media safety worldwide.