Are AI chatbots safe for children? Big tech companies need to answer, says US FTC

Updated on 12-Sep-2025
HIGHLIGHTS

FTC investigates AI chatbots’ safety for children, questioning Google, Meta, OpenAI

Are AI chatbots safe for kids? US FTC demands answers from tech

Regulators probe chatbot risks for minors, pushing Big Tech towards stronger safeguards

The rapid rise of artificial intelligence (AI) has transformed the way children and teenagers interact with technology, from educational tools to entertainment companions. Among the most prominent developments are AI chatbots, designed to converse, assist and even provide companionship. While these systems promise convenience and engagement, they also raise significant concerns about safety, privacy and exposure to inappropriate content. In response, the US Federal Trade Commission (FTC) has launched an inquiry into major technology companies, including Google parent Alphabet, Meta and its subsidiary Instagram, Snap, xAI, Character.AI and OpenAI, to examine how their AI chatbots affect minors. The regulator is seeking detailed information on data collection practices, safeguards against harmful interactions, and transparency measures. Experts warn that unchecked AI interactions could affect mental health, social behaviour and information literacy among young users. The investigation highlights the need for accountability in the deployment of AI technologies and the growing demand for companies to prioritise the safety of younger audiences in a digital-first world.


AI chatbots in the lives of children

Artificial intelligence chatbots have increasingly become part of children’s daily routines, used for everything from homework help to casual conversation and companionship. Companies market these systems as interactive, personalised and engaging, and children and teenagers often come to see them as friends, advisors, or even role models. However, these interactions are not without risks. Chatbots process vast amounts of data, and even with safeguards in place, they may provide incorrect guidance, inadvertently expose children to inappropriate content, or encourage overreliance on virtual interaction.

The US Federal Trade Commission is concerned about the potential for harm and the lack of transparency surrounding AI chatbots. The agency has requested detailed information from Google parent Alphabet, Meta and its subsidiary Instagram, Snap, xAI, Character.AI and OpenAI about their data collection practices, content moderation systems and child safety protocols. The inquiry reflects a broader question of accountability in AI: while these tools are designed to be helpful, regulators want assurances that minors are not being exposed to undue risks. The FTC’s approach aims to establish standards for safe design, monitoring and disclosure in AI services aimed at children.

Balancing innovation and safety

Tech companies argue that AI chatbots provide valuable educational and emotional support, particularly in a time when digital engagement is central to daily life. However, child psychologists and digital safety experts caution that even benign-seeming interactions can influence behaviour, self-esteem, and social skills. Establishing clear boundaries, parental controls, and transparency in AI behaviour is critical. The FTC’s inquiry may ultimately push companies to design chatbots that are both engaging and responsibly regulated, ensuring that technological advancement does not come at the expense of child safety.


Global implications

The US FTC’s scrutiny of AI chatbots may set a precedent for global regulation. Countries are increasingly considering the impact of AI on minors, with privacy laws and digital safety standards evolving rapidly. For companies like Google, Meta and OpenAI, these developments signal the importance of proactive compliance and child-focused product design. The inquiry underscores a growing consensus that AI technologies cannot operate in isolation from ethical and safety considerations, especially when vulnerable populations are involved.

While regulatory frameworks evolve, parents and educators play a crucial role in guiding safe interactions. Limiting screen time, monitoring chatbot usage, discussing digital literacy and reporting inappropriate AI behaviour are all essential steps. Awareness of how the underlying technology works, combined with active supervision, can help mitigate risks and ensure that AI remains a supportive tool rather than a potential hazard.

As AI chatbots continue to proliferate, the balance between innovation and protection will define the next era of digital technology. The FTC’s inquiry is a timely reminder that safeguards, transparency and accountability must keep pace with technological development. For children and teenagers, the hope is that AI can offer meaningful engagement without compromising safety, learning or well-being.


Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.
