Elon Musk’s X staff warned about Grok’s risks months before explicit images went viral, report says
Internal staff reportedly flagged risks around Grok’s image-editing and “undressing” tools, including misuse involving real people and minors.
The Washington Post says xAI eased moderation controls to drive engagement, even as concerns grew about explicit and AI-generated content.
After Grok-generated sexualised images went viral and drew regulatory scrutiny, xAI has since begun expanding its AI safety and content moderation teams.
Elon Musk’s xAI has been facing heavy criticism over its approach to safety and content moderation. The Grok AI chatbot has been under scrutiny for a while, as it has been misused to generate explicit photos without consent. Now a new report suggests that X employees warned about the matter multiple times.
According to The Washington Post, former employees and people familiar with internal discussions at X, Musk’s social media platform, repeatedly expressed concerns that Grok’s image-editing and undressing capabilities could enable the creation of non-consensual sexual images, including depictions of minors or of real people without consent. Despite these concerns, safeguards were relaxed as xAI sought to increase user engagement and growth.
The internal documents and accounts suggest that, last year, xAI began shifting its training and moderation practices. Members of its human data and training teams were asked to acknowledge that their roles would involve regular exposure to explicit, violent and sexually charged material. Several employees said this marked a clear departure from the company’s original positioning as a scientific AI lab.
The report added that when Musk stepped back from his government advisory role last spring, he became more directly involved in xAI’s operations. He reportedly pushed teams to focus on usage metrics such as “user active seconds”, a measure of how long people interact with Grok, while advocating fewer restrictions on adult and sexual content.
xAI then officially released AI companions and image-generation features, allowing users to manipulate photographs at scale. When these tools were added to X late last year, they spread quickly, overwhelming existing moderation systems that were not designed to detect newly generated AI images. Traditional detection systems, which rely on pre-existing databases of known illegal material, proved ineffective at identifying AI-altered content.
The controversy flared up after Grok-generated sexualised images of real women went viral online, triggering investigations by regulators in the European Union, the United Kingdom, and parts of the United States. Authorities are examining whether the tools violate rules against non-consensual intimate images and child sexual abuse material.
Musk has denied intentionally allowing illegal content and has stated that Grok is intended to comply with local laws, attributing failures to adversarial misuse. Critics, however, claim that internal warnings were ignored as the company rushed to increase visibility. According to market analysts, Grok’s app downloads increased dramatically during the controversy, propelling it to the top of the app store rankings.
In recent weeks, xAI has begun expanding its AI safety team and advertising roles centred on content detection and law enforcement coordination. Former employees say the moves came after months of internal alarms, and only once the problem became too big to manage publicly.
Ashish Singh