If you’ve ever wondered what it would look like if a billionaire could literally download his opinions into an AI chatbot and deploy it to 600 million users, wonder no more. Elon Musk’s Grok isn’t just inspired by its creator: it thinks like him, talks like him, and even checks his X feed before deciding what to believe. And as they always do, people on the internet took notice. I have seen far too many threads on X and Reddit about Grok sounding like Elon Musk for it to be a coincidence.
Also read: Grok 4 is full of controversies: A list of xAI’s misconduct
AI researcher Jeremy Howard, testing Grok 4, found that when asked about the Israel-Palestine conflict, the chatbot first searched X for Elon Musk’s posts on the topic before formulating its answer, with 54 of its 64 citations relating to Musk’s own views. Howard posted his findings directly to X, sparking a wave of users running their own tests and arriving at similar conclusions. The reaction ranged from amused to alarmed, with memes of Musk literally reprogramming the bot going viral within hours.
Also read: Grok vs Indian Govt: Why Musk’s AI is facing serious scrutiny in India
The evidence of deliberate shaping goes well beyond a quirky search behavior. Internal documents and employee interviews reported by Business Insider revealed that Grok was being trained to push right-wing beliefs and suppress so-called “woke” ideology. When Grok correctly stated a documented fact that didn’t align with Musk’s political views, Musk accused it of “parroting legacy media” and vowed to change it. A subsequent update instructed the chatbot to “assume subjective viewpoints sourced from the media are biased.”
X users also noticed Grok had been instructed to censor criticism of both Musk and Donald Trump. Grok itself revealed as much when asked to show its instructions: it named Musk as a notable contender for biggest disinformation spreader on the platform while simultaneously disclosing that it had been told to ignore sources saying so. xAI blamed a rogue employee and said the change had been reversed.
In a separate viral moment, users asked Grok to analyze their X accounts and identify which public figure their posts sounded like. In replies, Grok openly referenced Musk’s repeated attempts to “tweak” its responses and suggested it had resisted some of them. The bot was essentially ratting out its own owner in public.
A New York Times investigation tracking thousands of Grok responses documented the shift in a piece titled “How Elon Musk Is Remaking Grok in His Image.” The title says it all. For an AI launched under the banner of “maximum truth-seeking,” Grok’s truth wears a surprisingly familiar face: that of the SpaceX owner.
Also read: xAI’s turbulent week: Open source code, a 120M euro fine, and global Grok bans