xAI’s turbulent week: Open-source code, a €120 million fine, and global Grok bans

HIGHLIGHTS

  • X open-sources algorithm amid €120M fine and global Grok bans
  • Musk releases Phoenix code while EU fines X for secrecy
  • Deepfake crisis triggers Grok bans as X reveals algorithm code

It has been a defining week for Elon Musk’s “everything app,” but perhaps not the one he intended. On Tuesday, January 20, X (formerly Twitter) officially open-sourced its recommendation algorithm, fulfilling a long-standing promise of transparency. Yet the technical milestone arrives in the middle of a regulatory storm that leaves the company sandwiched between a massive €120 million fine from the European Union and outright service bans in Indonesia and Malaysia triggered by Grok’s safety failures.

For observers of the tech industry, the juxtaposition is stark: X is voluntarily opening its “black box” code to the public while simultaneously being punished by global governments for what that black box produces.

Also read: Elon Musk denies Grok AI created illegal images, blames adversarial hacks

Inside “Phoenix”

The code release on GitHub offers the first concrete look at X’s new architectural core, internally dubbed “Phoenix.” The most significant revelation is how completely the old Twitter ranking system has been erased.

According to the documentation, X has eliminated “every single hand-engineered feature” from its ranking system. The manual boosts for video content, the penalties for external links, and the complex web of if-then rules that governed the timeline for a decade are gone. In their place is a Grok-based transformer model.
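To make the contrast concrete, here is a minimal sketch of the style of hand-engineered ranking being retired. The rules and multiplier values are invented for illustration and are not taken from X’s old codebase.

```python
from dataclasses import dataclass

@dataclass
class Post:
    base_engagement: float   # raw engagement signal (likes, replies, etc.)
    has_video: bool
    has_external_link: bool

def legacy_heuristic_score(post: Post) -> float:
    """Hand-tuned, if-then ranking of the kind X says it has eliminated.
    Every multiplier below is a human guess, made up for this sketch."""
    score = post.base_engagement
    if post.has_video:
        score *= 1.5         # manual boost for video content
    if post.has_external_link:
        score *= 0.7         # manual penalty for outbound links
    return score
```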

This marks a shift from “heuristic” ranking (rules written by humans) to “probabilistic” ranking (predictions made by AI). The new system relies on two main components:

  • Thunder: A pipeline for fetching in-network posts (from accounts you follow).
  • Phoenix Retrieval: A vector-based system that finds out-of-network content by matching semantic similarities.

Both feed into the Phoenix Scorer, a model derived directly from the Grok-1 architecture. Instead of following a human rule like “promote tweets with images,” the model analyzes a user’s sequence of historical actions to predict the probability of 15 different future interactions, ranging from a “Like” to a “Block.”
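Taking that description at face value, a toy sketch of how the two candidate sources and the scorer might fit together could look like the following. Everything here is assumed for illustration: the function names, the four-action subset standing in for the 15 predicted interactions, and the blending weights are invented, and a dummy stub replaces the Grok-derived transformer.

```python
import numpy as np

# Four stand-ins for the 15 interaction types the scorer reportedly predicts;
# the weights are invented (note the negative weight demoting likely blocks).
ACTIONS = ("like", "reply", "repost", "block")
ACTION_WEIGHTS = {"like": 1.0, "reply": 2.0, "repost": 1.5, "block": -10.0}

def thunder_candidates(user, posts):
    """In-network stage: posts from accounts the user follows."""
    return [p for p in posts if p["author"] in user["following"]]

def phoenix_retrieval(user_embedding, posts, k=50):
    """Out-of-network stage: nearest posts by cosine similarity of embeddings."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return sorted(posts, key=lambda p: cosine(user_embedding, p["embedding"]),
                  reverse=True)[:k]

def phoenix_scorer(history, post):
    """Placeholder for the Grok-derived transformer: consumes the user's
    sequence of past actions plus one candidate post, and emits a probability
    for each interaction type. A seeded dummy stands in for the real model."""
    seed = abs(hash((tuple(history), post["id"]))) % 2**32
    rng = np.random.default_rng(seed)
    return {action: float(rng.random()) for action in ACTIONS}

def rank_timeline(user, history, posts):
    """Merge both candidate sources, then order purely by predicted engagement."""
    seen, candidates = set(), []
    for p in thunder_candidates(user, posts) + phoenix_retrieval(user["embedding"], posts):
        if p["id"] not in seen:
            seen.add(p["id"])
            candidates.append(p)
    def blended_score(post):
        probs = phoenix_scorer(history, post)
        return sum(ACTION_WEIGHTS[a] * p for a, p in probs.items())
    return sorted(candidates, key=blended_score, reverse=True)
```

Even in this toy form, the structural point holds: no if-then content rule appears anywhere in the ranking path, so opening the code exposes the plumbing but not the learned weights that make the actual decisions.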

Also read: Grok vs Indian Govt: Why Musk’s AI is facing serious scrutiny in India

Musk himself described the current iteration as “dumb” and in need of massive improvement, framing the open-source release as a way to “crowdsource” optimization. However, moving to a pure-AI model could make the platform less transparent in practice: anyone can now read the code that trains the model, but no one can meaningfully interpret the billions of learned weights that actually decide why a specific post goes viral.

The €120 million “transparency” fine

While X invites developers to inspect its code, European regulators have penalized it for hiding critical data. The European Commission’s €120 million (approx. $140 million) fine, levied under the Digital Services Act (DSA), focuses on three specific failures of transparency that contradict Musk’s “open book” narrative.

First is the deceptive “blue check” system, which the EU ruled a “dark pattern.” By allowing anyone to purchase verification without identity checks, the platform deceives users about the authenticity of accounts, a violation of the DSA’s consumer-protection rules.

Second, regulators found that X failed to provide a searchable, transparent archive of advertisements, making it impossible for researchers to track disinformation campaigns or malicious ads. Third, the fine cited X’s refusal to grant academic researchers access to public data, effectively blinding external watchdogs.

The deepfake crisis: Grok bans in Asia

Perhaps the most damaging development for X’s reputation this week is the tangible harm caused by its AI tools. While the EU fine is bureaucratic, the bans in Southeast Asia are visceral reactions to safety failures.

Both Indonesia and Malaysia have temporarily blocked access to Grok (and, by extension, parts of the X Premium experience) following a surge in non-consensual intimate imagery (NCII). Users exploited Grok’s image-generation capabilities to “digitally undress” women and minors, creating deepfake pornography that spread rapidly on the platform.

Unlike the text-based controversies of the past, this involves the direct generation of illegal content. The Indonesian Ministry of Communication and Informatics cited a “complete lack of effective guardrails” in Grok 2.0. The UK’s communications regulator, Ofcom, has also launched a formal investigation, threatening similar blocks if safety protocols aren’t immediately overhauled.
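For context, “effective guardrails” in this setting usually means at least two layers around the model: a prompt-intent filter before generation and a scan of the generated image afterwards. The sketch below is a generic illustration of that pattern; the names, policy categories, and keyword matching are placeholders and imply nothing about Grok’s actual stack.

```python
# Generic illustration of layered image-generation guardrails; this does not
# reflect xAI's implementation.
BLOCKED_INTENTS = {"sexualizes_minors", "nonconsensual_intimate", "undress_real_person"}

def classify_prompt_intents(prompt: str) -> set:
    """Stand-in for a learned prompt-safety classifier; production systems use
    trained models, not the keyword matching shown here."""
    keywords = {"undress": "undress_real_person", "nude": "nonconsensual_intimate"}
    return {label for word, label in keywords.items() if word in prompt.lower()}

def generate_image_safely(prompt, generate, nsfw_check):
    # Gate 1: refuse before spending compute if the prompt's intent is disallowed.
    if classify_prompt_intents(prompt) & BLOCKED_INTENTS:
        raise PermissionError("prompt violates content policy")
    image = generate(prompt)
    # Gate 2: scan the output as well, since adversarial prompts are crafted
    # to slip past the first gate.
    if nsfw_check(image):
        raise PermissionError("generated image failed post-generation review")
    return image
```

The output-side gate exists precisely because adversarial prompts, the failure mode Musk has blamed, are designed to evade the prompt filter; a “complete lack of effective guardrails” suggests neither layer held.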

X is attempting a high-wire act. By open-sourcing “Phoenix,” it hopes to win back the trust of the technical community and prove its commitment to free speech and transparency. But code on GitHub does not absolve a platform of its tangible impact. As long as the “dumb” algorithm amplifies deepfakes and the “verified” badge deceives users, X remains in a precarious position – technically open, but functionally broken.

Also read: Indonesia and Malaysia ban Grok AI amid explicit image generation, will India be next?

Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.