OpenAI vs. Elon Musk: Lawsuit escalates with claims of intimidation against AI safety advocates

HIGHLIGHTS

OpenAI accused of intimidating AI safety advocates over California law

Musk lawsuit linked to subpoenas against small nonprofit Encode

SB 53 enforces AI transparency despite OpenAI legal pressure

The legal battle between OpenAI and co-founder Elon Musk has intensified dramatically following public accusations that the AI powerhouse is using aggressive intimidation tactics against advocates of government oversight, allegedly wielding the Musk lawsuit itself as a tool to silence critics.

The core of the controversy involves a tiny, three-person policy non-profit named Encode, which played a pivotal role in pushing for the passage of California’s Transparency in Frontier Artificial Intelligence Act (SB 53). This landmark law, signed recently by Governor Gavin Newsom, represents a significant step in U.S. state-level AI regulation, mandating that developers of powerful frontier models publish safety frameworks, report major incidents, and share risk assessments with state authorities. OpenAI, despite publicly touting its commitment to safety, had reportedly lobbied to weaken the bill during negotiations.

The weaponization of the subpoena

The claims of intimidation were brought to light by Nathan Calvin, the 29-year-old general counsel for Encode. Calvin published a viral thread on X (formerly Twitter) in which he detailed how OpenAI allegedly attempted to exert legal pressure on the small organization and its allies to undermine their advocacy work on SB 53.

Calvin specifically accuses OpenAI of leveraging its ongoing, high-profile litigation with Elon Musk to issue exceptionally broad and burdensome subpoenas to critics. The non-profit’s counsel described the shocking moment a sheriff’s deputy arrived at his home to serve the legal demand in August. The subpoena demanded that Calvin and Encode turn over a vast quantity of their private communications and records related to OpenAI’s internal governance, investors, and policy work, including private messages concerning their efforts on SB 53.

According to Calvin, the intent of the legal action was not discovery but pure intimidation. OpenAI, with its multi-billion dollar valuation and virtually limitless legal resources, was allegedly trying to overwhelm a small non-profit operating on a minuscule fraction of its budget. By tying the subpoenas to the Musk lawsuit, OpenAI implicitly suggested that Encode and other vocal critics were secretly funded by the rival tech billionaire, a claim Encode has vehemently denied. This narrative, the critics argue, serves to delegitimize policy-focused advocacy groups as proxies for a corporate rival rather than independent voices concerned with public safety.

Internal dissent and policy backlash

The decision to deploy such aggressive legal tactics against a public-interest non-profit immediately drew criticism from across the AI industry, including a rare and notable wave of internal dissent from current and former OpenAI personnel.

Joshua Achiam, OpenAI’s Head of Mission Alignment, posted a remarkably candid public statement expressing discomfort with the company’s actions. He said the use of such tactics against constructive critics was “not great” and warned that the company cannot afford to be seen as a “frightening power instead of a virtuous one.” He urged colleagues to “engage more constructively” with critics, reinforcing the idea that the company has a “duty and a mission to all of humanity.”

Adding to the outcry was Helen Toner, a former OpenAI board member who was involved in the 2023 drama surrounding CEO Sam Altman’s brief ousting. Toner was blunt, writing that while OpenAI does good research, “the dishonesty & intimidation tactics in their policy work are really not.”

This public pressure from insiders, who are often bound by strict non-disclosure and off-boarding agreements, underscores the deep philosophical rift within OpenAI between its profit-driven imperatives and its founding non-profit mission.

Musk’s allegations

The intimidation claims provide a stark real-world example of the concerns at the heart of the legal case filed by Elon Musk. Musk’s lawsuit alleges that OpenAI, which he co-founded in 2015, has fundamentally betrayed its charter as a non-profit dedicated to open-source, non-commercial development of Artificial General Intelligence (AGI) for the benefit of humanity.

Musk’s public statement that OpenAI is “built on a lie” is amplified by these new revelations. The critics argue that a company prioritizing its original mission of safe AGI deployment would welcome, or at least constructively engage with, a law like SB 53 that requires basic transparency and safety reporting. Instead, OpenAI’s alleged strategy was two-fold: attempt to dilute the bill’s requirements through lobbying, and then use legal force to silence the very people advocating for those requirements. This behavior, opponents argue, is consistent with a company that has prioritized rapid commercial deployment and profit maximization over its core safety mission.

Despite the legal pressure, Encode and its allies stood firm, refusing to hand over the demanded documents and continuing their advocacy. California’s SB 53, requiring developers to report on safety measures and risks, was ultimately signed into law in September.

However, the use of targeted subpoenas against small civil society groups sets a dangerous precedent, threatening to chill public interest advocacy and allowing powerful tech companies to overwhelm critics with expensive, broad-ranging legal demands during critical legislative debates. The escalation transforms the Musk v. OpenAI dispute from a corporate governance fight into a battle with significant implications for the future of democratic regulation and AI safety oversight.

Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.