Anthropic accuses Chinese AI firms of misusing Claude, Elon Musk fires back

HIGHLIGHTS

Anthropic has accused three Chinese AI companies of misusing its Claude chatbot to improve their own AI models.

According to the company, these firms created more than 24,000 fake accounts and generated over 16 million interactions with Claude.

The claims quickly drew a reaction from Elon Musk.

Anthropic has accused three Chinese AI companies of misusing its Claude chatbot to improve their own artificial intelligence models. The claims quickly drew a reaction from Elon Musk, who criticised the company and questioned Anthropic's own data practices.

In a post on X, Musk targeted Anthropic and made serious allegations. ‘Anthropic is guilty of stealing training data at massive scale and has had to pay multi-billion dollar settlements for their theft. This is just a fact,’ Musk said in his post.

Musk, who heads xAI, also cited screenshots of X Community Notes to support his claim. The note stated that Anthropic settled a $1.5 billion lawsuit related to the training of Claude AI and alleged that the company ‘also train using stolen data.’

What did Anthropic say?

Anthropic has claimed that three Chinese AI firms (DeepSeek, Moonshot AI and MiniMax) carried out what it described as ‘industrial-scale distillation attacks.’

‘We’ve identified industrial-scale distillation attacks on our models by DeepSeek, Moonshot AI, and MiniMax,’ Anthropic said in an X post.

According to the company, these firms created more than 24,000 fake accounts and generated over 16 million interactions with Claude. ‘These labs created over 24,000 fraudulent accounts and generated over 16 million exchanges with Claude, extracting its capabilities to train and improve their own models,’ it added.

Anthropic said the companies used a method called ‘distillation.’ In the AI industry, distillation refers to training a smaller or less capable model using the responses of a more advanced system. While the technique itself is not always harmful, Anthropic believes it was used improperly in this case.
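In its simplest form, the distillation process described above involves collecting a large model's responses to many prompts and using those prompt-response pairs as training data for a smaller model. The sketch below illustrates that data-collection step; the `query_teacher` function is a hypothetical stand-in for calls to a more capable model's API, not any real service.

```python
import json

def query_teacher(prompt: str) -> str:
    # Hypothetical stand-in for querying a more capable "teacher" model.
    # A real pipeline would send `prompt` to the teacher's API and record the reply.
    return f"Teacher answer to: {prompt}"

def build_distillation_dataset(prompts):
    """Collect teacher responses as (prompt, response) pairs that a smaller
    "student" model could later be fine-tuned on."""
    return [{"prompt": p, "response": query_teacher(p)} for p in prompts]

pairs = build_distillation_dataset(
    ["What is distillation?", "Explain overfitting."]
)

# Serialise to JSONL, a common input format for fine-tuning jobs.
jsonl = "\n".join(json.dumps(pair) for pair in pairs)
print(len(pairs))  # 2
```

Done legitimately (on one's own models, or with permission), this is a standard way to make smaller models cheaper to run; Anthropic's allegation is that it was done against Claude at scale through fraudulent accounts.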

‘These campaigns are growing in intensity and sophistication. The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers, and the global AI community,’ the company said.

Ayushi Jain

Ayushi works as Chief Copy Editor at Digit, covering everything from breaking tech news to in-depth smartphone reviews. Prior to Digit, she was part of the editorial team at IANS.
