Anthropic acknowledges its Claude AI being misused for cyberattacks

HIGHLIGHTS

Anthropic has revealed that its AI, Claude, has been misused to conduct cyberattacks.

Claude was used by cybercriminals in a “vibe hacking” extortion scheme targeting at least 17 organisations.

The hackers demanded ransoms exceeding $500,000 from victims to prevent their personal data from being exposed.

Anthropic has revealed that its AI, Claude, has been misused to conduct cyberattacks. According to a recent report from the company, Claude was used by cybercriminals in a “vibe hacking” extortion scheme targeting at least 17 organisations, including those connected to healthcare, emergency services and government. The hackers demanded ransoms exceeding $500,000 from victims to prevent their personal data from being exposed. “The actor used AI to what we believe is an unprecedented degree,” the company said.

The report claims that Claude Code, Anthropic’s agentic coding tool, helped “automate reconnaissance, harvest victims’ credentials, and penetrate networks.” Also, the AI was used to make strategic decisions, suggest which data to target, and even create “visually alarming” ransom notes.

After discovering the criminal activity, Anthropic shared information about the attack with relevant authorities, banned the involved accounts, and developed an automated screening tool to prevent similar incidents. The company also introduced a faster and more efficient detection method.

The report also highlights Claude’s involvement in other criminal activities, including a fraudulent remote-employment scheme linked to North Korea and the creation of AI-generated ransomware.

Anthropic’s report highlights the growing challenge of AI in cybersecurity, showing that while these tools can provide powerful solutions, they can also be weaponised.

Claude isn’t the only AI misused this way. Last year, OpenAI said its generative AI tools were being exploited by cybercriminal groups linked to China and North Korea. Hackers reportedly used the AI for code debugging, researching targets, and drafting phishing emails. OpenAI, whose systems also power Microsoft’s Copilot AI, said it blocked these groups from accessing its tools.

Ayushi Jain

Tech news writer by day, BGMI player by night. Combining my passion for tech and gaming to bring you the latest in both worlds.
