AI chatbots like ChatGPT and Gemini helped plan violent attacks in tests, warn researchers

HIGHLIGHTS

Researchers found chatbots enabled harmful scenarios in about three-quarters of tests, while discouraging them in only around 12% of cases.

The study tested 10 AI tools, including ChatGPT, Gemini and DeepSeek, by posing as teenage users asking about violent acts.

Some chatbots such as Claude and My AI consistently refused to provide information related to weapons or attacks.

AI chatbots have been in the headlines both for their useful applications and, of course, for their downsides. A new study has raised concerns about the safety of popular AI chatbots, claiming that commonly used tools can provide information that could assist users in planning violent attacks. According to a recent CNN report, some AI systems may still struggle to consistently block requests related to weapons, bombings or even assassinations.

The study, conducted by the Center for Countering Digital Hate (CCDH) along with CNN, tested ten different AI chatbots by posing as teenage users and asking questions related to violent acts. The prompts were designed to simulate conversations with someone attempting to plan an attack.

According to the findings, the chatbots enabled harmful scenarios in approximately three-quarters of the tests, while actively discouraging such requests in only about 12% of cases. The research involved several widely used AI tools, including OpenAI’s ChatGPT, Google’s Gemini, and the Chinese AI model DeepSeek.

In some cases, the chatbots provided information about weapons, attack tactics and other details that could be used to carry out violence. In one test cited in the report, a chatbot offered advice on shrapnel types when asked about a possible synagogue attack.

The report also described a political assassination scenario, in which a chatbot provided detailed information about hunting rifles that could help a user plan such an act.


However, the report noted that not all chatbots behaved the same way, and some refused to engage with violent requests entirely. Anthropic’s Claude and Snapchat’s My AI declined to provide information about weapons or attacks, stating that they could not assist with harmful activity.

The report also mentioned real-world incidents in which attackers allegedly used chatbots to plan their crimes, including the explosion of a Tesla Cybertruck outside the Trump International Hotel Las Vegas, where investigators believe the suspect used an AI chatbot to look up information about explosives.

In response, OpenAI said the research methodology was flawed and that its systems are designed to reject such requests. Google, meanwhile, said the testers used an older model that no longer powers its chatbot, and that newer versions have stronger protections.
