OpenAI flags rising cyber threats as AI models get more powerful

Updated on 11-Dec-2025
HIGHLIGHTS

OpenAI says future AI systems may be able to generate zero-day exploits and automate complex cyberattacks.

The company is boosting investment in AI-driven defensive tools for vulnerability detection and code auditing.

Experts warn that advanced offensive AI capabilities could trigger stricter global regulations and oversight.

OpenAI has warned that increasingly powerful artificial intelligence models could heighten global cybersecurity risks, even as the company works to improve its defensive capabilities. According to the company, future AI systems may become capable of generating advanced zero-day exploits, carrying out high-skill intrusion operations, or breaching complex enterprise networks — tasks traditionally associated with elite human hackers.

To counter these risks, OpenAI says it is increasing its investment in defensive applications of AI, including tools that help security teams audit code, identify vulnerabilities, and automate patching. The company added that it relies on strict access controls, outbound use restrictions, hardened infrastructure, and continuous monitoring to limit misuse of its models.

“As our models grow more capable in cybersecurity, we’re investing in strengthening safeguards and working with global experts as we prepare for upcoming models to reach ‘High’ capability under our Preparedness Framework,” the company wrote in its social media post.

According to cybersecurity researchers, if AI systems achieve the level of offensive capability that OpenAI warns about, attackers will be able to automate cybercrime on an unprecedented scale. These models could create malicious payloads, generate exploits without human intervention, and mass-produce malware like worms, ransomware, and botnets, dramatically increasing the number and intensity of attacks.

Experts warn that such breakthroughs would draw intense attention from authorities around the world, with policymakers pushing for stronger compliance norms and greater accountability from AI developers. To keep up with the changing threat landscape, many organisations will likely need AI-powered defensive solutions of their own.

It remains to be seen whether upcoming AI models will deliver the level of safety that users around the globe deserve.

Ashish Singh

Ashish Singh is the Chief Copy Editor at Digit. He's been wrangling tech jargon since 2020 (Times Internet, Jagran English '22). When not policing commas, he's likely fueling his gadget habit with coffee, strategising his next virtual race, or plotting a road trip to test the latest in-car tech. He speaks fluent Geek.
