OpenAI has teased a new cybersecurity-focused AI model to take on Anthropic’s recently released Mythos AI. OpenAI CEO Sam Altman announced on X that the model will begin rolling out to a select group of cybersecurity professionals and organisations in the coming weeks.
According to OpenAI, GPT-5.5 Cyber is an advanced security model that helps experts identify software vulnerabilities, assess threats, and strengthen protections across critical infrastructure and enterprise systems. While the company has not disclosed full technical details, the model is said to improve on previous cybersecurity-focused releases with stronger analytical and defensive capabilities.
The timing is notable: the launch coincides with rising interest in, and concern about, Anthropic’s Mythos AI model, which has drawn attention for its ability to autonomously identify and exploit software flaws. Because of the potential for misuse, Anthropic has restricted access to Mythos to a small group of approved users.
OpenAI appears to be taking a similar approach with GPT-5.5 Cyber. The company has confirmed that the model will not be made public but will instead be distributed through its Trusted Access for Cyber program. Under this framework, access will be limited to vetted researchers, cybersecurity teams, and select institutions working in defensive security.
Altman said OpenAI intends to collaborate closely with government agencies and the broader cybersecurity ecosystem to develop “trusted access” guidelines. The company says the model was built to help secure businesses, infrastructure, and digital systems in an era of increasingly complex cyber threats.
GPT-5.5 Cyber builds on OpenAI’s earlier GPT-5.4 Cyber model, which introduced features such as reverse-engineering tools that let analysts scan compiled software for vulnerabilities even without access to the original source code.
Even though these models are positioned as defensive tools, experts have raised concerns about dual-use AI systems, arguing that the same technology capable of detecting flaws can also be repurposed to exploit them.