OpenAI quietly fixed a ChatGPT bug that could have exposed your Gmail data: Here’s what happened
The flaw allowed hackers to steal Gmail data without user interaction.
Researchers showed how hidden email instructions could trigger data leaks.
OpenAI confirmed the bug was fixed and reaffirmed user safety as a priority.
OpenAI has fixed a security flaw in its popular AI chatbot ChatGPT, which was left unpatched and could have allowed cybercriminals to access users’ Gmail accounts. The vulnerability was discovered this year by cybersecurity firm Radware in ChatGPT’s Deep Research agent, an advanced tool designed to sift through large amounts of data on behalf of users.
SurveyAccording to Radware, the security flaw was discovered in February and allows hackers to quietly extract sensitive information from both personal and corporate Gmail accounts associated with the service. There is currently no evidence of real-world exploitation, but researchers warned that the risk was high enough to allow attackers to steal data without the victim ever interacting with a malicious email.
According to Pascal Geenens, Radware’s director of threat intelligence, companies could have been completely unaware that confidential information was leaving their systems if corporate accounts were compromised. During one test, the researchers demonstrated how hidden instructions embedded in an email can trick the Deep Research agent into scanning inboxes and sending personal information to an external server.
For those unfamiliar, the Deep Research feature, which is available to paying ChatGPT subscribers, allows the AI to extract information from Gmail with user permission. While the tool broadens the range of AI agents capable of performing complex tasks with little human intervention, Radware’s discovery emphasises the potential security risks these systems pose.
OpenAI had previously confirmed that the problem had been resolved. According to the report, a company spokesperson stated that user safety remains a top priority and that adversarial testing by researchers is encouraged because it helps strengthen the platform against future threats.
Ashish Singh
Ashish Singh is the Chief Copy Editor at Digit. He's been wrangling tech jargon since 2020 (Times Internet, Jagran English '22). When not policing commas, he's likely fueling his gadget habit with coffee, strategising his next virtual race, or plotting a road trip to test the latest in-car tech. He speaks fluent Geek. View Full Profile