OpenAI uncovers over 20 cyber operations exploiting ChatGPT for malicious activities, check details
OpenAI recently uncovered and disrupted more than 20 cyber operations where ChatGPT was exploited for various malicious activities.
Since the beginning of the year, malicious actors have harnessed ChatGPT to assist in their schemes.
OpenAI's report specifically highlighted two threat actors involved: SweetSpecter and CyberAv3ngers.
As artificial intelligence transforms the digital landscape, the misuse of such technology raises alarming concerns. OpenAI has recently uncovered and disrupted more than 20 cyber operations in which its AI-powered chatbot, ChatGPT, was exploited by cybercriminals for various malicious activities. This revelation marks the first official acknowledgement that generative AI tools like ChatGPT are being used in cyber attacks.
Since the beginning of the year, malicious actors have harnessed ChatGPT to assist in their schemes, including developing malware, spreading false information, conducting spear-phishing attacks, and evading detection. The initial signs of this troubling trend emerged in April 2024, when cybersecurity firm Proofpoint identified a group called TA547, also known as “Scully Spider.” This group allegedly deployed an AI-generated PowerShell loader to deliver the Rhadamanthys info-stealer malware.
OpenAI’s report specifically highlighted two threat actors involved: SweetSpecter and CyberAv3ngers. SweetSpecter, a Chinese cyber-espionage group, was first noted by Cisco Talos in November 2023. This group targeted OpenAI directly by sending spear-phishing emails to OpenAI employees’ personal accounts. These emails included malicious ZIP files disguised as support requests. Opening these files triggered an infection chain that installed the SugarGh0st remote access trojan (RAT). Investigations revealed that SweetSpecter was also using ChatGPT accounts for activities like scripting and vulnerability analysis.
The second group, CyberAv3ngers, is linked to the Iranian government and the Islamic Revolutionary Guard Corps (IRGC). Known for its attacks on critical infrastructure in Western nations, this group used ChatGPT accounts to look up default credentials for commonly used Programmable Logic Controllers (PLCs), create custom Bash and Python scripts, and obfuscate its code. The group also used ChatGPT to plan post-compromise activities, exploit vulnerabilities, and learn how to steal passwords on macOS systems.
These revelations make clear that while AI offers tremendous potential for positive advancement, it also creates new opportunities for malicious use.
Ayushi Jain
Tech news writer by day, BGMI player by night. Combining my passion for tech and gaming to bring you the latest in both worlds.