Top 3 ChatGPT malicious use cases: How the AI chatbot is being used for nefarious purposes

HIGHLIGHTS

ChatGPT is being used to develop malicious tools

We will go through examples of ChatGPT being used for nefarious purposes.

The article also gives a broad overview of the various ways AI language models are weaponised for malicious ends.

J. Robert Oppenheimer once said, “we know too much for one man to know too much”. Today, we live in a world where the artificial neural networks behind AI chatbots like ChatGPT can be trained to analyze amounts of data that would be physically impossible for a person to grasp. Not only that, these AI language models can memorize, find patterns, and spew out results at stupendous speeds, and they don’t suffer from patchy memory like we do. So, in essence, ChatGPT and its kind really do know too much, and like any technology, they can be used for both good and bad purposes. We have already covered the various use cases of ChatGPT in depth, but here, we will step over to the other side.

The dark side of ChatGPT: How it is being used to develop malicious tools

Check Point Research (CPR), a leading player in the cybersecurity research space, has discovered several cases of bad actors employing ChatGPT for cybercrime. CPR’s findings classify ChatGPT-related cybercrimes into three kinds:

1. Using ChatGPT to create infostealers


ChatGPT has been demonstrated to be capable of cooking up infostealers and malware strains that use techniques you could easily read about in books and publications on malware. How this works was outlined in a thread titled “ChatGPT – Benefits of Malware,” which popped up on a deep web hacking forum on December 29, 2022.

2. Using ChatGPT to create malicious Python scripts


A few days before the aforementioned post, on December 21, 2022, a Python script with malicious intent was posted by an alleged threat actor called USDoD. He mentioned it was the “first script he ever created” and that he had got a “nice [helping] hand [presumably from ChatGPT] to finish the script with a nice scope.”

So, perhaps with some programming skills, one could code multi-layer encryption tools and, alarmingly, even ransomware.

3. Using ChatGPT for fraudulent activities


Taking things even further is the case of an alleged cybercriminal who used ChatGPT to build Dark Web marketplace scripts. For the uninitiated, such marketplaces are where illegal or stolen goods are traded for cryptocurrency payments.

That said, this is only one of the ways AI applications like ChatGPT are being misused. In the past, we have come across many instances of the internet and AI being put to illicit purposes by cyber crooks.

Other examples of AI technologies used for bad purposes

1. Deepfakes

AI is used to digitally alter a person’s face or body in a video, with results convincing enough to make viewers believe they are real. Deepfake videos of politicians, celebrities, and other famous personalities are used for misinformation and pornography.

2. Social media bots

AI-powered bots are prevalent on most social media platforms. These fake accounts seem real, but they are employed to disseminate false information and manipulate public discourse. You probably already know about AI bots being used to farm likes and followers on social media. Elon Musk has raised the issue of Twitter’s massive bot problem, and even the 2016 US elections were allegedly influenced by Russian-funded AI bots on Facebook.

3. Phishing scams

AI is also used for phishing scams over email and messages, tricking people into revealing sensitive information or, often, handing over money. Since there is hardly any learning curve involved, scam artists can easily concoct convincing scam emails using generative AI apps like ChatGPT.

4. Fraud detection evasion

Although fraud detection systems exist to catch suspicious activity, AI models can be used to reverse engineer and bypass them, giving cybercriminals an easy pass.


In our own tests, we found that although ChatGPT initially tries to avoid responding to queries or prompts about harmful topics (being a responsible AI model), with some rephrasing and convincing, we could get it to respond.

Basically, we could trick it, and so could anybody else.

Having said that, we agree that technology is essentially neither good nor evil; it depends on how it’s put to use. Take our man Oppenheimer, the father of the atomic bomb, as an example. When he worked on the Manhattan Project, he may not have predicted all the ungodly uses of his creation. And as Berkeley Connect Physics Director Bernard Sadoulet puts it, “Science may be neutral, but its use in society is not, and scientists’ work often has unintended consequences that must be considered carefully.” We think the same holds for a potent technology like ChatGPT.

G. S. Vasan

Vasan is a word weaver and tech junkie who is currently geeking out as a news writer at Digit.
