OpenAI releases final version of the AI text generator it once deemed too dangerous
OpenAI has released the final version of its GPT-2 text generator, which uses AI to predict and generate text
OpenAI had earlier released only a smaller version of GPT-2, citing concerns that the full model could be misused
When is an AI ‘too smart’? Apparently, when it can be used to fool humans. OpenAI, the outfit that previously built an AI that could beat top human players at Dota 2, has released the final version of its GPT-2 model, which can generate coherent paragraphs of text and perform rudimentary reading comprehension, machine translation, question answering and summarization without the need for task-specific training.
GPT-2 is also able to generate sentences in Chinese, but the only reason OpenAI published the software as it is now is to show off to the world that it can be used to fool humans. The original GPT-2, released in 2015 and used in tests of Go, Go-playing AI and others, was not a complete piece of software and used some techniques to fool humans, notably using a hidden Markov model to generate sentences.
So what’s so smart, or dangerous, about that, you may ask? Well, in a blog post back in February, OpenAI said that it would be releasing only a smaller model due to concerns about malicious use of the technology. It stated that the tech could be used to generate fake news articles, impersonate people, and automate the production of fake as well as phishing content.
Now, though, OpenAI has changed its mind and released the full version of the AI to the public. This version uses the full 1.5 billion parameters that the model was trained with, as compared to the previously released models, which use fewer parameters.
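To put that parameter gap in concrete terms, here is a minimal sketch using the open-source Hugging Face transformers library, which hosts the released checkpoints. The library and its checkpoint names are our choice for illustration; neither the article nor OpenAI prescribes any particular tooling:

```python
# Minimal sketch, assuming the Hugging Face "transformers" package
# (pip install transformers torch). Checkpoint names are the library's:
# "gpt2" is the small ~124M-parameter model OpenAI released first;
# "gpt2-xl" is the full 1.5-billion-parameter model discussed here.
from transformers import GPT2LMHeadModel

for name in ["gpt2", "gpt2-xl"]:
    model = GPT2LMHeadModel.from_pretrained(name)
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e9:.2f}B parameters")
```

Running this (the XL download is several gigabytes) prints counts of roughly 0.12B and 1.56B parameters, which is the size difference this release is about.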
In its new blog post, OpenAI notes that humans find the output of GPT-2 convincing. It points to research from Cornell University, which surveyed people and asked them to assign GPT-2’s text a credibility score: according to OpenAI, people gave the 1.5B model a score of 6.91 out of 10.
However, the company also notes that GPT-2 can be fine-tuned for misuse. It says that the Middlebury Institute of International Studies’ Center on Terrorism, Extremism, and Counterterrorism (CTEC) found that extremist groups could abuse the model: CTEC fine-tuned GPT-2 on four ideological positions, namely white supremacy, Marxism, jihadist Islamism and anarchism, and found that it could be used to generate “synthetic propaganda” for these ideologies.
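To make “fine-tuned” less abstract: what CTEC describes amounts to continuing GPT-2’s ordinary next-word-prediction training on a new corpus. Here is a minimal sketch of such a loop, again assuming the Hugging Face transformers library plus PyTorch; the corpus below is a hypothetical placeholder, since the article gives no implementation details:

```python
# Minimal language-model fine-tuning sketch (assumes Hugging Face
# transformers + PyTorch). "corpus" is a hypothetical placeholder;
# no dataset or training details come from the article.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

corpus = ["placeholder document one ...", "placeholder document two ..."]

model.train()
for text in corpus:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    # Passing labels == input_ids makes the model return the standard
    # causal language-modeling (next-token-prediction) loss.
    loss = model(input_ids, labels=input_ids).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The point of the sketch is how little machinery is involved: a short loop and a topical corpus are enough to steer the model’s output, which is why CTEC’s finding is worth taking seriously.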
That said, OpenAI says that it hasn’t yet come across any evidence of GPT-2 actually being misused. “We think synthetic text generators have a higher chance of being misused if their outputs become more reliable and coherent. We acknowledge that we cannot be aware of all threats, and that motivated actors can replicate language models without model release,” OpenAI writes.
Of course, GPT-2 also has a range of positive use cases. As OpenAI notes, it can be used to build AI writing assistants, better dialogue agents, unsupervised translation between languages and better speech recognition systems. Does that balance out the fact that it could be used to write very convincing fake news and propaganda? We don’t know yet.
As for how good the system is: we fed the first paragraph of this piece into a web version of GPT-2 and, well, the second paragraph of this piece is entirely fake and generated by GPT-2 (although everything after that is factual). You can check it out for yourself here. Props to you if you weren’t fooled. Anyway, it’s not like huge masses of people can be fooled by fake news, right? Oh, right…
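For the curious: if you would rather poke at the model locally than through a web demo, a rough equivalent of the experiment above, again assuming the Hugging Face transformers interface, looks like this (the sampling settings are common illustrative defaults, not whatever the web demo uses):

```python
# Rough local equivalent of prompting the web demo (assumes Hugging Face
# transformers; "gpt2-xl" is the full 1.5B model and is a large download).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-xl")
model = GPT2LMHeadModel.from_pretrained("gpt2-xl")

prompt = "When is an AI 'too smart'? Apparently, when it can be used to fool humans."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample a continuation; top_k and temperature are illustrative, not prescribed.
output = model.generate(
    input_ids,
    max_length=200,
    do_sample=True,
    top_k=40,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Run it a few times and judge for yourself how often the continuation would have fooled you.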