For a couple of years now, we’ve been toying with the idea of publishing an issue whose cover story is written by an AI model. We’ve tried several approaches, such as feeding a language model our own articles as references in the hope that it would replicate our writing style. Alas, the initial attempts were unsuccessful, since preparing our own articles for ingestion proved quite the challenge. As AI models improved over the years, we revisited the concept again and again in hopes of achieving what we had originally set out to do, but something or other would always throw a spanner in the works.
In the early days, the challenge was simply getting the models to work. With advanced models such as ChatGPT now available, that is no longer a concern. A few lines of text are more than sufficient to elicit a rather detailed response from the service. Your initial prompts might not produce an accurate response, but a few brief conversations with OpenAI’s popular service are enough to fine-tune the way you craft your prompts. What comes out can be quite an eye-opener. For those who’ve been paranoid about AI services taking over their jobs, it’s enough to push you over the edge.
The first time I gave ChatGPT a try, I asked it to design a circuit for lighting up a few LEDs, along with an interface of sorts for manipulating the LEDs on said circuit. Prima facie, the response seemed quite impressive. Here was an AI model that probably gets very few prompts about such complex DIY projects, and it was making absolute mincemeat of the topic. There was a circuit explainer, code snippets, microprocessor bootloader-swapping instructions … basically the whole shebang. It’s only when you read through the output that you realise it has been put together by pulling snippets from different articles across the Internet (more accurately, from its training dataset), with seamless segues stitching these fragments together and making them read like a coherent article. And if you have a good understanding of the subject, you’ll realise that most of it is actually inaccurate. The article might have been assembled from various sources online, but there’s no fact-checking to verify whether the circuit would actually fire up.
Articles generated by AI models might come across as believable yet be completely made up, with no factual basis backing them. This is known as hallucination. The current generation of AIs is quite prone to imagining all sorts of weird things that read well but don’t hold up to scrutiny. Google’s Bard was very publicly a victim of this very phenomenon. Even diffusion-based image models have had similar faux pas in recent times. One particular example that comes to mind is an AI image generator being asked to draw salmon swimming up a river. What it produced was a bunch of salmon fillets overlaid on a photo of a river. Little does the AI know that salmon fillets aren’t exactly capable of ‘swimming’ in the first place.
Thousands of publishers worldwide have started experimenting with AI for generating text content. It makes perfect business sense: AI text-generation services are cheaper than human writers and can belt out content at a pace humans simply cannot match. The victim, in this case, is factual accuracy, especially since fact-checking requires far more time and effort, which translates to higher expenses. So if you were already worried about people falling for rumours propagated online, be prepared for the floodgates to open very soon. Moreover, since these AI models could be fed ‘factually inaccurate’ content generated by older AIs, it’s very plausible that inaccuracies will become ingrained in much the same way that biases are ingrained in AI models today. Eventually, this would lead to fake news propagating around the world at an ever-increasing pace. We’re all in for a wild ride.