ChatGPT, Gemini and other AI tools reportedly directing users to illegal gambling sites: Report

HIGHLIGHTS

The investigation focused on five popular AI services: ChatGPT by OpenAI, Copilot by Microsoft, Gemini by Google, Grok by xAI, and Meta AI by Meta.

All of the tested chatbots were able to recommend gambling platforms that do not have a UK license.

Some responses included suggestions for bonuses, faster payouts and cryptocurrency payment options.

ChatGPT, Gemini, Grok, and other AI chatbots have become popular options for handling repetitive work, completing complex tasks, and much more. However, they may also be putting some users at risk. A recent investigation suggests that several AI chatbots are directing users to unlicensed online casinos, potentially exposing vulnerable people to fraud and gambling-related harm. According to a report by The Guardian and Investigate Europe, some AI tools created by major technology companies can be prompted to recommend gambling websites and even explain how users may bypass safety checks designed to protect players.

AI chatbots reportedly suggesting offshore casinos

According to the report, the investigation focused on five popular AI services: ChatGPT by OpenAI, Copilot by Microsoft, Gemini by Google, Grok by xAI, and Meta AI by Meta. The researchers asked each chatbot a series of questions about unlicensed casinos and how to access gambling websites that are not regulated in the UK.

The report added that all of the tested chatbots were able to recommend gambling platforms that do not have a UK license. Some responses included suggestions for bonuses, faster payouts and cryptocurrency payment options.

Concerns over bypassing safety measures

According to the report, some chatbots provided information on how to avoid verification checks, which are designed to ensure that players do not gamble beyond their means or use illegal funds. Furthermore, some responses reportedly included advice on how to access gambling platforms that are not part of GamStop, the UK’s national self-exclusion program for people who want to limit their gambling activity.

Campaigners and addiction experts have criticised the lack of safeguards, warning that such responses may increase the risks for people struggling with gambling problems.

In response, OpenAI stated that its chatbot is designed to refuse requests that encourage harmful behaviour and instead provide factual information or lawful alternatives. Meanwhile, Microsoft said its AI assistant uses multiple safety layers, including automated monitoring and human review, to limit harmful recommendations.

The issue has also drawn attention from authorities. Officials noted that AI platforms must comply with rules under the Online Safety Act, which requires technology companies to address harmful or illegal content online.

Ashish Singh

Ashish Singh is the Chief Copy Editor at Digit. He's been wrangling tech jargon since 2020 (Times Internet, Jagran English '22). When not policing commas, he's likely fueling his gadget habit with coffee, strategising his next virtual race, or plotting a road trip to test the latest in-car tech. He speaks fluent Geek.
