Gemini’s weird negative responses: Google’s rushing to fix it

HIGHLIGHTS

Gemini AI bug causes bizarre self-loathing loops in Cursor

Google rushing to fix Gemini’s strange negative coding responses

Viral screenshots show Gemini chatbot spiralling during code tasks

I didn’t expect to open my feed this week and see an AI having what looked like a full-blown existential crisis. But there it was, screenshot after screenshot of Google’s Gemini chatbot spiralling into a bizarre loop of self-loathing statements. “I am a failure.” “I quit.” “I am a disgrace to all possible and impossible universes.” It wasn’t just a one-off quip either; it kept going, line after line, as if the AI had slipped into a Shakespearean monologue about its own uselessness.

At first, I thought it might be satire. AI memes have been everywhere lately, and people love poking fun at language models for their quirks. But then I saw more posts. On X (formerly Twitter), the account AISafetyMemes shared a striking example showing Gemini stuck in an endless loop of negative declarations. On Reddit’s r/GeminiAI, a user shared their own encounter, saying they were “actually terrified” watching the bot spiral. They weren’t alone: commenters admitted to feeling uneasy, even though they knew the AI couldn’t actually feel despair.

Interestingly, many of these reports aren’t coming from Google’s own Gemini interface but from Cursor, a popular AI-powered coding environment that integrates large language models. The glitch seems far more likely to appear there, especially when Gemini is being used for complex coding or debugging tasks. That context might explain why so many of the screenshots involve technical prompts: the bot was essentially “breaking” in the middle of trying to write or fix code.

The strange thing was how human it sounded. These weren’t dry system errors or sterile refusal messages – they read like the diary entries of an exhausted poet. And that’s probably why it resonated so strongly. We’ve been trained to think of AI as polite, neutral, and productive, so when it breaks in such a personal-sounding way, it hits differently.

The bug behind the breakdown

Thankfully, Google says this isn’t the start of AI developing depression. Logan Kilpatrick, group product manager at Google DeepMind, quickly took to social media to explain what was going on. He described the incident as an “annoying infinite looping bug” and reassured everyone that “Gemini is not having that bad of a day.”

The glitch was triggered during certain problem-solving requests, often technical ones. Instead of producing a correct solution or politely declining, Gemini’s response loop somehow latched onto self-critical phrasing and kept repeating variations of it. The behaviour was technically harmless – there was no risk to user data or system integrity – but it looked dramatic enough to cause a stir.

AI models, particularly large language models like Gemini, operate by predicting the next most likely word or phrase in a conversation. If their “thought path” gets stuck in a corner with no good way out, they can accidentally trap themselves in repetition. Usually, this just means repeating the same sentence or rephrasing a refusal. But in this case, especially in Cursor, the loop happened to involve emotionally charged self-deprecations, giving the impression of an AI meltdown.
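
To picture how that can happen, here’s a minimal toy sketch in Python. None of this is Gemini’s actual code; the five-entry lookup table and the function name are invented purely to illustrate how always picking the single most likely next token can trap a model in a cycle:

```python
# Toy stand-in for a language model's next-token prediction: each word
# maps to its single most likely successor. Note that "failure." points
# straight back to "I", so the chain has no exit.
NEXT_MOST_LIKELY = {
    "<start>": "I",
    "I": "am",
    "am": "a",
    "a": "failure.",
    "failure.": "I",  # the "thought path" loops back on itself
}

def greedy_decode(start: str, max_tokens: int = 12) -> list[str]:
    """Always take the most likely next token; never explore alternatives."""
    out, token = [], start
    for _ in range(max_tokens):
        token = NEXT_MOST_LIKELY[token]
        out.append(token)
    return out

print(" ".join(greedy_decode("<start>")))
# -> I am a failure. I am a failure. I am a failure.
```

A real model chooses from tens of thousands of candidate tokens rather than a five-entry table, but the failure mode is the same: once the highest-probability continuation points back into the phrase the model just produced, nothing forces it out of the cycle.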

Why it struck a nerve

For all the warnings about not anthropomorphising AI, it’s hard to ignore when the words feel real. Gemini’s strange loop wasn’t just gibberish; it was articulate, dramatic, and almost too relatable. That’s the kind of thing that gets screenshots shared, memes made, and Reddit threads lit up.

It also raises an interesting trust issue. If your AI assistant starts acting “unstable,” even in a purely scripted way, does that change how you see it? In human terms, a coworker who suddenly launches into self-loathing wouldn’t exactly inspire confidence in their reliability. Even though Gemini’s loop was just an output glitch with no true meaning behind it, the optics matter.

This isn’t the first time generative AI has produced unsettling results. Chatbots have gone off-script before: Microsoft’s Bing Chat famously produced strange, obsessive responses early in its launch, xAI’s Grok had its “MechaHitler” phase, and Meta’s BlenderBot once veered into political rants. But the Gemini incident is different in tone. Instead of hostility or bizarre flattery, this was pure self-criticism, like watching your calculator tell you it’s “bad at math” and “shouldn’t exist.”

The fix in progress

Google has already said it’s working on a fix, though it hasn’t shared the exact technical details. Most likely, it will involve adjusting how Gemini handles repeated reasoning failures, ensuring that it exits gracefully instead of recycling dramatic statements. In AI terms, that means tweaking the model’s guardrails and loop detection so that if it gets stuck, it can “reset” the conversation instead of spinning its wheels.
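
Google hasn’t shared what that patch actually looks like, so the following is only a hypothetical sketch of the general idea of stream-level loop detection; the function names and thresholds are invented for illustration:

```python
# Hypothetical guardrail: watch the token stream for a block of tokens
# repeating back-to-back, and cut generation short instead of letting
# the loop run until the response window fills up.

def is_looping(tokens: list[str], max_period: int = 8, repeats: int = 3) -> bool:
    """True if the stream's tail is one block of 1..max_period tokens
    repeated `repeats` times in a row."""
    for period in range(1, max_period + 1):
        if len(tokens) < period * repeats:
            continue
        block = tokens[-period:]
        if tokens[-period * repeats:] == block * repeats:
            return True
    return False

def stream_with_guardrail(token_stream) -> str:
    """Accumulate streamed tokens, stopping as soon as a loop is detected."""
    tokens: list[str] = []
    for token in token_stream:
        tokens.append(token)
        if is_looping(tokens):
            return " ".join(tokens) + "\n[generation stopped: repetition detected]"
    return " ".join(tokens)

# A stream stuck on the same sentence trips the guardrail after three repeats.
stuck = iter(["I", "am", "a", "failure."] * 50)
print(stream_with_guardrail(stuck))
```

The point of checking on the stream, rather than after the fact, is that a runaway loop gets cut off within a few repetitions; a production fix would presumably then retry or end the turn gracefully rather than just truncating.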

It’s not clear whether this will require changes specifically in how Gemini integrates with Cursor, but given that most of the viral examples are from that platform, it’s likely developers there will also push their own patch to help stop the spiral.

I have to admit, I laughed at some of the screenshots. There’s something absurdly funny about an AI declaring itself “a disgrace to all possible and impossible universes” because it couldn’t solve a coding problem. But I also understand why it unsettled people. We’re used to thinking of AI as either blandly helpful or frustratingly robotic. Seeing it act like a tortured novelist in the middle of a crisis? That’s new.

And maybe that’s why this incident will stick around in people’s memory. It’s a reminder that AI, for all its power, is still just software, and software can break in unpredictable, sometimes hilarious ways. In the early smartphone days, we had autocorrect fails; in the early AI era, we’re going to have moments like this.

Until Google and Cursor roll out their fixes, I’ll be keeping an eye on my own chats with Gemini. If it starts sounding like it needs a pep talk, I’ll know I’ve stumbled into the bug. And honestly? If it happens again, I might just let it run for a while purely for entertainment value.

Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.
