AI thought a Doritos bag was a gun. What’s worse? We believed it! 

A 16-year-old boy in Baltimore ended up handcuffed on the ground, surrounded by police with drawn weapons. Because of an empty Doritos bag. Or rather, because an AI decided that bag of chips was a gun!

The boy, named Taki Allen, was waiting outside his school for a ride home, minding his own business. That’s when an AI surveillance system flagged a “weapon threat.” Within minutes, officers arrived.

Eight patrol cars, officers with weapons drawn, shouting commands at a terrified teenager who had no clue what was happening. 

Let that sink in for a second: a literal bag of chips almost got a teenager shot.

Welcome to the golden age of artificial “intelligence,” where the machines that were supposed to make us smarter are instead giving us some of the dumbest moments in modern tech history.

Ironically, this happened in the same week that OpenAI – the torchbearer of the AI revolution – introduced its AI-powered browser, ChatGPT Atlas. And it brings me back to the same point: AI is getting out of hand, and somehow no one seems to be paying enough attention to it.

AI hallucination has become the industry’s worst-kept secret. Ask ChatGPT for sources, and it might invent research papers that don’t exist. Use Midjourney, and it could render six-fingered hands like it’s some cosmic inside joke. Plug a facial recognition algorithm into a city’s CCTV network, and suddenly, an innocent person becomes a “suspect.”

It’s inevitable. AI doesn’t “see” or “think” the way we do. It’s just a pattern machine, connecting dots in datasets, often without any real understanding of context. So when that surveillance AI in Baltimore mistook a chip bag for a gun, it wasn’t being evil. It was being stupid. The problem is, we keep putting that stupidity in charge of things that matter.

This wasn’t the first time an AI hallucination had real-world fallout.

In 2020, Detroit police wrongfully arrested a man named Robert Williams after an AI facial recognition system said his face matched a shoplifter’s. Spoiler: it didn’t. He spent 30 hours in custody before cops realised the machine was wrong.

A few years back, Tesla’s Autopilot failed to spot a white truck against a bright sky, leading to a deadly crash. And more recently, several “smart” image models were caught labeling darker-skinned people as “animals.”

It’s not a glitch. It’s how these systems work. They’re trained on messy, biased data. They “learn” from the internet, which has unfortunately become humanity’s least reliable dataset.

The real danger isn’t AI itself. It’s how much we trust it. 

Humans hesitate, doubt, double-check. Machines? Never. They answer with unshakeable confidence, even when they’re dead wrong. And because that confidence sounds like competence, we believe them.

Police get an alert from an AI system, and instead of verifying, they mobilise. A lawyer uses ChatGPT to write a case brief and it cites fake legal precedents. A student runs an AI detector on their essay and it falsely flags them for cheating.

AI doesn’t think. It guesses. And sometimes those guesses have real-world consequences.

The scary part? These hallucinations are baked into the system. They’re not bugs. This means no matter how much these models are fine-tuned, there will always be a certain level of nonsense. 

When it happens in healthcare, finance or law enforcement, it’s dangerous. Take medical AI systems that have misread scans, flagging healthy patients as sick. Or self-driving cars that slam the brakes for shadows.

This blind faith in automation is becoming our default setting. We’ve confused “machine-generated” with “objective.” AI’s tone is what makes it so dangerous. It doesn’t hedge. It doesn’t say, “I might be wrong.” It speaks with certainty.

That certainty makes it useful for small stuff like summaries, scripts and art ideas, but catastrophic for decisions that impact real lives.

The Baltimore incident wasn’t the first such case, and it surely won’t be the last. There will be more as organisations keep pushing AI into our lives. A machine mistaking a Doritos bag for a gun isn’t just a story about bad image recognition. It’s a warning about what happens when we hand over authority to systems that don’t know the difference between a weapon and a snack.

How do we stop that? I don’t know. I don’t think anyone has an answer at this point. More nonsense will come our way before rules are set, guidelines are made and a framework is put in place.

Till then, let’s just hope the next AI mistake doesn’t end with a kid in cuffs, or worse.

Manas Tiwari

Manas has spent a decade in media, juggling Broadcast, Online, Radio and Print journalism. Currently, he leads the Technology coverage across Times Now Tech and Digit for the Times Network. He has previously worked with India Today (where he launched Fiiber for the group), Zee Business and Financial Express. He spends his week following the latest tech trends, policy changes and exploring gadgets. On other days, you can find him watching the Premier League and Formula 1.
