AI is making you worse at thinking: Wharton study rings serious alarm bells

I had just started college when ChatGPT started blowing up, and I used my fair share of it to mindlessly get through assignments. Essay outlines, discussion posts, summarising readings I didn’t want to do: it felt like a cheat code. It was efficient, it seemed harmless, and everyone was doing it.

It took me an embarrassingly long time to realise I was outsourcing not just the work, but the thinking itself. Today, I found out that researchers at the Wharton School of the University of Pennsylvania have a name for it: cognitive surrender. Once you understand what it means, it’s hard to look at your AI habits the same way again.


Cognitive surrender in damaging numbers

Across three experiments with 1,372 participants and nearly 10,000 trials, the team at Wharton tested how people reason with and without AI assistance. Without AI, participants answered correctly about 46% of the time. With an accurate LLM, accuracy jumped to 71%. So far, so useful. 

But when the AI gave a wrong answer, confidently and without explanation, accuracy dropped to just 31.5%, worse than using no AI at all, and people still followed the incorrect output nearly 80% of the time. They didn’t question it. They just went with it. The irony is hard to miss: a tool that was supposed to make us better at our jobs can leave us markedly worse.

What makes this cognitive surrender worse is that access to AI boosted confidence by nearly 12 percentage points, even when the answers were wrong. People became less accurate and more sure of themselves at the same time, a pattern that echoes what psychologists call the Dunning-Kruger effect, where confidence outstrips competence.


Why your brain does this

The Wharton study leans on an interesting cognition framework called tri-system theory, and it’s worth a look. In short, when AI delivers an answer quickly and confidently, your brain doesn’t feel the need to engage in slower, more effortful analysis. And crucially, cognitive surrender doesn’t feel like outsourcing. The AI’s conclusion gets quietly filed under “things I figured out myself”. It feels like yours.

In the study, when people were rewarded for correct answers and told immediately when they got something wrong, they started questioning the AI more. But they still followed bad AI answers more often than not.

The most resistant participants in the Wharton study scored higher on fluid intelligence and on enjoyment of effortful thinking. These are traits that aren’t quickly trainable, which suggests this is less about carelessness and more about the fundamental way our brains are wired to think.

In all of this, what the research suggests is that the problem isn’t just human behaviour. It’s that most AI interfaces are built to feel certain even when they aren’t.


Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.
