We built them to be cold, calculating, and immune to the subjective flaws of the biological mind. Yet, when shown a specific arrangement of static, colored circles known as the “Rotating Snakes,” advanced AI models do something unexpectedly human: they get dizzy.
For years, neuroscientists and computer engineers believed that optical illusions were “bugs” in the human operating system – evolutionary shortcuts that left us vulnerable to visual trickery. But as detailed in a fascinating new report by BBC Future, recent research suggests these errors are not biological flaws. They are mathematical inevitabilities of any system, biological or artificial, that tries to predict the future.
The phenomenon was first flagged when researchers at Japan’s National Institute for Basic Biology, led by Eiji Watanabe, trained deep neural networks (DNNs) to predict movement in video sequences. The AI was shown thousands of hours of footage – propellers spinning, cars moving, balls rolling – until it learned the physics of motion.
Then, they showed it the Rotating Snakes illusion. To a standard camera, this image is a static grid of pixels. There is zero motion energy. But when the researchers analyzed the AI’s internal vectors, they found the system was “hallucinating” rotation. The AI didn’t just classify the image; it predicted that the wheels were turning clockwise or counter-clockwise, matching human perception almost perfectly.
Why would a supercomputer fall for a parlor trick? The answer lies in a theory called Predictive Coding. The human brain is not a passive receiver of information. If it waited to process every photon hitting the retina, you would be reacting to the world on a massive delay. Instead, the brain is a “prediction machine.” It constantly guesses what visual input will come next based on memory and context, only updating its model when there is a significant error.
The illusion is not a failure of vision. It is the signature of a system optimizing for speed. The AI models that succumb to these illusions are built on this same architecture. They are designed to minimize the “prediction error” (E) between the predicted frame and the actual next frame.
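That training objective is simple to sketch. The toy PyTorch loop below is a minimal illustration under stated assumptions, not the model from the study: a small convolutional network guesses the next frame from the current one, and the only learning signal is the mismatch, E, between its guess and the frame that actually arrives. The network layers and the random “footage” tensors are placeholders for illustration.

```python
# Minimal sketch of a next-frame prediction objective (not the study's architecture).
import torch
import torch.nn as nn

# Toy predictor: maps the current RGB frame to a guess at the next one.
predictor = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 3, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)

# Placeholder "footage": a batch of (current frame, next frame) pairs.
current_frames = torch.rand(8, 3, 64, 64)
next_frames = torch.rand(8, 3, 64, 64)

for step in range(100):
    predicted_next = predictor(current_frames)
    # E: the prediction error between the guessed frame and the frame that actually arrives.
    prediction_error = nn.functional.mse_loss(predicted_next, next_frames)
    optimizer.zero_grad()
    prediction_error.backward()
    optimizer.step()
```

Scale that loop up to real video and deeper networks and you get, in spirit, the kind of next-frame predictor the researchers probed.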
When the AI sees the high-contrast shading of the “Rotating Snakes,” its learned parameters interpret the pattern as a motion cue. It predicts that the next frame should be shifted. Because the image remains static, that expectation is never confirmed or corrected, so the prediction fires again on the next pass. The result is a continuous loop of anticipated motion, an artificial hallucination born from the drive to be efficient.
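One way to make that loop visible, sketched below assuming a trained next-frame predictor like the toy one above, is to hand the model a genuinely static image and measure the motion in its prediction. The helper function, its name, and the use of OpenCV’s Farneback optical flow as the motion readout are illustrative assumptions rather than the study’s analysis pipeline: if the model were simply copying its input, the measured flow would be near zero, while a consistent swirl around the center is the hallucinated rotation.

```python
# Rough probe: does the model's predicted "next" frame contain motion the pixels don't?
import numpy as np
import cv2
import torch

def hallucinated_rotation(predictor, static_image):
    """static_image: HxWx3 uint8 RGB array of the illusion. Returns mean tangential flow."""
    frame = torch.from_numpy(static_image).permute(2, 0, 1).float().unsqueeze(0) / 255.0
    with torch.no_grad():
        predicted = predictor(frame)
    predicted_img = (predicted.squeeze(0).permute(1, 2, 0).clamp(0, 1).numpy() * 255).astype(np.uint8)

    # Optical flow between the static input and the model's predicted next frame.
    gray_in = cv2.cvtColor(static_image, cv2.COLOR_RGB2GRAY)
    gray_pred = cv2.cvtColor(predicted_img, cv2.COLOR_RGB2GRAY)
    flow = cv2.calcOpticalFlowFarneback(gray_in, gray_pred, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Tangential component of the flow around the image center:
    # its sign distinguishes the two spin directions, and ~0 means no hallucinated spin.
    h, w = gray_in.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dx, dy = xs - w / 2.0, ys - h / 2.0
    radius = np.sqrt(dx**2 + dy**2) + 1e-6
    tangential = (flow[..., 0] * (-dy) + flow[..., 1] * dx) / radius
    return float(tangential.mean())
```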
The fact that AI succumbs to optical illusions is arguably one of the strongest validations of modern AI architecture. It suggests that as neural networks become more capable of navigating the real world, they are converging on the same solution nature found millions of years ago: predictive processing.
We often fear AI hallucinations as a sign of unreliability. But in the realm of vision, these hallucinations are proof that the machine is not just recording data; it is trying to understand it. It is learning to see, and in doing so, it is learning to be tricked.