I really enjoy looking at what’s happening on X sometimes, because it shows you a side of how we as humans think and react. For instance, I came across a post where someone had shared an AI-generated painting in the style of Claude Monet and asked X users what made it inferior to a real Monet. The answer, if you ask me, is elementary: had Monet never thought of making such paintings, generating impressionist art of that calibre wouldn’t be possible for even the best AI models. AI art can be described in many ways, but original isn’t one of them, at least not yet.
The internet, for its part, did what it does and absolutely tore the image apart. “No cohesion of elements.” “Looks like high school art.” “It’s garbage.” The brushwork was wrong, the colours felt off, the composition lacked depth. People were so confident and articulate that you’d think they were art majors critiquing the piece. There was just one small catch: the painting wasn’t AI-generated at all. It was an actual Monet – “Water Lilies.”
What’s really interesting here is what this reveals about our thought process. The second someone tells us something is AI-generated, we immediately start looking for flaws. And here’s the thing about looking for flaws: you find them. Every single time, without fail.
This is confirmation bias on full display. We had an interpretation ready, and all we did was search for arguments that supported it. Call the painting AI and the brushstrokes start looking mechanical and lifeless. Call it human and those same strokes become expressive and intentional. The painting didn’t change; we did.
Now, this experiment does not say that AI is at the level of Monet. That would be a silly conclusion to draw. Monet’s work exists because Monet existed – because he stood in his garden at specific hours of specific days, chasing light that wouldn’t hold still, half-blind toward the end of his life, still painting. Any impressionist image a diffusion model generates today is merely a sophisticated echo. You cannot separate the output from the source material, and the source material is centuries of human struggle, obsession, and vision.
The people commenting weren’t engaging with the painting; they were engaging with the label. That is the more uncomfortable thought: we have reached a point where, when we look at art, our first instinct isn’t to appreciate it but to ask whether it is real.
Experiencing art has always been about its narrative. Who made it, why they made it, under what conditions and at what cost – these are all questions that matter. A Monet carries the weight of a biography. An AI image carries the weight of a prompt. When that weight is swapped out through a simple mislabel, our perception follows. We’re not as objective as we think we are, not by a long stretch.
What I keep coming back to is this – if the label shapes the experience this completely, then we need to be far more honest about what we’re actually evaluating when we critique AI art. Are we engaging with the image, or are we reacting to our feelings about the technology behind it? Most of the time, I suspect it’s the latter. The Monet experiment didn’t prove that AI can make great art. It proved that we’ve already made up our minds, and we’ll find the evidence to match.