ChatGPT to Gemini: How much energy do your AI queries cost?

Updated on 22-Aug-2025
HIGHLIGHTS

AI chatbot queries consume a surprising amount of energy, with wide efficiency gaps between platforms

ChatGPT, Gemini, and Claude show efficiency gains, but Grok lags well behind on energy use

Text prompts use little energy, but multimodal AI queries demand far more

In the race to build smarter, faster, and more capable AI, another competition is playing out quietly in the background: energy efficiency. Each chatbot query may feel weightless when you type it out, but behind the screen, servers are buzzing, GPUs are firing, and datacenters are drawing power. And as AI becomes a daily tool for millions, those watt-hours add up to something much bigger than a single conversation.

A new wave of research and disclosures in 2025 has finally shed light on just how much electricity the world’s leading chatbots consume per query, and the differences are stark.


The numbers behind the prompts

According to the latest estimates:

  • ChatGPT (GPT-4o): ~0.3–0.34 Wh per query
  • Gemini (Google): ~0.24 Wh per query
  • Claude (Anthropic): ~0.3–1.7 Wh per query
  • Perplexity AI: ~0.3 Wh per query (and higher for longer prompts)
  • Grok (xAI): ~2–3 Wh per query

For context, a Google search consumes about 0.3 Wh, while streaming a minute of video on YouTube can use 1–3 Wh. That means most advanced chatbots are now on par with or only slightly above a search query, a remarkable leap in efficiency compared to just two years ago, when estimates for ChatGPT queries were as high as 3 Wh each.

But there’s a catch: these numbers generally reflect short, text-only prompts. Once you shift into image generation, video synthesis, or other multimodal tasks, energy usage spikes dramatically, sometimes into the tens of watt-hours for a single query.


Why the gap exists

The sharp differences reflect not just the size of models, but also infrastructure, hardware optimization, and energy strategy.

  • Google’s Gemini stands out as the most efficient at ~0.24 Wh/query. Google has claimed this figure includes full datacenter overhead, not just compute, making it one of the most transparent disclosures in the industry.
  • OpenAI’s ChatGPT has narrowed its footprint to ~0.3 Wh/query on GPT-4o, thanks to custom inference hardware and software optimizations.
  • Anthropic’s Claude doesn’t publish its own figures, but comparisons to open-source large models suggest a range of 0.3 to 1.7 Wh depending on prompt size.
  • Perplexity AI, which often serves as a search-plus-chat hybrid, tracks close to ChatGPT but sees rising consumption for long, multi-modal queries.
  • xAI’s Grok is the outlier. Estimates put its per-query energy burn at 2–3 Wh, nearly an order of magnitude higher than Gemini. Reports indicate Grok’s datacenters are less optimized and in some cases rely on methane gas generators, raising environmental concerns.

The bigger picture

A single sub-watt-hour text query may not seem like much, but scale it up and the stakes become clear. Billions of daily prompts translate into megawatt-hours of electricity use. For instance, at ~2–3 Wh per query, one million queries to Grok would consume roughly 2–3 MWh, on the order of several months of electricity use for a typical U.S. household.

And that’s before factoring in the heavy-duty side of AI: image generation, code generation, or video outputs can multiply energy use by 10, 20, or even 100 times compared to a simple text exchange. The “per query” figures most companies share are therefore only part of the story.
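The scaling arithmetic above can be sketched in a few lines of Python. The per-query figures are midpoints of the estimates quoted earlier in this article; the ~10,500 kWh/year household baseline is an assumption based on typical U.S. averages, not a figure from any company's disclosure:

```python
# Back-of-the-envelope energy totals for AI chatbot queries.
# Per-query values are midpoints of the estimates cited in the article.
WH_PER_QUERY = {
    "ChatGPT (GPT-4o)": 0.32,
    "Gemini": 0.24,
    "Claude": 1.0,
    "Perplexity": 0.3,
    "Grok": 2.5,
}

# Assumed typical U.S. household annual electricity use (kWh) -- an
# illustrative baseline, not sourced from the article.
HOUSEHOLD_KWH_PER_YEAR = 10_500

def megawatt_hours(queries: int, wh_per_query: float) -> float:
    """Total energy in MWh for a given number of queries."""
    return queries * wh_per_query / 1_000_000

for name, wh in WH_PER_QUERY.items():
    mwh = megawatt_hours(1_000_000, wh)
    household_share = (mwh * 1_000) / HOUSEHOLD_KWH_PER_YEAR
    print(f"{name}: 1M queries ≈ {mwh:.2f} MWh "
          f"(~{household_share:.0%} of a household's annual use)")
```

Under these assumptions, a million Grok queries land around 2.5 MWh, roughly a quarter of the household baseline, while a million Gemini queries come in an order of magnitude lower.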

The latest figures suggest a trend: the leading chatbots are converging around 0.3 Wh per text query, a massive improvement from just a couple of years ago. But the outliers show the risks: not every AI company is building with efficiency at the forefront.

As users, we may not feel the watt-hours behind our questions. But collectively, the power draw of generative AI is shaping climate debates, infrastructure investments, and even regulatory conversations.

The question now is simple: can AI’s energy appetite keep shrinking, even as its capabilities keep expanding into image, video, and beyond?

Because from ChatGPT to Gemini, Claude to Grok, every prompt we type or picture we generate is part of a much bigger power equation.


Vyom Ramani

A journalist with a soft spot for tech, games, and things that go beep. While waiting for a delayed metro or rebooting his brain, you’ll find him solving Rubik’s Cubes, bingeing F1, or hunting for the next great snack.
