DeepSeek to Nano Banana: Top AI highlights of 2025
DeepSeek triggers price shock, making Chinese open weights mainstream
Gemini’s revival goes full-throttle, giving OpenAI a code red
Agentic assistants and rapid model drops redefine 2026’s starting line
As we bid 2025 goodbye, it’s only right to take a look back and wrap our heads around the year that was in AI. Suffice it to say, a lot has happened in an industry that’s moving at lightning speed, with warranted buzz and unwarranted hype refusing to ebb.
If 2023 was all about ChatGPT, and 2024 was about others catching up, then 2025 was the year the AI race turned properly global, properly multimodal, and properly commercial.
Here are the five moments that defined the year.
1) DeepSeek’s shockwave and China’s open-weight takeover
DeepSeek didn’t just ship strong models; it shipped a new price-performance reality. Its DeepSeek-V3 is a Mixture-of-Experts heavyweight (671B total parameters, ~37B active per token) built for efficiency, with hardware-aware ideas like Multi-head Latent Attention and the DeepSeekMoE stack.
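To make that “671B total, ~37B active” point concrete, here is a minimal, illustrative Mixture-of-Experts sketch in PyTorch: a router picks the top two of eight toy experts for each token, so only a fraction of the layer’s weights do any work per token. This is not DeepSeek’s code; every name, size, and layer choice here is invented purely for illustration.

import torch
import torch.nn as nn

class TinyMoE(nn.Module):
    """Toy Mixture-of-Experts layer (illustrative only, not DeepSeek's implementation).
    A router scores the experts for each token and only the top-k experts run,
    so most of the layer's parameters stay idle on any given token."""
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)  # per-token expert scores
        self.experts = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_experts)])
        self.top_k = top_k

    def forward(self, x):  # x: (num_tokens, dim)
        probs = self.router(x).softmax(dim=-1)          # (num_tokens, num_experts)
        weights, idx = probs.topk(self.top_k, dim=-1)   # keep only the top-k experts per token
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                 # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(4, 64)          # four fake "tokens"
print(TinyMoE()(tokens).shape)       # torch.Size([4, 64]); only 2 of 8 experts ran per token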
Then came the cadence: R1 (January 20), V3 (March 25), and later V3.2 (December 1), plus aggressive API price moves in between – including a 50%+ cut announced in late September. In fact, DeepSeek R1 was pretty much all the talk in the world of tech in January 2025 – eclipsing even Donald Trump’s presidential inauguration.

The bigger story coming out of China went beyond DeepSeek, though. It was all about open weights. Open-weight models went from a Meta-led subplot to a China-led main act. By year-end, “best open models” increasingly meant Alibaba’s Qwen family, DeepSeek, Moonshot/Kimi, Zhipu, MiniMax – names that would’ve sounded niche to most readers two years ago. That shift matters because it changes who sets the default economics of building with AI.
2) Google’s Gemini comeback worries Sam Altman
Google spent much of 2024 getting memed for missing critical AI moments. In 2025, it came back with a vengeance. Gemini 2.5 Pro landed in March with a very clear message: we can reason now. Then came the autumn sprint. Gemini 3 arrived on November 18 as a full-stack rollout – Search, the Gemini app, developer tooling – framed as a “thought partner,” not a chatbot tab.
By mid-December, Google doubled down with Gemini 3 Flash, tuned for speed and cost-sensitive workloads, and pushed it into user-facing surfaces where latency actually matters. For the first time, it felt like Google was edging past ChatGPT in everyday speed and polish.

It wasn’t just text. Google’s generative media stack grew teeth too: Veo 3.1 (with native audio) and its faster variant, plus Nano Banana Pro for cleaner, more controllable image generation and editing. Imagen 4, meanwhile, kept showing up as production image infrastructure across the Gemini API.
And yes, somewhere in this stretch, Sam Altman likely lost a little sleep: Reuters reported a “code red” inside OpenAI as competition from Gemini 3 heated up, right before GPT-5.2 shipped. That’s the vibe shift: it’s no longer “Google vs. OpenAI.” It’s “Google has decided it will not be second.”
3) Agentic AI transforms from pitch to product
Late-2025 models didn’t just answer better; they were marketed as doers. OpenAI positioned GPT-5.2 around professional work and long-running agents. Anthropic’s Opus 4.5 leaned hard into “coding, agents, and computer use.”
xAI followed with Grok 4.1 and an agent tools API that reads like a blueprint for automated workflows. The practical result: orchestration, memory, tool calling, and multi-step execution became table stakes.
4) AI-native operating systems – assistants everywhere
The OS layer finally stopped treating AI as a separate app. Gemini’s integration into Search and Google’s core surfaces shows the new strategy: make the assistant the interface.
Even seemingly small UX changes – like making an assistant overlay persist while you multitask – signal the direction: AI is becoming the default navigation layer for doing work on-device.
5) The frontier AI cluster drop
If you blinked between November 17 and December 11, you missed what felt like an entire era. Grok 4.1 (Nov 17), Gemini 3 (Nov 18), Claude Opus 4.5 (Nov 24), and GPT-5.2 (Dec 11) landed in a rapid-fire sequence that turned “model launches” into EPL match fixtures.
What’s fascinating is not just the speed – it’s the specialization. Gemini leans into multimodality and product integration; Claude owns coding and agent reliability; Grok sells conversational velocity; GPT-5.2 pitches itself as the professional workhorse.
If 2025 proved anything, it’s that there won’t be one model to rule them all. There will be a stack. And in 2026, the winners won’t just be the labs with the smartest models – they’ll be the companies that turn those models into habits.
Jayesh Shinde
Executive Editor at Digit. Technology journalist since Jan 2008, with stints at Indiatimes.com and PCWorld.in. Enthusiastic dad, reluctant traveler, weekend gamer, LOTR nerd, pseudo bon vivant.