When OpenAI says that artificial intelligence is moving faster than people think, it isn’t exaggerating. In its 2025 AI Progress and Recommendations report, the company outlines a future that sounds less like science fiction and more like an inevitability – where AI systems may begin making small scientific discoveries by 2026 and major ones by the end of the decade.
What’s striking is not just how confident OpenAI is about this pace, but how measured its recommendations are. The report reads like a call for calm preparedness: a reminder that progress may be unstoppable, but how we steer it still matters.
OpenAI’s researchers believe today’s systems are already “80% of the way to being AI researchers.” That’s a provocative statement, one that reframes how we think about progress. Instead of waiting for a big, cinematic “AGI moment,” OpenAI suggests we’re already deep into it, with capability jumps happening quietly behind the scenes.
The company notes that the cost per unit of intelligence is falling at an astonishing rate, roughly 40 times per year, which means increasingly powerful systems are becoming cheaper to train and run. This compounding effect, it argues, is what’s pushing AI ahead at a pace that feels both thrilling and terrifying.
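To get a feel for how quickly such a rate compounds, here is a rough back-of-the-envelope sketch; the steady 40x annual rate and the baseline cost of 1.0 are illustrative assumptions, not figures from the report beyond the headline rate.

```python
# Rough sketch: how a ~40x yearly drop in the cost per unit of intelligence compounds.
# The constant 40x rate and the baseline cost of 1.0 are illustrative assumptions.

BASELINE_COST = 1.0   # today's cost of one "unit of intelligence" (arbitrary units)
ANNUAL_DROP = 40      # the ~40x per-year decline cited in the report

for year in range(4):
    cost = BASELINE_COST / (ANNUAL_DROP ** year)
    print(f"Year {year}: {cost:.2e} of today's cost")
```

If that rate held, the same capability would cost roughly one 64,000th of today’s price after three years, which is the compounding effect OpenAI is pointing to.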
By OpenAI’s estimate, AI will begin contributing to “small discoveries” – such as optimising experimental designs or uncovering subtle correlations – as early as 2026. Larger, autonomous scientific breakthroughs could follow within a few years.
This transition, the company says, will happen quietly. Day-to-day life might not feel radically different, but under the surface, AI systems will be solving harder and more complex problems. In other words, society’s inertia will mask just how much change is actually taking place.
To keep pace with this acceleration, OpenAI outlines five key recommendations, a kind of governance blueprint for the AI era.
Taken together, they read less like a warning and more like a framework for coexistence. OpenAI doesn’t call for a slowdown; instead, it asks for smarter steering.
The report also makes a subtle, almost philosophical observation: even as AI capabilities leap ahead, our lived experience may remain deceptively stable. Social systems evolve slowly, bureaucracies even slower. By the time society fully registers the transformation, AI may already have reshaped research, policy, and industry in irreversible ways.
That’s what makes OpenAI’s message both pragmatic and haunting. It’s not the speed of progress that’s dangerous – it’s our inability to see it happening in real time.
The AI Progress and Recommendations report isn’t just an update; it’s a declaration that the next stage of AI won’t be about flashy demos, but about silent revolutions. Machines will think more, cost less, and help us discover faster than ever before.
The challenge now, as OpenAI frames it, is to ensure those discoveries serve everyone, before the gap between capability and control becomes too wide to close.