The future of AI won’t be decided by bigger models, but by the developers stitching micro-agents into autonomous systems. By mid-2025, the AI world seemed to be struggling for vocabulary. “Copilot.” “Chatbot.” “Assistant.” All too narrow.
And right there, where developer experience makes or breaks the enterprise AI use case, a new software architecture is quietly emerging. Not around prompts or bots, but around agentic systems. Tiny, specialized AI units that reason, act, and make decisions on governed data – like an orchestra of digital minds playing a symphony of enterprise logic.
At least that’s the shift Jeff Hollan, Head of Cortex AI Agents at Snowflake, has been preaching. “Traditional copilots are reactive tools,” Hollan says. “Enterprise agents operate on governed data and know the ‘why,’ not just the ‘what.’” Beneath that line is a deeper reordering that points to new developer skills, new boundaries between ML and backend, and a new set of rules for building intelligence at scale.
If 2023 and 2024 were about the “omni-agent” dream – one LLM to rule them all – 2025 brought a reality check: real-world systems are too fragmented, too rule-bound, too weird for a single model to handle.
When an agent hallucinates an HR policy or fumbles a compliance call, trust goes down the drain. The fix, according to Hollan, is micro-agents: small, specialized units that each handle one job well, much as microservices did for software a decade ago.
“A micro-agent is a lightweight, specialized intelligence unit,” Hollan explains. “Enterprises will orchestrate them into autonomous decision loops – agents that plan, act, and self-evaluate.” Instead of one super-agent, think: a payroll updater, a CRM summarizer, a supply-chain forecaster. Narrow, auditable, swappable – and when networked, powerful.
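Hollan doesn’t spell out an implementation, but the pattern is easy to sketch. Below is a minimal, hypothetical Python sketch of that plan-act-self-evaluate loop over narrow agents; every name in it is invented for illustration and is not a Snowflake API.

```python
# A minimal, hypothetical sketch of a plan-act-evaluate loop over narrow micro-agents.
# All names are invented for illustration; none of this reflects Snowflake's actual APIs.
from dataclasses import dataclass
from typing import Callable


@dataclass
class MicroAgent:
    """One narrow, auditable, swappable unit of work."""
    name: str
    run: Callable[[str], str]       # does exactly one job
    check: Callable[[str], bool]    # self-evaluates its own output


# Two toy agents standing in for a "payroll updater" and a "CRM summarizer".
payroll_agent = MicroAgent(
    name="payroll_updater",
    run=lambda task: f"payroll adjusted for: {task}",
    check=lambda out: out.startswith("payroll"),
)
crm_agent = MicroAgent(
    name="crm_summarizer",
    run=lambda task: f"summary of CRM notes on: {task}",
    check=lambda out: "summary" in out,
)

REGISTRY = {a.name: a for a in (payroll_agent, crm_agent)}


def orchestrate(plan: list[tuple[str, str]]) -> list[str]:
    """Plan -> act -> self-evaluate; each step is narrow and auditable."""
    results = []
    for agent_name, task in plan:                  # plan
        agent = REGISTRY[agent_name]
        output = agent.run(task)                   # act
        if not agent.check(output):                # self-evaluate
            raise RuntimeError(f"{agent_name} failed its own check")
        results.append(output)
    return results


print(orchestrate([("crm_summarizer", "Q3 renewals"),
                   ("payroll_updater", "new hire batch")]))
```

Swap one agent out and the rest of the loop doesn’t notice – which is the whole point of the microservices analogy.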
This shift is reshaping the developer role as well, Hollan notes. Developers are no longer writing one function at a time; they are designing behaviours. Coordinating semi-autonomous entities. Managing cognitive load, not just compute.
“You want builders to have the right levels of control – access rules, agent behaviours for critical workloads, or choosing how and where agents are called,” says Hollan. Less code, more choreography. Less software engineering, more systems psychology.
Developers aren’t just building tools anymore; they’re setting the physical laws within which intelligent agents operate.
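What might those “physical laws” look like in practice? One plausible shape, sketched below with entirely hypothetical field names, is a declarative policy that gets checked before an agent touches data or is invoked at all.

```python
# Hedged sketch: agent boundaries expressed as declarative policy and enforced up front.
# The schema and field names are hypothetical, not a real Snowflake or vendor format.
AGENT_POLICY = {
    "payroll_updater": {
        "allowed_tables": ["hr.payroll", "hr.employees"],
        "may_write": True,
        "callable_from": ["hr_workflow"],          # where the agent may be invoked
    },
    "crm_summarizer": {
        "allowed_tables": ["sales.crm_notes"],
        "may_write": False,
        "callable_from": ["sales_dashboard", "exec_briefing"],
    },
}


def enforce(agent: str, table: str, writes: bool, caller: str) -> None:
    """Refuse any action that falls outside the agent's declared boundaries."""
    rules = AGENT_POLICY[agent]
    if table not in rules["allowed_tables"]:
        raise PermissionError(f"{agent} may not touch {table}")
    if writes and not rules["may_write"]:
        raise PermissionError(f"{agent} is read-only")
    if caller not in rules["callable_from"]:
        raise PermissionError(f"{agent} cannot be called from {caller}")


enforce("crm_summarizer", "sales.crm_notes", writes=False, caller="sales_dashboard")
```

The interesting work isn’t inside the agent; it’s in deciding what the policy table allows.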
Here’s one of Hollan’s bigger insights: the most important person shaping an agent’s behaviour isn’t the ML researcher anymore – it’s the data engineer.
“AI models are only as good as the data powering them,” Hollan says. “Data engineers move into a more strategic role, because their insights shape business decisions.”
And that points to another front quietly opening up: the user interface. For all our obsession with model speed, the real bottleneck is human trust. What can an agent access? How do I know it’s right? How do I fix it when it’s not?
Developers are now taking cues from consumer AI: semantic layers, explainability, audit trails. But with governance built in. “We wanted any employee to explore data securely without writing a line of code,” says Hollan. “Every response respects access controls and governance.” UX was once a soft problem. Now, it’s a strategic moat.
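Hollan doesn’t describe how Cortex enforces this, but the idea that every response respects access controls and leaves an audit trail fits in a few lines. The function below is a hypothetical illustration of that shape, not Snowflake’s mechanism.

```python
# Hypothetical sketch: every agent answer is permission-checked and logged,
# so a user (or auditor) can later see who asked what and what was touched.
import datetime
import json

AUDIT_LOG = []


def governed_answer(user: str, question: str, allowed_sources: set[str],
                    source: str, answer: str) -> dict:
    """Return the answer only if the user may see the source, and log either way."""
    permitted = source in allowed_sources
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "question": question,
        "source": source,
        "permitted": permitted,
    })
    return {"answer": answer if permitted else "access denied", "source": source}


print(governed_answer("maya", "What was Q3 churn?", {"sales.metrics"},
                      "sales.metrics", "4.2%"))
print(json.dumps(AUDIT_LOG[-1], indent=2))
```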
Ask Hollan about GenAI’s next chapter and he doesn’t bother guessing model sizes. He’s looking elsewhere.
“Performance is consistently strong across frontier models,” he notes. “By 2026, the real advantage won’t come from the model you use, but from the data you own and how effectively you can reason over it.”
Enter the data flywheel: better data leads to smarter agents, which in turn make better decisions, producing better data. A virtuous cycle that’s less about hype and more about control – and ultimately, harder to copy.
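As a toy illustration only (nothing here reflects a real Snowflake pipeline), the flywheel is just a feedback loop: an agent decides from the data it has, the outcome is written back as new governed data, and the next decision starts from a slightly richer picture.

```python
# Toy data-flywheel loop: decisions feed observations back into the data the agent
# reasons over. Entirely hypothetical; a real system would close this loop through
# a governed warehouse rather than an in-memory list.
data = [{"segment": "smb", "offer": "discount", "converted": True}]   # seed data


def decide(records: list[dict]) -> str:
    """Smarter-agent step: pick the offer with the best observed conversion rate."""
    tally: dict[str, list[int]] = {}
    for r in records:
        tally.setdefault(r["offer"], []).append(1 if r["converted"] else 0)
    return max(tally, key=lambda o: sum(tally[o]) / len(tally[o]))


def act_and_observe(offer: str) -> dict:
    """Better-decision step: run the offer and record the outcome as new data."""
    converted = offer == "discount"            # stand-in for the real-world outcome
    return {"segment": "smb", "offer": offer, "converted": converted}


for _ in range(3):                             # better data -> smarter agent -> better data
    data.append(act_and_observe(decide(data)))

print(f"After 3 turns of the flywheel: {len(data)} records, preferred offer = {decide(data)}")
```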
It’s a shift that feels less like a technology upgrade and more like a re-platforming of how business software works. The bigger takeaway: the most powerful AI systems of 2026 won’t belong to the companies with the biggest models, but to the teams with the best developers.