When OpenAI launched ChatGPT in late 2022, it didn’t just set off an AI revolution; it created a global arms race. Billions of dollars poured into artificial intelligence startups, each promising to outdo the rest with larger models, faster training, and more powerful infrastructure. But amid the frenzy, one company – Anthropic – has quietly taken a very different route. And if early numbers are any indication, it may just beat OpenAI where it matters most: profitability.
OpenAI’s trajectory is as ambitious as it is expensive. According to The Wall Street Journal, the company expects to post losses of around $74 billion by 2028, largely driven by its massive investments in compute infrastructure and model training. Even with revenue nearing $13 billion in 2025, its burn rate remains high.
This is the cost of scale. OpenAI’s vision isn’t just to make AI tools; it’s to build a foundational platform for the world’s future software ecosystem. From powering Microsoft Copilot to licensing GPT models across industries, its reach is enormous. But so is the bill.
Running the world’s most advanced AI systems means maintaining hundreds of thousands of GPUs, paying for continuous model retraining, and covering energy and data costs that climb with every new generation of models. The company’s infrastructure ambitions, though visionary, may be delaying its path to sustainable, profitable growth.
By contrast, Anthropic, founded by siblings Dario and Daniela Amodei, both former OpenAI executives, has charted a more focused path. The company expects to break even by 2028, according to internal projections shared with investors. That’s four years earlier than many analysts anticipated, and far sooner than OpenAI’s profitability horizon.
Anthropic’s strategy hinges on restraint. Instead of flooding the consumer market with AI chatbots and image generators, it has concentrated on enterprise and developer clients. Its Claude AI assistant, for example, has become a go-to model for businesses prioritizing safety, privacy, and interpretability over viral engagement.
Internal forecasts suggest Anthropic’s cash burn will fall from 70% of revenue in 2025 to just 9% by 2027, even as its revenue grows past $4 billion. That’s not just an operational win; it’s a signal that AI companies can scale responsibly without collapsing under infrastructure debt.
The contrast between the two firms highlights a deeper divide in AI economics. OpenAI’s model is built for mass adoption; its products are everywhere, from classrooms to corporate dashboards. But that ubiquity comes with enormous computational cost.
Anthropic’s model is built for targeted efficiency. Its clients pay for reliability and control, not novelty. The result: lower server loads, fewer free users, and a cleaner revenue stream. In a space where every new model release costs tens of millions of dollars in training and deployment, that difference in focus could decide who survives long-term.
This divergence raises a critical question: what does “winning” mean in AI?
For years, dominance was measured in benchmark scores, model size, and user numbers. But as investors grow wary of indefinite spending, the conversation is shifting toward profitability, sustainability, and business fundamentals.
If Anthropic manages to turn profitable before OpenAI, it could reshape how AI firms are valued, away from “who’s smartest” toward “who’s smartest with their money.”
None of this means OpenAI is losing the race. Its technology remains among the most powerful and most widely commercialised in the industry. But Anthropic’s restraint offers a valuable lesson in maturity, one that even Silicon Valley seems to be rediscovering: innovation without discipline is just expensive ambition.