Bengaluru-based AI startup Sarvam AI has introduced two new large language models: Sarvam-30B and Sarvam-105B. The announcement was made at the India AI Impact Summit 2026, where the company showcased its AI ambitions, including the Kaze smartglasses.
These foundation models are large neural networks trained on vast datasets to understand and generate text across a wide range of tasks. Once developed, they can be customised for applications such as coding assistance, research, translation, enterprise analytics, and conversational AI. The startup aims to offer Indian enterprises more control over data, lower dependency on global APIs, and stronger support for regional languages.
Sarvam-30B, the smaller of the two, has 30 billion parameters and a 32,000-token context window. Trained on 16 trillion tokens, it is intended to strike a balance between performance and efficiency, producing competitive results on reasoning, coding, and problem-solving benchmarks. At the summit, the company demonstrated Vikram, a multilingual chatbot powered by the model. The bot interacted in languages such as Hindi, Punjabi, and Marathi, and was shown working even on feature phones.
Sarvam-105B, the larger model with 105 billion parameters and a 128,000-token context window, is built for more demanding enterprise workloads. During a live demonstration, it analysed a company’s balance sheet in real time, responding to detailed financial queries and contextual prompts. The model is intended for applications that require processing large volumes of structured and unstructured data, including analytics, long-document summarisation, and advanced coding tasks.
Notably, this marks the first time an Indian startup has built large-scale AI models that compete with global systems on tasks such as reasoning, coding, and data analysis while supporting multiple Indian languages.