Following Google's Gemini 3.0 and OpenAI's GPT-5, China's AI startup DeepSeek has released two new large language models, DeepSeek-V3.2 and DeepSeek-V3.2 Speciale, in a bid to keep pace with the rapid advances in the AI race. In its announcement blog post, DeepSeek described them as “reasoning-first” systems built with a focus on efficiency and advanced tool-use capabilities.
According to the company, the standard V3.2 model builds on earlier experimental versions, while the Speciale edition is designed to deliver stronger reasoning performance. The company said V3.2 is its first model to integrate reasoning directly into tool interactions and that it is capable of generating large-scale agent-training data spanning more than 1,800 environments and over 85,000 complex instructions.
Both models use the company’s DeepSeek Sparse Attention (DSA) mechanism, which is designed to lower computational costs while preserving performance on longer inputs. The company claims the new V3.2 model matches the performance of OpenAI’s GPT-5, while the Speciale variant is positioned as a direct rival to Google’s Gemini 3.0-Pro. The Speciale model has also reportedly achieved gold-medal-level results at the 2025 International Mathematical Olympiad and the International Olympiad in Informatics, two benchmarks often used to evaluate high-end reasoning systems.
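The announcement does not spell out how DSA works internally, but the general idea behind sparse attention is that each token attends to only a small subset of positions rather than every previous token, which is what cuts the cost on long inputs. The short Python sketch below is an illustrative toy of that selection principle only, not DeepSeek's actual DSA; the function name and the top-k selection rule are assumptions made for demonstration, and this toy still computes the full score matrix, whereas a production sparse-attention kernel would avoid most of that work.

```python
# Toy sketch of top-k sparse attention (illustrative only; not DeepSeek's DSA).
# Each query keeps only its k highest-scoring keys, so the softmax and value
# aggregation effectively involve k << seq_len positions per query.
import numpy as np

def topk_sparse_attention(q, k, v, top_k=8):
    """q, k, v: (seq_len, d) arrays; returns (seq_len, d) attention outputs."""
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # (seq_len, seq_len)
    # Mask everything except the top_k scores per query row.
    drop_idx = np.argpartition(scores, -top_k, axis=-1)[:, :-top_k]
    np.put_along_axis(scores, drop_idx, -np.inf, axis=-1)
    # Softmax over the surviving (sparse) scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                  # (seq_len, d)

rng = np.random.default_rng(0)
seq_len, d = 128, 64
q, k, v = (rng.standard_normal((seq_len, d)) for _ in range(3))
out = topk_sparse_attention(q, k, v, top_k=8)
print(out.shape)  # (128, 64)
```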
DeepSeek drew worldwide attention earlier this year after reports of its models' strong performance caused major US tech stocks to drop. The company has also stood out by offering AI tools at much lower cost than its rivals. It remains to be seen how the new models perform in practice.