Sam Altman, CEO of OpenAI
OpenAI, the maker of ChatGPT, has released GPT-OSS, its first open-weight AI model in more than six years. The company had long avoided releasing such models, citing safety concerns. GPT-OSS is available in two versions: a 120-billion-parameter model that can run on a single Nvidia GPU, and a lighter 20-billion-parameter variant optimised for systems with as little as 16GB of memory.
The company stated that the larger model performs similarly to its proprietary o4-mini model, while the smaller model matches the performance of o3-mini. Both models are released under the permissive Apache 2.0 license through platforms such as Hugging Face, Azure, AWS, and Databricks, making them freely available for commercial use and customisation.
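For readers who want to try the release, the following is a minimal sketch of loading the smaller variant from Hugging Face. It assumes the weights are published under the repository identifier openai/gpt-oss-20b and work with the standard transformers text-generation pipeline; the identifier and generation settings shown here are assumptions for illustration, not an official quickstart.

```python
# Minimal sketch: load the 20B GPT-OSS variant from Hugging Face.
# Assumes the checkpoint is published as "openai/gpt-oss-20b" and is
# compatible with the standard transformers text-generation pipeline.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",  # assumed repository id for the 20B variant
    torch_dtype="auto",          # let transformers pick a memory-friendly dtype
    device_map="auto",           # place model layers on the available GPU(s)
)

messages = [
    {"role": "user", "content": "Summarise the Apache 2.0 license in two sentences."},
]

result = pipe(messages, max_new_tokens=200)
print(result[0]["generated_text"])
```

Because the weights are downloadable rather than API-only, the same checkpoint can be fine-tuned or quantised locally, which is the main practical difference from OpenAI's closed models.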
With this, OpenAI positions itself to compete with the growing ecosystem of open-source models, including Meta’s Llama, DeepSeek, and Google’s Gemma. Both models can reason, code, browse the web, and run agents using OpenAI’s APIs. The company has not disclosed the training data, but says these are its most thoroughly tested models yet. It also stated that external safety firms were brought in to audit the models for potential abuse in areas such as cybersecurity and biohazards. GPT-OSS also includes visible “chain-of-thought” reasoning, which is intended to help users trace how conclusions are reached and identify potential issues.
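Since the article notes that the models can be driven through OpenAI-style APIs, here is a hypothetical sketch of querying a self-hosted GPT-OSS model through an OpenAI-compatible Chat Completions endpoint. The base URL, port, and model name are placeholders; any local inference server exposing that API shape would work in the same way.

```python
# Hypothetical sketch: query a locally hosted GPT-OSS model through an
# OpenAI-compatible endpoint. The base URL, port, and model name below are
# placeholders, not values published by OpenAI.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # local inference server (assumption)
    api_key="not-needed-for-local-use",    # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="gpt-oss-120b",  # placeholder name registered with the local server
    messages=[
        {"role": "user", "content": "Explain chain-of-thought reasoning in one paragraph."},
    ],
)

print(response.choices[0].message.content)
```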
The company has not published detailed performance benchmarks against competing models, but claims that both GPT-OSS versions perform competitively on tasks such as code generation and reasoning tests.
Furthermore, OpenAI has not set a timeline for future GPT-OSS updates; rather, the company views this release as a starting point for developers and businesses seeking greater control over how their data is used.