
OpenAI recently released GPT-5-Codex-Mini, a lightweight coding model aimed at developers. It is the next significant iteration after September's GPT-5-Codex and further lowers the barrier to entry for AI-assisted coding. By reworking the cost structure while keeping performance high, the new model gives developers more flexible options and marks a meaningful step toward the democratization of AI-assisted development tools.
As a "lightweight" version of GPT-5-Codex, the Mini version strikes a balance between performance and resource consumption. Tests show that it scored 71.3% in the SWE-bench Verified benchmark, slightly lower than the original version's 74.5%, but its call limit is increased to four times that of the original, making it particularly suitable for low to medium complexity tasks. OpenAI specifically recommends that when a user's call volume reaches 90% of their quota, the system will automatically recommend switching to the Mini version to ensure project continuity. Currently, the model supports CLI and IDE extensions, and the API interface will be available soon.
The release goes beyond a new model: OpenAI has also optimized the underlying infrastructure to better absorb traffic fluctuations, noticeably improving stability during peak hours. At the same time, usage limits for ChatGPT Plus, Business, and Edu users have been raised by 50%, while Pro and Enterprise users receive priority resource allocation. Combined with the Mini version's low cost, these improvements make the offering more attractive to development teams of all sizes.
From code generation to refactoring and testing, the GPT-5-Codex series is gradually covering the full software engineering workflow. The Mini version broadens the range of viable use cases, and its efficiency gains help sustain the growth of the AI coding ecosystem. For developers weighing efficiency against cost, this release should accelerate the adoption of AI tools in day-to-day development.