Thursday, February 5, 2026

OpenAI Unveils GPT-5.3-Codex: The High-Speed Coding Prodigy Powered by NVIDIA GB200

Coding Reimagined: GPT-5.3-Codex Smashes Benchmarks and Introduces Real-Time Collaborative Development.

OpenAI has officially announced the launch of GPT-5.3-Codex, its most advanced AI model specifically engineered for software development. While maintaining the core logic and extensive knowledge base of its predecessor, GPT-5.2-Codex, this new iteration introduces significant breakthroughs in speed, efficiency, and real-time collaboration.

Faster, Leaner, and More Collaborative

The standout feature of GPT-5.3-Codex is its 25% increase in execution speed, achieved through a more efficient tokenization process. Key highlights include:

  • Human-like Interaction: Developers can now provide real-time input and context while the model is in the middle of a task, allowing it to function like a genuine collaborative partner.

  • Self-Evolving Intelligence: OpenAI revealed that the model was refined through Self-Learning (Reinforcement Learning), allowing it to improve its own coding patterns.

  • NVIDIA Synergy: The training and inference optimizations were made possible through a strategic partnership utilizing NVIDIA’s GB200 NVL72 (Blackwell) infrastructure.
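The speed gain attributed to more efficient tokenization can be illustrated with a toy comparison. The two "tokenizers" below are illustrative stand-ins, not OpenAI's actual tokenizer, and the decoding rate is a made-up number: the point is simply that at a fixed tokens-per-second rate, covering the same code in fewer tokens means finishing sooner.

```python
# Illustrative sketch only: how tokenizer granularity affects generation
# time at a fixed decoding rate. Neither tokenizer is OpenAI's.
import re

CODE = "def add(a, b):\n    return a + b\n"

def char_tokens(text: str) -> list[str]:
    """Fine-grained tokenizer: one token per character."""
    return list(text)

def word_tokens(text: str) -> list[str]:
    """Coarser tokenizer: whole identifiers, symbols, and spaces."""
    return re.findall(r"\w+|[^\w\s]|\s", text)

DECODE_RATE = 100.0  # hypothetical tokens generated per second

for name, tok in [("char-level", char_tokens), ("word-level", word_tokens)]:
    n = len(tok(CODE))
    print(f"{name}: {n} tokens -> {n / DECODE_RATE:.2f}s to generate")
```

The coarser tokenizer covers the same snippet in fewer tokens, so the same decoder finishes the snippet in less wall-clock time.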

Benchmark Dominance: Overtaking the Competition

GPT-5.3-Codex has delivered record-breaking results in several critical industry benchmarks:

  • SWE-Bench Pro: Achieved top-tier scores across four major programming languages, demonstrating its ability to resolve real-world software engineering issues.

  • Terminal-Bench 2.0: Outperformed all previous versions and notably surpassed Anthropic’s Claude Opus 4.6, particularly in complex system-level operations.

Availability

The model is now available to ChatGPT Plus, Team, and Enterprise users via the mobile app, CLI, IDE extensions, and the web interface. Access via API is currently undergoing final safety evaluations and is expected to roll out shortly.

The statement that it was developed on the NVIDIA GB200 NVL72 isn't just marketing: the platform's enormous interconnect bandwidth (130 TB/s across the rack) lets the model work through deeply nested, complex code with minimal latency.

The ability to "add instructions while the AI is typing" is a significant step toward a full-fledged AI agent. Previous models imposed a linear interaction, forcing developers to wait for the AI to finish writing before making changes; removing that wait saves developers enormous time.
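The interaction pattern behind mid-task steering can be sketched with a plain Python generator, whose `send()` method delivers input to code that is paused mid-execution. This is a mock model for illustration only, not OpenAI's API: the generator emits one "line of code" at a time and applies any instruction the developer sends between lines without restarting.

```python
# Minimal sketch of mid-generation steering with a mock "model".
# Illustrates the interaction pattern only; this is not OpenAI's API.

def mock_codex(n_lines: int):
    """Emit code line by line; accept steering input between lines via send()."""
    style = "tabs"
    for i in range(n_lines):
        # yield the next line; whatever the caller send()s arrives here
        instruction = yield f"line {i} (style={style})"
        if instruction is not None:
            style = instruction  # apply the new instruction mid-task

gen = mock_codex(4)
output = [next(gen)]               # line 0, default style
output.append(gen.send(None))      # line 1, no new instruction
output.append(gen.send("spaces"))  # steer the model while it is "typing"
output.append(gen.send(None))      # line 3 keeps the new style
print(output)
```

Here the third line onward reflects the instruction injected mid-stream, while the earlier lines are left untouched, which is exactly the non-linear workflow the article describes.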

OpenAI's claim of "using fewer tokens" means the model can deliver longer and more complex responses within the same usage limits, and it reduces "context drift," where the AI forgets the first few instructions after prolonged communication.
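The arithmetic behind that claim is straightforward; the sketch below uses hypothetical numbers (the window size and per-turn token counts are assumptions, not published figures) to show why leaner replies delay the point where early instructions scroll out of context.

```python
# Back-of-the-envelope sketch: a fixed context window holds more
# conversation turns when each turn costs fewer tokens.
# All numbers below are hypothetical, not published figures.

CONTEXT_WINDOW = 128_000  # assumed context size in tokens

def turns_before_truncation(tokens_per_turn: int) -> int:
    """How many full user+assistant turns fit before early context is evicted."""
    return CONTEXT_WINDOW // tokens_per_turn

old_model = turns_before_truncation(4_000)  # hypothetical baseline
new_model = turns_before_truncation(3_000)  # hypothetical 25% leaner output
print(old_model, new_model)
```

Under these assumptions, the leaner model sustains roughly a third more turns before the earliest instructions fall outside the window, which is the "context drift" the article refers to.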

GPT-5.3-Codex's victory over Opus 4.6 in Terminal-Bench 2.0 reinforces OpenAI's continued pursuit of dominance in system-level operations and sandbox environment control, long its strongest areas.


Source: OpenAI
