OpenAI Launches GPT-5.3-Codex-Spark Coding Model with 15x Speed Improvement

OpenAI's new coding model runs on custom chips, delivering 15x faster performance than previous versions while reducing Nvidia dependency.

According to Ars Technica, OpenAI has released GPT-5.3-Codex-Spark, a new coding model reported to be 15 times faster at coding tasks than its predecessor.

A notable aspect of the release is OpenAI’s deployment strategy. The company is running the model on what Ars Technica describes as “plate-sized chips,” representing a move away from traditional Nvidia hardware. This approach suggests OpenAI is working to reduce its reliance on Nvidia’s GPU infrastructure, which has been the industry standard for training and running large language models.

The source material does not elaborate on the technical details of these custom chips or the specific architecture behind the performance gains. Even so, the combination of faster inference and a move to alternative hardware could have implications for AI infrastructure costs and for the competitive landscape among AI model providers.

The “Codex-Spark” branding indicates this model is specifically optimized for programming and software development tasks, continuing OpenAI’s focus on code-generation capabilities that began with earlier Codex models.