
AWS and OpenAI Announce Multi-year Strategic Deal


AWS and OpenAI just sealed a multi-year strategic partnership, a colossal agreement valued at $38 billion over the next seven years. This deal immediately grants OpenAI access to AWS’s world-class computing power, which is essential for running and scaling its core artificial intelligence workloads.
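As a rough back-of-the-envelope check, and assuming spend is spread evenly across the term (which the announcement does not specify), the headline figure averages out to roughly $5.4 billion per year:

```python
# Back-of-the-envelope: average annual value of the deal.
# Assumes even spend across the seven-year term, which the
# announcement does not actually state.
TOTAL_USD_BILLIONS = 38
TERM_YEARS = 7

avg_per_year = TOTAL_USD_BILLIONS / TERM_YEARS
print(f"~${avg_per_year:.1f}B per year on average")  # ~$5.4B per year on average
```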


What does this access look like? OpenAI gains access to infrastructure comprising hundreds of thousands of state-of-the-art NVIDIA GPUs. The architecture is built for staggering growth, with the potential to scale to tens of millions of CPUs to support the rapid expansion of agentic workloads.

AWS holds a unique position, proven by its experience running secure, reliable, large-scale AI infrastructure, including clusters of more than 500,000 chips. The combination of AWS’s cloud leadership and OpenAI’s pioneering work in generative AI directly supports products like ChatGPT for millions of users worldwide.


The dramatic pace of AI advancement has created unprecedented demand for computing power. As companies developing frontier models pursue higher levels of intelligence, they consistently choose AWS for its proven speed, scale, and security.


OpenAI will begin using AWS compute immediately. Plans call for deploying all initial capacity before the end of 2026, with the ability to expand further into 2027 and beyond.


What breakthroughs will this new foundation unlock for consumers and businesses in the coming years?


AWS engineered a bespoke solution for this alliance: a highly specialised architectural design, tuned for peak AI processing efficiency and performance.


This design clusters NVIDIA GB200 and GB300 GPUs via Amazon EC2 UltraServers, placing them on the same network to ensure low-latency performance across the interconnected systems. This precision lets OpenAI run demanding workloads efficiently and deliver optimal results.

The flexible clusters support everything from powering real-time inference for ChatGPT queries to training the next wave of models, adapting instantly to OpenAI’s rapidly evolving requirements.


The leaders of both companies confirmed the significance of this move.

“Scaling frontier AI requires massive, reliable compute,” said OpenAI co-founder and CEO Sam Altman. “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”


“As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions,” said Matt Garman, CEO of AWS. “The breadth and immediate availability of optimised compute demonstrates why AWS is uniquely positioned to support OpenAI’s vast AI workloads.”
