What Happened
During his keynote at NVIDIA's GTC 2026 conference, CEO Jensen Huang announced a major cloud partnership. "We're going to bring OpenAI to AWS," Huang said, framing the move as one that will "drive enormous consumption of cloud computing at AWS" and "expand the reach, expand the compute of OpenAI."
Huang also noted that OpenAI is "completely compute constrained," suggesting the partnership is designed to address that bottleneck directly by leveraging AWS's infrastructure, presumably powered by NVIDIA hardware.
Context
This brief statement, shared via a social media post, points to a significant deepening of the relationship among three AI industry giants: NVIDIA, OpenAI, and Amazon Web Services (AWS).
- NVIDIA's Role: As the dominant supplier of AI accelerator chips (GPUs), NVIDIA's strategy increasingly involves enabling and orchestrating large-scale AI deployments across major cloud providers.
- OpenAI's Constraint: Huang's comment that OpenAI is "completely compute constrained" is a public acknowledgment of a well-known industry challenge. Scaling cutting-edge AI models like GPT and Sora requires vast, reliable compute resources.
- AWS as a Platform: Bringing a major AI service like OpenAI's suite of models natively to AWS would be a significant competitive move against Microsoft Azure, OpenAI's primary cloud partner and investor, and would let AWS customers integrate OpenAI models directly into their existing AWS workflows.
The statement implies a collaborative effort where NVIDIA facilitates OpenAI's expansion onto the AWS cloud stack, driving demand for both NVIDIA silicon and AWS cloud services.