GPT-5 Could Be Delayed as OpenAI Faces GPU Shortage, Surging Demand

Image of OpenAI CEO Sam Altman speaking, with a quote beside him reading: “Stuff might break”: Altman warns of delays. Below the quote, a caption explains that future OpenAI product launches, including GPT-5, may be delayed due to capacity issues as ChatGPT usage hits record highs. In the bottom corner, Microsoft’s classic Clippy assistant appears with a speech bubble joking, “It looks like you're trying to handle 500 million users with limited GPUs. Need help with that? Have you tried turning them off and on again?”

Image Source: ChatGPT-4o
OpenAI CEO Sam Altman says future product launches—including the much-anticipated GPT-5—may be delayed due to capacity issues, as ChatGPT usage hits record highs.

Despite securing a $40 billion funding boost, OpenAI is facing mounting challenges in keeping up with surging demand. Just days ago, OpenAI restricted image generation due to GPU shortages—today’s warnings about GPT-5 delays are part of that same growing strain. Traffic surged this past week following the rollout of ChatGPT’s built-in image generator, which drew widespread attention for its ability to create Studio Ghibli-style artwork.

“Stuff Might Break”: Altman Warns of Delays

CEO Sam Altman acknowledged the strain on infrastructure in a candid series of posts on X (formerly Twitter), writing:

"We are getting things under control, but you should expect new releases from OpenAI to be delayed, stuff to break, and for service to sometimes be slow as we deal with capacity challenges."

He added, half-jokingly:

"Working as fast we can to really get stuff humming; if anyone has GPU capacity in 100k chunks we can get asap please call!"

Despite the strain, OpenAI reopened access to the image generator for free-tier users, signaling some stabilization.

A cartoon-style illustration of a frustrated scribe sitting at a desk with an open book, scroll, and a laptop labeled “AI News.” A speech bubble above says, “DALL·E is down & is having problems generating images.” The image humorously represents AI tool outages, matching the theme of OpenAI’s capacity challenges.

Image Source: ChatGPT-4o

Yes, this image is accurate. We ran into the same DALL·E outage while creating this article—so we’re embracing the irony. Sometimes the news writes itself. We feel you, Sam.

GPT-5 May Take Longer Than Expected

In February, Altman teased that GPT-5 might launch within months. Those plans now appear to be in flux: the company recently launched GPT-4.5, but limited access to paid users because of GPU constraints.

The bottleneck highlights how critical computing infrastructure is to OpenAI’s roadmap. Although the company now serves 500 million weekly users and 20 million paying subscribers, it is struggling to keep pace with that scale—especially as AI models demand more power and data center capacity.

A Rapidly Growing User Base

OpenAI’s internal metrics reflect the scale of growth:

  • 500 million weekly ChatGPT users

  • 20 million paid subscribers

  • One million new users joined in a single hour on Monday

These numbers are up sharply from 300 million users and 15.5 million subscribers at the end of 2024.

The pressure became especially intense following the launch of image generation within ChatGPT. Altman noted that the team worked late nights and through the weekend to keep the service running.

Infrastructure Strains and the Path Forward

OpenAI’s rising costs stem from more than just scale. The company has already spent billions on GPUs, which are both expensive and power-hungry. In response, partner Nvidia is preparing new GPU architectures that promise better performance and energy efficiency—but deploying them will still require major investment in data centers.

In the meantime, OpenAI has taken steps to ease system loads:

  • Temporarily delayed image generation for free users

  • Disabled video generation for new users of its Sora media suite

What This Means

OpenAI’s challenges underscore a growing truth in the AI industry: innovation alone isn’t enough. Even with cutting-edge models and billions in funding, the bottleneck is infrastructure. GPU shortages and data center constraints are now the key factors shaping the pace and reliability of new AI releases.

The delay of GPT-5 doesn’t suggest a lack of progress—it highlights just how demanding and resource-intensive generative AI has become. With 500 million weekly users, OpenAI is operating at a scale few tech companies ever reach, and keeping services stable under that load is a major engineering feat.

This also speaks to the democratization of AI: as tools like ChatGPT’s image generator reach free users, maintaining access for everyone becomes more difficult. While temporary service slowdowns and delays may frustrate users, they’re also a sign of how popular and widely adopted these tools have become.

Looking ahead, OpenAI—and the broader AI field—will need to solve not just for smarter models, but for scalable, sustainable infrastructure that can deliver those models at global scale.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.