AI Infrastructure
Nov 24, 2025


By Anissa Gardizy

The mood is shifting in AI data center circles. The euphoria of record-setting, multi-gigawatt deals has given way to finger pointing as deadlines to get AI servers online slip or get dangerously close to falling behind.

For months, data center builders have told me many of the gigawatt-size AI server facilities are running behind schedule because of the complexities of putting together the biggest clusters of servers ever attempted.

So far, most projects are not catastrophically late—but they’re late enough that people are understandably asking who is accountable when a multibillion-dollar project misses a deadline by a few weeks or months. 

After all, the employees at the AI developers that spend the most money on AI chips—OpenAI, Google, Meta, Anthropic and xAI, which we’ll call the Fab Five—are in a fierce race for computing capacity and have demanding CEOs with little time (or idle graphics processing units) to waste.

A prime example of the blame game involves CoreWeave, an AI cloud provider whose customers include Microsoft and OpenAI. CoreWeave CEO Mike Intrator warned investors earlier this month that the current quarter’s revenue would take a $100 million to $200 million hit because of “temporary delays related to a third-party data center developer who is behind schedule.”

Intrator didn’t name the developer, but speculation has been swirling. Many people I know in the data center field immediately assumed it was Core Scientific, one of CoreWeave’s major partners.

That guess makes sense. Eight months ago, Microsoft pulled back on some of its CoreWeave contracts after delays involving a particular data center. We heard from people involved in those talks that the source of Microsoft’s ire was a CoreWeave data center in Denton, Texas, which Core Scientific is responsible for powering. 

In February, Core Scientific said in an earnings conference call that it was experiencing delays that would push the completion of a data center project from 2025 to early 2026. It seems possible it was referring to the Denton facility.

In any case, in March, OpenAI said it had stepped in and signed a $12 billion, five-year cloud contract with CoreWeave to rent AI servers at that facility, presumably hoping the problems would be resolved quickly.

CoreWeave hasn’t officially blamed Core Scientific for the revenue hit. But the situation raises a bigger question: Who is responsible for CoreWeave hitting its revenue projections—CoreWeave or Core Scientific?

Core Scientific CEO Adam Sullivan didn’t mince words about the situation. While he didn’t name names, he told me that “when one public company gets out in front of a delay while another waits until the last minute, it can create confusion and erode confidence.”

More broadly, he said a lot of AI data center timelines “aren’t realistic” unless developers have already secured equipment that must be ordered far in advance, such as generators, and have “lined up experienced contractors and locked in the labor these projects require.”

He added: “As deadlines approach, the gap between talk and execution will become increasingly clear.” 

We aren’t sure how much of the tension stems from a recent move by shareholders of Core Scientific, which has a $4.5 billion market capitalization, to vote down CoreWeave’s $9 billion takeover offer for the company. But it might be a factor.

To be fair to both companies, delays are common in the industry. If Amazon and Microsoft had to tell the market every time a data center timeline slipped, we would be hearing about it far more often! Delays can stem from getting power to a facility or from equipment arriving late.

Raised Voices

But the stakes are different now, given the urgency to complete AI data centers. Earlier this year, Oracle executives raised their voices at contractors in Abilene, Texas, as pressure mounted on the company to hand over working servers to its customer, OpenAI. 

The executives had good reason to be frustrated. We’ve heard cloud providers’ contracts with customers include provisions in which customers can pay less if the provider misses a timeline or if the servers aren’t functioning properly, reducing their uptime. For GPU cloud providers with already thin gross profit margins on renting out servers, these problems can materially alter their financial results.

The race to get Nvidia GPU clusters online continues to be a challenge for some firms that promised speedy timelines. And as power becomes harder to secure, which could cause further delays, customers may hedge their bets by working with multiple data center providers.

Several developers told me this week GPU shipments are outpacing construction timelines so severely that some firms are storing racks of idle GPUs in warehouses, waiting to be told where to send them.

Even Meta acknowledged this tension on its earnings call in late October. Chief Financial Officer Susan Li said the company is now “staging data center sites,” or essentially getting them ready with everything but the GPU racks, so Meta can “spring up capacity quickly in future years as we need it.”

In other words, even large data center developers like Meta are building buffers to prepare for capacity spikes.

One thing is clear: We’re entering an era where the physical limits of labor, equipment, utilities and contractor bandwidth are colliding with customer demand. It’s going to be a bumpy ride.
