Data centers take center stage in compute tug of war

  • Data centers are trying to serve customer sets with very different needs
  • Enterprises need lower-power compute and less of it, but hyperscalers are raising the bar for compute and gobbling up spare capacity
  • NTT Global Data Centers is responding with innovative data center designs and careful churn planning

Multi-tenant data center operators are stuck with a herculean task: be everything to everyone in a world where compute needs are rapidly evolving.

It should go without saying, but the needs of a standard 10 to 20 kilowatt (kW) rack are very different from those of Nvidia’s new high-performance, 100-chip Vera Rubin system. It’s not just about the shift from air to liquid cooling, but also about elements like power distribution, cabling and networking.

These differences obviously have huge implications for data center design, which is why the idea of serving both kinds of customers in a single facility is so tricky. But that’s exactly what companies like NTT Global Data Centers are trying to do.

Bruno Berti is SVP of Global Product Management at NTT Global Data Centers. He told Fierce that the company is being pulled in different directions. On one hand, it has a handful of large customers who need facilities that can support high-performance systems. On the other, it has literally thousands of other customers with more modest needs.

The answer it has come up with is an adaptable data center design capable of serving both customers with 18-20 kW air-cooled racks and those with 100% liquid-cooled racks pushing 200+ kW.

“We had a lot of fun over the last five years coming up with that standard and building what I think is a pretty flexible solution,” Berti said. “We believe – we haven’t deployed it yet – but we believe we can actually have a client in the same data hall that can actually do both.”

In a nutshell, the design makes it possible to slot in liquid cooling where needed and make various other necessary infrastructure tweaks to suit the kind of compute going into a given data hall.

He added that the company is now looking to design a build-to-lease option to address future demand for 1 megawatt (MW) racks. (For context, 1 MW is 1,000 kilowatts – roughly 50 to 100 times the power draw of a standard 10 to 20 kW rack.)

Capacity competition

In addition to coming to data center providers with wildly different needs, enterprises and hyperscale customers are increasingly competing for capacity.

The thing is, while enterprises tend to want less total compute capacity – usually 2 to 4 MW worth – hyperscale and other large customers are swooping in and scooping up most or all new capacity as it becomes available.

Estimates from Goldman Sachs put data center occupancy rates at just over 92% at the end of 2025, with rates expected to peak at 93% in 2026. Occupancy is even higher in key markets like Northern Virginia; Dallas, Texas; Salt Lake City, Utah; and Las Vegas, Nevada. The same is true for co-location facilities, according to real estate consultancy JLL.

Indeed, Berti said NTT Global Data Centers has less than 2% of its capacity available globally. And of its 750 MW pipeline of data center capacity in development, 80% – roughly 600 MW – is already leased.

“We still get a significant amount of traditional co-location people looking for capacity to deploy CPUs, to deploy standard storage equipment, looking for eight to 10 kW racks,” Berti said. “It’s very hard to service them because most of that capacity has been bought up by people wanting to do things a little more long term.”

So what’s an enterprise wanting to deploy AI workloads to do?

Well, Berti said some of the pre-leased capacity will eventually make its way back to market as products sold by hyperscalers. But for enterprises wanting to lease their own space, letting data center partners know about their plans early on is a good idea.

“If you start talking to us 12, 24 months before you need the capacity, we can start mapping out any churn that we might have so that we can start pre-leasing or at least flagging and reserving capacity,” he concluded.