New Meta Compute initiative has lofty goals - and big problems

  • Meta says it wants to build tens of gigawatts’ worth of data centers by 2030 and significantly more over time
  • The effort dwarfs OpenAI’s Stargate project
  • But Meta faces hurdles around power, processors, model performance and more

Go big or go home seems to be Meta chief Mark Zuckerberg’s motto when it comes to data center compute. But going as big as Meta wants to is easier said than done.

In a Facebook post on Monday, Zuckerberg revealed a new Meta Compute initiative, which he said will see the company “build tens of gigawatts this decade and hundreds of gigawatts or more over time.”

Meta Compute will be led by Santosh Janardhan, who will head the technical architecture and software side, and Daniel Gross, who will oversee long-term capacity strategy and supplier relationships.

The pair will work with Dina Powell McCormick, Meta’s newly hired President and Vice Chairman, who is tasked with oversight of Meta’s financing and diplomacy efforts. The goal, Zuckerberg said in his post, is to strike agreements with “governments and sovereigns to build, deploy, invest in and finance Meta’s infrastructure” and ultimately “deliver personal superintelligence to billions of people around the world.”

To say Meta’s initiative is ambitious is an understatement. The scale of the expansion Zuckerberg described dwarfs OpenAI’s 10 GW Stargate plan, which itself has a price tag of around $500 billion.

AvidThink Founder and Principal Roy Chua said the announcement reflects “Meta’s ambition to lead in AI infrastructure and develop AI superintelligence that can be offered as services and products to their vast user base.” It also underscores the company’s plan to vertically integrate across silicon, data centers, models and applications.

Where will the power come from?

But Meta Compute is also up against some major hurdles. First and perhaps foremost is finding available power.

Meta just inked 6.6 gigawatts (GW) of nuclear power deals, but that’s just a fraction of what it will need for Zuckerberg’s stated plans. Timelines here also matter. It is generally acknowledged that nuclear power sources will take at least another three to five years to come online, and it will take even more time to prove out and scale the new small modular reactor designs that many cloud companies are pursuing. That just doesn’t square with Meta’s ambition to deploy “tens of gigawatts” by 2030.

It's true that natural gas power generation is rapidly ramping up to bridge the gap. In July, Reuters reported that around 80 GW worth of gas-fired generation is expected to come online in the U.S. by 2030. But there are currently years-long lead times for key gas turbines as well as for distribution infrastructure like electrical transformers.

Additionally, J. Gold Associates Founder and Principal Jack Gold noted Meta is “in competition with the hyperscalers and other data center operators to secure power generation for their needs, which are huge.” Gold’s firm recently released research which found an AI data center running 100,000 GPUs will use as much power in a single day as 150,000 average homes do in a year.
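
Read as a comparison of continuous power draw, those figures roughly line up. Here’s a back-of-envelope sketch, assuming an all-in draw of about 1.8 kW per GPU and an average household draw of about 1.2 kW (both are illustrative assumptions, not numbers from Gold’s research):

```python
# Back-of-envelope check on the 100,000-GPU comparison above.
# Both per-unit figures are illustrative assumptions, not numbers
# from J. Gold Associates' research.

gpus = 100_000
kw_per_gpu = 1.8        # assumed all-in draw per GPU (chip + host + cooling), kW

site_mw = gpus * kw_per_gpu / 1_000        # total site draw in MW
print(f"Site draw: ~{site_mw:,.0f} MW")    # ~180 MW

avg_home_kw = 1.2       # assumed average continuous draw of a U.S. home, kW
homes = site_mw * 1_000 / avg_home_kw
print(f"Equivalent continuous draw of ~{homes:,.0f} homes")  # ~150,000
```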

And don’t forget to factor in changes in the general population’s energy needs.

“While the technology sector moves quickly and a data center can be operational in two to three years, the broader energy system requires longer lead times to schedule and build infrastructure, which often requires extensive planning, long build times and high upfront investment,” the International Energy Agency wrote in a recent report.

And sure, Meta will probably look outside the U.S. as it expands its footprint, but the IEA noted China and the U.S. will see the most significant electricity consumption growth – 170% and 130%, respectively – compared to regions like Europe (70%) and Japan (80%).

Then there’s the question of where it will get all the chips needed to fill tens of gigawatts worth of data center space. For context, Stargate’s first 1 GW site is expected to hold somewhere around 400,000 chips. Multiply that by “tens of gigawatts” and the figures quickly add up into the millions.
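
Extrapolating linearly from that density gives a feel for the procurement math. This is a crude sketch – actual chip counts per gigawatt will vary with accelerator generation and power draw, and the capacity range stands in for “tens of gigawatts”:

```python
# Crude extrapolation from the ~400,000-chips-per-GW Stargate data point.
# The capacity range below is an assumption standing in for "tens of gigawatts."

chips_per_gw = 400_000
for capacity_gw in (10, 20, 30):
    chips_millions = capacity_gw * chips_per_gw / 1e6
    print(f"{capacity_gw} GW -> ~{chips_millions:.0f} million chips")
# 10 GW -> ~4 million chips
# 20 GW -> ~8 million chips
# 30 GW -> ~12 million chips
```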

While Nvidia will probably be in this mix, Gold noted Meta has reportedly been in talks to buy TPUs from Google and is rumored to be considering designing its own silicon.

But throw fierce competition for chips into the mix and sprinkle in a little geopolitical instability (which matters a lot when chips are primarily manufactured overseas), and one could very easily see a future in which Meta struggles to procure the processors needed to power its AI ambitions.

How will Meta make the AI work?

Even if Meta can get its hands on enough power and processors, there are still lingering questions about whether and how it can make AI valuable for users, especially in light of the performance and other issues that cropped up in its Llama 4 model release.

“They need to recover from the Llama 4 setback and the lingering impact of their failed metaverse efforts,” Chua said, adding the market has yet to see progress on the company’s next-generation model, called Avocado. “Additionally, they must address monetization of their core capabilities beyond adding basic chat and image/video creation to WhatsApp and other Meta properties.”

To a certain extent, Meta’s success with use cases will be determined by the networks connecting its users. And as Megaport CEO Michael Reid pointed out, right now there’s a bit of a gap between GPU and network performance.

“If the GPU is the brain’s cortex, the network is the nervous system,” Reid told Fierce. “When a chip processes in microseconds but the network takes milliseconds to deliver the data, you’ve got a system that can’t react in real time.”
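
To put illustrative numbers on that mismatch (the latency figures below are assumptions for illustration, not Megaport measurements): a chip that finishes a compute step in microseconds can sit idle for hundreds of step-lengths while it waits on a single millisecond-scale network round trip.

```python
# Illustrative "reflex gap" arithmetic. Both latency figures are
# assumptions for illustration, not Megaport measurements.

gpu_step_us = 10        # assumed time for one GPU compute step, microseconds
network_rtt_ms = 2      # assumed network round trip, milliseconds

idle_steps = (network_rtt_ms * 1_000) / gpu_step_us
print(f"One {network_rtt_ms} ms round trip idles the GPU for ~{idle_steps:,.0f} steps")
# One 2 ms round trip idles the GPU for ~200 steps
```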

Thus, a key bottleneck, in the short term at least, is what Reid dubbed the “reflex gap,” which could prevent enterprises from truly capitalizing on the massive investments Meta plans to make.