Nvidia sees surge in AI-driven network automation adoption

  • Network automation has emerged as the top AI use case for telecom ROI
  • AI adoption and priorities vary by region, with APAC leading on automation
  • Rising AI investment is driving architectural and strategic shifts

Have operators finally overcome their fear of putting AI to use for network operations? It seems so, at least in some regions, Nvidia’s newly released telco AI survey shows.

Network automation was listed as the top AI use case for return on investment (ROI) in Nvidia’s fourth annual “State of AI in Telecommunications” survey report. Half of the survey’s 1,038 respondents cited network automation as a top AI use case. Notably, it beat out customer service (41%) and internal process automation (33%).

That’s a bigger deal than you might think.

“For the first time, we’re seeing the network itself become the number one focal point for telcos in applying AI. It overtook customer service, which has always been the number one for the last several years,” Chris Penrose, Nvidia’s Global Head of Business Development for Telco, told Fierce.

“This is the largest and most important asset a telco has; it’s where they spend the most money,” he continued. “What we are seeing is that there’s more and more willingness to look at how can AI apply here.”

Interestingly, while network automation was the top dog overall, Penrose noted the top AI use case varied across regions. For instance, while network automation was cited as the leading use case in both China and EMEA, customer service continued to dominate in North America. And in Japan, internal process optimization took center stage. 

Nvidia’s findings were backed up by a freshly released brief from Bain & Company, which found that telcos in Asia are leading in terms of automation maturity, beating out the Middle East and Africa, Europe and Latin America. 

Different strokes for different folks

Why the disparity? AvidThink Founder and Principal Roy Chua told Fierce there are a few factors at play. 

First is the vendor question. Chinese operators, as well as those in developing countries, have the benefit of using Huawei gear in their networks. That matters because “Huawei is a leader in autonomous networking, so anywhere you have Huawei solutions deployed you’ll get strong ROI from network autonomy” use cases, Chua said. 

In areas where Huawei gear isn’t used, the ROI for network automation depends heavily on how far along the digital transformation journey an operator is. 

“In order to get good ROI from automation, you’ll have to have gone through some sort of network transformation,” Chua noted. For those that aren’t as far along, customer service is low-hanging fruit that still provides “very good returns,” he said.

Indeed, despite claims that AI ROI is lacking, 90% of Nvidia’s survey respondents said they’re seeing AI reduce costs or increase revenue. As a result, it seems they’re planning to ramp up AI spending – with 89% stating they plan to increase outlay in 2026, compared to 65% who said the same a year ago.

“I would say that AI spend increase is supporting all these use cases but the biggest ROI at this stage across multiple operators is in customer service and internal operations,” Chetan Sharma, CEO of Chetan Sharma Consulting, told Fierce. 

As far as network autonomy goes, Bain’s research showed that energy efficiency optimization, service assurance and network optimization are the most mature applications in this vein. Network change management, however, is proving significantly more difficult, Bain found. 

AI-native before 6G

With AI spending on the rise and more pilots moving to production, it’s becoming clear that the technology is driving an inter-generational capex cycle. That is, operators aren’t waiting for 6G upgrades to put AI to work in the network. 

Nvidia’s report found 77% of respondents expect to deploy AI-native wireless network technologies before the 6G rollouts begin in 2030.

Chua said there are a couple of reasons for this. First, AI deployments look very much like the large capital investments telcos are already used to making. And even with some struggles, it’s easy to get at least some benefit out of AI, he said.

Indeed, Penrose said there are already promising early results from operators who are using AI to improve spectral efficiency. These tests have shown AI can deliver a 20-30% improvement in spectral efficiency, he said.

Second, Chua noted telcos are still looking for the magic bullet that will help generate new revenue. With AI workloads starting to shift from centralized training to distributed inference, telcos are starting to think “they have beachfront property and are hoping edge inferencing will be the killer workload they’re looking for.”

Architecture shift

But there’s one more element to telcos’ willingness to invest in AI: They are eyeing it as an opportunity to overhaul their network architecture. 

Penrose, who spent 30 years at AT&T before joining Nvidia, noted that operators have long deployed equipment that serves only the RAN. Now, they’re beginning to think about a different approach: deploying infrastructure that can serve both AI and the RAN.

Sharma told Fierce that AI-RAN “holds the promise of a new architecture that enables introduction of new features and applications at software-speed.” He pointed to the early success T-Mobile has had with call and data sessions on platforms from Ericsson and Nokia. But he added that “both performance and economics need to be taken care of before we move to any wide scale deployment.”

The question of economics is, at least in part, related to assertions that operators should be using GPUs rather than just CPUs in their network. 

Nvidia, a founding member of the AI-RAN Alliance, is a huge proponent of this.

“We really are trying to change the economics and make this be the ability to continuously upgrade the network. We shouldn’t have to wait for every G anymore,” Penrose said. 

Chua, however, said the architecture shift isn’t just a question of GPUs vs. CPUs – it’s about whether operators really want dual-purpose infrastructure.

“There are segregation issues. If the RAN is that critical to your business, you likely will not partition your resources to run other workloads that you don’t fully understand how to manage on those,” he said. 

Chua added it remains an open question whether it’s really that critical to enable AI processing on the far edge or whether it’s more cost effective to run it in a localized data center.