- Nokia and Ericsson have different takes on how to deploy AI in the mobile network
- Analysts Bubley and Chua seem to favor the Ericsson approach - for now
- This is, however, an argument that will roll on for years
One thing is becoming clear in the days before Mobile World Congress: Nokia and Ericsson, the two largest Western telecom vendors in the world, fundamentally disagree on how and where to deploy AI in the mobile network.
As Sebastian Barros, managing director at SaaS company Circles, noted on LinkedIn recently, they cannot both be right. This will shape – at least partly – how the RAN market evolves through the latter half of 5G and on into 6G.
Barros summed up the differing approaches like so: "Ericsson is reinforcing its custom ASIC strategy and energy-efficient Layer 1 acceleration. Nokia is aligning its future baseband roadmap with Nvidia platforms and positioning RAN as a distributed compute layer."
Indeed, Nokia has very clearly nailed its AI-RAN colors to the Nvidia mast. We'll see how that develops further at MWC in March.
The AI-RAN battle ahead
Ericsson held a pre-MWC meeting in London recently and had much to say about network AI, according to Disruptive Analysis Founder Dean Bubley.
"For network AI, Ericsson was noting that it will be deployed in multiple places from the radio to baseband to RAN [aggregation] to core," he noted. "It was saying its RAN software can run on multiple [hardware] platforms."
"It was less aggressive about the '3rd party AI workloads can run in the RAN AI' than others. It nodded to it, but also talked about AI on local UPFs - I called it AI core," the analyst noted in an email to Fierce. He added Ericsson's approach makes sense and said he doesn't buy the vision of putting customer workloads for inference out at a cell site. That approach, he continued, sounds like "the same poor arguments used for MEC & telco edge compute 10 years ago."
Instead, Bubley argued AI workloads - especially the agentic AI flows that are on the horizon - need to be nearer to interconnect points and cloud on-ramps.
AvidThink Founder and Principal Roy Chua agreed that it makes sense, now and for the immediate future, to keep edge AI infrastructure separate from RAN infrastructure.
"That doesn't preclude ongoing research of AI-RAN, or finding new model architectures for RAN workloads that could include a GPU/XPU on radio equipment — RU or DU in an O-RAN compliant architecture," Chua explained. "But it's likely simpler to keep them separate until the orchestration and runtime systems are sufficiently sophisticated and the non-Radio use cases clearer."
It remains to be seen how operators will ultimately decide to deploy AI in the RAN. We'll probably have to wait for MWC 2027 – or even 2028 – for that to become clearer.