Arista’s Ullal: Ethernet is the 'eventual winner' for AI networking

  • Ethernet is overtaking InfiniBand in AI back-end networks as hyperscalers favor multi-vendor scale and common hardware platforms, says Arista CEO and chair Jayshree Ullal
  • Traffic patterns are changing as AI workloads shift from centralized training to distributed inference
  • Arista is betting on Ethernet standards and assisted automation to lead high-speed data center networking, despite growing competition from Nvidia and Cisco

Having pushed InfiniBand aside in the data center, Ethernet is consolidating its gains, Arista Networks CEO Jayshree Ullal told Fierce.

That's one of several changes facing data center networks in 2026, driven by AI demands, Ullal said. These changes also include shifting requirements from training to inference and increased reliance on telemetry and agentic AI to keep operations running.

Ethernet is fast becoming the standard for scale-up, scale-out and scale-across networking, and Ullal predicted its momentum against InfiniBand in data center networking will continue through 2026.

"Ethernet is always the eventual winner and equalizer," Ullal said. 

2025 was a big year for Ethernet

Ethernet gained real traction in 2025, with milestones including the Ultra Ethernet Consortium's (UEC) release of Specification 1.0 in June, which redefined Ethernet for the AI and HPC era, Ullal said. Additionally, Ethernet for Scale-Up Networking (ESUN) launched at the Open Compute Project (OCP) Global Summit in October, and both Nvidia and Cisco trotted out new Ethernet-based products for scale-across networks.

UEC Specification 1.0 is designed to meet the demands of AI and high-performance computing (HPC). It's intended to provide high-performance, scalable and interoperable solutions across all layers of the networking stack, including network interface cards (NICs), switches, optics and cables, with multi-vendor integration. UEC is backed by AMD, Arista, Broadcom, Cisco, HPE, Meta, Microsoft and more.

And ESUN, introduced by the Open Compute Project Foundation in October, is designed to provide improved throughput efficiency and lower latency with a lossless, power- and cost-efficient design for scale-up architectures (in other words, it does exactly what it says on the tin). ESUN is backed by AMD, Arista, ARM, Broadcom, Cisco, HPE Networking, Meta, Microsoft, Nvidia, OpenAI and more.

Ultra Ethernet and ESUN optimize Ethernet for AI scale-up and scale-out networks, Ullal said. "This, combined with the cloud titans' preference for multi-vendor scale, is creating migration to common hardware platforms and network operating systems that span both front-end and back-end AI networks," said the Arista boss. "Proprietary and single-vendor lock-in stacks are in the rearview mirror!" she added.

("Cloud titans" is Arista lingo for the companies most of us call hyperscalers — Amazon Web Services, Google, Microsoft, those guys. Those companies make up  nearly half of Arista's revenue.)

Rapid transition

Ethernet revenues were projected to surpass InfiniBand in AI back-end networks in 2025, and that trend will likely accelerate, Ullal said.

The transition to Ethernet has been fast. InfiniBand held 80% market share in AI back-end networks as recently as late 2023, but "Ethernet is now firmly positioned to overtake InfiniBand in these high-performance deployments," according to a July announcement from Dell'Oro Group. By December, Dell'Oro reported that Ethernet accounted for more than two-thirds of data center switch sales in AI back-end networks, both in the most recent quarter and across the first three quarters of the year, up from less than half a year earlier.

But InfiniBand isn't going away. It remains the gold standard for AI training at scale, providing predictability, in-network computing that offloads work from expensive GPUs to switches, and deep integration with Nvidia technology.

From training to inference

That said, AI workloads are transforming from massive, centralized training at scale to widely distributed inference. Network architectures need to change to handle heavy inference traffic at the edge, and that transition changes the key metrics. For training, the key metric is job completion time (JCT): the time between admitting a training job to a GPU cluster and the end of the training run. For inference, the key metrics are time to first token and latency: the time from a user submitting a query to the arrival of the first response, Ullal said.
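To make the distinction concrete, here's a minimal sketch in Python. All event names and timestamps are invented for illustration; none of this reflects Arista tooling or any benchmark Ullal cited. JCT spans an entire training run, while TTFT captures the user-facing delay on a single query:

```python
from dataclasses import dataclass

# Illustrative only: field names and timestamps are hypothetical,
# not drawn from Arista's products or the article.

@dataclass
class TrainingJob:
    admitted_at: float     # job admitted to the GPU cluster (seconds)
    finished_at: float     # training run ends

@dataclass
class InferenceRequest:
    submitted_at: float    # user submits the query
    first_token_at: float  # first token of the response arrives

def job_completion_time(job: TrainingJob) -> float:
    """JCT, the training metric: one number per long-running job."""
    return job.finished_at - job.admitted_at

def time_to_first_token(req: InferenceRequest) -> float:
    """TTFT, the inference metric: the delay the user actually feels."""
    return req.first_token_at - req.submitted_at

job = TrainingJob(admitted_at=0.0, finished_at=86_400.0)  # a day-long run
req = InferenceRequest(submitted_at=0.0, first_token_at=0.35)

print(f"JCT:  {job_completion_time(job):,.0f} s")         # JCT:  86,400 s
print(f"TTFT: {time_to_first_token(req) * 1000:.0f} ms")  # TTFT: 350 ms
```

The units alone show why the network requirements diverge: a training fabric is tuned so day-scale jobs finish sooner, while an inference network is tuned so millisecond-scale responses start sooner.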

The shift to inference means traffic patterns change from predominantly server-to-server east-west traffic to more north-south traffic between users and across distributed data centers. Spine, backbone and edge-leaf network buildouts are scaling to hundreds of terabits, she noted.

Bursty, unpredictable agentic AI traffic creates new demands on network telemetry, Ullal continued. Technologies such as state-streaming network telemetry deliver the granular, real-time network metrics needed for insight into AI network efficiency, going beyond legacy monitoring approaches such as Simple Network Management Protocol (SNMP) polling, she said.
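To illustrate the gap Ullal is pointing at, here's a hedged sketch in Python; the event format, interface paths and values are invented stand-ins, not a real SNMP or streaming-telemetry client API. A once-per-second poller records two samples and misses a 4 ms microburst entirely, while the streamed log of state changes captures it:

```python
from typing import Callable, Dict, Iterator, List

# Hypothetical sketch contrasting pull vs push telemetry. Paths and
# values below are simulated, not output from any real switch.

# Recorded state changes from one interface: a 4 ms microburst
# hides between two one-second poll ticks.
EVENTS: List[Dict] = [
    {"ts": 0.000, "path": "intf/Ethernet1/out-octets", "value": 10_000},
    {"ts": 0.504, "path": "intf/Ethernet1/out-octets", "value": 900_000},  # burst starts
    {"ts": 0.508, "path": "intf/Ethernet1/out-octets", "value": 910_000},  # burst ends
    {"ts": 1.000, "path": "intf/Ethernet1/out-octets", "value": 915_000},
]

def poll(events: List[Dict], interval_s: float, duration_s: float) -> List[Dict]:
    """Legacy SNMP-style pull: sample the latest value every interval.
    Anything that happens between polls is invisible."""
    samples, t = [], 0.0
    while t <= duration_s:
        latest = max((e for e in events if e["ts"] <= t), key=lambda e: e["ts"])
        samples.append({"ts": t, "value": latest["value"]})
        t += interval_s
    return samples

def stream(events: Iterator[Dict], on_update: Callable[[Dict], None]) -> None:
    """State-streaming push: every change is published as it happens."""
    for event in events:
        on_update(event)

print("polled @1s:", poll(EVENTS, interval_s=1.0, duration_s=1.0))
print("streamed:")
stream(iter(EVENTS), lambda e: print(f"  {e['ts']:.3f}s  {e['value']:,}"))
```

The design point is granularity: a pull model can only ever see the network at its polling interval, while a push model's detail scales with how fast the network's state actually changes.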

Network operating systems need a state-based architecture to take full advantage of these technologies, consuming real-time data and storing it over time for maximum AI insight, Ullal said. That is the goal of Arista's AVA (Autonomous Virtual Assist), she argued.

AI helps operators keep up

To keep up with more demanding data center networking requirements, operators are turning to automation, but humans still have a role. AI will take on more of the work in 2026, with humans overseeing the process, Ullal said.

"The move from manual to automated network operations is a big step for typically risk-averse enterprises," she said. "CIOs are challenged with low staff and too much manual configuration. In a crawl-walk-run phased approach, we believe in a true 'assistant' approach, where AI technologies are leveraged to help network operators do their work more efficiently. Arista AVA aims to offer assisted network operations with less human involvement as autonomous network operations become more reliable, secure and available."

Arista is poised for leadership in high-speed data center switching, reporting revenue of $2.308 billion for the third quarter ended September 30, up 27.5% year-over-year. Analysts project it will reach $10 billion in revenue this year.

However, Arista's competition is not standing still. Nvidia's Spectrum-X data center switch platform has seen "eye-popping" growth, up 760.3% year-over-year to reach $1.46 billion, according to a June report from IDC. Nvidia is eroding Arista's dominance among hyperscalers, scoring Meta and Oracle as Spectrum-X customers. And Cisco projects over $3 billion in AI-specific revenue for fiscal 2026, largely driven by its Silicon One architecture and Acacia optics.