Cisco takes hyperscale tech to the masses with G300 switching chip

  • Cisco's new G300 switching silicon is designed to facilitate AI deployments for hyperscalers, enterprises and operators alike.
  • The vendor faces fierce competition in the back-end networking market.
  • Cisco supplemented its new hardware with software and AI services offerings.

CISCO LIVE, AMSTERDAM – Cisco moved to make hyperscale technologies available to a broader audience, serving up new silicon and systems suited to the needs of hyperscalers as well as AI-savvy service providers and enterprises.

The star of the show is Cisco’s new Silicon One G300 switching chip, which is designed to provide the beefy scale-out networking needed to connect gigawatt-scale AI clusters.

The chip notably features what Cisco is calling Intelligent Collective Networking technology, which optimizes path selection and includes buffering to absorb bursts of heavy AI traffic without dropping packets. Cisco said this tech can help deliver up to 33% better network utilization and faster job completion times.

“Cisco is trying to tackle a real pain point that AI and large cluster workloads create: unpredictable traffic patterns, huge east-west data flows, and scaling limits in legacy fabrics,” Dell’Oro Group VP Sameh Boujelbene told Fierce. “All networking vendors are trying to solve those issues and enhance Ethernet fabric in order to reduce job completion time and optimize the number of tokens generated per GPU-hour which would have direct impact on profitability.”

The G300 serves as the foundation for two new Ethernet switching systems: Cisco’s 102.4 terabit N9100 and Cisco 8000 line, both of which come in air- and liquid-cooled variants. Cisco noted the liquid-cooled variants offer nearly 70% improvement in energy efficiency.

Alan Weckel, founder and lead analyst at 650 Group, told Fierce efficiency is becoming an increasingly important differentiator in the market.

“System and solution power efficiency is something that customers look for as it allows them to operate more GPUs in the same power budget, which could be the difference in a neocloud losing or making money,” he explained.

Fierce competition in a $100B market

Cisco’s offerings land in the middle of a fiercely competitive market for AI back-end networking. The vendor is up against Arista, Celestica, Accton, Juniper, Nokia and Nvidia in a market that Dell’Oro Group expects to surpass $100 billion in spending by 2030.

Boujelbene told Fierce that Celestica and Nvidia together held 50% of the market share in this space for most of 2025, but noted Cisco began gaining share in the back half of the year.

“It is a highly competitive market. Arista and Celestica have built strong incumbency in the front-end network at the hyperscalers. Nvidia is addressing the gigascale AI factory problem with an end-to-end approach that spans compute and networking,” she said. “At the same time, there is a clear desire for vendor diversity and a strong appetite for innovation in AI back-end networks. Cisco is bringing a different perspective to solving that problem.”

Both Boujelbene and Weckel pointed to the breadth of Cisco’s portfolio – from ASICs and optics to systems and software – as a key advantage for the company moving forward.

Cisco branches out to software and services

Indeed, Cisco’s desire to branch out from its traditional role as a networking hardware vendor is apparent. The company talked up changes to its Nexus One software platform, which has been updated with a unified dashboard, OS interoperability, integrated Splunk visibility and Cisco Live Protect capabilities – all to enable AI deployments.

The vendor also expanded its play in the AI services game with new AIOps tooling built on Cisco’s Deep Network Model, which has been trained specifically for network troubleshooting.

Expanding its existing Nexus One platform to accommodate the needs of AI deployments was a very intentional move, Cisco SVP and GM for Data Center and Internet Infrastructure Kevin Wollenweber told Fierce.

“When we talk to customers about being able to take their Nexus data center stack and networking stack that they’re already deploying in their enterprise data center and evolve to being able to go run AI workloads on that, it’s really attractive because they don’t have to learn new tools,” he said. “Commonality and consistency is really interesting, especially when you’re dealing with enterprises that have limited IT staff.”

He concluded: “Learning a new operating system just to deploy AI doesn’t feel like something that a lot of our enterprises want to do.”