Cisco warns about infrastructure drags on agentic AI

  • 2026 will be a pivotal year for agentic AI and ROI, says Jeetu Patel of Cisco
  • But organizations must overcome significant "infrastructure debt" limitations
  • Organizations need scale-across and high-performance edge architecture to meet tougher networking demands

2026 is the year that agentic AI realizes ROI and applications roll into production, said Jeetu Patel, president and chief product officer for Cisco, speaking at the company's virtual AI Summit this week.

However, organizations face barriers to getting the full value from AI investments because they "just don't have enough power, compute and network bandwidth," said Patel, adding that memory is another constraint on AI performance.

Infrastructure constraints are a problem

In a recent Fierce Network Research report, we explored how enterprises deploying AI are managing infrastructure constraints.

Infrastructure has emerged as a theme for Cisco, as well. In a study late last year, Cisco identified "infrastructure debt" as a drag on organizational AI deployments. Just 28% of organizations believe their infrastructure can handle AI workloads, according to the Cisco AI Readiness Index 2025, released in October, based on a survey conducted in August.

On the other hand, companies that have infrastructure and business processes in place to properly exploit AI reap rewards.

"The same survey showed that among AI 'pacesetters' — the 13 percent of companies that are fully prepared for AI — 91% are already increasing profitability," Cisco said in a December blog post focused on AI Infrastructure.

Trust is limiting AI adoption

While organizations are moving toward AI, the share of pacesetters remains low because trust is a limiting factor. As a result, for the first time, security has become a prerequisite for adoption rather than something bolted on later, Patel said.

"In the past, you always asked the question whether you want to be secure or you want to be productive," he said. Trust and productivity could be at odds. "Now what you're starting to see is if people don't trust these systems, they'll never use them."

Enterprises' trust concerns span data, models, infrastructure, agents and partners, as well as geopolitics and sovereignty. These concerns were front and center at last month's World Economic Forum meeting in Davos, Switzerland, said Chuck Robbins, Cisco chairman and CEO.

Data is another barrier, and it takes several forms. Models have been trained on human-generated data publicly available on the internet, but AI is running out of that kind of data, raising the importance of synthetic data and, even more so, machine-generated data.

As agents proliferate and operate continuously, machine-generated data is growing exponentially, Patel said.

Patel briefly touched on Cisco's recent networking advances for moving all that data around. He cited the Cisco Silicon One G200, the company's 51.2 Tbps networking chip designed specifically for AI/ML workloads, which handles the scale-out fabrics of AI clusters, providing the high-bandwidth, low-latency switching required to connect thousands of GPUs inside a data center.
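For a sense of scale, here is a rough back-of-envelope sketch (our illustration, not Cisco's published port specifications) of how a 51.2 Tbps switching chip translates into front-panel ports, assuming the full throughput is exposed as port bandwidth and ignoring encoding overhead and oversubscription:

```python
# Back-of-envelope illustration: ports per 51.2 Tbps switch chip.
# Assumes the chip's full throughput maps directly to front-panel bandwidth.

CHIP_THROUGHPUT_GBPS = 51_200  # 51.2 Tbps, the figure cited for the G200

for port_speed_gbps in (400, 800):
    ports = CHIP_THROUGHPUT_GBPS // port_speed_gbps
    print(f"{port_speed_gbps}G ports per chip: {ports}")

# Prints: 128 ports at 400G, or 64 ports at 800G
```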

The Silicon One P200 chip, meanwhile, is optimized for "scale-across" architectures that connect data centers into a single "ultra cluster," while the Catalyst 8323 Router is built for networking and security at the unified edge.

"These data centers are going to be hundreds of kilometers apart today, but eventually they'll get to continental scale," he said, noting this is where connecting data centers into ultra clusters will be key.

Also at the event, Tariq Amin, CEO of Saudi AI startup Humain, touted access to abundant Saudi power as a competitive advantage for the company.