Fierce Network TV

Why AI is a systems problem, not a software one

In this episode of Carrier 2.0, host Steve Saunders reframes artificial intelligence not as a product or application, but as a full-stack systems problem spanning power, cooling, water, networking, operations, security, governance, and strategy.

Drawing on interviews with operators, utilities, and technology leaders, the episode explores how AI is stress-testing telecom networks, electrical grids, and operational models — and why the real challenge lies beneath the model layer. From EPB Chattanooga’s grid modernization and community impact, to Orange’s layered AI architecture, the discussion shows why infrastructure, data, and automation must come before intelligence.

The episode argues that AI will not be won by the carriers with the biggest models, but by those that understand physical limits, design resilient systems, and turn AI infrastructure into a platform for secure, low-latency, trusted services.


Steve Saunders:

Everyone's talking about AI as if it's a product or an app or a model you just deploy. It isn't. Talking to the first movers in our industry, it's clear that AI is a systems problem. And the moment you treat it like something you can just sprinkle like fairy dust on top of existing infrastructure, you're already in trouble because the real story of AI isn't a software story at all. It's about power, cooling, water, networks, operations, security, governance, and strategy all meeting the limits of the physical world.

As an industry, telecommunications is on a journey away from silos of departments, services, and technologies towards holism: the concept that systems form wholes that are greater than the sum of their parts. That's a helpful corrective right now to the hyperscaler hype around AI, which has turned into what the philosopher Gilbert Ryle would've called a category error. People talk about AI like it's a feature.

Because what it really does is stress-test everything underneath it. And the hyperscaler line, "just scale it," assumes infinite resources that don't actually exist. So today, we are going to treat AI the way operators, engineers, and industry should: holistically, as a full-stack challenge. Welcome to a pragmatic episode of Carrier 2.0.

Shashank Mane:

The gap in AI adoption across the industry isn't really in the math. It's in the surrounding system and machinery around it. When you're starting a pilot, the nature of the pilot is usually very simple. Pilots are often built in ideal conditions.

Steve Saunders:

The narrative coming from US hyperscalers today says AI is about bigger models, more GPUs, and faster inference. Now, that framing is convenient for them because it keeps the story inside the data center and within their competitive moat, but AI doesn't live in isolation. It sits on top of the grid, on top of networks, on top of supply chains, and critically on top of human institutions that still have to explain outcomes, handle failures, and absorb blame. In these environments, every transformative breakthrough becomes something else: a system stress test. And nowhere is that clearer than at the intersection of communications infrastructure and the electrical grid. And that's exactly where EPB in Chattanooga is deliberately integrating quantum computing, AI-driven optimization, fiber networking, and grid operations with results that are already measurable.

Janet Rehberg:

When we first decided to roll out the fiber network, one of the big reasons was to help automate the grid and reduce outages. And we saw that proven very successful: we've been able to reduce outages by about 55% on an annual basis. The other part of the fiber network was to really help support our community. We had an independent study this year look back at the first 15 years of our fiber network, and what that study shows is that it has brought over $5.3 billion in benefits to our community and has helped create and retain over 10,000 jobs. So we think that quantum is the next wave that could tie into both the communications and the grid to really support our community, taking it to that next level on both the grid and the security of the fiber network.

Steve Saunders:

So that's not AI hype. That's infrastructure reality. Grid, fiber, security, community impact, one system. Holistic. Here's the core tension in our narrative. Large language models, or LLMs, are probabilistic systems being bolted onto deterministic telecom, enterprise, and industrial worlds. Now, a hallucination is annoying in a chat window, but it's catastrophic in control systems, in planning, in policy, in defense, and in finance. Making the model smarter doesn't help. It just means the stupid happens less often. What we need are systems that never hallucinate, and that requires infrastructure that knows when not to act, which means AI based on machine learning, not LLMs. But let's bring this back to Carrier 2.0 territory and away from compute, because it's the network where things can head in one of two directions.

Speaker 4:

The insight that I think is dawning on many people more and more is that without networking, there is really no AI. It can't really function. It can't get the data. It can't interconnect the GPUs that will do the computation, so networking and secure networking is essential for that whole drive.

Steve Saunders:

That line from Martin is this whole episode in one sentence: without networking, there's no AI. The hyperscaler answer to every AI constraint is usually the same: scale. More GPUs, bigger clusters, more CapEx, more power. But there's a problem.

Speaker 9:

It's a trap.

Steve Saunders:

Power isn't infinite. Water isn't infinite. And capital markets aren't limitless. So if your AI strategy only works in a world of abundance, it doesn't work. So what does a carrier-grade, infrastructure-grade AI approach actually look like? Well, Orange has a useful way of describing it: layers, not miracles, with a hard distinction between deterministic systems and LLM-based agentic ones.

Philippe Ensarguet:

Telcos are living in an ecosystem that is, for me, like 10 years in the past. And something that is very important: at a moment in time when cloud native and Kubernetes have been production-grade for roughly 15 years, only about 5% to 10% of our network function assets are truly and really cloud native. Always keep in mind that what we are building is our telecom operator's Maslow pyramid: we have infrastructure to carry data, which enables automation, which enables AI. The layers are in this order: infrastructure, data, automation, AI.

Steve Saunders:

Infrastructure, data, automation, AI. That's the inversion that the hype cycle refuses to acknowledge. Unlike politicians and priests, carrier executives don't get to do magical thinking. They plan five, seven, even 10 years out, while the demand signal is still noisy and uncertain. So the question for them becomes how do you prepare for an AI-driven future without pretending you can foresee it perfectly?

John Keib:

How do you kind of thread that needle for the next three to five years while, frankly, not having a crystal ball for exactly what's going to happen with some of these technologies? I don't think we're here to say right now that AI is causing a huge boom. We anticipate that it will cause a huge boom, and to that extent we're overprovisioning our network pretty significantly to prepare for it.

Steve Saunders:

Exactly that. The system has to be resilient to uncertainty because that's the real-world environment. So if AI is a systems problem, the first failure mode isn't the model. It's the mess underneath: fragmented data, inconsistent processes, poorly defined outcomes, all amplified by false expectations at the board, shareholder, and end-user level. So the operators who are doing this well start in an almost boring place: clean up the data, make sure it's all in a standard, consistent format, narrow the use cases down, and quantify results.

Michel Combes:

The first piece that we had to do was really to build a clean, clear database that we can leverage. So that has been the first part of the journey, which was a little bit complex for us. As you know, we are a carve-out from Lumen, so we had systems which were all over the place, and so our data was also a little bit all over the place. So the first step was really to create this data repository that can then feed AI. That's the first. Second was to identify a few use cases where we knew that we had pain points. That's why I took the example of buried services, making sure that we say, "Okay, we know that we can improve this use case with AI."

And so be really focused on that, meaning it's very important to prioritize what you do with AI. Because of course if you look at AI, you say, "Oh, it can do everything." But at the end of the day, you cannot afford to do everything. So you just need to select a few areas, a few use cases where you believe that it could be useful.

Steve Saunders:

Now here's the part where Carrier 2.0 stops being an internal efficiency story because the big question isn't whether telcos are using AI inside their businesses. Obviously they already are. It's whether they can monetize AI for customers with latency, security, and of course trust as differentiators.

Philippe Ensarguet:

Today, moving beyond connectivity means, I would say, entering into a platforming approach, a platforming move where we manage consistent and seamless access to products and deliver the experience we want for our users, our customers, and also for our developers. We really want to envisage connectivity, security, and networking services as products, as true capabilities, answering the true needs of our customers, who want an experience similar to what they may have in the hyperscaler ecosystem.

Steve Saunders:

I'm going to leave you with a triptych of aphorisms today to help reframe the hoopla around AI into something more worthwhile in the telco world. AI isn't software, it's load. AI doesn't deploy, it attaches to reality. And if you're a carrier, AI isn't your product. AI infrastructure is your product. The winners here won't be the carriers with the biggest LLMs. They'll be the ones who understand that stack: infrastructure, data, automation, and then AI, and who design for physical limits, operational reality, and business outcomes. I'm Steve Saunders. I'll see you next time on Carrier 2.0.

The editorial staff had no role in this post's creation.