- Alianza believes Cisco's AI Grid architecture addresses only one of three key AI voice application categories for operators
- AI-assisted conversations and post-conversation intelligence use cases have trickier requirements but are potentially lucrative
- Alianza has a plan to address these challenges, but questions remain about its approach and consumer buy-in
Cisco made headlines this week with news it is working with Nvidia, AT&T, Comcast, Charter Communications and Akamai to deploy a new AI Grid architecture that pushes compute closer to users. The idea, of course, is that distributed, low-latency compute can help operators deploy new AI-enhanced services they can charge a premium for.
But to Alianza, the play feels like picking low-hanging fruit.
“To really bring value to end customers by combining artificial intelligence and their conversations, you need a lot more than a couple of GPUs sitting inside of a carrier network,” Alianza Chief Product and Technology Officer Dag Peak told Fierce.
Alianza’s argument is mainly focused on voice applications – so, more of a response to what Comcast is doing with Personal AI in rolling out an enterprise front desk concierge rather than what AT&T is doing with its IoT video implementation.
Peak said that the company thinks there are three main categories of conversational AI: AI-handled conversations (like the Comcast application), AI-assisted conversations, and post-conversational intelligence.
Difficulty mode: easy, medium, hard
AI-handled conversations, he said, are the easiest to implement because they come with fewer technical and human consent challenges. AI is overtly handling the interaction rather than sitting in as a participant in a human exchange.
These applications are also appealing because they come with the most potential to reduce labor costs for human support workers. Thus, Peak said there’s been an “over rotation and overfocus” on this category.
He added that means most of the industry is currently “ignoring” two other potentially lucrative application categories due to technical challenges and legal constraints. (Well, except for T-Mobile.)
Peak explained that AI-assisted conversations and post-conversational intelligence both require injecting AI directly into the call path as an active participant in conversations. That’s tricky when parties on the call don’t necessarily expect to have an AI agent listening in – it requires getting their consent.
Operators also have to sort out the details of actually recording the call, packaging it up in a container that applications can consume and allowing authorized applications to tap into those containers. “That whole authentication and authorization flow is tricky,” he said.
Additionally, AI-assisted conversations – like T-Mobile’s translation service – are highly sensitive to latency. Peak said it’s very easy to disrupt the flow of a conversation with even just a few seconds of latency.
“If you call into an AI agent and you can’t have a natural human level of back and forth…it becomes very frustrating very quickly,” he said.
Peak conceded that Cisco’s AI Grid architecture will help address the latency issue for AI-assisted conversations, leaving just the technical and consent challenges to be tackled.
But he argued that focusing on GPUs at the edge is “missing the point to a certain extent,” because a whole category of use cases – post-conversational intelligence – comes with much more lenient latency requirements. This means that if the technical and consent challenges are solved, processing can be done wherever computing is cheapest rather than at the edge.
Alianza’s answer
Alianza, of course, isn’t a disinterested party in this debate. The company has been pushing what it calls the Intelligent Communications Fabric, a framework which spans infrastructure, orchestration and services. According to Alianza, APIs deployed in the orchestration layer will allow operators to integrate services – including AI-enabled ones – into voice calls.
The fabric concept was introduced in September 2025. Peak told Fierce that Alianza plans to roll out APIs that can deliver on use cases across all three AI categories by September.
There are a few open questions about Alianza’s approach. First, the mechanism for forming an ecosystem of applications around its APIs is unclear. Second, while enterprises may be willing to pay for AI-enabled voice services, it’s not a given that consumers will be willing to do the same when they’re already being nickeled and dimed across a wide range of products and services.
If consumers on a budget have to choose between an upcharge for AI-enabled video entertainment or AI-enabled voice services, who’s to say they’ll choose the latter?