- Agentic identity and intent are emerging as unsolved AI security problems, said Shawn Wormke, F5 SVP of product management
- Voice, audio and file-based inputs are hard to protect with today's text-centric guardrails
- Rapid innovation keeps producing new threats with little to no warning
F5 APPWORLD 2026, LAS VEGAS — The industry has made rapid progress securing AI systems against known threats, but harder problems that are just beginning to come into focus are still ahead, said Shawn Wormke, F5 SVP of product management.
Speaking with Fierce, Wormke outlined several challenges he expects will dominate AI security conversations over the next year, starting with two he sees as intertwined: agentic identity and agentic intent.
Who is the agent, and what is it really trying to do?
As enterprises deploy more AI agents to automate work, the question of who — or what — is actually taking action on enterprise systems is becoming urgent.
"The number of identities that are going to be running around is going to be crazy," Wormke said. An agent written by one employee and then run by another creates an immediate permission problem: the two people may have very different levels of access, but they're operating the same tool. An SVP of product, like Wormke, gets access to financials that a middle manager might not have. "But we might be using the same agent with access to the same data," Wormke said.
The non-deterministic nature of AI makes this harder than the equivalent human problem. A person who oversteps their bounds can be called and corrected. An agent operating in a machine-to-machine environment has no such feedback loop, Wormke said.
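One common mitigation for the shared-agent problem is to resolve permissions from the person invoking the agent, not the person who wrote it. The sketch below illustrates that idea; the roles, entitlements, and function names are all hypothetical, not anything F5 has described.

```python
# Hypothetical sketch: a shared agent's effective permissions are the
# intersection of what its author and its current runner may access,
# so running someone else's agent can't widen your own access.
# All role names and resources here are illustrative.

USER_ENTITLEMENTS = {
    "svp_product": {"financials", "roadmap", "org_data"},
    "middle_manager": {"roadmap", "org_data"},
}

def effective_permissions(agent_author: str, invoking_user: str) -> set[str]:
    """Grant only resources both the author and the runner may access."""
    return USER_ENTITLEMENTS[agent_author] & USER_ENTITLEMENTS[invoking_user]

def agent_can_read(agent_author: str, invoking_user: str, resource: str) -> bool:
    """Check a single resource against the agent's effective permissions."""
    return resource in effective_permissions(agent_author, invoking_user)
```

Under this scheme, an agent written by an SVP but run by a middle manager cannot reach the financials, even though the same agent run by its author can.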
In an earlier conversation with Fierce, Wormke's colleague Chuck Herrin, F5 field CISO, named AI as one of a trio of threats heating up the enterprise security landscape, alongside the Iran war and quantum computing.
Intent compounds the identity problem. Even a properly credentialed agent acting in good faith can produce an outcome that diverges from what the user actually wanted. That's because AI systems communicate with other AI systems and the original intent can get lost or reinterpreted, Wormke said.
For example, ask an agent to find the best route between two cities and book tickets, and there's a chance it books a flight instead of a bus because it concluded flying was faster — without knowing you're afraid to fly. "It still got you from point A to B," Wormke said. "But the outcome looks like it could be correct, and it's not."
Nobody has solved these problems yet — not the large established vendors, not the wave of startups working in the space. "I'll let you know next year when we've solved it," he said. He expects agentic identity and intent to be among the hottest areas in security over the coming year.
Protecting AI that listens and sees
Multimodal AI, which operates on audio, images and video rather than text alone, presents other problems, Wormke said. Today's AI security guardrails are largely built around text, because that's what large language models natively process. But as enterprises deploy AI agents in voice-based applications like call centers, a translation step gets introduced: audio comes in, gets converted to text, passes through security controls and gets converted back to audio on the way out.
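The pipeline Wormke describes can be sketched in a few lines. Every function below is a stand-in (no real STT, TTS, or guardrail product is implied); the point is structural — the text guardrail can only inspect the transcript, and each conversion hop adds latency to the voice turn.

```python
# Illustrative sketch of a text-centric guardrail in a voice pipeline:
# audio in -> speech-to-text -> text guardrail -> text-to-speech -> audio out.
# All three stages are toy stand-ins; time.sleep models per-hop latency.
import time

def speech_to_text(audio: bytes) -> str:
    time.sleep(0.01)          # stand-in for STT latency
    return audio.decode()

def text_guardrail(text: str) -> str:
    banned = {"confidential"}  # toy policy: drop flagged words
    return " ".join(w for w in text.split() if w not in banned)

def text_to_speech(text: str) -> bytes:
    time.sleep(0.01)          # stand-in for TTS latency
    return text.encode()

def guarded_voice_turn(audio_in: bytes) -> tuple[bytes, float]:
    """Run one voice turn through the pipeline, returning output and latency."""
    start = time.perf_counter()
    audio_out = text_to_speech(text_guardrail(speech_to_text(audio_in)))
    return audio_out, time.perf_counter() - start
```

A guardrail that natively understood audio would collapse the first and last stages, which is the direction Wormke expects the industry to move.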
That pipeline creates latency, and voice applications are sensitive to latency. The industry will likely move toward a native understanding of audio in real time, bypassing the conversion step. But the security infrastructure to protect those interactions doesn't fully exist yet, Wormke said. F5's recent platform updates targeting AI complexity and quantum risk are steps in that direction.
Companies deploying voice agents for customer service are worried about agents that become rude, give away confidential information or direct customers to competitors. Guardrails that work well against text-based prompt injection may not transfer cleanly to audio, Wormke said.
Files present similar problems. An attacker who understands that an organization's guardrails are text-based may look for ways to smuggle harmful inputs through image or document uploads. Closing those gaps is essential, and the bar for efficacy has to be high, Wormke said. "If something's 55% effective at blocking, that's a coin flip. That's not security." F5's standard, he said, is a minimum of 90% effectiveness before a guardrail ships.
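The efficacy bar Wormke describes amounts to a simple gate: measure a guardrail's block rate against a labeled attack set and refuse to ship below a threshold. The sketch below assumes that framing; the guardrail functions and attack samples are illustrative, and only the 90% figure comes from the article.

```python
# Minimal sketch of a ship/no-ship gate on guardrail efficacy.
# A guardrail here is any callable that returns True when it blocks an input.

SHIP_THRESHOLD = 0.90  # F5's stated minimum before a guardrail ships

def block_rate(guardrail, attack_samples: list[str]) -> float:
    """Fraction of known-malicious samples the guardrail blocks."""
    blocked = sum(1 for sample in attack_samples if guardrail(sample))
    return blocked / len(attack_samples)

def ready_to_ship(guardrail, attack_samples: list[str]) -> bool:
    """A 55% guardrail is a coin flip; only pass at or above the threshold."""
    return block_rate(guardrail, attack_samples) >= SHIP_THRESHOLD
```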
Data is still a mess
Data sprawl is an issue the industry may be underestimating. A year ago, data governance was a frequent topic of concern. It has gotten quieter — but not, Wormke suspects, because the problem is solved.
Earlier in the conference, F5 CEO François Locoh-Donou compared AI's effect on enterprise IT to urban sprawl, and the data problem is a big part of why.
"I think enterprises just kind of gave up," Wormke said, "because their data is just everywhere." Data scattered across silos, systems and cloud environments remains a foundational challenge for AI deployment, and one that hasn't meaningfully improved just because the conversation around it has faded.
The pace of change is a wildcard
Underlying all of these issues is a broader dynamic Wormke kept returning to: the technology is moving faster than anyone's ability to anticipate the problems it creates. He pointed to the OpenClaw autonomous personal assistant and similar emerging threats as examples of challenges that weren't even on the radar a short time ago.
"Some of these things we didn't even see four months ago," he said. "We didn't even see that technology coming. It just came on scene and there was a huge reaction."
That speed of change means the field is still working from a first-generation security posture against an adversarial landscape that is evolving continuously, Wormke said. The problems being solved today are likely the easy ones.