- Agentic AI is about to introduce a whole new set of security challenges
- The ability to act spontaneously and autonomously means agents can both achieve tasks and potentially cause a lot of damage along the way
- Guardrails and a good risk management framework are key to managing this emerging threat
Generative AI has already revolutionized the security landscape, increasing the risk of data leakage, introducing the potential for data poisoning and giving attackers new tools to amplify the impact of their strikes. But agentic AI is about to make things exponentially more complicated.
The problem is spontaneity.
With GenAI, “the threat begins when you ask it a question…but it doesn’t do anything unless it’s asked a question,” F5 VP of Engineering Jimmy White told Fierce. “With agents it’s the absolute opposite.”
Agents connect to one or many models as well as to other agents, and “that in itself is dangerous,” White said. Why? Because agents will act spontaneously to achieve the task they’re assigned, and the route they decide to take to execute on their mission can be either simple (acting internally) or complex (reaching out to other agents for help).
Agents also have the autonomy to use tools, such as the ability to read or modify code and databases, which can easily turn into a significant problem.
“Having a spontaneous virtual device being able to spontaneously execute a virtual or kinetic task can lead to any type of good or bad situation,” he said.
White gave an example: Take an accounting agent tasked with deleting the account of a customer called Acme whose bill is 60 days overdue. The agent pings a model to ask how to complete the task, and the model tells it to use a certain SQL statement. The problem, White said, is that SQL is relatively easy to use but unforgiving of minor errors. So, the difference between ending the statement with a plain semicolon and ending it with a wildcard before the semicolon can mean the difference between deleting just Acme and deleting every company in the database whose name starts with the letter A.
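To make that concrete, here is a small, purely illustrative sketch of the two statements; the table, column names and sample data are hypothetical, not taken from White's example.

```python
import sqlite3

# Hypothetical customers table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, days_overdue INTEGER)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("Acme", 60), ("Apex", 0), ("Aurora", 15), ("Borealis", 5)],
)

# What the agent was asked to do: remove only Acme.
conn.execute("DELETE FROM customers WHERE name = 'Acme';")

# The wildcard slip described above: a LIKE pattern ending in '%'
# would instead remove every company whose name starts with 'A'.
# conn.execute("DELETE FROM customers WHERE name LIKE 'A%';")

remaining = [row[0] for row in conn.execute("SELECT name FROM customers")]
print(remaining)  # ['Apex', 'Aurora', 'Borealis'] -- only Acme is gone
```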
The agentic AI threat isn’t hypothetical. McKinsey noted in October that 80% of organizations reported already having encountered “risky behaviors from AI agents, including improper data exposure and access to systems without authorization.”
The answer, according to White and McKinsey, is a combination of robust guardrails, risk management and traceability.
Asked whether it’s possible for a human to write enough guardrails to account for every potential problematic agent action, White acknowledged it’s not. But he said GenAI itself can be used to help build protections that prevent agents from taking overly broad actions.
White added there’s also a right and wrong way to go about implementing guardrails.
For instance, blocking an agent from having a specific “thought” can have a seriously negative impact on the agent, White said, the same way a person would struggle after having their train of thought interrupted. Instead, the approach should be to nudge the agent in the right direction, kind of like leaning over the shoulder of a junior developer and noting that in a certain set of circumstances “we don’t do that, we do this instead.”
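As a minimal sketch of that distinction, a guardrail could hand guidance back to the agent rather than reject the step outright; the function names, pattern list and SQL strings below are assumptions for illustration, not anything F5 or White described.

```python
from __future__ import annotations

# Hypothetical sketch of "block" vs. "nudge" guardrails; all names and patterns are invented.
RISKY_PATTERNS = ["LIKE 'A%'"]  # example of an overly broad wildcard delete

def hard_block(proposed_sql: str) -> str | None:
    """Rejects the risky step outright, interrupting the agent's train of thought."""
    if any(p in proposed_sql for p in RISKY_PATTERNS):
        return None  # the agent gets nothing back and has to start over
    return proposed_sql

def nudge(proposed_sql: str) -> tuple[str, str | None]:
    """Returns the step along with corrective guidance the agent can act on."""
    if any(p in proposed_sql for p in RISKY_PATTERNS):
        advice = ("In these circumstances we don't use a wildcard in a DELETE; "
                  "match the exact customer name instead.")
        return proposed_sql, advice
    return proposed_sql, None
```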
“Nearly always for enterprise use cases, it’s for good intentions that these things are created,” he concluded. “What you’re trying to do is protect against the N incorrect ways of doing it.”