AI vs. AI is the new security battleground

  • AI is becoming an engine of cybercrime, enabling automated social engineering and other sophisticated attacks
  • Defenders must deploy AI-powered security to counter attackers who are industrializing cybercrime
  • Telcos face new security responsibilities as they secure critical 5G, AI and OT/IT infrastructure

Telcos and enterprises must prepare for emerging security threats posed by AI in the hands of attackers, part of a broader trend toward industrialized cybercrime.

"AI, automation and a mature cybercrime supply chain will make intrusion faster and easier than ever," Aamir Lakhani, global security strategist and researcher with Fortinet’s FortiGuard Labs, told Fierce. He warned of the "refinement and industrialization" of attack processes and malware.

"Autonomous cybercrime agents on the dark web will begin executing entire attack stages with minimal human oversight. These shifts will exponentially expand attacker capacity," Lakhani said.

He added that the time between intrusion and impact will shrink drastically, from days to minutes. That means speed will become the "defining risk factor for organizations in 2026." In other words, every minute counts.

As the security landscape changes, the role of telcos and other communications service providers (CSPs) is growing. CSPs must move beyond simple connectivity to securing the entire ecosystem of 5G, AI and IT/OT technologies, Lakhani said.

"Whether it is a chemical manufacturer automating dangerous tasks via private 5G, or an oil and gas company centralizing SCADA systems, the CSP is the backbone of these critical operations," he said.

A dark economy

In parallel with the emergence of AI attacks, the cybercrime economy will become more structured and industrialized, blending automation, integration and specialization, Lakhani said. Dark-web vendors will be able to offer access packages tailored to specific targets, "replacing the generic bundles that dominate today’s underground markets," he said.

He described a dark economy that is beginning to more closely resemble legitimate business. "Black markets will adopt customer service, reputation scoring and automated escrow," Lakhani said.

Automated tools and AI are enabling increasingly sophisticated threats that previously required a skilled human attacker. Chuck Herrin, F5 field chief information security officer (CISO), recently told Fierce the industry should be on the lookout for a greater volume of more advanced and automated attacks at the application logic layer.

The AI-driven crime wave is already underway. Some 87% of organizations reported experiencing an AI-driven cyberattack in 2024.

Friend from foe

As AI agents emerge, organizations will find it difficult to distinguish between AI agents run by attackers and legitimate AI agents. Herrin noted organizations will need to thread the needle of blocking bad actors without blocking their customers, especially as AI agents proliferate.

Today, AI agents rely on the same tools and processes that people use. "An AI agent trying to commit fraud looks like a human user trying to commit fraud," Herrin said. But that will change — AI agents will evolve their own tools and processes, making attacks more difficult to spot. 

"We're going to have to focus more on signals intelligence and trying to discern intent, rather than just looking for valid credentials for who is allowed to have access to data. It's a more sophisticated threat model we have to build," Herrin said.

Ian Swanson, Palo Alto Networks VP of AI security, noted that unauthorized action is eclipsing data leakage as the primary threat to organizations. "These agents require deep integration into our IT ecosystems to do their jobs. They connect directly to APIs, SaaS platforms and databases. That means a successful attack triggers real consequences."

The Open Worldwide Application Security Project (OWASP) recognizes the threat, ranking Excessive Agency sixth in its 2025 Top 10 Risk & Mitigations for LLMs and GenAI apps. The risk is exactly what the name suggests: AI agents need broad functionality, permissions and autonomy, but when administrators grant them too much power, they can do damage, whether through a poorly engineered benign prompt or a malicious prompt from an attacker.

"By manipulating an agent's inputs, an attacker can weaponize legitimate permissions to wire funds, modify code or alter database records," Swanson said. "The agent effectively becomes a super-user insider threat. We are entering an era where we must secure the actions machines take and not just the data they access."

'Social-engineering-as-a-service'

Attackers are also using AI for social engineering.

"A new underground economy is forming: AI-powered social-engineering-as-a-service attacks," Arik Atar, Radware senior researcher, cyber threat intelligence, said in a blog. Attackers are already using automated scripts that spoof caller IDs and play fraudulent voice recordings to trick victims into handing over two-factor authentication codes.

Deepfake-enabled fraud will increase, added Radware Principal Security Evangelist Chip Witt. Trusting audio or video alone for authentication is no longer safe. "The rise in deepfake-enabled fraud, including multimillion-dollar videoconference scams, proves that visual and voice cues cannot be relied upon. Businesses must adopt identity-first verification. This includes cryptographic credentials, multi-factor authentication, continuous behavioral checks and out-of-band confirmations for high-risk actions," he said.
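A minimal sketch of the out-of-band confirmation Witt describes appears below: a high-risk request made on one channel (say, a video call) executes only after a one-time code delivered over a second channel is echoed back. The send_sms helper and the flow are assumptions for illustration, not a reference to any Radware product:

```python
# Out-of-band confirmation sketch: a deepfaked voice on the first channel
# never sees the code delivered on the second one.
import hmac, secrets

def send_sms(phone: str, msg: str):          # placeholder second channel
    print(f"[sms to {phone}] {msg}")

pending = {}

def request_transfer(user: str, phone: str, amount: int) -> str:
    """Stage a transfer and push a one-time code over a separate channel."""
    code = secrets.token_hex(3)              # e.g. 'a91f3c'
    pending[user] = (code, amount)
    send_sms(phone, f"Confirm transfer of ${amount:,} with code {code}")
    return "pending out-of-band confirmation"

def confirm_transfer(user: str, code: str) -> str:
    expected, amount = pending.pop(user)
    if hmac.compare_digest(code, expected):  # constant-time compare
        return f"transfer of ${amount:,} executed"
    return "confirmation failed"

print(request_transfer("alice", "+1-555-0100", 25_600_000))
```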

Atar predicted the next step will be AI voicebots that can mimic real people, including coworkers, friends and loved ones, to make account takeovers even easier.

We've already seen deepfake-enabled attacks in the wild. At engineering firm Arup, attackers used deepfake video conferencing to impersonate executives and trick an employee into transferring $25.6 million, according to a report by the World Economic Forum. By contrast, both Ferrari and ad company WPP detected and stopped deepfake CEO impersonation scams in 2024.

AI for defense

Defenders will need AI to fight back, using automated triage and autonomous decision-making for real-time mitigation, Witt said. "Human response alone won't be fast enough."

But AI can't replace humans. "The solution is not removing humans but augmenting them," Witt said. "Autonomous AI agents can triage, contain and remediate at machine speed while humans provide oversight and judgment. In 2026, defense at human speed will no longer be viable. Agentic security will become standard practice."
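The human-oversight split Witt describes can be sketched as a simple triage gate: reversible containment happens at machine speed, while anything high-impact queues for a person. The severity scale and thresholds below are invented assumptions, not a description of any vendor's product:

```python
# Illustrative agentic-triage split: automate what is reversible,
# escalate what is not. Severity values are assumed to come from
# an upstream scoring step on a 0-10 scale.
from queue import Queue

human_review: Queue = Queue()

def triage(alert: dict) -> str:
    sev = alert["severity"]
    if sev < 4:
        return "auto-closed: benign"
    if sev < 8:
        # reversible containment is safe to automate
        return f"auto-contained: isolated host {alert['host']}"
    # irreversible or high-impact response stays with a person
    human_review.put(alert)
    return "escalated to human analyst"

for a in [{"severity": 2, "host": "ws-01"},
          {"severity": 6, "host": "db-07"},
          {"severity": 9, "host": "dc-01"}]:
    print(triage(a))
print(f"awaiting human judgment: {human_review.qsize()} alert(s)")
```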

Fortinet's Lakhani added that industrialized cybercrime will demand a more coordinated global response. Initiatives such as INTERPOL’s Operation Serengeti 2.0, in which African authorities dismantled cybercrime and fraud networks, show how joint intelligence sharing and targeted disruption can take down criminal infrastructure.

And new initiatives, such as the Cybercrime Bounty program launched in November by Fortinet and Crime Stoppers International, will enable global communities to safely report cyberthreats, helping to scale deterrence and accountability, he said.