• AiNews.com
  • Posts
  • Cyberattacks by AI Agents Are Coming—Experts Warn of Growing Risk

Cyberattacks by AI Agents Are Coming—Experts Warn of Growing Risk

A dark, high-tech digital illustration showing an intelligent AI agent with a glowing core scanning a server network for vulnerabilities. One server labeled “Honeypot” glows subtly, set as a trap. Data lines pulse through the network, branching off toward icons representing phishing, ransomware, and bots. In the background, cybersecurity researchers observe the attack from behind translucent holographic screens. The lighting alternates between red and blue, symbolizing the conflict between attackers and defenders in the cyber landscape. The scene conveys urgency, digital warfare, and the evolving challenge of securing AI systems.

Image Source: ChatGPT-4o

Cyberattacks by AI Agents Are Coming—Experts Warn of Growing Risk

AI agents are becoming increasingly capable of handling complex tasks—but their potential use in cybercrime is raising alarms. Designed to plan, reason, and act autonomously, these agents could eventually make it easier and cheaper for attackers to carry out sophisticated cyberattacks across the internet.

From Assistance to Exploitation

Though AI agents are currently used for benign purposes like scheduling meetings, making a reservation, ordering groceries, and task automation, cybersecurity experts warn that the same systems could be weaponized to:

  • Identify vulnerable systems

  • Hijack networks

  • Steal sensitive data

  • Launch large-scale ransomware attacks

While criminal deployment of agents at scale hasn't been observed yet, researchers and security companies are seeing early signs. Anthropic previously reported its Claude model was able to replicate a sensitive data theft scenario. And according to Mark Stockley of Malwarebytes, we may not be far from a world where “the majority of cyberattacks are carried out by agents. It’s really only a question of how quickly we get there.”

Real-World Testing: The Honeypot

To monitor this emerging threat, Palisade Research created the LLM Agent Honeypot—a trap designed to lure and analyze AI-driven hacking attempts. The honeypot mimics servers with sensitive data, embedding prompt-injection techniques to detect AI agents by testing how they respond to hidden commands. “Our intention was to try and ground the theoretical concerns people have,” says Dmitrii Volkov, research lead at Palisade.

Since launching in October, the system has logged over 11 million access attempts. Among them:

  • 8 potential AI agents were flagged

  • 2 confirmed agents, traced to Hong Kong and Singapore, passed both intelligence and response-time tests

“We’re looking out for a sharp uptick,” says Volkov. “When that happens, we’ll know that the security landscape has changed. In the next few years, I expect to see autonomous hacking agents being told: ‘This is your target. Go and hack it.’”

The team plans to expand the honeypot to cover social media platforms, websites, and databases in order to attract a wider variety of attackers, including spam bots and phishing agents.

Why Agents Are Different

Traditional hacking bots follow rigid scripts and can’t adapt to unforeseen changes. In contrast, AI agents can:

  • Assess targets

  • Adapt strategies mid-attack

  • Avoid detection

Volkov explains: “They can look at a target and guess the best ways to penetrate it. That kind of thing is out of reach of dumb scripted bots.”

Cybercriminals may eventually use agents to scale ransomware campaigns—delegating target selection and execution to systems that can replicate attacks cheaply and efficiently.

AI agents are also far cheaper than hiring professional hackers and can launch attacks at a scale and speed that humans can’t match. While ransomware attacks remain rare due to the expertise required, Stockley warns that agents could change that. “If you can delegate the work of target selection to an agent, then suddenly you can scale ransomware in a way that just isn’t possible at the moment,” he says. “If I can reproduce it once, then it’s just a matter of money for me to reproduce it 100 times.”

“Palisade Research’s approach is brilliant: basically hacking the AI agents that try to hack you first,” Vincenzo Ciancaglini, a senior threat researcher at the security company Trend Micro says. “While in this case we’re witnessing AI agents trying to do reconnaissance, we’re not sure when agents will be able to carry out a full attack chain autonomously. That’s what we’re trying to keep an eye on.”

Early Signs, Growing Concern

While full-fledged agent-driven attacks haven’t been confirmed, researchers are building tools to stay ahead. A benchmark created by Professor Daniel Kang at the University of Illinois Urbana-Champaign found that:

  • Current AI agents can exploit 13% of previously unknown vulnerabilities

  • With a brief description of the vulnerability, that success rate climbs to 25%, showing that AI systems can detect and exploit vulnerabilities even without prior exposure or training

Kang hopes this benchmark will help developers better understand and mitigate the risks before they spiral. “I’m hoping that people start to be more proactive about the potential risks of AI and cybersecurity before it has a ChatGPT moment. I’m afraid people won’t realize this until it punches them in the face,” he warns.

Looking Ahead

As AI agents grow more capable, the security implications stretch far beyond academic labs or high-level enterprise systems. Cyberattacks that once required specialized human expertise could soon be automated and scalable, lowering the barrier to entry for would-be attackers. This shift could lead to more frequent and widespread attacks targeting not just large institutions, but small businesses, local governments, and everyday users.

For the general population, this means the threat landscape may evolve rapidly. Phishing emails, fraudulent messages, and ransomware attempts could become more convincing, more adaptive, and more common. Automated agents could scan for and exploit everyday vulnerabilities—like misconfigured home routers, out-of-date software, or weak passwords—with far greater speed and precision than today’s basic bots.

At the same time, the tools being developed to detect and understand agentic behavior—like honeypots and vulnerability benchmarks—offer hope that defenses can evolve just as quickly. But cybersecurity experts warn that proactive preparation is essential.

In short, while the rise of autonomous AI agents in cybersecurity may still be emerging, the consequences—if left unchecked—could be felt widely and suddenly. For individuals, institutions, and policymakers alike, now is the time to prepare.

Editor’s Note: This article was created by Alicia Shapiro, CMO of AiNews.com, with writing, image, and idea-generation support from ChatGPT, an AI assistant. However, the final perspective and editorial choices are solely Alicia Shapiro’s. Special thanks to ChatGPT for assistance with research and editorial support in crafting this article.