interview AI brokers characterize the brand new insider risk to firms in 2026, in accordance with Palo Alto Networks Chief Safety Intel Officer Wendi Whitmore, and this poses a number of challenges to executives tasked with securing the anticipated surge in autonomous brokers.

“The CISO and safety groups discover themselves below a whole lot of strain to deploy new know-how as rapidly as attainable, and that creates this large quantity of strain – and large workload – that the groups are below to rapidly undergo procurement processes, safety checks, and perceive if the brand new AI purposes are safe sufficient for the use circumstances that these organizations have,” Whitmore informed The Register

“And that is created this idea of the AI agent itself changing into the brand new insider risk,” she added.

In response to Gartner’s estimates, 40 % of all enterprise purposes will integrate with task-specific AI agents by the tip of 2026, up from lower than 5 % in 2025. This surge presents a double-edged sword, Whitmore stated in an interview and predictions report

On one hand, AI brokers may help fill the continued cyber-skills gap that has plagued safety groups for years, doing issues like correcting buggy code, automating log scans and alert triage, and quickly blocking safety threats. 

After we look via the defender lens, a whole lot of what the agentic capabilities enable us to do is begin considering extra strategically about how we defend our networks, versus at all times being caught on this reactive scenario

“After we look via the defender lens, a whole lot of what the agentic capabilities enable us to do is begin considering extra strategically about how we defend our networks, versus at all times being caught on this reactive scenario,” Whitmore stated. 

Whitmore informed The Register she had not too long ago spoken with one in all Palo Alto Networks’ inside safety operations middle (SOC) analysts who had constructed an AI-based program that listed publicly identified threats in opposition to the cybersecurity store’s personal personal threat-intel knowledge, and analyzed the corporate’s resilience, in addition to which safety points had been extra prone to trigger hurt.

This, she stated, permits the agency to “focus our strategic insurance policies over the subsequent six months, the subsequent sure, on what sorts of issues will we must be setting up? What knowledge sources do we’d like that we aren’t essentially considering of right this moment?”

The subsequent step in utilizing AI within the SOC includes categorizing alerts as actionable, auto-close, or auto-remediate. “We’re in numerous levels of implementing these,” Whitmore stated. “After we have a look at agentic, we begin with a few of the extra easy use circumstances first, after which progress as we change into extra assured in these from a response functionality.”

Nonetheless, these brokers – relying on their configurations and permissions – may have privileged access to delicate knowledge and programs. This makes agentic AI susceptible – and a very attractive target to attack.

One of many dangers stems from the “superuser drawback,” Whitmore defined. This happens when the autonomous brokers are granted broad permissions, making a “superuser” that may chain collectively entry to delicate purposes and sources with out safety groups’ information or approval. 

“It turns into equally as necessary for us to make it possible for we’re solely deploying the least quantity of privileges wanted to get a job executed, identical to we’d do for people,” Whitmore stated. 

Does your CEO have an AI doppelganger?

“The second space is one we’ve not seen in investigations but,” she continued. “However whereas we’re on the predictions lens, I see this idea of a doppelganger.”

This includes utilizing task-specific AI brokers to approve transactions or evaluation and log off on contracts that might in any other case require C-suite degree guide approvals.

“We take into consideration the people who find themselves working the enterprise, and so they’re oftentimes pulled in one million instructions all through the course of the day,” Whitmore stated. “So there’s this idea of: We are able to make the CEO’s job extra environment friendly by creating these brokers. However finally, as we give extra energy and authority and autonomy to those brokers, we’ll then begin entering into some actual issues.”

For instance: an agent might approve an undesirable wire switch on behalf of the CEO. Or think about a mergers and acquisitions state of affairs, with an attacker manipulating the fashions in such a means that forces an AI agent to behave with malicious intent.

Through the use of a “single, well-crafted immediate injection or by exploiting a ‘software misuse’ vulnerability,” adversaries now “have an autonomous insider at their command, one that may silently execute trades, delete backups, or pivot to exfiltrate the complete buyer database,” in accordance with Palo Alto Networks’ 2026 predictions.

This additionally illustrates the continued risk of prompt-injection attacks. This yr, researchers have repeatedly proven immediate injection assaults to be an actual drawback, with no repair in sight. 

“It is in all probability going to get loads worse earlier than it will get higher,” Whitmore stated, referring to prompt-injection. “That means, I simply do not suppose we have now these programs locked down sufficient.”

How attackers use AI

A few of that is intentional. “New programs, and the creators of those applied sciences, want individuals to have the ability to provide you with artistic assault use circumstances, and this typically includes manipulating” the fashions, Whitmore stated. “Because of this we have got to have safety baked in, and right this moment we’re forward of our skis. The event and innovation inside the AI fashions themselves is occurring loads sooner than the incorporation of safety, which is lagging behind.”

Making attackers extra highly effective

In 2025, Palo Alto Networks’ Unit 42 incident response workforce noticed attackers abuse AI in two methods. One: it allowed them to conduct conventional cyberattacks sooner, and at scale. The second concerned manipulating fashions and AI programs to conduct new sorts of assaults.

“Traditionally, when an attacker will get preliminary entry into an setting, they need to transfer laterally to a site controller,” Whitmore stated. “They need to dump Energetic Listing credentials, they need to elevate privileges. We do not see that as a lot now. What we’re seeing is them get entry into an setting instantly, go straight to the inner LLM, and begin querying the mannequin for questions and solutions, after which having it do all the work on their behalf.”

Whitmore, together with nearly each different cyber exec The Register has spoken with over the previous couple of months, pointed to the “Anthropic assault” for example.

She’s referring to the September digital break-ins at multiple high-profile companies and authorities organizations later documented by Anthropic. Chinese language cyberspies used the corporate’s Claude Code AI software to automate intel-gathering assaults, and in some circumstances they succeeded.

Whereas Whitmore would not anticipate AI brokers to hold out any absolutely autonomous assaults this yr, she does count on AI to be a drive multiplier for community intruders. “You are going to see these actually small groups nearly have the potential of huge armies,” she stated. “They’ll now leverage AI capabilities to take action far more of the work that beforehand they’d have needed to have a a lot bigger workforce to execute in opposition to.”

Whitmore likens the present AI growth to the cloud migration that occurred twenty years in the past. “The largest breaches that occurred in cloud environments weren’t as a result of they had been utilizing the cloud, however as a result of they had been focusing on insecure deployments of cloud configurations,” she stated. “We’re actually seeing a whole lot of equivalent indicators with regards to AI adoption.”

For CISOs, this implies establishing greatest practices with regards to AI identities and provisioning brokers and different AI-based programs with entry controls that restrict them to solely knowledge and purposes which can be wanted to carry out their particular duties.

“We have to provision them with least-possible entry and have controls arrange in order that we are able to rapidly detect if an agent does go rogue,” Whitmore stated. ®


Source link