March 4, 2026

AI-generated phishing campaigns are becoming harder for security teams to detect and respond to. Attackers are using publicly available information and generative AI to craft messages that look legitimate and highly personalized. As these attacks scale, many organizations are finding that traditional email security tools and awareness training alone are not enough.

IRONSCALES’ Winter 2026 release introduces three AI agents designed to help security teams identify and respond to phishing threats earlier in the attack cycle. The platform now includes agents focused on reconnaissance, investigation, and employee training. The idea is to help security teams understand where attacks may come from, analyze suspicious emails quickly, and prepare employees for the types of messages attackers are likely to send.

Building a Closed-Loop Email Security System

Eyal Benishti, CEO of IRONSCALES, said the company is focusing on a connected system that continuously improves detection and training. “Our platform enables a complete closed loop architecture of your security posture, it offers a truly agentic experience versus traditional automation, and it provides custom protection based on what it learns of the user,” Benishti told MSSP Alert.

He explained that the system links reconnaissance, detection, and training in a continuous cycle.

“Our primary differentiator is the ‘closed loop’ system where continuous reconnaissance feeds detection, which in turn feeds training, all without requiring a human in the middle,” Benishti said. “While other vendors might use OSINT-driven attack generation solely for employee training, IRONSCALES uses it to improve detection first.”

The Red Teaming Agent analyzes an organization’s public footprint and generates the types of phishing campaigns attackers might build.

“The Red Teaming Agent researches an organization’s public footprint, generates the personalized phishing campaigns attackers would build, and feeds those attacks directly into the platform’s Adaptive AI to harden detection before a real email is ever sent,” Benishti said.
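The flow Benishti describes can be sketched as a simple pipeline in which reconnaissance output becomes both adversarial training data for detection and content for employee simulations. This is an illustrative sketch of the closed-loop concept only; every function and data structure here is hypothetical, not an IRONSCALES API.

```python
# Hypothetical sketch of a "closed loop": reconnaissance feeds detection,
# which in turn feeds training, with no human step in between.

def run_closed_loop(org_domain: str) -> dict:
    # 1. Reconnaissance: gather the organization's public footprint
    #    (stubbed here with static example data).
    footprint = {"domain": org_domain, "exposed_roles": ["finance", "hr"]}

    # 2. Generate the phishing scenarios an attacker could build from it.
    scenarios = [f"spearphish targeting {role}"
                 for role in footprint["exposed_roles"]]

    # 3. Detection: feed those scenarios in as adversarial examples
    #    so the detector sees the attack before a real email arrives.
    detection_rules = {s: "flagged" for s in scenarios}

    # 4. Training: reuse the same scenarios as employee simulations.
    simulations = list(detection_rules)

    return {"scenarios": scenarios,
            "rules": detection_rules,
            "simulations": simulations}

result = run_closed_loop("example.com")
```

The point of the loop is that steps 3 and 4 consume the same reconnaissance output, so detection is hardened first and training follows automatically.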

Agents Designed to Adapt, Not Just Automate

The Red Teaming Agent also illustrates the company's broader approach to agentic AI.

“A lot of what the industry calls ‘agentic’ is just traditional automation following a static predetermined path,” Benishti said. “IRONSCALES differentiates by deploying agents that actually reason, make decisions, and adapt based on their findings.”

For example, the agent continuously monitors changes in an organization’s public exposure. “The Red Teaming Agent continuously discovers changes in public exposure, decides which attack scenarios to generate, and determines how to harden detection models, which means the inputs, decisions, and outputs change every cycle,” Benishti said.

Faster Email Investigations for Security Teams

Another component of the release is the Phishing SOC Agent, which analyzes suspicious emails and produces a detailed security assessment.

Benishti said organizations should see faster response times as a result. “Customers and MSPs should see their phishing remediation times drop significantly – from hours to just minutes,” he said. “The Phishing SOC Agent is designed to deliver complete L2-level forensic investigations in minutes, providing deep analysis on demand.”

Instead of relying only on risk scores, the platform generates a full assessment. “Instead of just outputting a confidence score, the platform produces a complete Security Assessment,” Benishti said. “Measurable outcomes include the automated generation of clear verdicts (Safe, Phishing, or Spam), key evidence points, and full activity logs ready for immediate reporting, executive escalation, or compliance review.”

Reducing Analyst Workload

Benishti said the agents are meant to support security teams rather than replace them. “These agents do not replace security teams; they augment what the team can do without requiring additional headcount,” he said. “In other words, we’re helping skilled workers automate the automatable tasks so they can focus on other human-level priorities.”

The Phishing SOC Agent handles several technical investigation tasks automatically. “The Phishing SOC Agent completely shifts the day-to-day workload by taking on the heavy lifting of deep forensics,” Benishti said. “It automatically runs five parallel analysis tracks simultaneously: sender verification, body analysis for social engineering, URL chain inspection, attachment scanning through more than 100 malware engines with sandbox detonation, and relay tracing.”
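The five tracks Benishti names are independent of one another, which is what makes running them in parallel straightforward. The sketch below shows one way such a fan-out could look; the track names come from the article, but each implementation is a placeholder stub, not real forensic logic.

```python
from concurrent.futures import ThreadPoolExecutor

# Illustrative stubs for the five analysis tracks named in the article.
# Each returns (track_name, suspicious?) based on a toy keyword check.
def sender_verification(msg): return ("sender", "reply-to-mismatch" in msg)
def body_analysis(msg):       return ("body", "urgent wire transfer" in msg)
def url_inspection(msg):      return ("urls", "redirect-chain" in msg)
def attachment_scan(msg):     return ("attachments", "macro" in msg)
def relay_tracing(msg):       return ("relays", "unusual-hop" in msg)

TRACKS = [sender_verification, body_analysis, url_inspection,
          attachment_scan, relay_tracing]

def investigate(message: str) -> dict:
    # Fan the message out to all five tracks concurrently, then merge
    # the findings into a single verdict plus supporting evidence.
    with ThreadPoolExecutor(max_workers=len(TRACKS)) as pool:
        findings = dict(pool.map(lambda track: track(message), TRACKS))
    verdict = "Phishing" if any(findings.values()) else "Safe"
    return {"verdict": verdict, "evidence": findings}

report = investigate("urgent wire transfer with redirect-chain link")
# report["verdict"] is "Phishing"; report["evidence"] records per-track hits
```

Because the tracks share no state, wall-clock time is bounded by the slowest track rather than the sum of all five, which is what drives the "hours to minutes" claim.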

As a result, analysts can focus more on response decisions. “By instantly delivering these L2-level forensics, security teams no longer have to push investigations off with a ‘we’ll get to it later’ mindset,” Benishti said. “It removes the manual burden of gathering context and cross-referencing threat intelligence, allowing analysts to operate from a place of judgment and make higher-level remediation decisions instantly.”

Personalized Phishing Simulations for Employees

The third component, the Phishing Simulation Agent, focuses on employee training using more realistic scenarios.

“Instead of relying on generic templates that only test if employees can spot fake emails, the Phishing Simulation Agent targets high-risk employees with hyper-personalized simulations in their native language,” Benishti said.

Organizations can track improvements in employee behavior and overall resilience.

“MSSPs and customers can track measurable improvements in user vulnerability across four specific factors (click history, reporting rate, role severity, and training gaps) and monitor an overall organization resilience score on a risk-sorted dashboard,” Benishti said.
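A weighted combination of the four factors Benishti lists is one plausible way such a score could be computed. The weights, scale, and normalization below are purely illustrative assumptions, not IRONSCALES' actual model; each factor is assumed normalized to 0..1 with higher meaning riskier (so a high reporting rate would first be inverted).

```python
# Hypothetical weights over the four factors named in the article.
WEIGHTS = {"click_history": 0.4, "reporting_rate": 0.2,
           "role_severity": 0.25, "training_gaps": 0.15}

def user_vulnerability(factors: dict) -> float:
    """Weighted sum of the four risk factors, each normalized to 0..1."""
    return round(sum(WEIGHTS[k] * factors[k] for k in WEIGHTS), 3)

def org_resilience(users: list) -> float:
    """Resilience as a 0..100 score: 1 minus mean user vulnerability."""
    avg = sum(user_vulnerability(u) for u in users) / len(users)
    return round((1 - avg) * 100, 1)

users = [
    {"click_history": 0.8, "reporting_rate": 0.9,
     "role_severity": 1.0, "training_gaps": 0.5},   # high-risk user
    {"click_history": 0.1, "reporting_rate": 0.2,
     "role_severity": 0.3, "training_gaps": 0.0},   # low-risk user
]

# Risk-sorted view, as a dashboard would present it:
ranked = sorted(users, key=user_vulnerability, reverse=True)
```

Sorting users by the same score that feeds the org-level number keeps the per-user dashboard and the resilience metric consistent with each other.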

Additional Security Controls in the Platform

The Winter 2026 release also adds integrated email encryption to protect outbound messages containing sensitive information. Security teams can set policies that automatically encrypt regulated content, while users can also apply encryption when needed.
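Policy-driven encryption of this kind typically amounts to pattern-matching outbound content against rules for regulated data, with a manual override for the sender. The sketch below shows the general shape; the patterns are deliberately simplified examples, and none of this reflects IRONSCALES' actual policy engine.

```python
import re

# Illustrative policies for regulated content (simplified patterns).
POLICIES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def should_encrypt(body: str, user_requested: bool = False) -> bool:
    # Encrypt when any policy matches the outbound message body,
    # or when the sender explicitly requested encryption.
    return user_requested or any(p.search(body) for p in POLICIES.values())

should_encrypt("Employee SSN is 123-45-6789")   # matches the ssn policy
should_encrypt("See you at lunch")              # no policy match
should_encrypt("hi", user_requested=True)       # manual override
```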

The platform also expands its deepfake protection capabilities for Microsoft Teams meetings. The system now analyzes voice patterns in addition to visual signals to detect impersonation attempts during meetings. Identity profiles are created automatically as employees participate in normal meetings, allowing organizations to deploy the feature without a separate enrollment process.

Why This Matters for MSSPs and Security Teams

For MSSPs and enterprise security teams, these updates reflect how phishing defense is changing. AI is making it easier for attackers to create targeted impersonation messages that look legitimate from the start. In response, security platforms are starting to combine exposure analysis, automated investigations, and employee training. As phishing attacks become more personalized and automated, tools that help organizations understand their exposure, investigate emails faster, and train employees using realistic scenarios may play a larger role in managing email and social engineering risk.
