Threat intelligence teams reported a commercial AI-driven phishing toolkit, sold in underground markets, that automates personalized phishing emails and multi-step social-engineering flows. Security specialists say the kit lowers the barrier to mounting convincing campaigns at scale and is already being used in large-scale romance and investment fraud operations.

Researchers and threat briefings have cataloged rapid uptake of commercially marketed AI phishing toolkits that automate the creation of tailored messages, spear-phishing sequences and multichannel engagement flows. Promoted on underground forums as affordable phishing-as-a-service, these kits use generative models to craft personalized subject lines, credible backstories and adaptive reply scripts that mimic a target's tone and known contacts, significantly increasing click-through and credential-harvesting rates.

Security teams reported that the toolkits bundle modular templates for email, SMS and social platforms, integrated engagement tracking, and automated persona management, letting attackers scale campaigns that were previously limited to bespoke, hand-run criminal operations. Their availability has already been linked to increases in romance, investment and credential fraud, since the kits cut both technical barriers and operational costs.

Analysts warned that defenders must respond with behavioral detection, rapid threat intelligence sharing, and mitigations such as multi-factor authentication and user education to blunt the efficiency gains attackers derive from generative AI. The shift underscores a broader trend: AI is commoditizing social engineering at scale.
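One of the mitigations analysts cite, multi-factor authentication, commonly uses time-based one-time passwords (TOTP, RFC 6238), which build on the HOTP algorithm of RFC 4226. As an illustration of what that second factor computes (a generic sketch, not tied to any toolkit or vendor in the report), the code below derives a 6-digit code from a shared secret:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    # HMAC-SHA1 over the 8-byte big-endian counter value
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: low 4 bits of the last byte select a 4-byte window
    offset = mac[-1] & 0x0F
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based variant (RFC 6238): the counter is the current 30 s window."""
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 test vector: ASCII secret "12345678901234567890", counter 0
print(hotp(b"12345678901234567890", 0))  # -> 755224
```

Note that codes like these can still be relayed by a live phishing proxy, which is why guidance increasingly favors phishing-resistant factors such as FIDO2 hardware keys alongside user education.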