A new analysis shows deepfake-enabled impersonation scams have become cheap, scalable and highly targeted, ranging from fake executive video calls to AI-generated medical and political endorsements. Researchers warn that synthetic audio and video are now a dominant, fast-growing vector for financial fraud and social engineering attacks.

An AI Incident Database analysis and reporting by The Guardian detail how deepfake-enabled impersonation scams have shifted from isolated incidents to industrial-scale operations. Fraudsters use synthetic audio and video to simulate executives, doctors and public figures, producing realistic, targeted interactions that trick victims into high-value transfers and endorsements.

One documented case involved a fake executive video call that persuaded a Singapore finance officer to transfer nearly $500,000. Other examples include AI-generated doctor endorsements and political impersonations intended to sway opinion or discredit individuals. Researchers found these operations are inexpensive to scale, often automated and capable of producing large batches of plausible content tailored to specific victims.

The report raises alarms about financial losses, erosion of trust in recorded media and mounting difficulties for investigators as standard verification checks become unreliable. Experts urge coordinated technical, policy and legal responses, including detection tools, provenance standards and stronger collaboration between industry and law enforcement, to stem the rapid growth of synthetic-media scams.