An analysis of the AI Incident Database finds deepfake audio and video scams proliferating, enabling large-scale impersonation-for-profit attacks. Experts warn that accessible AI tools have lowered costs and increased personalization, raising risks for businesses and consumers alike.

A new analysis drawing on the AI Incident Database and reporting by The Guardian concludes that deepfake-enabled fraud is proliferating at industrial scale, enabling highly believable impersonations across audio and video channels. The study documents a wide range of scams, from CEO-voice impersonations used to authorize fraudulent transfers to fake medical professionals and fabricated customer-service interactions, resulting in sizable financial losses and reputational harm. Researchers and cybersecurity experts warn that off-the-shelf AI tools have dramatically reduced the technical and monetary barriers to producing convincing multimedia fakes, increasing both the volume and the sophistication of targeted attacks. The report also highlights how personalization makes social-engineering attacks more likely to succeed, blurring the line between automated content and genuine human interaction. Its recommendations include stronger authentication practices, multi-factor verification for financial requests, industry-wide incident-reporting standards, and investment in detection technologies. The analysis frames deepfakes as an escalating threat at the intersection of fraud, cybersecurity, and information integrity, stressing the urgency of coordinated defensive measures across enterprises and regulators.