McAfee’s AI Hub reports that consumer-grade voice-cloning tools are fueling a wave of convincing “grandchild in distress” and relative-in-need scams, and that many people cannot reliably distinguish cloned voices from the real thing. The analysis warns that criminals are pairing such audio with vishing (voice phishing) and smishing (SMS phishing) campaigns to extract gift cards, wire transfers, or cryptocurrency by manufacturing urgency and bypassing normal verification.

McAfee’s analysis and related industry surveys document a marked rise in voice-cloning misuse as inexpensive AI tools produce realistic audio impersonations of family members and close contacts. Researchers found that attackers can synthesize short, emotionally urgent recordings that persuade targets to transfer funds, approve payments, or disclose credentials; many consumers, and even some institutions, struggle to distinguish AI-cloned audio from authentic recordings.

The report highlights a common fraud pattern: an initial deepfake call or voice message claiming a relative is in immediate trouble, follow-up pressure to send funds via gift cards, wire transfers, or cryptocurrency, and social-engineering hooks that exploit urgency, confusion, and reluctance to consult others.

McAfee recommends layered defenses, including outbound-call verification policies, multi-factor confirmation via video or known passphrases, public-awareness campaigns, and vendor controls that detect synthetic audio. Industry stakeholders urge firms operating communications platforms to flag unusual payment flows and call on regulators to update guidance on voice-based authentication. The analysis stresses that technological detection and consumer education must advance in tandem to blunt the rapid weaponization of voice-synthesis tools.
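The "known passphrase" control mentioned above can be as low-tech as a phrase the family agrees on in advance and asks for before any money moves. A minimal sketch of what an automated check might look like (the function names and example phrase are illustrative assumptions, not part of McAfee's report):

```python
import hmac
import unicodedata


def normalize(phrase: str) -> str:
    """Normalize case, accents, and spacing so minor typing or
    transcription differences don't cause a false mismatch."""
    cleaned = unicodedata.normalize("NFKD", phrase).strip().lower()
    return " ".join(cleaned.split())


def verify_passphrase(spoken: str, expected: str) -> bool:
    """Compare a caller's answer against the pre-agreed passphrase.

    hmac.compare_digest runs in constant time, which avoids leaking
    how many characters matched if this check is ever exposed to
    repeated probing."""
    return hmac.compare_digest(
        normalize(spoken).encode("utf-8"),
        normalize(expected).encode("utf-8"),
    )


# Hypothetical example: the family agreed on "purple giraffe" ahead of time.
print(verify_passphrase("  Purple  Giraffe ", "purple giraffe"))  # True
print(verify_passphrase("help me right now", "purple giraffe"))   # False
```

The point of the sketch is the protocol, not the code: a cloned voice can reproduce how a relative sounds, but not an out-of-band secret that was never spoken on a recorded channel.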