Survey data and expert commentary reported by TechRadar indicate that roughly 25% of Americans received deepfake voice calls in the past 12 months, signaling rapid growth in synthetic‑voice social‑engineering attacks. Experts warned that generative AI is being weaponized to scale voice‑impersonation scams, heightening risks for elderly and high‑value targets.

New survey data reported by TechRadar show that deepfake and synthetic‑voice phone calls have reached an inflection point across the United States, with about one in four respondents saying they received convincing cloned‑voice calls over the past year. Analysts and AI incident researchers cited in the report highlighted how generative models and widely available voice‑cloning tools enable attackers to rapidly produce realistic impersonations of relatives, company executives, or public officials, and then deploy those assets in emergency‑style cons to extract money, authentication codes, or privileged information. The piece explained how attackers combine cloned audio with personal detail harvested from social media to create highly persuasive scams at low marginal cost, and it underscored particular risks to elderly and otherwise isolated individuals. The TechRadar coverage also summarized expert calls for layered defenses, including stronger caller‑ID verification, industry collaboration to detect synthetic media, public awareness campaigns, and regulatory scrutiny of AI models and hosting platforms. The article framed deepfakes as an accelerant for existing fraud modalities, raising both enforcement and consumer‑protection challenges.