Hiya’s State of the Call 2026 report found that one in four Americans received an AI‑generated deepfake voice call in the past year, and that consumers believe scammers are outpacing mobile network operators’ defenses by a 2‑to‑1 margin. The study warns that the rapid scaling of voice deepfakes is eroding trust in phone channels, and it calls for stricter regulation and carrier liability to address impersonation fraud.

Hiya’s State of the Call 2026 presents survey and telemetry findings showing that AI‑voice deepfake calls have reached mass scale: roughly 25% of U.S. adults reported receiving at least one deepfake voice call within the past 12 months. The report pairs consumer survey responses with industry data indicating a sharp rise in caller‑ID spoofing, synthetic‑voice impersonation of friends and company representatives, and automated campaigns that combine social engineering with high‑quality vocal mimicry. Respondents expressed skepticism about carriers’ ability to detect and block these calls: consumers say scammers are beating mobile operators two‑to‑one, and many call for stronger regulatory standards, liability rules, and mandatory authentication protocols for voice traffic.

Hiya warns that unchecked growth in voice deepfakes could meaningfully erode trust in phone‑based transactions and customer service, raising fraud losses and adding friction for legitimate businesses. The report recommends accelerated deployment of call‑authentication standards, industry collaboration on threat intelligence, consumer education, and a regulatory framework that holds carriers and platforms accountable for preventing synthesized‑voice impersonation fraud.