Researchers warn that criminals are seeding fraudulent customer‑service phone numbers across websites so that AI search and LLM‑powered answer systems surface those numbers as single, authoritative answers. Victims who call can be routed to scam call centres that attempt identity theft, remote‑access compromise, or extortion.

Security researchers have described a growing "content poisoning" tactic in which attackers deliberately plant fake customer‑service phone numbers across a wide range of online pages, from YouTube video descriptions to review sites and even academic pages, so that search engines and AI assistants return a single, seemingly authoritative support number. The attack exploits modern LLM‑powered search features that present one answer or snippet rather than a full list of sources, amplifying poisoned entries and making traditional site‑level verification harder. Callers who dial a fraudulent number risk being routed to a scam call centre that attempts credential harvesting, social engineering to obtain remote‑access permissions, or extortion.

Researchers urge companies to monitor for unauthorized listings, consumers to cross‑check phone numbers against official corporate sites and invoices, and AI platform operators to improve provenance and source transparency so that single‑answer responses do not uncritically elevate poisoned content. The technique represents a convergence of web abuse and AI retrieval weaknesses that raises acute risk for consumers seeking quick help.
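The monitoring advice lends itself to automation. Below is a minimal sketch of that idea, assuming a company already has a feed of pages that mention its brand; the URLs, official number, and regex are illustrative placeholders rather than anything from the researchers' report. It fetches each page, extracts phone‑number‑like strings, and flags any that are not on the official allowlist.

```python
# Minimal sketch of "monitor for unauthorized listings" (assumptions labelled below).
import re
import requests

# Hypothetical data: the company's real support numbers, in a normalised form,
# and candidate pages (e.g. from an existing brand-monitoring search feed).
OFFICIAL_NUMBERS = {"+1-800-555-0100"}
PAGES_TO_CHECK = [
    "https://example.com/forum/thread-123",    # placeholder URLs
    "https://example.org/video-description",
]

# Loose pattern for North American-style numbers; production monitoring would
# need locale-aware parsing (e.g. the `phonenumbers` library).
PHONE_RE = re.compile(r"\+?1?[\s.-]?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}")

def normalise(raw: str) -> str | None:
    """Reduce a matched string to digits and format as +1-NXX-NXX-XXXX."""
    digits = re.sub(r"\D", "", raw)
    if len(digits) == 10:
        digits = "1" + digits
    if len(digits) != 11:
        return None  # skip strings the loose regex over-matched
    return f"+{digits[0]}-{digits[1:4]}-{digits[4:7]}-{digits[7:]}"

def scan(url: str) -> list[str]:
    """Return phone-like strings on the page that are not official numbers."""
    html = requests.get(url, timeout=10).text
    found = {n for m in PHONE_RE.findall(html) if (n := normalise(m))}
    return sorted(found - OFFICIAL_NUMBERS)

if __name__ == "__main__":
    for url in PAGES_TO_CHECK:
        suspects = scan(url)
        if suspects:
            print(f"{url}: unrecognised numbers {suspects}")
```

A flagged number is not proof of fraud (pages legitimately list third‑party numbers), so in practice the output would feed a review or takedown queue rather than trigger automatic action.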