New York State ITS offers a plain-language guide to how AI scams work, including impersonation tactics. The explainer highlights red flags such as urgency pressure and offers guidance on what to do when a suspicious interaction appears.

New York State ITS published “Understanding AI Scams,” a consumer-oriented explainer designed to make AI-enabled fraud easier to recognize. The guide focuses on how scammers use AI to increase credibility, especially when impersonating people or organizations, and it connects those techniques to behaviors victims can observe in real time.

The content highlights recurring manipulation patterns, including conversational setups intended to establish trust and reduce scrutiny. It also addresses urgency pressure: scammers push victims to act quickly, leaving little time for verification. In many AI scam flows, the goal is to move the victim from a conversation to an action step, such as clicking a link, providing information, or following instructions, before they pause to validate what they have been told.

The page’s practical takeaways include disengaging when warning signs appear and not treating AI-persona messages as proof of authenticity. It encourages independent verification rather than reliance on the tone, voice, or appearance of legitimacy that AI-assisted content can create.

For a safety-conscious U.S. audience, the guide is useful because it translates a fast-changing threat category into a checklist of observable behaviors. Even readers who don’t follow cybersecurity can apply the same anti-manipulation mindset across messaging apps and social platforms, where AI-driven impersonation is increasingly likely.