Your phone rings. The voice on the other end sounds exactly like someone you trust, maybe a senior government official or even your boss. But it’s not them. It’s a scammer using artificial intelligence to impersonate them with frightening accuracy. This isn’t science fiction. It’s happening right now, and the FBI wants you to know about it.
- Since April 2025, attackers have been sending text messages and AI-generated voice messages that pose as senior U.S. officials in an ongoing phishing campaign.
- The use of AI-based voice cloning surged 442% between the first and second halves of 2024.
- On a personal level, falling victim could lead to identity theft or financial loss, while compromised organizational accounts can become springboards for additional attacks.
What’s Actually Happening With These AI Voice Scams
The FBI has issued a warning about an AI-powered phishing campaign involving cloned voices of high-ranking officials. The attacks combine smishing (phishing by text message) with vishing (phishing by voice) built on deepfake audio to trick victims into handing over sensitive information or money.
“If you receive a message claiming to be from a senior US official, do not assume it is authentic,” the FBI said. Since April 2025, scammers have “sent text messages and AI-generated voice messages that claim to come from a senior US official in an effort to establish rapport before gaining access to personal accounts.”
These messages are designed to establish rapport with the target before prompting them to hand over access to personal accounts or sensitive data. Many targets are current or former high-ranking government officials, but the danger extends to anyone in their contact lists who might also be approached by the impostors.
How Voice Cloning Technology Works
One of the most concerning threats in 2025 is AI voice cloning, a technology that can replicate someone’s voice with startling accuracy from just three seconds of audio. Criminals use it to mimic a person’s voice and trick that person’s friends or family into sending money or sharing sensitive information.
In an AI voice phishing attack, criminals use deepfake voice technology to create audio that sounds like a real person. For example, an attacker might send a text claiming to be a senior official (smishing), then follow up with a phone call using a cloned voice of that official (vishing) to make the con believable.
Creating these voice clones requires no technical skill. As one expert put it, “No CS background, no master’s degree, no need to program, literally go on to your app store on your phone or to Google and type in voice clone or deepfake face generator, and there’s thousands of tools for fraudsters to cause harm.”
Why These Scams Are So Effective
Attackers exploit the trust we place in familiar voices, impersonating everyone from White House officials to corporate CEOs to request sensitive information or urgent wire transfers.
In one sophisticated scam in Hong Kong, a finance employee at a multinational corporation was deceived into transferring $25 million to hackers. The fraudsters used deepfake technology to impersonate the company’s CFO and other employees on a fake video conference. The employee was suspicious at first, but set aside his doubts because the other attendees looked and sounded just like colleagues he recognized.
Once an account is compromised, the scammers can exploit trusted contact information to impersonate others and scam additional victims, creating a ripple effect.
How to Protect Yourself From AI Voice Phishing
The FBI and cybersecurity experts recommend several approaches to stay safe from these attacks:
Verify the identity of anyone who calls you or sends text or voice messages. Before responding, research the originating number and the organization or person claiming to contact you. Then independently look up a phone number for that person and call them directly to confirm.
Organizations should implement multi-factor authentication, train staff to verify unusual requests even if the voice sounds familiar, and use code words or secondary verification methods for sensitive transactions (the first sketch below shows what such a check might look like).
Listen closely for irregularities like unnatural background noise, robotic or monotone speech, and frequent mispronunciations. Watch for signs of a choppy conversation, where the flow feels abrupt or unnatural (the second sketch below shows one crude way to measure the monotone cue).
If possible, limit the publicly available recordings and images of yourself online, make your social media accounts private, and restrict followers to people you know. The less raw material fraudsters can gather, the harder it is for them to use generative AI software to build a convincing fake identity.
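To make the code-word idea concrete, here is a minimal sketch of an out-of-band verification gate for sensitive requests. Everything in it is hypothetical: the directory, the identifiers, and the printed results stand in for an integration with a real identity provider and phone system.

```python
# Hypothetical sketch: out-of-band verification for sensitive requests.
# Callback numbers and code words are collected in person beforehand,
# never taken from the incoming message itself.
VERIFIED_DIRECTORY = {
    "cfo-001": {"callback": "+1-555-0100", "code_word": "bluebird"},
}

def verify_request(claimed_sender_id: str, spoken_code_word: str,
                   number_we_called: str) -> bool:
    """Approve only if we dialed the number on file and heard the agreed code word."""
    record = VERIFIED_DIRECTORY.get(claimed_sender_id)
    if record is None:
        return False                                # unknown sender: reject outright
    if number_we_called != record["callback"]:
        return False                                # we must place the call ourselves
    return spoken_code_word == record["code_word"]

# A scammer who supplies their own "call me back" number fails the check.
print(verify_request("cfo-001", "bluebird", "+1-555-0199"))  # False
print(verify_request("cfo-001", "bluebird", "+1-555-0100"))  # True
```

The key design choice is that the organization always originates the callback to a number collected in advance; a scammer who supplies their own “call me back” number fails the check no matter how convincing the voice sounds.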
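To illustrate the “robotic or monotone speech” cue, the second sketch measures how much a recording’s pitch varies. It is emphatically not a deepfake detector: the cutoff is an arbitrary placeholder, the filename is hypothetical, and modern clones can produce natural-sounding intonation. It only shows the kind of trace this irregularity can leave, assuming the open-source librosa audio library is installed.

```python
# Crude illustration of the "monotone speech" cue: measure pitch variation.
# NOT a reliable deepfake detector; the cutoff below is an arbitrary placeholder.
# Assumes `pip install librosa` and a local recording named voicemail.wav.
import numpy as np
import librosa

def pitch_variation(path: str) -> float:
    """Coefficient of variation of the voiced pitch track (higher = livelier)."""
    y, sr = librosa.load(path, sr=None)
    f0, _, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )
    voiced = f0[~np.isnan(f0)]        # keep frames where a pitch was actually detected
    if voiced.size == 0:
        return 0.0
    return float(np.std(voiced) / np.mean(voiced))

cv = pitch_variation("voicemail.wav")          # hypothetical recording
if cv < 0.05:                                  # arbitrary "unnaturally flat" cutoff
    print(f"Pitch variation {cv:.3f}: unusually monotone; verify the caller.")
else:
    print(f"Pitch variation {cv:.3f}: typical range (verify anyway).")
```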
Don’t Let a Familiar Voice Fool You
The FBI advises the public to listen to the tone and word choice in voice messages to “distinguish between a legitimate phone call or voice message from a known contact and AI-generated voice cloning, as they can sound nearly identical.”
When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help. If you believe you have been a victim, report the incident to your local FBI Field Office or the Internet Crime Complaint Center (IC3) at www.ic3.gov.
When anyone’s voice can be cloned in seconds, trust your instincts but verify what you hear. A quick callback to a known number could save you from a very expensive mistake.

