In February 2024, a finance employee at a Hong Kong multinational transferred $25 million to criminals after joining what appeared to be a routine video conference. Every person on the call, including the company’s CFO, was a deepfake. The employee saw familiar faces, heard familiar voices, and followed what seemed like legitimate instructions. By the time anyone noticed, the money was gone.
This was not an isolated incident. According to Surfshark, deepfake-related fraud losses have reached $2.19 billion globally, with $1.65 billion reported in 2025 alone. Bright Defense reports that deepfake fraud attempts surged 2,137% over three years, with a new attack attempted every five minutes in 2024. Deloitte projects that US deepfake fraud losses alone will hit $40 billion by 2027.
The threat is real, growing fast, and targeting businesses of all sizes.
How Deepfake Video Call Scams Work
Criminals follow a structured playbook. First, they collect publicly available video and audio of their targets. Earnings calls, conference talks, social media posts, and YouTube interviews all provide training data. Modern AI tools can generate a convincing face and voice clone from just a few minutes of footage.
Next, attackers set up a video call using standard platforms like Zoom, Microsoft Teams, or Google Meet. They use real-time deepfake software to overlay the stolen identity onto a live feed. The technology has advanced to the point where facial expressions, lip movements, and voice tone sync convincingly during normal conversation.
The target, usually someone in finance or operations, receives a meeting invitation that appears to come from a trusted colleague or executive. Once on the call, they see and hear people they recognize. The deepfake “executive” issues urgent instructions: transfer funds, share confidential data, or approve a transaction. The urgency and apparent authority make it difficult to question the request.
In the Hong Kong case, the attackers even populated the call with multiple deepfake participants, creating the illusion of a full team meeting. This level of sophistication is no longer rare. DeepStrike estimates that online deepfakes grew from roughly 500,000 in 2023 to about 8 million in 2025, an annual growth rate near 900%.
Why Traditional Security Fails
Standard video conferencing platforms were not designed to verify identity at the level deepfakes now require. Most rely on email invitations and passwords, which are easily compromised through phishing. Once inside a meeting, there is no built-in mechanism to confirm that the person on screen is who they claim to be.
Even companies with multi-factor authentication on their communication tools face risk. The attack does not target the platform’s login system. It targets human perception. When an employee sees their CFO’s face and hears their voice saying “approve this transfer now,” the instinct is to comply.
Financial institutions are especially vulnerable. In the first half of 2025 alone, they lost $410 million to deepfake-enabled scams, already exceeding the total for all of 2024. But the threat extends to every industry. Healthcare, legal, and technology companies have all reported incidents.
How to Protect Your Business
Defending against deepfake video call scams requires a combination of policy, technology, and communication discipline.
Establish verification protocols. No financial transaction above a set threshold should be approved based solely on a video call. Require out-of-band confirmation through a separate, verified channel. If your CFO requests a transfer on video, confirm it through a different secure messaging app where identity is cryptographically verified.
Use end-to-end encrypted communication. Platforms with end-to-end encryption and strong identity verification make it significantly harder for attackers to impersonate team members. When every message and call is encrypted and tied to a verified identity, spoofing becomes exponentially more difficult.
Train employees to recognize red flags. Unusual urgency, requests to bypass normal approval processes, and unfamiliar meeting formats are all warning signs. Regular training that includes deepfake examples helps employees develop the skepticism needed to pause and verify.
Limit public exposure of executives. Every public video and audio clip is potential training data for deepfake models. Companies should audit the amount of executive media available online and consider reducing unnecessary public appearances on video platforms.
Adopt code words or challenge phrases. Some organizations now use pre-agreed verbal codes during sensitive calls. A deepfake cannot produce a response to an unexpected challenge question that was agreed upon offline.
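The code-word idea generalizes naturally into a simple cryptographic challenge-response, which also serves as a form of out-of-band verification. Below is a minimal sketch in Python using only the standard library; the shared key, function names, and workflow are illustrative assumptions, not any specific product’s API. The key would be agreed offline (for example, during in-person onboarding), never over the channel being verified.

```python
import hashlib
import hmac
import secrets

# Assumption for illustration: a secret agreed offline, never sent
# over the video call or any channel an attacker might observe.
SHARED_KEY = b"agreed-offline-during-onboarding"

def make_challenge() -> str:
    """Generate a fresh, unpredictable challenge for this call."""
    return secrets.token_hex(16)

def expected_response(key: bytes, challenge: str) -> str:
    """Derive the response a legitimate party computes from the shared key."""
    return hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()

def verify_response(key: bytes, challenge: str, response: str) -> bool:
    """Constant-time comparison, so timing differences leak nothing."""
    return hmac.compare_digest(expected_response(key, challenge), response)

# Workflow: the employee issues a fresh challenge during the call; the
# real executive computes the response on their own trusted device and
# reads it back. A deepfake without the key cannot produce a valid answer.
challenge = make_challenge()
response = expected_response(SHARED_KEY, challenge)
assert verify_response(SHARED_KEY, challenge, response)
assert not verify_response(SHARED_KEY, challenge, "a-guess-without-the-key")
```

Because each challenge is freshly generated, a recorded answer from an earlier call cannot be replayed, which is exactly the property a pre-agreed static code word lacks.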
Why Secure Messaging Is the Foundation
The common thread in deepfake video call scams is compromised identity. Attackers succeed because the platforms in use cannot guarantee that the person you see is who they claim to be.
PhizChat addresses this at the protocol level. With end-to-end encryption, verified digital identities, and communication channels that cannot be spoofed or intercepted, PhizChat provides the secure second channel that every verification protocol needs. When a suspicious request comes through any platform, confirming it through PhizChat means confirming it through a system built on cryptographic trust, not visual appearance.
In a world where seeing is no longer believing, the security of your communication tools determines the security of your business.
FAQ
What is a deepfake video call scam?
It is a fraud scheme where criminals use AI to create real-time fake video and audio of trusted people, such as company executives, during a live video call to trick employees into transferring money or sharing sensitive data.
How much money have businesses lost to deepfake scams?
Global losses have reached $2.19 billion, with $1.65 billion in 2025 alone. Deloitte projects US losses will reach $40 billion by 2027.
Can deepfakes be detected during a live video call?
Detection is improving but unreliable in real time. The most effective defense is not detection but verification through a separate, secure messaging app with end-to-end encryption and verified identities.
How does PhizChat help prevent deepfake fraud?
PhizChat provides a secure, encrypted channel with verified digital identities. It serves as an out-of-band verification tool where employees can confirm suspicious requests through a system that cannot be spoofed by AI-generated video or audio.