As AI has become more accessible, so has its misuse by scammers.
AI-generated fraud schemes using manipulated audio, video, and images are already costing organisations millions and compromising business operations across every sector.
Relying on instinct is no longer enough. This guide brings together practical detection techniques to help business leaders, security teams, and compliance officers understand the threat, recognise warning signs, and put safeguards in place to identify and prevent deepfake-enabled fraud before it escalates.
What is a deepfake?
A deepfake is synthetic media (video, audio, or images) created or manipulated using artificial intelligence to appear authentic. The term combines “deep learning” and “fake”, reflecting the neural networks that power this technology.
In the context of fraud, deepfakes are weaponised deception tools. Common scenarios include CEO fraud, vendor impersonation, voice-based authentication bypass, and fabricated investment endorsements.
The average cost of a deepfake fraud incident is US$600,000, highlighting how quickly a single attack can escalate into a major financial event.
How are deepfakes made?
To spot AI-generated fraud, security teams need to understand how deepfakes are created. The technology typically relies on Generative Adversarial Networks (GANs), in which one model generates synthetic content and another evaluates its authenticity.
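For readers who want a concrete picture of that adversarial loop, the sketch below shows it in skeletal form using PyTorch: a generator turns random noise into candidate samples, a discriminator scores real versus generated data, and each network is trained against the other. The network sizes and the stand-in “real” data are purely illustrative; this is not a working deepfake pipeline.

```python
# Minimal GAN training loop (illustrative only): the generator learns to fool
# the discriminator, while the discriminator learns to separate real samples
# from generated ones. Sizes and "real" data are placeholders.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim)        # stand-in for genuine training samples
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # Discriminator update: label real samples 1 and generated samples 0
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: try to make the discriminator label fakes as real
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```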
According to Sawan Joshi, Chief Information Security Officer at FDM, “Deepfake video and AI-generated voice are now good enough to pass a casual ‘gut check.’ Attackers don’t just pretend to be a CEO—they sound like them, pause like them, and look exactly like them.”
What are deepfakes most often used for?
Deepfake technology has amplified business email compromise and CEO fraud. In 2024, a multinational company lost US$25.6 million after an employee was convinced by a deepfake video conference impersonating a senior executive.
Vendor impersonation schemes are also increasing, with attackers using cloned voices or fabricated video calls to request payment redirections.
As Sawan notes, “The con doesn’t start with a request for money—it starts with a believable conversation.”
How can you identify a deepfake?
For fraud prevention teams, detection requires combining automated tools with human verification protocols and healthy scepticism. Train your staff to recognise these fraud indicators:
Visual inconsistencies in video communications – Watch for unnatural lighting that doesn’t match the environment, overly smooth or plastic-looking skin textures that appear “too perfect,” inconsistent shadows or lighting angles, and blurred or unstable background elements. These artefacts often indicate AI generation. In fraud scenarios, backgrounds may appear generic or inconsistent with the caller’s known location.
Facial and behavioural anomalies – Look for misaligned facial features, particularly around the eyes and mouth, unusual or irregular blinking patterns (deepfakes often struggle with natural eye movement or blink too little), distorted teeth or inconsistent dental features, and unnatural head movements or robotic pose transitions.
Audio-visual synchronisation issues – Pay attention when lip movements don’t precisely match spoken words, especially during complex or fast-paced speech. Notice when audio quality suddenly changes mid-conversation, background noise disappears unnaturally, or there are unnatural pauses or speech patterns that differ from the speaker’s known cadence. In fraud calls, delays between video and audio may indicate real-time deepfake generation.
Things to be immediately suspicious of:
- Urgent requests to bypass standard procedures
- Unusual payment requests or changes to banking details
- Requests to move conversations to less secure channels
- Any high-value transaction requested through unfamiliar channels
Verification protocol failures – If the supposed caller resists verification measures, such as refusing to answer security questions, avoiding callbacks to known numbers, or declining to reference shared experiences, treat this as a major fraud indicator.
Metadata forensics – Examine file properties for inconsistencies in creation dates and modification timestamps. Deepfake tools often strip or alter metadata, creating detectable gaps. Video files from legitimate sources typically contain complete metadata trails (a minimal example of this kind of check appears after these indicators).
Cross-channel verification – Use reverse image and video search tools to identify if content has appeared elsewhere in different contexts. Compare suspicious communications against recent verified interactions with the supposed sender. Establish out-of-band verification protocols. If you receive an unusual request via video or audio, confirm through a separate, independently initiated communication channel using known contact information.
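As a minimal illustration of the metadata check above, the sketch below shells out to ffprobe (part of FFmpeg) to dump a file’s container metadata and flags missing fields such as a creation timestamp or encoder tag. Which fields a legitimate recording should carry varies by device and platform, so treat the field list here as an assumption and the output as a prompt for closer review, not proof of manipulation.

```python
# Illustrative metadata check: dump container and stream metadata with ffprobe
# and flag files missing fields that legitimate recordings often carry.
# A missing field is a prompt for review, not proof of fraud.
import json
import subprocess

def inspect_metadata(path: str) -> list[str]:
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)
    tags = info.get("format", {}).get("tags", {})

    warnings = []
    # The fields checked are assumptions about what typical recordings include.
    for field in ("creation_time", "encoder"):
        if field not in tags:
            warnings.append(f"missing container tag: {field}")
    if not info.get("streams"):
        warnings.append("no streams reported")
    return warnings

print(inspect_metadata("suspicious_call_recording.mp4"))
```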
Tools for detecting deepfakes:
- Microsoft Video Authenticator for analysing video authenticity
- Intel’s FakeCatcher for real-time deepfake detection
- Email security tools with AI-generated content detection
Regulatory frameworks are beginning to address the misuse of synthetic media. The EU AI Act introduces transparency requirements and obligations for high-risk AI systems, while U.S. state-level laws increasingly criminalise deepfakes used in fraud and identity theft.
Organisations should work closely with legal and security teams to ensure compliance and maintain oversight of AI-related risks.
Next Steps
If your organisation has not yet assessed its exposure to AI-generated fraud, now is the time to act decisively.
Begin by conducting a comprehensive risk assessment that identifies your organisation’s most vulnerable points of entry. This includes reviewing your current verification protocols for financial transactions and assessing employee awareness of deepfake threats. Understanding where your organisation is most exposed allows you to prioritise resources and implement targeted defences.
Next, establish robust multi-factor verification procedures for high-risk scenarios. This means creating clear protocols for confirming unusual requests. Additionally, regular training sessions that showcase real-world deepfake examples help staff develop the instinct to question what they see and hear.
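To make “clear protocols for confirming unusual requests” concrete, here is a hypothetical sketch of an out-of-band verification gate: a payment change or high-value transfer is held until it has been confirmed over a channel your team initiates, using contact details already on file rather than details supplied in the request. All names, numbers, and thresholds below are illustrative, not a description of any real system.

```python
# Hypothetical out-of-band verification gate: hold high-risk requests until
# they are confirmed via a separately initiated callback to a contact number
# already on file (never a number supplied in the request itself).
from dataclasses import dataclass

HIGH_RISK_THRESHOLD = 10_000  # illustrative value; set per your own policy

# Contact details drawn from your own records, not from the request
KNOWN_CONTACTS = {"vendor-042": "+44 20 7946 0000"}

@dataclass
class PaymentRequest:
    requester_id: str
    amount: float
    bank_details_changed: bool
    callback_confirmed: bool = False  # set only after an independent callback

def requires_out_of_band_check(req: PaymentRequest) -> bool:
    return req.bank_details_changed or req.amount >= HIGH_RISK_THRESHOLD

def approve(req: PaymentRequest) -> bool:
    if requires_out_of_band_check(req) and not req.callback_confirmed:
        number = KNOWN_CONTACTS.get(req.requester_id, "<no contact on file>")
        print(f"HOLD: call {number} on a separate channel before releasing funds")
        return False
    return True

# Example: an urgent request to redirect a vendor payment stays on hold
print(approve(PaymentRequest("vendor-042", 48_500.0, bank_details_changed=True)))
```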
FDM supports organisations through a multi-layered approach to building resilient defences against AI-powered threats. Our cybersecurity consulting services help you stay ahead of evolving fraud tactics and protect your organisation from financial and reputational damage. Get in touch to learn how we can help you.