The Signicat Blog
Laima Kuusaitė

Social Media Manager

AI-driven Fraud and Deepfakes: The Rising Threat to Financial Institutions

Artificial intelligence (AI) is redefining the reality we live in today. While many people use it to enhance their abilities, cybercriminals exploit AI to execute sophisticated fraud schemes targeting the financial industry. AI-driven fraud is becoming increasingly prevalent, and deepfake fraud represents a significant new threat. 

According to Signicat’s latest data from the 2024 report, “The Battle Against AI-Driven Identity Fraud”, 42.5% of detected fraud attempts are now AI-driven, with nearly a third succeeding. Deepfake scams lead the way, with a staggering 2,137% surge in attempts over the last three years. This rapid escalation signals an urgent need for financial institutions to strengthen their cybersecurity defences and better understand how AI-driven identity fraud operates. 

The report not only touches upon the current state of AI-driven identity fraud but also explores decision-makers’ experiences and preparedness, alongside strategies to prevent AI-driven fraud in the future. 

What are AI-Driven Fraud and Deepfakes? 

We all know about AI, and we’re all familiar with fraud – what happens when they come together?  

We get a new, powerful set of AI-generated fraud techniques that combine ease with sophistication. Fraudsters no longer need to choose between hitting many victims with generic scams or focusing on high-effort, targeted attacks; AI-driven fraud allows them to do both. What used to require immense skill and time can now be executed within minutes. 

Three years ago, it was all about creating synthetic identities and forging documents. Today, the majority of financial fraud decision-makers identify the techniques shown below as the most common types of AI-driven identity fraud:

[Figure: Most common types of AI-driven identity fraud]

With deepfake fraud at the top, it can be difficult to determine whether the people around us are real. Earlier this year, during a fireside chat with Signicat at Money20/20, David Birch, Global Ambassador at Consult Hyperion, captured this perfectly with a parallel to World of Warcraft (WoW). He told the story of a man playing WoW who soon realised he was the only real player among the bots. While this sounds innocent enough in gaming, in the real world deepfake video fraud can have devastating consequences. 

Growing Sophistication of Deepfake Fraud 

A notable deepfake fraud incident occurred in Hong Kong at the start of 2024. A finance worker at a multinational firm was tricked into transferring $25 million after fraudsters used deepfake technology to impersonate the company’s CFO during a video call. This wasn’t an isolated case – other participants in the call were also deepfakes. This is just one example of how AI-generated fraud is escalating in both sophistication and scale. 

According to Kasada’s 2024 State of Bot Mitigation report, 57% of respondents are worried about how generative AI is enabling criminals to pull off complex identity theft and fraud attacks. Rapid advances in AI allow for the creation of ever more convincing deepfake scams. Today, deepfakes account for 6.5% of all fraud cases, further underscoring the importance of deepfake fraud prevention for the financial and payments industry. 

Presentation attacks vs. Injection attacks 

Two of the most common techniques used by cybercriminals today are presentation attacks and injection attacks. 

  • Presentation attacks, also known as spoofing, involve presenting falsified or fake biometric credentials (such as photos or deepfakes) to an identity verification system with the goal of gaining unauthorised access. 
  • Injection attacks involve feeding a deepfake video directly into the data stream of the authentication system, bypassing the physical camera entirely rather than presenting anything to it. 
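
To make the distinction concrete, here is a minimal sketch in Python of where each family of checks sits in a verification flow. The signal names, thresholds, and the CaptureSession structure are illustrative assumptions, not any vendor's real API:

```python
# Minimal sketch of presentation- vs injection-attack defences.
# All names and thresholds here are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class CaptureSession:
    liveness_score: float          # 0..1 output of a liveness model (blink, texture, depth cues)
    device_attested: bool          # attestation that a physical camera produced the feed
    stream_signature_valid: bool   # integrity check on the incoming video stream

def passes_presentation_checks(s: CaptureSession, threshold: float = 0.9) -> bool:
    # Presentation attacks (photos, masks, deepfakes replayed to the camera)
    # are countered by liveness detection on the captured frames.
    return s.liveness_score >= threshold

def passes_injection_checks(s: CaptureSession) -> bool:
    # Injection attacks never touch the camera: the fake is fed straight into
    # the data stream, so the defence is verifying the capture source itself.
    return s.device_attested and s.stream_signature_valid

def verify(s: CaptureSession) -> bool:
    # Each family of checks closes a gap the other leaves open.
    return passes_presentation_checks(s) and passes_injection_checks(s)

if __name__ == "__main__":
    genuine = CaptureSession(0.97, True, True)
    injected = CaptureSession(0.99, False, False)  # convincing deepfake, untrusted source
    print(verify(genuine), verify(injected))       # True False
```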

The growing complexity of these techniques means deepfakes are not only more common but also harder to detect. That is why financial institutions must adopt more advanced fraud detection tools that prevent both presentation and injection attacks. 

[Figure: Evolution of presentation attacks vs. injection attacks]

To combat these threats, companies need robust detection and prevention mechanisms, such as electronic IDs (eIDs) or biometric identity verification through video, which offer more secure ways to authenticate users. Solutions relying solely on ID photos or selfies are not KYC- and AML-compliant for identity verification in regulated sectors like the financial industry, where stricter measures are required. 

This is why implementing advanced verification solutions, or even combining different methods, such as eIDs and biometric video verification, is critical to ensuring compliance and security. 
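
As a rough illustration of the layering idea, the sketch below chains independent checks so that a single forged artefact (a deepfaked selfie, a stolen ID photo) is never sufficient on its own. The function names and the two example checks are hypothetical stand-ins, not a specific provider's API:

```python
# Minimal sketch of layered identity verification: every method must
# pass independently. The check functions are illustrative stubs.
from typing import Callable

Check = Callable[[str], bool]

def eid_check(user_id: str) -> bool:
    # Stand-in for validating a government-issued eID assertion.
    return True

def video_biometric_check(user_id: str) -> bool:
    # Stand-in for biometric video verification with liveness detection.
    return True

def verify_identity(user_id: str, checks: list[Check]) -> bool:
    # Failing any single layer rejects the session outright.
    return all(check(user_id) for check in checks)

if __name__ == "__main__":
    print(verify_identity("user-123", [eid_check, video_biometric_check]))  # True
```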

Trends in Business: Confusion and Stress Over AI Threats 

The ever-growing sophistication of AI-driven fraud schemes comes at a price, literally.  

The Battle Against AI-Driven Identity Fraud report estimates that 38% of revenue losses that companies face due to fraud are directly linked to AI-driven identity theft. 

However, it’s challenging to quantify the exact financial loss caused by AI-driven fraud. Juniper Research estimates that the value of e-commerce fraud alone will rise from $44.3 billion in 2024 to $107 billion in 2029, a growth of 141% in five years, and that figure covers only e-commerce merchants.   
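
The percentage follows directly from the two cited values, as a quick check shows:

```python
# Sanity check on the cited growth: $44.3bn (2024) to $107bn (2029).
start_bn, end_bn = 44.3, 107.0
print(f"{(end_bn / start_bn - 1) * 100:.1f}%")  # 141.5%, i.e. the ~141% cited
```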

The good news is that over 75% of businesses are taking proactive steps to address the growing threat of AI-driven identity fraud by planning technology upgrades and increasing their cybersecurity budgets. 

The not-so-good news is that fewer than a quarter have actually started implementing these changes.  

When surveyed, fraud decision-makers said they lacked the expertise, time, and budget. Even so, there’s a bit more to it.  

Hesitation to act also stems from uncertainty. While fraud decision-makers are highly aware that AI is a significant threat to identity security, no more than a third are familiar with specific techniques such as AI-generated identity document forgeries, deepfakes, or voice impersonation. This knowledge gap shows that the impact of AI-driven fraud is not yet fully grasped. Financial institutions must act now to upgrade their fraud detection capabilities and protect against these new, rapidly escalating threats. 

How Signicat Helps Prevent AI-Driven Identity Fraud with Layered Security 

As AI-driven fraud techniques such as injection and presentation attacks become more advanced, businesses need a comprehensive approach to identity verification. Signicat provides exactly that, with solutions specifically designed to combat AI-driven identity fraud. Our layered, multi-faceted strategy includes: 

  • VideoID: Strengthening security through liveness detection and biometric verification to prevent deepfake identity fraud. 
  • eID Hub: Offering a high level of assurance for onboarding and login, significantly reducing the risk of identity theft. 
  • RiskFlow Orchestration: Tailoring fraud prevention strategies for businesses, integrating security measures like device profiling and behavioural analytics. 

By leveraging these solutions, Signicat ensures compliance with KYC/AML regulations while protecting against advanced AI fraud techniques, offering a secure yet seamless experience for both businesses and their customers.
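
To illustrate what signal-based risk orchestration looks like in principle, here is a hypothetical sketch that blends device, behavioural, and biometric signals into a single risk score and routes the session accordingly. It is not Signicat's actual RiskFlow API; the weights, thresholds, and routing labels are invented purely for illustration:

```python
# Hypothetical sketch of signal-based risk orchestration (not Signicat's
# actual RiskFlow API); weights and thresholds are invented.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    device_reputation: float   # 0 (known-bad device) .. 1 (trusted)
    behaviour_score: float     # 0 (bot-like input patterns) .. 1 (human-like)
    liveness_score: float      # 0 .. 1, from the biometric check

def risk_score(s: SessionSignals) -> float:
    # Weighted blend of independent signals; the weights are assumptions.
    trust = 0.3 * s.device_reputation + 0.3 * s.behaviour_score + 0.4 * s.liveness_score
    return 1.0 - trust  # higher means riskier

def route(s: SessionSignals) -> str:
    # Approve low-risk sessions, step up medium-risk ones, reject the rest.
    r = risk_score(s)
    if r < 0.2:
        return "approve"
    if r < 0.5:
        return "step-up"   # e.g. require an additional eID check
    return "reject"

if __name__ == "__main__":
    print(route(SessionSignals(0.9, 0.95, 0.98)))  # approve
    print(route(SessionSignals(0.2, 0.30, 0.90)))  # step-up
```

The point of the layering is that no single signal decides the outcome: a convincing deepfake may defeat one layer, but it is unlikely to defeat the device, behavioural, and biometric layers at once.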