Recently, many media outlets reported on a new deepfake case at a Hong Kong financial firm. An employee of the company was tricked on a video call in which his supposed CEO and a colleague were impersonated using deepfake technology, resulting in the theft of $25 million.
Deepfake technology is going viral thanks to social networks and even companies dedicated to using it to animate photographs of our deceased loved ones and ancestors. But just as this technology has legitimate uses, it is being used more and more frequently for illegitimate purposes, especially against financial services companies, where the consequences can be disastrous.
Download Now: Discover Voice Biometric Authentication
But first of all, what exactly is deepfake technology?
Deepfake technology is, ultimately, a form of synthetic media: visual, auditory, or audiovisual compositions that emulate reality, generating fictitious images and/or voices of real people. Built on AI and machine learning, it recreates actions with a hyper-realistic appearance, making them seem to be performed by the people in question.
Neural networks are trained on a dataset of images and videos and learn to map one person's likeness onto another. The more data the model has, the more accurately it can generate a likeness and combine gestures and expressions, and the more realistic the resulting fake videos become.
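As a rough illustration of how this training works, here is a minimal sketch of the classic face-swap architecture: a shared encoder learns a common "face space," while one decoder per identity learns to reconstruct that person's face. Everything here (layer sizes, 64x64 crops, the toy random batches) is an illustrative assumption, not any specific tool's implementation:

```python
import torch
import torch.nn as nn

# Shared encoder: maps any face into a common latent representation.
encoder = nn.Sequential(
    nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
    nn.ReLU(),
    nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
    nn.ReLU(),
)

def make_decoder():
    # One decoder per identity: reconstructs that person's appearance.
    return nn.Sequential(
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
        nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
        nn.Sigmoid(),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()

params = (list(encoder.parameters())
          + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
optimizer = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.L1Loss()

# Stand-in batches; a real model trains on thousands of aligned face crops.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):  # toy training loop
    optimizer.zero_grad()
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    optimizer.step()

# The "swap": encode person A's expression, render it with person B's decoder.
with torch.no_grad():
    fake_b = decoder_b(encoder(faces_a))
```

The swap at the end is the whole trick: person A's expression is encoded, then rendered with person B's appearance, and the more real footage the model sees, the more convincing that output becomes.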
How are deepfakes used to commit financial services fraud?
Businesses and governments need to protect their citizens and customers, and consumers are already concerned about deepfakes. A recent industry report found that:
- 88% believe online security threats are increasing.
- 85% believe that deepfakes will make it harder to trust online services.
- 72% believe the need for identity authentication is more important than ever.
The use of deepfakes is growing, as is synthetic identity fraud, and retail banks, insurers, and payment gateway providers are key targets for this type of crime.
Let’s take a quick look at the ways fraudsters could use deepfake technology to commit financial crimes:
1- Ghost Fraud
Ghost fraud refers to the process of using a deceased person’s information to impersonate them for financial gain. Ghost fraudsters can use an individual’s stolen identity to access online services, savings, and credit, and to apply for cards, loans, or benefits. Using deepfake technology, criminals could make ghost fraud much more convincing.
2- New account fraud
New account fraud is on the rise, accounting for $3.4 billion in losses.
This type of fraud, also known as application fraud, occurs when fraudsters use fake or stolen identities to open bank accounts. Fraudsters may max out credit limits under the account name or apply for loans that are never repaid.
Since it is now possible to open a bank account through digital channels using just a video of the account holder, fraudsters could use deepfakes to carry out this crime far more quickly and easily.
3- Synthetic identity fraud
Synthetic identity fraud is a sophisticated and difficult-to-detect form of online fraud. Fraudsters create identities using information from multiple people. Instead of stealing an identity (such as the name, address, and social security number of a recently deceased person), fraudsters use a combination of fake, real, and stolen information to create a “person” that does not exist.
Scammers use synthetic identities to apply for credit/debit cards or complete other transactions that help create a credit score for non-existent customers. A deepfake of a deceased person could be used to reinforce a synthetic identity.
4- Annuity/pension/life insurance/benefit fraud
Another potential use of deepfakes is annuity, pension, insurance, or benefit fraud. A deceased person’s pension could continue to be claimed for years, whether by a professional fraudster or by a family member.
Although proof of life is requested for certain life insurance, pension, or account-related paperwork, deepfakes could make it challenging to confirm the identity of these clients.
Regulating deepfakes
In response to the increasing use of deepfakes for purposes both legitimate and otherwise, efforts are underway to create new laws, or amend existing ones, to combat their misuse.
Internationally, there have been regulatory advances in this area, especially at the European Union level, where the Artificial Intelligence Regulation, which addresses deepfake technology and will establish a series of guidelines for its use, is expected to be approved soon.
Outside Europe, countries such as South Korea and China have taken a step forward on this issue and are beginning to adopt rules or laws that attempt to regulate deepfake use.
But despite this, there is still a long way to go.
How can these deepfakes be detected and fought?
Even though deepfake technology is becoming more and more sophisticated, there are solutions that allow organizations to detect this type of fraud and protect themselves against it.
Voice biometrics solutions have become key in the fight against identity or authentication fraud.
Voice biometrics captures and measures the physical qualities of the voice as a person speaks, along with the unique biological parameters that combine to produce it. The most advanced solutions can detect access attempts made with pre-recorded or artificially generated voices, stopping fraud driven by deepfake technologies in just a few seconds, using only the individual’s voice.
These solutions also help fight other types of identity fraud, such as phishing, synthetic identities, account takeover, SIM swapping, and subscription fraud.
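To make the core idea concrete, here is a minimal sketch of the matching step behind speaker verification: a stored voiceprint is compared with an embedding of the live audio using cosine similarity. The embedding extractor below is a placeholder, and the 0.75 threshold is an illustrative assumption, not any vendor's actual pipeline:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two voiceprint vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def extract_voiceprint(audio: np.ndarray) -> np.ndarray:
    """Placeholder: a real system runs a trained speaker-embedding model
    (and, separately, an anti-spoofing/liveness model) on the audio."""
    rng = np.random.default_rng(abs(hash(audio.tobytes())) % (2**32))
    return rng.standard_normal(192)  # 192-dim embeddings are a common size

THRESHOLD = 0.75  # illustrative; tuned on real data to balance error rates

def verify(enrolled_print: np.ndarray, live_audio: np.ndarray) -> bool:
    """Accept the caller only if the live sample matches the enrolled print."""
    live_print = extract_voiceprint(live_audio)
    return cosine_similarity(enrolled_print, live_print) >= THRESHOLD
```

In a production system, this match score would be combined with dedicated anti-spoofing checks that flag replayed or synthetic audio; it is that liveness layer, rather than the similarity score alone, that catches deepfaked voices.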
If you want to learn more about how voice biometrics helps detect fraud and ensure a secure, frictionless customer experience, we tell you about it in this use case.