
Digitizing our physical identities, and the security gap we now need to close

Martin Rehak, CEO & Co-founder

This article was co-authored by Martin Rehak and Lucie Rehakova.

Digitization has given a large part of the global population often life-changing access to financial services, presenting unprecedented opportunities and levels of interconnectedness.

However, the switch from physical to digital transacting—and what it means for how you prove who you claim to be—has also left a crucial gap in security systems. We’re currently at a breaking point in history where many of the ways in which we prove our identities are still primarily physical: birth certificates, identity documents, and credentials such as university diplomas, utility bills, and so on.

To participate in the digital world, you have to digitize your physical legacy to be able to use it as transaction input—whether that means opening an account, taking out a loan, or getting that new TV on a buy now, pay later (BNPL) scheme. This moment of digitization presents a weak spot. 

Security gaps open the floodgates to fraud

Most onboarding processes are digital replicas of legacy face-to-face processes. Let's have a look at the typical digital onboarding process through the prism of a few traditional computer security terms (each narrowed down to fit into the context of onboarding): identification, authentication, and authorization.

  • Identification, in our restricted meaning, is the process of matching the person holding the smartphone with a known physical identity.  
  • Authentication ensures that the person holding the smartphone is truly the identity that we match them with. 
  • Authorization is a broader concept, and it differs from the perspective of a service provider compared to that of a user. From the user’s perspective, it ensures that a properly authenticated and identified person knowingly becomes a user of the provider's service. From the provider’s side, the authorization process verifies the assertions made by a customer, such as "I live at 10 Downing Street, London."
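
To make the distinction concrete, here is a minimal sketch of how these three checks might be separated in an onboarding flow. All of the names and signatures are hypothetical and purely illustrative—they don't come from any particular vendor's API.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch: none of these names come from a real vendor's API.
# The point is that identification, authentication, and authorization are
# distinct checks, each relying on different evidence.

@dataclass
class Applicant:
    claimed_name: str
    claimed_address: str
    id_document_image: bytes
    selfie_image: bytes

def identify(applicant: Applicant) -> Optional[str]:
    """Match the person holding the smartphone to a known physical identity,
    e.g. by extracting and validating the ID document. Returns an identity
    reference (such as a document number), or None if no match is found."""
    raise NotImplementedError

def authenticate(applicant: Applicant, identity_ref: str) -> bool:
    """Check that the device holder really is that identity,
    e.g. a selfie-to-document face match plus liveness checks."""
    raise NotImplementedError

def authorize(applicant: Applicant, identity_ref: str) -> bool:
    """Verify the applicant's assertions (address, income, ...) and confirm
    they knowingly consent to becoming a customer of this service."""
    raise NotImplementedError
```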

How can we defraud at scale?

Unlike the physical world, the digital world presents an opportunity for iterative, scalable fraud. This is because the computational capacity of even basic and widely accessible personal devices allows fraudsters to, for example, create mass synthetic identities or hold parallel conversations attempting to defraud victims on dating sites. Put simply, if two in 10 fraud attempts are successful, digitization and technology generally offer criminals increasing opportunities to quickly conduct 100 fraud attempts and get away with 20 successful ones. 

Fraudsters iterate to learn and identify weaknesses within a process. A weakness can occur at the point of onboarding, in ongoing client and transaction monitoring, in app security, or in a combination thereof. Once they’ve found it, criminals scale mercilessly to monetize the hole in the process they’ve identified. And because these weaknesses can be endemic to certain markets or even institutions (which are not always good at talking to each other to patch the gaps globally), criminals can keep finding new ways to exploit the flaws in these systems.

Step 1: Don't neglect identification

Identification comes in many different forms. Some of them are explicit and authenticated, as with identity verification (IDV) providers and the increasingly routine selfie-and-ID-card-picture process. Others are more discreet and hidden. Weaker identification protocols can tie the customer to a specific email address, phone number, or physical address instead of a full legal identity.

The reasons why more and more processes skip the explicit IDV step are manifold, but they mostly come down to the disproportionate friction this step would create—customer discomfort in the form of delays as well as perceived privacy intrusions. No one wants to show their ID when using a BNPL service to pay for pizza. BNPL providers were the first to take this approach further, leveraging a mix of public data sources (tax and census records), private data (e.g. utility providers), the credit risk of the transaction, and the implicit verification provided by goods delivery to assess their appetite for lending to a particular customer. Other industries have followed their lead—for example, small-ticket insurance vendors don't verify customer identity at all and simply accept the risk instead.
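
To illustrate the kind of trade-off involved, here is a minimal sketch of a decision that weighs weak identity signals against the credit risk of the transaction before deciding whether to skip explicit IDV. Every signal name and threshold below is invented for illustration—real providers use far richer data and learned models.

```python
# Hypothetical sketch of a BNPL-style decision that skips explicit IDV for
# low-risk baskets. All signal names and thresholds are invented for illustration.

def should_skip_explicit_idv(
    basket_value: float,
    email_age_days: int,               # weak identifier: how long the email has existed
    address_matches_records: bool,     # e.g. a public/utility record lookup
    prior_successful_deliveries: int,  # implicit verification via goods delivery
) -> bool:
    # Small-ticket purchases with corroborating weak signals: accept the risk.
    if basket_value <= 50 and address_matches_records:
        return True
    # Returning customers with delivery history carry implicit verification.
    if prior_successful_deliveries >= 3 and email_age_days > 365:
        return basket_value <= 500
    # Everything else falls back to explicit identity verification.
    return False


print(should_skip_explicit_idv(29.99, 800, True, 0))   # True: pizza-sized risk
print(should_skip_explicit_idv(1200.0, 30, False, 0))  # False: needs explicit IDV
```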

In general, “unauthenticated identity” is a claim—a hypothesis that needs to be verified. It can be perfectly sufficient in specific contexts, but it can be mercilessly exploited by fraudsters when the context changes, leading to highly scalable BNPL fraud campaigns and convoluted insurance fraud cases (among other opportunities).

Step 2: Focus on the authentication process

In some applications, real identity matters. In banking and in financial services that provide substantial loans or enable payments, customers must be identified and authenticated—the provider needs to conclusively prove that the customer is a specific physical person, or a company represented by a properly identified individual. This is the service provided by IDV vendors such as Jumio, Onfido, Au10tix, GBG, and many others.

When you open such an account online, the institution asks you for basic information and a photo of your ID. Maybe they ask you to retake that photo due to shadows or blurriness, but so far so good. The application then might ask you to take a selfie. This process is very much the same at a physical bank branch. The objective here is to verify that the information that will be in the system (your name, address, date of birth, and so on) is substantiated by a state-issued document, and that this document is indeed yours.  

This stage is where the move to the digital world actually makes it possible to conduct checks that are better than the ones performed by physical branch personnel—provided that you have the tech. The authentication process can consist of more or less advanced analysis of the submitted documents and the customer's photo or video. It can range from the detection of slight visual anomalies, through digging for traces of digital modification hidden in a file’s metadata, to identifying the tiniest similarities across a range of seemingly unrelated documents designed to prove identities for a fraudulent network of accounts and perpetrate serial fraud.
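
As one small illustration of that middle tier—digging through a file’s metadata—here is a sketch of the kind of check an automated system might run on a submitted PDF. It assumes the open-source pypdf library, and the “suspicious producer” list and rules are purely illustrative; production document forensics combines hundreds of such signals, most of them far less obvious than this.

```python
# A minimal sketch of one metadata-level check, assuming pypdf is installed
# (pip install pypdf). The editor list and the rules are illustrative only.
from pypdf import PdfReader

SUSPICIOUS_PRODUCERS = ("photoshop", "gimp", "ilovepdf", "sejda")  # illustrative list


def metadata_red_flags(path: str) -> list[str]:
    info = PdfReader(path).metadata or {}
    producer = str(info.get("/Producer", "")).lower()
    creation = info.get("/CreationDate")
    modified = info.get("/ModDate")

    flags = []
    if any(tool in producer for tool in SUSPICIOUS_PRODUCERS):
        flags.append(f"produced by an image/PDF editor: {producer!r}")
    if creation and modified and creation != modified:
        flags.append("file was modified after creation")
    return flags


# Example: a 'bank statement' whose metadata says it was re-saved by an editor
# deserves a closer look, even if it looks pixel-perfect to the human eye.
print(metadata_red_flags("statement.pdf"))
```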

Today's technology is already more successful at distinguishing forged documents than the human eye. This has to do with complexity (is this a legitimate diplomatic passport issued by Qumran?), volume (can you check it in 10 seconds or less?), and consistency (can you accurately verify 1,000 of them per day, every day?). Having said that, the best argument for automated verification is the need to respond to fraudsters’ ability to leverage technology to create synthetic or forged identities at scale.

The majority of forgeries that Resistant AI sees today are already automatically produced by organized gangs, and detecting hundreds of documents produced by the same process has become fairly routine.

Step 3: Understand the role of authorization

Authorization should be examined from the perspective of both service providers and customers. We’ll be taking a look at both of these points of view.

Authorization for service providers

As a service provider, you must ensure that your customers are who they say they are, beyond the name proven by their official identity documents. You may need to check business registration and incorporation documents, proof of address, business operations, business relationships, income or salary, and credentials and qualifications. All of these are proven by documents, many of which exist only on paper in the physical world, with just a few types—bank statements, invoices and bills, and some university degrees—produced as digital PDFs.

We don't have to tell you how frustrating it can be to collect and submit these documents and then wait three days only to learn that "Document #4" was blurry. At the same time, service providers mostly don't see a corresponding increase in security and fraud resilience, as the most frequent verification tool is still the Mark 1 eyeball—and discovering state-of-the-art digital fraud manually is pretty much impossible in 2023. For most onboarding processes, you can expect approximately 1-2% of documents submitted by identified and authenticated users to be fake. Expect more if your service is popular or once fraudsters identify an easy way to monetize fraudulent registrations, and expect multitudes during an economic downturn and cost-of-living crisis.

Sometimes, the fraud is detected quickly—by default, at the first missed loan payment (at which point it’s arguably too late). In many other cases, the fraudulent identities cause no immediate harm. Mortgage applicants may manipulate their income to obtain better terms—or their advisors might do so on their behalf in order to win business—but nonetheless, they’re paying customers. Trade financing fraud is a "traditional" way for many businesses to bridge a tough cash situation. And money laundering is designed to look and feel like a great business.

Preventing the above scenarios is hard. For the customer, it means providing more documents, answering more questions, and fighting through more screens.

Authorization for customers

For the user, providing the above information has an interesting side effect—and a positive one, actually. It makes the now-ubiquitous ID verifications more discernible, as very few building receptionists ask for six months of income history when you come to visit. Or do they?

As a customer, you want to make absolutely sure that you don't open a bank account without realizing it. And today, doing exactly that by accident is easier than ever before.

The reason is that authorization straddles the boundary between the physical and the digital process. Digital authentication has become so routine that we often hardly think about it. When we visit a corporate office building (or a spa hotel, for that matter), we show our ID to the receptionist (human or digital) and have our picture taken by the tablet on the table next to them, believing in good faith that we are interacting with a visitor check-in system.

Every such situation is potentially a chosen protocol attack. In the digital world, a chosen protocol attack means that the target performs an action without being able to securely predict its effect, because the context is unclear. Are you performing the ID verification to enter the building, or taking out a loan to refinance your mortgage?

Even if the steps of identification and authentication are passed, your authentic identity can be leveraged entirely without your knowledge, or on the basis of severely misleading information (this is the essence of a social engineering scam). Fraudulent strategies here may include impersonation scams, authorized push payment (APP) fraud, chosen protocol scams, or a combination thereof. This is where technology needs to offer future-proof solutions that bridge the gaps we’ve created by digitizing our physical identities. We call this identity forensics.

In identity forensics, the idea is to leverage data from both digital and physical channels so that we extend beyond a static analysis of documents and faces. The idea is not new—it includes looking at geographical locations, the timing of your transactions, and other collectable behavioral data. However, truly discerning whether you are being forced, manipulated, or severely misled requires behavioral analysis that goes beyond simply detecting that your phone has been stolen or that your account is being used from a different country to the one where you are based. Moreover, doing this well also requires an understanding of criminal strategies and an acceptance of the fact that they evolve after each failed attempt—and that our solutions must be able to catch up.
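
To give a flavor of what “beyond stolen phone and wrong country” can look like, here is an illustrative sketch of a few session-level indicators that may point to coercion or manipulation. The feature names and rules are invented for this example; in practice these patterns are learned from data rather than hard-coded.

```python
# Illustrative sketch only: a few behavioral features that go beyond "new device"
# or "new country" checks. Feature names and rules are invented for illustration.
from dataclasses import dataclass


@dataclass
class SessionSignals:
    login_hour_local: int               # hour of day in the customer's usual time zone
    typical_login_hours: range          # e.g. range(7, 23), learned from history
    country: str
    usual_countries: set[str]
    copy_pasted_account_number: bool    # pasting a beneficiary number is common in APP fraud
    screen_shared_or_remote_tool: bool  # remote-access tooling active during the session


def coercion_risk_indicators(s: SessionSignals) -> list[str]:
    flags = []
    if s.login_hour_local not in s.typical_login_hours:
        flags.append("activity outside the customer's normal hours")
    if s.country not in s.usual_countries:
        flags.append("unusual location")
    if s.copy_pasted_account_number:
        flags.append("pasted beneficiary details (common in APP/impersonation scams)")
    if s.screen_shared_or_remote_tool:
        flags.append("remote-access tool active during the session")
    return flags


signals = SessionSignals(
    login_hour_local=3,
    typical_login_hours=range(7, 23),
    country="GB",
    usual_countries={"GB"},
    copy_pasted_account_number=True,
    screen_shared_or_remote_tool=True,
)
print(coercion_risk_indicators(signals))  # three red flags -> escalate for review
```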

Preparing for the financial crime landscape of tomorrow

In his 2019 novel, Fall; or, Dodge in Hell, Neal Stephenson coined the term ‘PURDAH’, which is an abstract approach to identification based on a person’s digital actions and behaviors instead of their legal identity. In other words, you are what you do—nothing more and nothing less. As with many other concepts introduced by Stephenson, we’re already living it to some extent. 

On top of this, we need to consider the “how you do it” of this approach—for example, what times you usually use an app compared to your time zone, the way you type, or the language you use. This isn’t new: authentication software that looks at unique typing behaviors has been around for nearly two decades. However, the scale and precision with which it can be deployed to spot fraudulent behavior aren’t yet common across offered solutions.
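
As a toy example of the “way you type” signal, the sketch below compares the rhythm of inter-keystroke intervals against an enrolled baseline. Real keystroke-dynamics products use far richer features (digraph timings, pressure, device motion); this only shows the underlying idea, and all numbers are made up.

```python
# A toy sketch of keystroke-dynamics matching: compare the mean of observed
# inter-keystroke intervals against a stored (mean, spread) profile.
from statistics import mean, stdev


def typing_profile(intervals_ms: list[float]) -> tuple[float, float]:
    return mean(intervals_ms), stdev(intervals_ms)


def matches_profile(observed_ms: list[float],
                    stored: tuple[float, float],
                    tolerance: float = 2.0) -> bool:
    stored_mean, stored_std = stored
    # Flag sessions whose typing rhythm is far from the enrolled baseline.
    return abs(mean(observed_ms) - stored_mean) <= tolerance * stored_std


enrolled = typing_profile([180, 175, 190, 210, 185, 170, 195])  # ms between keystrokes
print(matches_profile([185, 200, 178, 192], enrolled))          # True: looks like the owner
print(matches_profile([60, 55, 58, 62], enrolled))              # False: scripted/robotic typing
```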

The way Resistant AI applies identity forensics is aligned with these ideas, with the goal of producing valuable outputs for financial crime teams from the onset of a customer journey—or even before. It’s based on the inherent interconnectedness of static know your customer (KYC) and know your business (KYB) data and dynamic behavioral data, which the vast majority of today’s systems either don’t connect at all, or connect in fairly unsophisticated ways. The risk score suggested by an onboarding process might have no practical implications for the way you intend to use a financial product. In fact, determining customer risk (and, consequently, your institution’s appetite to service this customer and their ability to access the range of financial products) based merely on submitted (or unsubmitted) onboarding documents can lead to dangerous de-risking and de-banking of entire underprivileged groups.

Instead of a one-dimensional approach, Resistant AI leverages all of the available data points to create a comprehensive picture of what is and isn’t anomalous. We look at the individual, vertical axis: a customer’s documentation, KYC/KYB profile, and behavioral and transactional history (with everything this entails, from device data to counterparties). We then overlay it with a horizontal segment analysis, looking at the same static and dynamic characteristics across similar customers within a particular segment. This means we reach an earlier (if not immediate) and much more accurate risk verdict about any type of behavior, from onboarding to product selection and ongoing use.
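
A highly simplified sketch of combining those two axes might look like the following. The scoring functions are placeholder z-scores and the weights are arbitrary—a production system learns these from data—but the structure (the customer’s own history versus their peer segment) is the point.

```python
# Illustrative sketch of combining the vertical (own history) and horizontal
# (peer segment) axes into one risk verdict. Placeholder scoring only.
from statistics import mean, stdev


def z_score(value: float, history: list[float]) -> float:
    if len(history) < 2 or stdev(history) == 0:
        return 0.0
    return abs(value - mean(history)) / stdev(history)


def risk_verdict(txn_amount: float,
                 own_history: list[float],      # vertical: this customer's past amounts
                 segment_history: list[float],  # horizontal: similar customers' amounts
                 w_vertical: float = 0.6,
                 w_horizontal: float = 0.4) -> float:
    vertical = z_score(txn_amount, own_history)
    horizontal = z_score(txn_amount, segment_history)
    return w_vertical * vertical + w_horizontal * horizontal


# A payment that is unusual both for this customer and for their peer segment
# scores much higher than one that is merely large.
print(risk_verdict(9_000, own_history=[120, 80, 150, 200], segment_history=[500, 900, 1_200, 700]))
print(risk_verdict(250, own_history=[120, 80, 150, 200], segment_history=[500, 900, 1_200, 700]))
```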

As a result, applying a PURDAH-inspired approach to authorization—at the moment when we have sufficient documentation, behavioral history, and creditworthiness—renders the explicit steps of identification, verification, and authentication somewhat redundant (or, at a minimum, less critical).

Assessing your customers’ risk based on their actual behavior (from the moment they click the “Open an account” button) means that you can approve big loans in seconds without needing additional input from the customer—and that you can actually predict a customer’s need for a big loan before they even apply for it. This is the implicit side of authorization.

Here’s how document fraud typically happens

There are two main ways that identity theft or document forgery cases usually go:

  1. Individuals (who often have a poor credit score) aim to start an account with a clean slate. These individual actors can also make a couple of extra bucks by selling stolen IDs.
  2. Fraud farms and other types of organized crime groups deal in synthetic identity, or “Frankenstein,” fraud—the fastest-growing type of identity fraud. It has surpassed “true name” identity fraud and accounts for 80-85% of all identity fraud.

While a stolen ID might not be enough for a fraudster to take out a mortgage outright, there are many ways to gain access to easy cash: consumer loans and quick credit (with ultra-high interest rates comes ultra-easy access), BNPL schemes (buying an actual product and reselling it), chargeback fraud, and many others. Money mule accounts are used to receive defrauded money, which is then quickly withdrawn or used for the placement and layering stages of money laundering.

The societal problem is twofold. The way most customer onboarding systems are set up allows criminals to misuse our financial systems, while also leaving certain groups of legitimate customers few options other than identity theft or document forgery if they want to open a bank account (which, in turn, they need in order to get paid their wages and to buy many necessary services). In other words, the combination of a strict requirement to provide a utility bill as proof of address and the lackluster monitoring of ongoing behavior creates an opportunity for scalable fraud while effectively de-banking customers in more vulnerable circumstances.