
Small FIs: Don’t Ignore the AI Data Gap


By Al Pascual, Cybercrime Expert and Advisor, BioCatch

For the past 30 years, financial institutions (FIs) have relied heavily on artificial intelligence (AI) as a weapon in the battle against scammers. Once the domain of card networks and payment processors for transaction decisioning, AI now delivers value in identifying harmful transactions of all kinds across a wide range of organisations and institutions. However, as recent research has made clear, not every company can use and benefit from AI in the same way.

As criminals migrate to more sophisticated tactics, this disparity in capability makes it harder for smaller FIs to identify and prevent fraud at a reasonable cost. Consequently, these FIs are compelled to seek out suppliers that aggregate substantial amounts of data across all of their clients, even if those offerings aren't precisely what their organisations need.

This capability gap will expose smaller FIs to even more uneven degrees of risk as bad actors begin to exploit new AI technologies. For this reason, suppliers and governments are working together to encourage FIs of all sizes to make greater use of AI. Worse still, the threat extends beyond fraud to financial crime more broadly, putting the organisations that can least afford it at disproportionate risk of fraud losses and sanctions.

The AI threat

Financial institutions, their clients and their communities are all vulnerable to the grave risks stemming from financial criminality. Financial crimes, from bribery and human trafficking to money laundering and terrorism financing, threaten the very foundations of our FIs. The deterrent effect of current biometric and AI efforts on financial crime is already insufficient. Executives in charge of combatting fraud and anti-money laundering (AML) have noted a worrying rise in financial crime in the past year alone, a trend expected to last through 2024 and beyond.

Governments and consumers are increasingly looking to financial institutions to fight financial crime and fraud. A growing share of consumers' lives is spent online, where they shop on e-commerce platforms, post personal updates on social media, pay bills, transfer money and check balances from their phones. It is now impossible to function without a digital identity, which makes every one of us susceptible to scammers who exploit the many opportunities this vast digital world presents to defraud us.

The cost of financial crime extends beyond measurable monetary losses. Financial crime can also damage an organisation's reputation, souring perceptions among existing clients, potential customers and investors and leading to further losses. Penalties for failures to comply with AML regulations can be devastating to financial institutions. In 2023, the Federal Reserve (the Fed) fined Deutsche Bank and its US affiliates $186 million for failing to address AML shortcomings, and Binance, the world's largest cryptocurrency exchange, was fined $4.3 billion in relation to AML violations. Investing in tools to fight financial crime is imperative to business success in 2024. Even so, despite concerns surrounding AI, experts in fraud management, AML, and risk and compliance are confident that AI will lead to more positive outcomes than negative ones.

Less is not always more

According to a recent study by BioCatch, 73 percent of FIs globally use AI for fraud detection. But smaller FIs face a chicken-and-egg problem: AI is only as effective as the data used to train it, and smaller institutions simply have less data to work with. And with less data than their peers, they have less reason to prioritise investment in internal AI development.

This, in turn, drives smaller FIs to rely far more heavily on third-party providers to apply AI to detecting fraud and financial crime, both of which are on the rise. In some ways, this gap, call it the AI Data Gap, resembles the wealth gap: lower-income consumers are forced into more expensive credit options, while affluent consumers access better terms by virtue of their wealth. The dynamic shows no sign of changing. Roughly half of FIs expect fraud and financial crime to increase relative to 2023, which will inevitably push many smaller FIs to direct an ever-larger share of their budgets to third-party AI companies.
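To make the data-size problem concrete, here is a minimal sketch, built on entirely synthetic transactions and an off-the-shelf classifier (nothing here reflects any particular institution's pipeline), showing how the same fraud model degrades as its training set shrinks:

    # A sketch of the AI Data Gap: the same model, trained on progressively
    # smaller samples of the same synthetic transaction data, catches less fraud.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import average_precision_score
    from sklearn.model_selection import train_test_split

    # Imbalanced "transactions": roughly 1 percent fraud, as in real portfolios.
    X, y = make_classification(n_samples=200_000, n_features=20,
                               weights=[0.99], flip_y=0.01, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, stratify=y, random_state=0)

    # A large FI sees all the training data; smaller FIs see slices of it.
    for n in (150_000, 15_000, 1_500):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_train[:n], y_train[:n])
        scores = clf.predict_proba(X_test)[:, 1]
        print(f"training rows: {n:>7,}  average precision: "
              f"{average_precision_score(y_test, scores):.3f}")

On realistically imbalanced data, the smallest sample leaves the model only a handful of fraud examples to learn from, which is exactly the position a smaller FI occupies.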

Scam tactics are worrying small FIs

One of the greatest benefits AI offers an FI is the ability to detect activity that a human being would miss. It is this fact that makes the level of interest criminals have displayed in new AI tools, such as generative AI (GenAI), so disconcerting. These tools have demonstrated immense potential to improve both the quality and the quantity of malicious activity, a fact not lost on bankers.

Fraud and financial-crime professionals recognise that AI will contribute to activities that increase the rates not only of fraud but also of a more difficult challenge, scams:

  • 45 percent expect scam tactics to become more automated;
  • 42 percent expect AI to be used more to locate customer PII (personally identifiable information);
  • 36 percent expect that scam messages will become more convincing.

And this doesn't include other threats, such as deepfake tools used to create images, voices or videos that bypass identity and authentication controls. Artificial intelligence is a force multiplier for criminals across the board, meaning the volumes of all types of attacks will increase.

In the face of growing criminal adoption of AI, smaller FIs will suffer the ironic indignity of being far less likely to have enough data to justify significant investments in internal AI resources. Unable to lean on internally developed AI, smaller institutions will feel the adverse effects of AI-enhanced fraud, scams and financial crime more acutely than their larger peers, which collect far more data far faster and can therefore detect and mitigate more quickly.

What we need to do

To be clear, this isn't an argument for reducing reliance on AI to detect malicious activity, but for supplementing it with tools that are agnostic to the use cases to which they are applied and more effective against the threats created by adversarial AI. That can happen only by taking a closer look at fraud- and financial-crime-fighting budgets and making decisions that take the long view, with the anticipated effects of adversarial AI in mind.

The newer identity-verification and authentication controls that smaller FIs may currently be considering may be obsolete sooner rather than later. Instead, bankers should turn to solutions such as behavioural biometric intelligence, which can be applied to both fraud and scam detection. Better still, despite the advances generative AI will bring to criminal capabilities, none of them confers an advantage over behavioural biometric intelligence, leaving the bad guys, with their new AI toys, worse off than they were yesterday.
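As a rough illustration of the signals behavioural biometric intelligence draws on, here is a minimal sketch that summarises a session's keystroke and pointer events into features and flags outliers with an unsupervised model. The features, simulated data and model choice are illustrative assumptions, not BioCatch's actual method:

    # Illustrative behavioural-biometrics scoring: summarise raw session
    # events into features, then flag sessions that don't look human.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    def session_features(key_times, pointer_speeds, paste_count):
        """Summarise one session's raw behavioural events as a feature vector."""
        gaps = np.diff(key_times)          # inter-keystroke intervals (seconds)
        return [gaps.mean(),               # typing cadence
                gaps.std(),                # cadence variability
                np.mean(pointer_speeds),   # average pointer speed (px/s)
                paste_count]               # pasted input is common in scripted fraud

    # Train on the institution's own historical sessions (simulated here).
    rng = np.random.default_rng(0)
    history = [session_features(np.cumsum(rng.uniform(0.08, 0.4, 40)),
                                rng.uniform(100, 900, 60),
                                rng.integers(0, 2))
               for _ in range(500)]
    model = IsolationForest(contamination=0.01, random_state=0).fit(history)

    # A suspect session: machine-steady typing, implausible pointer speed,
    # heavy pasting. A deepfake voice or GenAI-written script changes none of this.
    suspect = session_features(np.cumsum(np.full(40, 0.05)),
                               np.full(60, 1500.0), paste_count=6)
    print("flagged as anomalous:", model.predict([suspect])[0] == -1)

The design point is that these signals describe how a user behaves during a session rather than what they know or present, which is why better fake credentials, images or scripts do little to make a fraudulent session behave like the genuine customer.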

Levelling the playing field

The AI Data Gap is real, and as more and more AI technologies are put to illegal use, its implications will only worsen. Smaller FIs have two options: adapt, or continue to pour money into vendor AI solutions while watching their previous investments become outdated. By levelling the playing field, behavioural biometric intelligence makes smaller FIs more difficult to attack. In the end, outcomes, rather than hype, are what truly separate the AI haves from the have-nots.
