
A Roadmap to Resilience: How Banks Can Leverage AI to Advance AML Capabilities

by internationalbanker

By Matt Long, Global Head of Financial Crime Solutions, Quantexa 





Money laundering and terrorism-financing activities have always been challenging to detect. The challenge is now compounding: new regulations, the international nature of (sometimes converged) organised-crime and terrorism networks, ongoing geopolitical uncertainties, expanded sanctions requirements (including secondary sanctions) and the quickening pace of technological innovation in payments and digital identity. In response, organisations are increasingly turning to artificial intelligence (AI) to help solve this multitude of challenges. Whilst AI and machine learning (ML) have been used in the anti-money-laundering (AML) world for many years, many of the old problems persist today: the ever-present issue of high false-positive rates, poor data quality, overlooked or missed risks and growing operational costs. Yet there is light at the end of the tunnel, as many significant successes have started to materialise when using AI in recent times.

So, what has changed?

Aside from large language models (LLMs) and generative AI (GenAI), AI and ML models have not seen step changes in raw performance. What is driving the performance uplift within AML and compliance programmes is how organisations are working with their data assets and utilising new approaches, such as decision intelligence, to create that uplift. The adage of "rubbish in, rubbish out" (RIRO) is even more relevant in the AI age, and it comes as no surprise that better input data leads to better results.

It sounds easy to fix data, but it’s not. The key to improving results is to enrich the data used within your AML programme as you go, not just improve the data you have. Imagine an organisation captured 100 data points, focused on curating this data for accuracy and kept it up to date. It would see some improvement in model performance with this curated data. Now, imagine another organisation captured the same 100 data points but also focused on enriching the data for each client and counterparty and unified internal sources with external sources (corporate registries, credit data, news, etc.). It could realistically move from 100 data points to 300, 400 or more data points. At Quantexa, we call this creating context. The additional data points are formed from not just the internal data but also information on counterparties and social connections, and they are all relevant and specific to the problem statement, forming a broader foundation from which data scientists can work.

In fact, several of the steps to improving data assets have multiplier (not merely incremental) effects on model performance: the 1+1=3 analogy. If you can fix the data and enrich it (create more inputs and/or features), then you get better model performance.
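The enrichment idea can be sketched in a few lines. A minimal Python illustration follows; all field names, external sources and the sample data here are hypothetical, and a real platform would draw on live corporate registries, credit data and news screening rather than in-memory dictionaries:

```python
# Sketch: enriching an internal customer record with external context so a
# model has more inputs to work with. Field names are illustrative only.

def enrich_record(internal, registry, adverse_media):
    """Combine curated internal data with external sources into one feature set."""
    enriched = dict(internal)  # start from the curated internal data points
    # Corporate-registry context: status, directors, incorporation date, etc.
    enriched.update({f"registry_{k}": v for k, v in registry.items()})
    # Adverse-media context: counts and flags derived from news screening
    enriched["adverse_media_hits"] = len(adverse_media)
    enriched["has_sanctions_mention"] = any(
        "sanction" in item.lower() for item in adverse_media
    )
    return enriched

internal = {"name": "Acme Trading Ltd", "country": "GB", "monthly_volume": 120_000}
registry = {"status": "active", "directors": ["J. Doe"], "incorporated": "2021-03-01"}
media = ["Acme Trading named in sanctions probe"]

features = enrich_record(internal, registry, media)
print(len(internal), "->", len(features))  # more inputs for the model
```

The point of the sketch is the shape of the operation, not the specific fields: each external source multiplies the feature surface the data scientists can draw on.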

Change the focus of your AI model. Activity, value and volume of transactions are interesting, but more important considerations are to whom the funds are going and from whom they are coming. Do they involve mule accounts, shell companies, sham directors or charities, known criminals or sanctioned entities? If you can identify these markers, then the traditional activity-based AI models have significantly more relevant “context” to use as inputs.

As we examine some recent cases, such as the S$1 billion case in Singapore and the Iranian-oil sanctions circumvention case in the United Kingdom, both had many of these core markers in their networks.

So, how can banks prepare and equip themselves with AML infrastructure?

How to uplift the existing AML toolkits found in banks

Most organisations are using AI and ML to help tune their existing AML estates. With the introduction of contextual monitoring, existing toolkits need to be adjusted to take advantage of the higher number of inputs created through the enrichment process. Some organisations are looking at this with an eye to the future and taking advantage of decision intelligence (DI) capabilities, which allow organisations to inject more context into their decision cycles.

Banks are already using AI to detect complex compliance activities, including wholesale and correspondent banking AML programmes. However, Gartner found that only 53 percent of AI projects reach production. This isn't because the tools don't exist. It comes down to the fact that the use of AI in AML is often not optimised or built on strong foundations from the outset.

To maximise their use of AI, banks must follow four vital steps.

  1. Create a clean data foundation.

To build a strong and safe house, engineers need a sturdy foundation. Each brick must be of consistent—and high—quality to ensure the house meets safety standards.

Building effective AI into a business model isn't too different: each piece of data matters in creating the full model. Organisations must source clean data from both internal and external sources, and data-quality management must be deployed to catch human errors in vast datasets. A model that learns from poor-quality data riddled with typos or misspellings risks being inaccurate, causing more inefficiencies than it solves. Duplicates are a common culprit: one in nine customer records shares a name or identity details with another, separate customer.
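A first-pass data-quality check for duplicates can be surprisingly simple. The sketch below flags likely duplicate customer records by normalising names and comparing dates of birth; the matching rule is deliberately crude, the field names are hypothetical, and production systems use far richer fuzzy matching:

```python
# Sketch: flag likely duplicate customer records via normalised match keys.
import re
from collections import defaultdict

def normalise(name: str) -> str:
    """Lowercase, strip punctuation and collapse whitespace."""
    name = re.sub(r"[^a-z0-9 ]", "", name.lower())
    return re.sub(r"\s+", " ", name).strip()

def find_duplicates(records):
    """Group record IDs that share a normalised name and date of birth."""
    groups = defaultdict(list)
    for rec in records:
        groups[(normalise(rec["name"]), rec["dob"])].append(rec["id"])
    return [ids for ids in groups.values() if len(ids) > 1]

records = [
    {"id": 1, "name": "J. Smith", "dob": "1980-01-01"},
    {"id": 2, "name": "j  smith", "dob": "1980-01-01"},  # likely the same person
    {"id": 3, "name": "Jane Roe", "dob": "1975-06-30"},
]
print(find_duplicates(records))  # -> [[1, 2]]
```

Even this crude pass shows why cleansing matters: without normalisation, "J. Smith" and "j  smith" look like two customers, and any model trained on them learns a distorted picture.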

The dataset must be extensive and reflect full 360-degree views of relevant entities—individuals, organisations and locations alike. By creating a full picture of each entity, organisations can unveil a deep understanding of relationships and patterns in the data. The more connections that are made, the stronger the AI model will be throughout the organisation.

  2. Build context: What does the data mean?

For strong data to have any value and consistency, it needs context. One thing that many intelligent machine-learning algorithms still lack is the accuracy needed for deployment, and that accuracy comes from context. Duplicates are dangerous for data interpretation and consolidation; without context, organisations can't tell a genuine duplicate apart from two distinct entities that happen to share details.

AI is constantly learning from previous outcomes of money-laundering investigations. However, it’s impossible to get clear alerts from these learnings without understanding the entities involved.

For example, a bank may hold records for the same customer across various systems in its operations. To get a complete view of the customer at the scale of modern organisations, models need to apply Entity Resolution (ER). Entity Resolution cleans and normalises data by collecting the records relating to each customer or entity, compiling a set of characteristics for each and labelling the entities so that genuine duplicates are merged and distinct entities stay separate. Banks can then add new entity nodes that link real-world data, such as corporate-registry information, with their internal data to ensure they're building a complete contextual version of each customer.
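The mechanics described above can be sketched as follows. This toy Entity Resolution pass matches on an exact normalised key; real ER engines score multiple attributes fuzzily, and all record fields and system names here are hypothetical:

```python
# Sketch of Entity Resolution: records from different source systems are
# grouped under one resolved entity, and each entity's characteristics are
# compiled from all of its source records.

def resolve_entities(records, key_fields=("name", "dob")):
    """Group source records under resolved entities and compile attributes."""
    entities = {}
    for rec in records:
        key = tuple(str(rec.get(f, "")).strip().lower() for f in key_fields)
        entity = entities.setdefault(key, {"sources": [], "attributes": {}})
        entity["sources"].append(rec["system"])
        for field, value in rec.items():
            entity["attributes"].setdefault(field, value)  # fill gaps, never overwrite
    return list(entities.values())

records = [
    {"system": "retail", "name": "Ana Lopez", "dob": "1990-05-02", "address": "1 High St"},
    {"system": "cards", "name": "ana lopez", "dob": "1990-05-02", "phone": "+44 20 7946 0000"},
    {"system": "mortgage", "name": "Ben Kim", "dob": "1984-11-12"},
]

resolved = resolve_entities(records)
print(len(records), "records ->", len(resolved), "entities")
```

Note how the resolved entity for Ana Lopez ends up with both an address (from the retail system) and a phone number (from the cards system): the compiled view is richer than any single source record.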

Entity Resolution supplies the input data with relevant context, improving outcomes. Enriched data and networks have long helped investigators find criminal activity, and banks can take this investigator knowledge and convert it into detection algorithms. A contextual-monitoring engine that feeds information about all customers into a detection system will be more accurate in its insights and predictions: expanding the input data from 40 points to 400 delivers a significant uplift in performance.

Context-based tools can uncover relationships and patterns through transactional relationships or company affiliations. These links may look normal to an uninformed model, but with full 360-degree context, banks can more accurately detect when a relationship points to risk. This context also informs graph-analytics tools that can decipher patterns and identify red flags. Previously, analysts examining individual indicators in isolation generated large numbers of misleading signals and false alerts. By viewing these accounts as a network and calibrating the analyses, it is possible to build task-specific, real-time views of the entities and connections in an organisation's data.

Decision-makers haven’t always had clear visions of their risk exposures because they’ve been missing connections between their customers, counterparties, vendors and suppliers. However, connecting the dots between two or more parties’ transactions is critical to identifying illicit activities. Graph analytics can evaluate and sort information to reveal relationships and connections. Organisations that use graph analytics will be better trained to understand their customers’ actions and spot problem areas.
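A minimal sketch of the graph idea, using only the Python standard library: payments become edges in a graph, and a breadth-first search finds every account within a set number of hops of a known high-risk node (for instance, a sanctioned entity). Account names and amounts are illustrative only:

```python
# Sketch: graph analytics over transactions. Build an undirected account graph
# from payment records, then flag accounts within two hops of a risky node.
from collections import defaultdict, deque

def build_graph(payments):
    graph = defaultdict(set)
    for src, dst, _amount in payments:
        graph[src].add(dst)
        graph[dst].add(src)
    return graph

def within_hops(graph, start, max_hops):
    """Breadth-first search: all nodes reachable from `start` in <= max_hops."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for neighbour in graph[node]:
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return seen - {start}

payments = [("A", "B", 500), ("B", "C", 480), ("C", "D", 470), ("X", "Y", 10)]
risky = within_hops(build_graph(payments), "A", 2)  # A's two-hop neighbourhood
print(sorted(risky))  # B and C are exposed; D and the X-Y pair are not
```

Each payment in isolation looks unremarkable; only the network view reveals that B and C sit two hops from A, which is exactly the kind of connection an indicator-by-indicator review misses.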

When AI has context, banks can integrate it into their tools, helping decision-makers make well-informed, proactive and accurate decisions. Generative-AI copilots also come into their own here: analysts can ask plain-language questions about why their AML models have flagged particular individuals or entities, and a copilot grounded in this contextual data can streamline analysis while improving the accuracy and reliability of its generative-AI capabilities.

  3. Vary models according to desired outcomes.

However well informed it is, a single AI model used throughout an organisation can limit that organisation's abilities. Banks must diversify their AI assets. For example, retail banking deals with different types of AML techniques than trade finance or correspondent banking. Retail banking has many recorded possible outcomes, more false positives and heightened disclosures to regulators, which requires an organisation to train a complex model to catch these nuances and support frequent high-risk changes.

Trade banking, on the other hand, may have more specific external datasets with which to inform its model. For context, before HSBC deployed its AML tools in 2019, its Global Trade and Receivables Finance business was processing more than 5.8 million trade transactions a year, searching for signs of financial crime. The model required for such a business unit therefore relies on enhanced domain knowledge and subject-matter expertise, with enough training and scale to operate over vast amounts of data.

The one-size-fits-all mistake is avoidable. If a bank tries to force a single solution onto the whole organisation, it will not benefit from the perspectives that multiple models bring, will likely limit its adaptability to market changes and will not be as reactive as AI-trained AML models have the potential to be.

  4. Operate on explainability and transparency.

Just as AI models need clean data to function, they also need to be understood. McKinsey research found that only 15 percent of AI projects are successful. This is due not only to organisations not having solid data foundations but also to a lack of trust, governance and explainability.

Groups of analysts will investigate the risks that the models raise. The alerts raised need to be clear and concise, showing where the risks are and, most importantly, explaining why they triggered the alerts. It’s one thing for AI to recognise a risk, but it then needs to provide the analyst team with a wider picture and a more detailed explanation of the crime.

This explainability operates best when the AI platform provides an open view of the data and the logic that drives its decisions. Banks are environments in which many decisions are opaque, and invisible biases can have significant impacts. Open logic is the biggest driver of trust.
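One way to make that logic open is to return reason codes alongside each alert, so analysts see which signals fired rather than a bare score. The sketch below is a deliberately simple rule-based illustration of the idea; the signal names, thresholds and customer fields are hypothetical, and this is not a description of any particular vendor's engine:

```python
# Sketch: explainable alerting. The model returns the individual signals that
# fired, so analysts can see *why* an alert was raised.

RULES = [
    ("rapid_inflow_outflow",
     lambda c: c["inflow"] > 0 and c["outflow"] / max(c["inflow"], 1) > 0.9),
    ("linked_to_shell_company", lambda c: c["shell_links"] > 0),
    ("adverse_media", lambda c: c["media_hits"] > 0),
]

def score_with_reasons(customer, alert_threshold=2):
    """Alert when enough signals fire, and report exactly which ones did."""
    reasons = [name for name, rule in RULES if rule(customer)]
    return {"alert": len(reasons) >= alert_threshold, "reasons": reasons}

customer = {"inflow": 100_000, "outflow": 98_000, "shell_links": 1, "media_hits": 0}
print(score_with_reasons(customer))
# alert: True, reasons: rapid_inflow_outflow + linked_to_shell_company
```

An analyst reading this output knows immediately where to start the investigation, which is the difference between a trusted alert and an opaque one.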

Technical teams can customise and tune the model as needed while meeting regulatory and compliance requirements. Fine-tuning the model to meet new demands or changes is key to obtaining specific, actionable insights that will yield the most benefits. The better the insights, the easier it will be not only to validate the effectiveness of AI but also to promote adoption and trust across the organisation.

AI for AML relies on a positive feedback loop.

Most transactions are legal and legitimate. Banks need to be able to filter through large amounts of data and identify the small proportion of criminal activities. It starts with context. Banks need the full picture—they need to trust that the data on which their models are relying is strong and clean and the insights coming out of it are legitimate. The landscape of transactions can only be navigated with 360-degree views of customers.

The more high-quality intelligence that an AI model provides, the fewer false positives, reducing the time to make a decision. The process of building decision intelligence across the organisation provides business leaders with quality data, lending a strong defence against money laundering.

Both the present and future landscapes of AI-driven compliance within banks will become increasingly complex and integral to successful and scalable operations. As AI technologies evolve and become more ingrained within compliance and anti-financial-crime departments, regulatory bodies worldwide will intensify their requirements and oversight, updating their supervisory frameworks to address both ethical concerns and financial integrity. Moves towards harmonising international regulations and fostering global consensus on AI usage are likely to only grow, and the decisions banks make today on their decision intelligence will play out long into the future.

