By Justin Bercich, Head of AI, Lucinity
Financial crime has thrived during the pandemic. It seems obvious that the increase in digital banking, as people were forced to stay inside for months on end, would correlate with a sharp rise in money laundering (ML) and other nefarious activity, as criminals exploited new attack surfaces and the global uncertainty caused by the pandemic.
But, when you consider that fines for money-laundering violations have catapulted by 80% since 2019, you begin to realise just how serious and widespread the situation is. Consequently, the US Government is making strides to rewrite its anti-money laundering (AML) rulebook, having enacted its first major piece of AML legislation since 2004 earlier this year. The new Secretary of the Treasury, Janet Yellen, brings decades of financial regulation experience, lending further credence to the view that the AML sector is primed for more significant reform in the coming months and years.
Yet, despite the positives and promises of technological innovation in the AML space, there still remains great debate and scepticism about the ethics and viability of incorporating artificial intelligence (AI) and machine learning deeply into banks and the broader financial ecosystem. What are the opportunities and limitations of AI, and how can we ensure its application remains ethical for all?
Human AI – A bank’s newest investigator
While AI isn’t a new asset in the fight against financial crime, Human AI is a ground-breaking application that has the potential to drastically improve compliance programs among forward-thinking banks. Human AI is all about bringing together the best tools and capabilities of people and machines. Together, human and machine help one another unearth important insights and intelligence at the exact point when key decisions need to be made – forming the perfect money laundering front-line investigator and drastically improving productivity in AML.
The most powerful aspect of Human AI is that it’s a self-reinforcing cycle. Insights are fed back into the machine learning model, so that both human and technology improve. After all, the more the technology improves, the more the human trusts it. As we gain trust in the technology, we feed more relevant human-led insights back into the machine, ultimately creating a flowing stream of synergies that strengthens the Human-AI nexus, empowering users and improving our collective defences against financial crime. That is Human AI.
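The feedback cycle described above can be sketched in a few lines. This is a deliberately simplified, hypothetical example (the scoring rule, amounts, and threshold adjustment are illustrative, not a real AML model): the system flags transactions, an analyst confirms or dismisses each alert, and those human decisions are fed straight back to recalibrate the model.

```python
# Minimal human-in-the-loop sketch (all names and numbers are illustrative).
# The model flags alerts, an analyst reviews each one, and the decisions
# are fed back so the alerting threshold tracks the human's judgement.

def score(txn):
    # Stand-in risk score: larger transfers look riskier, capped at 1.0.
    return min(txn["amount"] / 10_000, 1.0)

threshold = 0.5

def review_alerts(transactions, analyst_decision):
    """Flag transactions, collect analyst feedback, nudge the threshold."""
    global threshold
    for txn in transactions:
        if score(txn) >= threshold:
            confirmed = analyst_decision(txn)  # the human-led insight
            # Feedback loop: confirmed hits lower the bar slightly,
            # false positives raise it, so machine and human converge.
            threshold += -0.02 if confirmed else 0.02

transactions = [{"amount": 9_000}, {"amount": 8_000}]
review_alerts(transactions, analyst_decision=lambda txn: txn["amount"] > 8_500)
```

In a production system the feedback would retrain a full model rather than shift a single threshold, but the shape of the loop, machine flags, human decides, machine learns, is the same.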
An example of this in action is Graph Data Science (GDS) – an approach that is capable of finding hidden relationships in financial transaction networks. The objective of money launderers is to hide in plain sight, while AML systems are trying to uncover the hidden connections between a seemingly normal person/entity and a nefarious criminal network. GDS helps uncover these links, instead of relying on a human to manually trawl through a jungle of isolated spreadsheets with thousands of fields.
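The kind of link discovery that GDS performs can be illustrated with a toy transaction graph and a breadth-first search. The entity names and edges below are entirely made up for illustration; real graph data science platforms operate over millions of nodes with far richer algorithms, but the core idea, surfacing a chain of transactions connecting a seemingly normal customer to a known bad actor, is the same.

```python
from collections import deque

# Toy transaction graph: each edge is a payment between two entities.
# All entity names are purely illustrative.
transactions = [
    ("customer_a", "shell_co_1"),
    ("shell_co_1", "shell_co_2"),
    ("shell_co_2", "sanctioned_entity"),
    ("customer_b", "retailer"),
]

# Build an undirected adjacency list from the transaction records.
graph = {}
for src, dst in transactions:
    graph.setdefault(src, set()).add(dst)
    graph.setdefault(dst, set()).add(src)

def find_path(start, target):
    """Breadth-first search: the shortest chain of transactions
    linking two entities, or None if they are unconnected."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for neighbour in graph.get(path[-1], ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

# A route through two shell companies: a lead an investigator can review,
# found without trawling isolated spreadsheets by hand.
print(find_path("customer_a", "sanctioned_entity"))
print(find_path("customer_b", "sanctioned_entity"))  # None: no hidden link
```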
Human AI brings us all together
What’s more, a better understanding of AI doesn’t just benefit the banks and financial institutions wielding its power on the frontline, it also strengthens the relationship between bank and regulator. Regulators need to understand why a decision has been made by AI – in order to determine its efficacy – and with Human AI becoming more accessible and transparent (and, therefore, human), banks can ensure machine-powered decisions are repeatable, understandable, and explainable.
This is otherwise known as Explainable AI, meaning investigators, customers, or any user of an AI system have the ability to see and interact with data that is logical, explainable and ‘human’. Not only does this help build a bridge of trust between humans and machines, but also between banks and regulators, ultimately leading to better systems of learning that help improve one another over time.
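One simple way to make a decision explainable is to use a model whose per-factor contributions double as the "reasons" shown alongside an alert. The sketch below assumes a transparent linear risk score; the factor names and weights are hypothetical, chosen only to show the pattern of attaching a ranked, human-readable explanation to every machine-generated decision.

```python
# Hedged sketch of an explainable alert: a linear score whose per-factor
# contributions are the explanation. Factors and weights are illustrative.
WEIGHTS = {
    "high_risk_country": 0.40,
    "rapid_in_out_movement": 0.35,
    "new_account": 0.15,
}

def explain_alert(factors):
    """Return the total risk score plus each factor's contribution,
    sorted so the strongest driver of the decision comes first."""
    contributions = {f: WEIGHTS[f] for f in factors if f in WEIGHTS}
    total = sum(contributions.values())
    reasons = sorted(contributions.items(), key=lambda kv: -kv[1])
    return total, reasons

total, reasons = explain_alert(["new_account", "high_risk_country"])
print(f"score = {total:.2f}")
for factor, weight in reasons:
    print(f"  {factor}: +{weight:.2f}")
```

A regulator reviewing this alert sees not just a score but the ranked factors behind it, which is exactly the repeatable, understandable output the text describes.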
This collaborative attitude should also be extended to the regulatory sandbox, a virtual playground where fintechs and banks can test innovative AML solutions in a realistic and controlled environment overseen by the regulators. This prevents brands from rushing new products into the market without the proper due diligence and regulatory frameworks in place.
Known as Sandbox 2.0, this approach represents the future of policy making, giving fintechs the autonomy to trial cutting-edge Human AI solutions that tick all the regulatory boxes, and ultimately result in more sophisticated and effective weapons in the fight against financial crime and money laundering.
Overhyped or underused? The limitations of AI
Anti-money laundering technology has, in many ways, been our last line of defence against financial crime in recent years – a dam that is ready to burst at any moment. Banks and regulators are desperately trying to keep pace with the increasing sophistication of financial criminals and money launderers. New methods for concealing illicit activity come to the surface every month, and technological innovation is struggling to keep up.
This is compounded by our need to react quicker than ever before to new threats. This leaves almost no room for error, and often not enough time to exercise due diligence and ethical considerations. Too often, new AI and machine learning technologies are prematurely hurried out into the market, almost like rushing soldiers to the front line without proper training.
Increasing scepticism around AI is understandable, given the marketing bonanza of AI as a panacea for growth. Banks that respect the opportunities and limitations of AI will use the technology to focus more on efficiency gains and optimisation, allowing AI algorithms to learn and grow organically, before looking to extract deeper intelligence used to drive revenue growth. It is a wider business lesson that can easily be applied to AI adoption: banks must learn their environment, capabilities, and limitations before mastering a task.
What banks must also remember is that AI experimentation comes with diminishing returns. They should focus on executing strategic, production-ready AI micro-projects – in parallel with human teams – to deliver actionable insights and value. At the same time, this technology can be trained to learn from interactions with its human colleagues.
But technology can’t triumph alone
AI and machine learning are now being applied across most major aspects of the financial ecosystem, in areas that have traditionally been people-focussed, such as issuing new products, performing compliance functions, and customer service. This requires an augmentation of thinking, where human and AI work alongside one another to achieve a common goal, rather than just 'throwing an algorithm' at the problem.
But of course, we must recognise that this technology can’t win the fight in isolation. This isn’t the time to keep our cards close to our chests – the benefits of AI against financial crime and ML must be made accessible to everyone affected.
Data must be tracked across all vendors and along the entire supply chain, from payments processors to direct integrations. And the AI technology being used to enable near-real-time information sharing must go both ways: from bank to regulator and back again. Only then can suspicious activity be analysed effectively, meaning everyone can trust the success of AI.
Over the next few years, the potential of Human AI will be brought to life. Building trust between one another is crucial to addressing black-box concerns, along with consistent training of AI and machines to become more human in their output, which will ultimately make all our lives more fulfilling.