By Andrew Barber, Partner, and Rory Copeland, Solicitor, Pinsent Masons LLP
Introduction
Artificial intelligence (AI), whether in the form of robotic process automation (RPA) or machine learning (ML), has become an essential resource for large banks dealing with regulatory changes, new anti-money laundering (AML) obligations and customers vulnerable to fraud. It is also an enabler of faster, more personalised financial services—key advantages in an already competitive market. But alongside the legal drivers for AI adoption are a range of legal risks. These include the data-protection concerns that accompany large pools of personal information, the prospect of machine-learning discrimination and the operational risks inherent in increased reliance on automation.
Arguably, no large financial institution can afford not to integrate AI into its business, but care should be taken to establish audit trails and make the parameters of AI deployment transparent and available for scrutiny. The opportunities AI offers to foster innovation and promote growth in the industry are significant but must be pursued responsibly in order to avoid serious harm.
The place of AI in financial institutions
In broad terms, the use of AI in financial institutions can be categorised into four groups. The first is in customer interactions and compliance, whether related to AML checks, fraud detection or personalised customer engagement. The second is in the context of financial systems and processes, such as payments[i] and treasury services. The third use is for the enhancement of financial products and the financial institution’s business model. This could involve faster loan-affordability checks, more personalised insurance premiums informed by policyholder behaviour or algorithmic trading in foreign-exchange markets. The final use case is to assist with regulatory reporting or change, including stress testing, ring-fencing in the United Kingdom or the transition away from LIBOR (London Interbank Offered Rate) as a reference rate.
What distinguishes AI from previous developments in financial-services operation?
AI uses probabilistic, rather than deterministic, decision-making logic. This means it draws inferences from the presence of data points it has been programmed to identify within a sample of material, rather than following the rules of causation that humans often use. AI systems do not learn that 1 + 2 = 3 but instead identify that 3 is always present where 1 and 2 are found in a sample but not where 1 and 5 appear. A simple example in the financial-services context might be the use of AI to detect account-opening fraud.[ii] If a large mass of existing data on the circumstances in which bank accounts are opened is structured so that a computer can “learn” from it, AI can spot patterns across millions of data inputs. The distinguishing feature of AI, however, is that it can alter its analyses of the data if given feedback on whether or not its inferences were correct.
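To make the idea concrete, the minimal sketch below (in Python, using the scikit-learn library) shows how a probabilistic classifier of this kind might score new account applications and then adjust its weights as investigators confirm outcomes. The features, values and thresholds are entirely hypothetical and are not drawn from any particular institution's system.

```python
# Minimal, hypothetical sketch of a probabilistic fraud classifier for
# account-opening data, using scikit-learn. Features and values are invented.
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical structured features per application:
# [device_age_days, accounts_at_same_address, minutes_to_complete_form]
X_initial = np.array([
    [400.0, 1, 12.0],   # applications previously confirmed as genuine...
    [380.0, 2, 15.5],
    [2.0,   9, 1.2],    # ...and applications previously confirmed as fraudulent
    [1.0,   7, 0.8],
])
y_initial = np.array([0, 0, 1, 1])  # 0 = genuine, 1 = fraudulent

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(X_initial, y_initial, classes=[0, 1])

# Score a new application: the output is a probability, not a hard rule.
new_application = np.array([[3.0, 8, 1.0]])
fraud_probability = model.predict_proba(new_application)[0, 1]
print(f"Estimated fraud probability: {fraud_probability:.2f}")

# Feedback loop: once investigators confirm the true outcome, the model's
# weights are adjusted incrementally rather than being re-coded by hand.
confirmed_label = np.array([1])
model.partial_fit(new_application, confirmed_label)
```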
In practice, a financial institution wishing to harness AI will have to alter its strategy and operations in a number of ways. The first is to recognise that data is the fuel of AI. The challenge for most financial institutions lies not in collecting data, since banks and insurers already hold massive quantities of information about millions of people, but in structuring that data so it can serve as the input material for an AI programme. The second is to recognise that the full potential of AI can be reached only where feedback loops allow the programme to alter its behaviour. Initially, this requires a degree of human-assisted development, but eventually the AI can learn by monitoring its “score” in a given deployment.
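The structuring step can be illustrated with another brief, hypothetical sketch (again in Python, here using pandas): raw customer records are converted into a feature table that a learning programme could accept as input. The column names and records are invented for illustration only.

```python
# Hypothetical sketch of the structuring step: raw customer records are turned
# into a feature table a learning programme can accept. Columns are invented.
import pandas as pd

raw_records = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "account_opened": ["2019-06-01", "2019-06-03", "2019-06-03"],
    "channel": ["branch", "mobile", "mobile"],
    "declared_income": [32000, 58000, None],
})

features = pd.DataFrame({
    "opened_weekend": pd.to_datetime(raw_records["account_opened"]).dt.dayofweek >= 5,
    "is_mobile": raw_records["channel"] == "mobile",
    # Missing values must be dealt with explicitly before any training step.
    "income_known": raw_records["declared_income"].notna(),
})
print(features)
```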
What are the legal consequences?
The drive to accumulate, structure and analyse information in data pools is a necessary precursor to the development of effective AI products and services. It does, however, heighten the risk of large-scale data theft. The damage, reputational and otherwise, of such incidents may be contained when the stolen information consists of passwords, which can be changed, but an increasing proportion of the data that banks store about their customers can be described as “inherent”. This includes fingerprint identification, voice samples and other data that a customer cannot alter should their information fall into the wrong hands. In China, special regulations governing the movement of facial-recognition data are currently under consideration. The Data Protection Act 2018, which supplements the General Data Protection Regulation (GDPR) in the UK, already imposes extra requirements for the processing of “special category” data, such as genetic or biometric information. Similar categories of particularly sensitive data may emerge in data-protection regulation elsewhere.
Financial institutions wishing to partner with tech companies or others with stronger AI capabilities will be confronted with a range of risks when contracting. These could concern the ownership of intellectual property or the “metadata” that AI might produce from bank-customer data, as well as privacy issues more generally. Outsourcing guidelines already place a limit on the knowledge gap that can exist between a financial institution and a third party. Particularly as a result of the Senior Managers and Certification Regime (SMCR) in the UK, financial institutions will have to upskill in relation to AI to ensure that they are not delegating their regulatory responsibilities. Whilst senior managers will need to satisfy regulators that they understand the activities being outsourced and are capable of supervising AI suppliers and managing the associated risks, building organisational knowledge of AI is a wider educational and skills challenge for financial institutions.
Regulators and other bodies are already aware of the challenges the use of AI could pose to customers and to the obligation to treat customers fairly. The mildest concerns about the use of AI when interacting with customers include the failure to disclose that a chatbot, rather than a human, is fielding a customer’s queries. Another ingredient of treating customers fairly is the requirement under GDPR Articles 13-15 that consumers be told how their personal data will be used. The challenges in explaining the use of data and the continuous learning of AI processes to members of the public are obvious, and large financial institutions will be looked to, perhaps alongside tech giants, to lead the way in providing the necessary public education.
The most serious concerns have arisen from the realisation that “algorithmic bias” can occur. The Centre for Data Ethics and Innovation (CDEI) recently reported on this phenomenon,[iii] which occurs when AI draws inferences from data that result in unequal treatment of people from different races, genders, nationalities, etc. This could affect banks’ AML screening, insurers’ pricing of policies for different people or investment firms’ investment decisions. Algorithmic bias can result from the data pool that was initially used to “train” an algorithm, from the perpetuation and accentuation of conscious or subconscious bias on the part of human trainers, or by coincidence. Because of the probabilistic nature of AI, it is harder for a financial institution to spot biased reasoning in the coding that underpins decision-making; the responsibility to “audit” AI to prevent bias must be an ongoing one as the data pool expands.
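One simple component of such an ongoing audit might resemble the sketch below: a recurring check comparing approval rates across a protected characteristic, with any gap beyond an agreed tolerance escalated for human review. It is a hypothetical, minimal example (a single demographic-parity-style metric in Python), not a complete fairness methodology, and the decision log and threshold are invented.

```python
# Minimal, hypothetical sketch of a recurring bias "audit": compare approval
# rates across a protected characteristic and escalate large gaps for human
# review. A single metric like this is not a complete fairness review.
import pandas as pd

# Invented decision log: 1 = approved, 0 = declined
decisions = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 1, 0, 0],
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
})

approval_rates = decisions.groupby("group")["approved"].mean()
disparity = approval_rates.max() - approval_rates.min()

TOLERANCE = 0.2  # hypothetical threshold agreed by the firm's governance
if disparity > TOLERANCE:
    print(f"Flag for review: approval-rate gap of {disparity:.0%} between groups")
```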
A significant part of all financial regulation relies on firm reporting, whether quantitative or qualitative in nature. Quantitative reporting, particularly in times of regulatory change such as those mentioned above, is likely to be impacted quickly by the use of AI. The Financial Conduct Authority (FCA) in the UK is currently investigating how best to enable regtech (regulation technology), examples of which can process and selectively report on transaction data using AI. Equally relevant, however, is qualitative reporting, such as that concerning how decisions are made and when incidents are flagged. Whenever financial institutions use AI to drive key decision-making processes, they must continue to understand how those decisions are made—and their ramifications.
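As a hypothetical illustration of the selective-reporting idea, the sketch below uses an off-the-shelf anomaly detector (scikit-learn's IsolationForest) to pick out unusual transactions for closer review; the features and the assumed anomaly rate are invented for the example and do not reflect any regulatory specification.

```python
# Hypothetical sketch of selective reporting: an unsupervised anomaly detector
# picks out unusual transactions for closer review. Features and the assumed
# anomaly rate ("contamination") are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Invented features per transaction: [amount_gbp, hour_of_day]
transactions = np.array([
    [120.0, 10], [85.5, 14], [60.0, 9], [95.0, 16],
    [40.0, 11], [75.0, 13], [9800.0, 3],
])

detector = IsolationForest(contamination=0.15, random_state=0)
flags = detector.fit_predict(transactions)  # -1 = anomalous, 1 = normal

flagged = transactions[flags == -1]
print(f"{len(flagged)} transaction(s) selected for reporting review")
```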
The SMCR may drive firms to alter their governance structures to ensure that transparency with respect to AI processes reaches boards and chief technology officers. Fundamentally, regulators and financial institutions will have to decide whether boards or individual directors can take meaningful responsibility for AI-led decision-making. To this end, regulators have already begun exploring suptech (supervisory technology)[iv] and will expect the firms they regulate to keep pace.
The logical consequence of the AI-reporting challenge is a new form of systemic risk. Jon Danielsson and colleagues at the London School of Economics (LSE) recently studied the impact of AI on systemic risk and concluded that one effect of the use of AI is pro-cyclicality.[v] This is the tendency of both humans and machines that monitor and mirror existing financial behaviour to amplify the excesses of the system and accelerate the growth of systemic risk. The report notes the link between pro-cyclicality and homogeneity in beliefs and actions: when people and/or machines all think in the same way, they are more likely to make the same errors and perpetuate the same dangerous practices. As data pools become more valuable for AI-driven businesses, market consolidation will likely shrink the number of companies that have access to them, reducing both competition in the market and diversity among decision-making AI systems. With one eye on the last financial crisis, financial regulators will be aware of the need to control new sources of systemic risk, but the challenge falls equally within the remit of competition regulators. Ultimately, firms have an interest in scrutinising the data pools on which their AI relies and in recognising the advantages of decision-making diversity of all kinds, whether human or machine.
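The pro-cyclicality point can be illustrated with a deliberately simple toy model, which is not drawn from the LSE study itself: when many identical trend-following agents respond to the same signal in the same way, a small initial price move compounds rather than being absorbed. All parameters below are hypothetical.

```python
# Toy illustration (not drawn from the LSE study): identical trend-following
# agents reacting to the same signal amplify a small initial price move.
prices = [100.0, 101.0]                       # a small initial uptick of 1.0
N_AGENTS, STEPS, SENSITIVITY = 50, 10, 0.03   # hypothetical parameters

for _ in range(STEPS):
    trend = prices[-1] - prices[-2]
    # Homogeneous case: every agent buys in proportion to the same trend,
    # so their demand adds up rather than cancelling out.
    aggregate_demand = N_AGENTS * SENSITIVITY * trend
    prices.append(prices[-1] + aggregate_demand)

print(f"Initial move: 1.00; move after {STEPS} steps: {prices[-1] - prices[-2]:.2f}")
```

With diverse agents whose responses partly offset one another, the same initial move would dampen instead of compounding, which is the diversity point made above.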
Conclusion—with great power…
A recent report on AI in financial services[vi], published by TheCityUK and Accenture, included the industry recommendation that financial-services providers develop an ethical AI framework with which to guide their decision-making, whether human or AI-driven. As firms, including Pinsent Masons, see the value in being “purpose-led”, the need to apply the same standards of transparency, accountability and ethics to AI becomes clearer. This impetus is informed by a range of factors—whether the need to protect intellectual property, treat customers fairly or mitigate new forms of systemic risk. The risks accompanying AI are not exclusively legal, but given the heavily regulated nature of financial services, stakeholders will gravitate towards existing legal tools in order to guide the industry and address the concerns this piece analyses. It should be a comfort to leaders in financial services that the development of AI can continue safely within the existing regulatory boundaries, allowing the industry to foster innovation and enable growth at a faster pace than would otherwise be possible. With this leadership, however, comes great responsibility.
[i] Stripe, A primer on machine learning for fraud detection
[ii] Dynamics 365 Blog, Boost your ecommerce revenue with Dynamics 365 Fraud Protection, September 2019
[iii] Rovatsos, Brent and Koene, Landscape Summary: Bias in Algorithmic Decision-Making, Centre for Data Ethics and Innovation, July 2019
[iv] Broeders and Prenio, Innovative technology in financial supervision (suptech) – the experience of early users, Financial Stability Institute, July 2019
[v] Danielsson et al., Artificial Intelligence and Systemic Risk, London School of Economics, June 2019
[vi] International Regulatory Strategy Group and Accenture, Towards an AI-powered UK: UK-based financial and related professional services, October 2019