By Constantin Gurdgiev, Associate Professor of Finance, Monfort College of Business, University of Northern Colorado
In the last six months, we have seen an explosion of public interest in all things AI (artificial intelligence), bidding up the stocks of companies directly (e.g., Microsoft) and tangentially (e.g., Nvidia) linked to this technology. AI-mania has fueled the biggest stock-price rally in the tech sector since 2002. Of the S&P 500 Index’s (SPX’s) entire 13-percent rise from January 1 through the end of May, AI-linked stocks accounted for 9.1 percentage points, capturing 70 percent of the index’s upside.
Fast-paced markets may bid AI up today, and the hype may die out months from now, but the new technology offers an exciting frontier from the financial sector’s point of view.
Natural language processing (NLP), augmented by the growing volumes of public and private data, offers a transformative set of opportunities to cut some significant back-office costs. Put simply, AI can dramatically increase the accuracy of the automated evaluation of risks attached to any specific customer and/or transaction chain, transforming the way we perform KYC (know your customer) and counterparty-risk assessments. This promise comes with an already established and still expanding capacity to self-learn. In its next revolutionary iteration, AI tech will become self-replicable and truly autonomous.
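To make the idea concrete, a KYC screening pipeline of this kind can be caricatured in a few lines of code. Everything below is hypothetical, a minimal sketch rather than any vendor's actual model: in practice, an NLP front end would extract the screening signals from documents and news text, and the signal names, weights and thresholds would be set by compliance policy.

```python
# Hypothetical KYC risk-scoring sketch. An NLP front end would produce
# these normalized (0-1) signals from documents and news; here they are
# hard-coded, and the weights and tiers are purely illustrative.
RISK_WEIGHTS = {
    "adverse_media_hits": 0.4,   # negative news mentions surfaced by NLP
    "sanctions_proximity": 0.35, # links to sanctioned counterparties
    "opaque_ownership": 0.25,    # unresolved beneficial-ownership layers
}

def kyc_risk_score(signals: dict) -> float:
    """Combine normalized screening signals into a single weighted score."""
    return sum(RISK_WEIGHTS[k] * signals.get(k, 0.0) for k in RISK_WEIGHTS)

def review_tier(score: float) -> str:
    """Map the score to an escalation tier (thresholds are illustrative)."""
    if score >= 0.6:
        return "enhanced due diligence"
    if score >= 0.3:
        return "analyst review"
    return "automated approval"

customer = {
    "adverse_media_hits": 0.8,
    "sanctions_proximity": 0.2,
    "opaque_ownership": 0.5,
}
score = kyc_risk_score(customer)  # weighted sum of the three signals
```

The point of the sketch is the division of labor: the language model does the open-ended reading (adverse media, ownership documents), while a transparent scoring rule keeps the escalation decision auditable for regulators.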
To date, a number of studies in medicine, finance, academic research and education have shown that even mainstream AI applications, such as ChatGPT,1 are capable of rapid improvements in accuracy. (ChatGPT is a large language model [LLM] that uses machine learning [ML] to assemble data and textual evidence into self-generated text autonomously. As such, it is just one version of currently available AI software, but the one that has captured a lot of public attention in recent months.)
Crucially, AI software can also prioritize its own learning objectives to optimize the returns on processing the available data. In doing so, it reduces the dispersion of accuracy across different sub-fields of a given inquiry.2 Put differently, unlike human risk assessors or compliance officers, AI does not have to be told multiple times in what specific areas it should improve its evaluations.
In its current iteration, ChatGPT can review documents, track them across compliance channels, and complete data entries and legal forms at close to human standards of accuracy. Given time—months, not years—AI’s ability to self-evaluate data inputs before acquiring knowledge will put software ahead of humans.
What holds for documents also holds for coding. GitHub Copilot3 can be used to target only known secure open-source repositories of code when generating its output. Security-automation company Torq’s extension to ChatGPT can streamline identity-process management.4
Virtually any application that combines quantitative evaluations with qualitative judgments will likely be a beneficiary of language-based AI. Recently, University of Florida (UF) finance professors Alejandro Lopez-Lira and Yuehua Tang trained early AI programs to use language-based inputs to identify news items that were positive or negative for specific stocks.5 The result was notable across two dimensions. Firstly, AI was able to generate what is known as “emergent [predictive] abilities”; each iteration of AI could move beyond the capabilities originally planned when it was first coded. Secondly, based on its own analysis of headlines, ChatGPT was able to outperform random-walk predictions of stock movements systematically. As per the authors, the results “suggest that incorporating advanced [LLMs] into the investment decision-making process can yield more accurate predictions and enhance the performance of quantitative trading strategies.”6
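The mechanics of such a headline-based strategy can be sketched in a few lines. The sentiment labels below stand in for an LLM's output (the study prompted the model to judge whether each headline was good or bad news for the stock); the tickers, labels and returns here are invented for illustration, not data from the paper.

```python
# Toy sketch of a headline-sentiment trading rule: go long stocks with
# positive-labelled news, short those with negative labels, skip the rest.
# In practice the labels would come from an LLM prompt; here they are fixed.
labelled_news = [
    ("ACME", +1, 0.012),   # (ticker, LLM sentiment label, next-day return)
    ("BETA", -1, -0.008),  # short position: a falling stock earns +0.8%
    ("GAMA",  0, 0.004),   # "uncertain" label -> no position taken
    ("DLTA", +1, -0.003),  # a long position that loses money
]

def long_short_return(news: list) -> float:
    """Equal-weight return of the long-short headline portfolio."""
    positions = [(label, ret) for _, label, ret in news if label != 0]
    if not positions:
        return 0.0
    # A long position earns the return; a short earns its negative,
    # which multiplying by the -1 label captures directly.
    return sum(label * ret for label, ret in positions) / len(positions)

pnl = long_short_return(labelled_news)
```

Even this caricature shows why the result matters: if the labels carry no information, the expected long-short return is zero, so any systematic outperformance is attributable to the model's reading of the text.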
A colleague of mine, Moe Manshad, assistant professor of Computer Information Systems at the University of Northern Colorado (UNC), who researches AI applications in industry and academia, noted that the application of AI in financial trading could “see a market that is more stable and less susceptible to speculative bubbles, in part by creating better-educated investors”. Jeremy May, chief executive officer of Paralel,7 an AI-focused start-up working in back-office solutions for asset management, said that “implementing AI early in the data flow that feeds the daily NAV process allows us to find and resolve exceptions before they even get to the operational teams. This not only improves the control environment but speeds up the process as well.”
AI-based systems promise to transform data security and management functions. Microsoft’s Security Copilot8 allows clients to query and research security attacks using natural-language ChatGPT prompts. The service not only develops client-focused intelligence but also supports predictive analysis of potential vulnerabilities. Beyond this, ChatGPT can simulate past security breaches while adapting attack scenarios to match client systems and needs.
In the pre-AI innovation environment, traditional financial services faced challenges from fintech (financial technology). The key problems that fintech companies encountered in their efforts to capture market share were the complexities of their service offerings, marketing and sales-support bottlenecks, and customer relationship management (CRM). All of these pressure points constrained the deployment of fintech across the financial-services pillars. AI promises to change that.
When tackling complexities, traditional fintech tends to reduce back-office bottlenecks by trading down client expectations of service quality. Fintech relies heavily on the use of chatbots for customer services. Current generations of these tend to overburden customer experiences with lengthy generic lead questions, making their use more damaging than advantageous for companies handling high-net-worth (HNW) or otherwise sensitive clients.
AI can alleviate cost constraints in customer services by assisting with querying and processing data, drafting and reviewing engagements, and analyzing intricate client scenarios both qualitatively and quantitatively. This can materially alter the nature of communications between analysts, portfolio managers and their clients. Not surprisingly, over the last few months, the FP&A (financial planning and analysis) segment of the market has adopted a highly active approach to using ChatGPT.9
According to Ian Rosen, executive vice president and chief revenue officer of Magnifi, part of the AI-specialist fintech shop TIFIN,10 the process of AI deployment in portfolio structuring and communications will continue and accelerate. “With the introduction of ChatGPT in particular, PMs are attempting to use AI tools to extract data from a wider variety of sources to include in their existing investment processes,” Rosen explained. “But, there is significant concern that because these large open AI-driven models are not trained and vetted with the same guardrails as bespoke systems, open AI tools could upset the PMs’ goals or violate assumptions or intentional constraints in ways human analysts would find difficult to detect.” Notably, TIFIN has been in the business of deploying bespoke AI across all pillars of investment-market services since 2020.
For fintech providers, AI also offers opportunities to accelerate the speed and improve the quality of communications to specific market segments and potential clients. This can boost revenues while cutting the costs of generating new leads and converting them into sales. Into the hands of a retail banker, AI can place cheaper, better-targeted ads customizable to clients’ preferences and past decisions, as opposed to current models that target simple averages, past search signals and group aggregates.
These and other insights about emerging AI offerings have prompted Goldman Sachs analysts to conclude that some 35 percent of all jobs in finance are now at risk of being automated by AI.11
In the shadow of reality
Impressive as these capabilities are, modern versions of AI are still far from being highly accurate when it comes to tests with significant informational complexity. A study published in JMIR Medical Education (JME) found that ChatGPT-3 could answer a range of medical test banks’ questions with an accuracy of between 42 percent and 64.4 percent.12 This marked a reasonable improvement on prior versions of the software, suggesting that as we develop subsequent iterations of AI models, these programs will become meaningfully more accurate. But we are not there yet.
Beyond these front-loaded accuracy constraints, AI also remains problematic when it comes to embedding the software into core services, accessing high-quality data pools for AI training, optimizing the personalization of service offerings and support, and delivering the consistency and reliability necessary to meet the sector’s regulatory standards.
Two other areas of AI adoption will present serious constraints on deploying AI as a disruptor technology in banking and finance: trust and legacy systems.
When it comes to trust, a range of surveys in the United States and Europe indicate that the core demographics for financial services—higher-net-worth generational cohorts—will need to see AI closing the trust gap between tech-enabled and personal services.13 Accessibility of solutions is one area in which customer trust naturally tilts toward in-person services as opposed to tech-enabled ones. Bias replication and the anchoring of AI offerings to local client data are two other areas of concern. These problems will be addressed over time, with AI entering banks’ front offices first, primarily by supporting traditional in-person delivery models.
Research from Amsterdam University Medical Centers (Amsterdam UMC) points to this sequencing in AI deployment, starting with a long phase during which AI will remain subject to strict human oversight.14 As the authors noted, “Researchers who use ChatGPT risk being misled by false or biased information…. Assuming that researchers use LLMs in their work, scholars need to remain vigilant. Expert-driven fact-checking and verification processes will be indispensable.”15 The same sentiment is echoed in other studies looking at the application of AI in identity- and source-sensitive settings.16
In both banking and asset-management sub-sectors, many service providers sit on vast, complex, disconnected or siloed datasets. More often than not, they have no tools to extract client-related insights from these data pools. GPT (Generative Pre-trained Transformer)-based proprietary AI systems can and should be trained on these datasets, creating better insights into and management of assessed risks, structured-product portfolios and service quotes. Two insurance-industry leaders, Zurich Insurance Group and Paladin Group, use AI platforms to improve their underwriting systems and optimize their business offerings. Banks are a natural next step for deploying this type of technological enablement. TIFIN (mentioned earlier) explores this dimension of AI training, integrating its own and external datasets into its AI solutions.
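What “training on siloed datasets” means in practice can be illustrated with a toy join: records about the same client scattered across two internal systems are stitched into a single unified record that an in-house model could be trained or prompted on. The system names, field names and records below are all invented for illustration.

```python
# Hypothetical example: two siloed systems each hold a fragment of one
# client relationship; joining on a shared client ID yields unified
# records a proprietary GPT-based system could learn from.
crm_records = {
    "C001": {"segment": "private banking", "tenure_years": 12},
    "C002": {"segment": "retail", "tenure_years": 3},
}
portfolio_records = {
    "C001": {"aum_musd": 4.2, "risk_profile": "balanced"},
    "C002": {"aum_musd": 0.1, "risk_profile": "conservative"},
}

def unify(client_id: str) -> dict:
    """Merge both silos into one record keyed by the client ID."""
    return {"client_id": client_id,
            **crm_records[client_id],
            **portfolio_records[client_id]}

# One unified record per client, ready for downstream model training.
training_set = [unify(cid) for cid in crm_records]
```

The engineering is trivial; the institutional obstacle is that, more often than not, the silos use incompatible identifiers and access controls, which is precisely the integration work these bespoke AI deployments begin with.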
As per data compiled by FactSet, citations of AI by the S&P 500 companies in their first-quarter 2023 earnings calls rose 80 percent year-on-year (see chart above). As a sector, Financials (with 19 percent of financial companies citing AI in their investor communications) lagged behind Communications Services (75 percent), Information Technology (66 percent), as well as Industrials (23 percent) and Consumer Discretionary (22 percent). US banks have adopted an extremely cautious stance toward this emerging technology, with some banning its use. This is a precautionary step, given the privacy and data-security considerations implied by open AI platforms. But that should not mean the end of the sector’s engagement with the new technology.
AI is here to transform banking and financial services. This transformation will happen irrespective of whether the incumbents want it or not.
1 OpenAI: “Introducing ChatGPT.”
2 Radiology: “ChatGPT Is Shaping the Future of Medical Writing But Still Requires Human Judgment,” Felipe C. Kitamura, February 2, 2023, Volume 307, Number 2.
Research Square: “Assessing the Accuracy and Reliability of AI-Generated Medical Responses: An Evaluation of the Chat-GPT Model,” Douglas Johnson, Rachel Goodman, J. Patrinely, Cosby Stone, Eli Zimmerman and 29 more, February 28, 2023.
3 GitHub: “Your AI pair programmer.”
4 RSA Conference: “Torq.”
5 SSRN: “Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models,” Alejandro Lopez-Lira and Yuehua Tang, April 6, 2023.
6 Ibid., Page 1.
7 Paralel: “Changing the value proposition of the back office.”
8 Microsoft: “Introducing Microsoft Security Copilot.”
9 Medium: “How can GPT3 work for Finance and FP&A professionals?” Christian Martinez, January 17, 2023.
CFODIVE: “Datarails aims for ‘grounded’ ChatGPT-like FP&A tool,” Alexei Alexis, March 28, 2023.
Sheryl Estrada, March 1, 2023.
10 TIFIN: “AI For Wealth.”
11 Analytics Insight: “Goldman Sachs Report Says AI Could Put 300 Million Jobs at Risk,” Harshini, March 29, 2023.
12 JMIR Medical Education: “How Does ChatGPT Perform on the United States Medical Licensing Examination? The Implications of Large Language Models for Medical Education and Knowledge Assessment,” Aidan Gilson, Conrad W. Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor and David Chartash, February 8, 2023, Volume 9; doi: 10.2196/45312.
13 For examples of earlier studies dating back to 2021, see:
KPMG: “Trust in Artificial Intelligence: A five country study,” March 2021.
More recent examples of surveys on the subject include:
Mitre: “MITRE-Harris Poll Finds Lack of Trust Among Americans in AI Technology,” February 9, 2023.
Pew Research Center: “AI in Hiring and Evaluating Workers: What Americans Think,” April 20, 2023.
14 Nature: “ChatGPT: five priorities for research,” Elva A.M. van Dis, Johan Bollen, Willem Zuidema, Robert van Rooij and Claudi L.H. Bockting, February 9, 2023, Volume 614, Pages 224-226.
15 Ibid., Pages 224-225.
16 Markus’ Academy/Princeton Bendheim Center for Finance: “A User’s Guide to GPT and LLMs for Economic Research,” Kevin Bryan, based on the author’s Markus’ Academy talk – May 2023.