By Jean Van Vuuren, Associate Vice President – EMEA Commercial, Hyland
The advent of artificial intelligence (AI) and machine learning (ML) means some tasks, whether surfacing the next blockbuster courtesy of a streaming platform’s ‘recommendation engine’ or reducing data errors with the help of back-office automation, have been automated to such a degree that they’re reaching the stage of unquestioned normalcy.
But as smart speakers, voice assistants and streaming services, ingenious or otherwise, pervade our lives, the scale of the challenge facing banks and their financial services peers in harnessing intelligent automation cannot be overestimated. As AI gains an increasing foothold in the everyday, regulators have started raising concerns about a recurring phenomenon: automation bias, also known as AI bias.
Algorithms make AI tick. Built on continuous learning models and usage behaviours, they give AI the power to help us browse the internet, navigate our social media feeds and influence our decisions to shop, watch, eat or drink. AI is self-training and self-learning, and it works from data inputs. Because AI systems learn from datasets, incomplete or non-representative data limits objective analysis, leading to unintended and biased outcomes.
Algorithmic biases could impact the effective delivery of services. For example, AI systems could unintentionally reinforce bias, resulting in the rejection of loan applications on the basis of ethnic background or gender identity.
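To make that risk concrete, consider the kind of check an institution might run over its own lending decisions. The sketch below is a minimal, hypothetical illustration rather than a production audit: the records, group labels and figures are invented, and it measures only one simple fairness signal, the gap in approval rates between applicant groups (often called the demographic parity difference).

```python
# Hypothetical illustration: measure approval-rate disparity across
# applicant groups. The records below are invented for demonstration;
# a real audit would draw on actual production decisions.
from collections import defaultdict

decisions = [
    # (applicant_group, approved)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Return the share of approved applications for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok  # True counts as 1, False as 0
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                     # {'group_a': 0.75, 'group_b': 0.25}
print(f"parity gap: {gap:.2f}")  # 0.50 -- a prompt to investigate, not proof of bias
```

A large gap does not prove discrimination on its own, but it is exactly the kind of signal that should trigger human review of the model and the data behind it.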
Arguably, the time has come to examine the extent and wider risks of automation bias, to ensure that its impact and the unintended consequences of leaving it unchecked are fully understood. In fact, watchdogs are urging business leaders to start embracing algorithmovigilance to counter potential AI bias.
It is against this backdrop of concerns over bias that the National Health Service (NHS) in England is poised to run a world-first pilot study into Algorithmic Impact Assessments. The objective is to improve the ethical adoption of AI and aid the eradication of health inequalities “by tackling biases in systems which underpin future health and care services”.
With AI rapidly becoming mainstream, complex decision making is increasingly aided by technology-powered insights, supporting the development of organisations, products and services in an increasingly competitive marketplace. Adoption is already significant: IBM’s 2021 Global AI Adoption Index reports that some 33% of businesses surveyed are using AI in some form, with a further 43% exploring the technology.
Nevertheless, while AI and ML continue to have transformational impact, including shaping the future of work and broadening the scope for employees to focus on higher-value tasks, it’s important not to forget that technological innovation comes with caveats.
The risks of implicit and explicit AI bias, alongside ethics challenges, could have social, cultural, legal and reputational consequences, ultimately leading to a permanent erosion of trust.
Institutions have a duty to safeguard against such risks and demonstrate a clear commitment to addressing potential bias, wherever this may exist.
Governance of automation biases
Unintended biases are not intrinsic to AI systems but represent a complex, multi-layered challenge that can emerge as early as the development stage. To ensure bias does not undermine outcomes, it is crucial to address such risks proactively.
At the very centre of this process is the simple premise of designing systems that work the way they were intended to, for everyone and without bias.
Operating with deficient data, whether it is out of date, incorrect or skewed, will yield bad decisions. Arguably, training a machine learning algorithm is akin to raising a child: allow bad habits to go unchecked and the model will keep repeating them. Human bias in data inputs can result in inadvertent discrimination, and data outputs are also subject to differing interpretations by the people leveraging AI systems. In the end, the process can lead to biased judgements and impaired decisions.
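One practical way to catch deficient data before it shapes a model is a representativeness check on the training set. The sketch below is illustrative only: the reference population shares and the skewed sample are assumed for the example, and the under-representation threshold is an arbitrary choice that a real governance process would set deliberately.

```python
# Hypothetical sketch: flag groups that are under-represented in a
# training set relative to an assumed reference population, before
# the model is allowed to learn from the data.
from collections import Counter

reference_shares = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

training_groups = (
    ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50  # skewed sample
)

counts = Counter(training_groups)
n = len(training_groups)

for group, expected in reference_shares.items():
    observed = counts[group] / n
    if observed < 0.5 * expected:  # arbitrary illustrative threshold
        print(f"{group}: {observed:.0%} of training data vs {expected:.0%} "
              f"of population -- under-represented, review before training")
```

Catching the skew here, before training, is far cheaper than unpicking a biased model after deployment.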
The emergence of algorithmovigilance
The notion of algorithmovigilance, which is at the heart of preventing bias, promoting transparency and automating processes in line with legal, social and moral standards, is set to gain even greater prominence, particularly against the backdrop of an inexorable rise in automated decision-making.
In practice, algorithmovigilance must be seamlessly embedded into existing corporate and governance processes, providing the ‘foundational guardrails’ for systems to function effectively and equitably. In common with many other initiatives, success will largely depend on senior-level commitment to training, monitoring and evaluation, alongside remedial action where required.
Organisations should take an active role in promoting the ethics of any technology, including AI. It’s a commitment that goes beyond purely the management of legal and compliance risk, by addressing whether the organisation is behaving in a socially and ethically appropriate way and being a good corporate citizen.
It is arguable whether the banking and financial services sectors have, at least publicly, embraced algorithmovigilance. In contrast, financial regulators have gone on the record, reinforcing the importance of heightened vigilance and of continuous monitoring to validate AI performance and to address and resolve biased outcomes and behaviours where identified.
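What continuous monitoring might look like in practice is sketched below. It is a hypothetical outline, not regulatory guidance: it recomputes the approval-rate gap from the earlier example over a rolling window of recent decisions and raises an alert when the gap drifts past a tolerance, with the window size and threshold chosen purely for illustration.

```python
# Hypothetical monitoring loop: recompute the approval-rate gap over a
# rolling window of recent decisions and alert when it exceeds a
# tolerance. Window size and threshold are illustrative assumptions.
from collections import defaultdict, deque

WINDOW = 1000         # most recent decisions to evaluate
THRESHOLD = 0.10      # maximum tolerated approval-rate gap

recent = deque(maxlen=WINDOW)  # holds (group, approved) pairs

def record_decision(group: str, approved: bool) -> None:
    """Log a decision, then check the fairness metric on the current window."""
    recent.append((group, approved))
    totals, approvals = defaultdict(int), defaultdict(int)
    for g, ok in recent:
        totals[g] += 1
        approvals[g] += ok
    rates = {g: approvals[g] / totals[g] for g in totals}
    if len(rates) > 1:
        gap = max(rates.values()) - min(rates.values())
        if gap > THRESHOLD:
            # In production this would notify a model-risk team, not just print.
            print(f"ALERT: approval-rate gap {gap:.2f} exceeds {THRESHOLD}")
```

The design point is that fairness is checked continuously as decisions flow through the system, rather than once at deployment and never again.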
Examples of best practice shared by regulators may help businesses understand and identify biases during the deployment of AI systems. From an organisational perspective, training developers to curate data at the development stage, so that AI is not trained on biased inputs, would also help mitigate biased outcomes.
Perhaps counterintuitively, while the technology is extremely exciting, the emphasis must be on deployments that solve business issues and drive true value, which goes beyond simply bolting AI onto an existing system. Organisations that focus on putting in place a strategy with definable outcomes, rather than on the technology itself, are likely to see significantly better results.
Bias-free AI is an imperative
It is important not to forget that AI is only as good as the data that is fed into it. Humans choose the data that goes into an algorithm, and those choices remain subject to the unintentional biases inherent in all of us, shaped by social, geographical or ideological factors.
For AI systems to perform and learn impartially, humans would need to develop models free of such errors, taking their cue from the European Commission’s definition of bias as “an effect which deprives a statistical result of representativeness by systematically distorting it”.
Banks and financial institutions are embracing the significant benefits of intelligent automation, with some estimates suggesting potential savings attributable to the introduction of AI applications exceeding $400bn. Such opportunities heighten the responsibility of senior executives to fully understand the role of AI and ensure tools are serving us well and making decisions that are free of bias.
In the final analysis, while AI has the potential to transform our lives, it is also vulnerable to the social, economic and systemic biases that are endemic to the human race. Algorithmovigilance may hold the keys to fair play.